qid (int64) | question (string) | date (string) | metadata (sequence) | response_j (string) | response_k (string)
---|---|---|---|---|---|
11,632,154 | In python if I have two dictionaries, specifically Counter objects that look like so
```
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
```
Can I combine these dictionaries so that the result is a dictionary of lists, as follows:
```
c3 = {'item1': [4,6], 'item2':[2,2], 'item3': [5,1], 'item4': [3], 'item5': [9]}
```
where each value is a list of all the values from the preceding dictionaries for the appropriate key, and where there are no matching keys between the two original dictionaries, a new key is added that contains a one-element list. | 2012/07/24 | [
"https://Stackoverflow.com/questions/11632154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/801348/"
] | Or with a list comprehension:
```
from collections import Counter
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
merged = {}
for k in set().union(c1, c2):
merged[k] = [d[k] for d in [c1, c2] if k in d]
>>> merged
{'item2': [2, 2], 'item3': [5, 1], 'item1': [4, 6], 'item4': [3], 'item5': [9]}
```
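The same merge can also be written as a single dict comprehension (a minimal equivalent sketch, assuming the same `c1` and `c2` as above):
```
# builds the same {key: [values...]} mapping in one expression
merged = {k: [d[k] for d in (c1, c2) if k in d] for k in set().union(c1, c2)}
```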
Explanation
-----------
1. Throw all keys that exist into an anonymous set. (It's a set => no duplicate keys)
2. For every key, do 3.
3. For every dictionary d in the list of dictionaries `[c1, c2]`
* Check whether the currently being processed key `k` exists
+ If true: include the expression `d[k]` in the resulting list
+ If not: proceed with next iteration
[Here](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) is a detailed introduction to list comprehension with many examples. | You can use `defaultdict`:
```
>>> from collections import Counter, defaultdict
>>> c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
>>> c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
>>> c3 = defaultdict(list)
>>> for c in c1, c2:
... for k, v in c.items():
... c3[k].append(v)
...
>>> c3
defaultdict(<type 'list'>, {'item2': [2, 2], 'item3': [5, 1], 'item1': [4, 6],
'item4': [3], 'item5': [9]})
``` |
51,745,894 | I am new to using Python and want to be able to install packages using pip. I am having trouble running pip on my Windows computer. When I type "pip --version" into the command prompt I get:
```
ModuleNotFoundError: No module named 'pip._internal'; 'pip' is not a package
```
I have added the scripts folder to the PATH environment variable as shown on the picture in this link
[Environment variables photo](https://i.stack.imgur.com/lXiFz.png)
(Stack overflow does not allow embedded pictures if you are new)
This is the contents of my scripts directory where pip is present:
```
Directory of C:\Users\....\AppData\Local\Programs\Python\Python37-32\Scripts
[.] [..] easy_install-3.7.exe
easy_install.exe pip-script.py pip.exe
pip.exe.manifest pip3 pip3-script.py
pip3.7-script.py pip3.7.exe pip3.7.exe.manifest
pip3.exe pip3.exe.manifest wheel.exe
```
Any help on this would be appreciated | 2018/08/08 | [
"https://Stackoverflow.com/questions/51745894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6814024/"
] | Force a reinstall of pip:
```
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --force-reinstall
```
For Windows you may have to `choco install curl` or set PATH to where python3 is located. | In cmd try using
`py -3.6 -m pip install pygame`
Replace 3.6 with your version of Python and add -32 for the 32-bit version:
```
py -3.6-32 -m pip install pygame
```
Replace pygame with the module you want to install.
This works for most people using Python on Windows; also reboot your PC after adding the system PATH variable. |
62,713,607 | I deployed an Azure Functions App with Python `3.8`. Later on I tried to use dataclasses and it failed with the exception that the version available does not support dataclasses. I then SSHed to the host of the Function App and by using `python --version` figured out that version `3.6` was actually installed. As dataclasses are available from `3.7` on it makes sense why this module can't be used.
But what can I do to actually have version `3.8` running on the Function App host? | 2020/07/03 | [
"https://Stackoverflow.com/questions/62713607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7009990/"
] | This is a known issue (see e.g. <https://learn.microsoft.com/en-us/answers/questions/39124/azure-functions-always-using-python-36.html>) and will hopefully be fixed soon.
As a workaround you can run the following command, e.g. in the Cloud Shell:
`az functionapp config set --name <func app name> --resource-group <rg name> --subscription <subscription id> --linux-fx-version "DOCKER|mcr.microsoft.com/azure-functions/python:3.0.13353-python3.8-appservice"`
After that you need to wait a while for the function app to become usable again. Additionally, I have found that the installed packages are gone, so you also need to republish your functions (with the necessary packages defined in `requirements.txt`). | For anyone running into this problem, downgrading to Python 3.6 is a workaround.
I tried @quervernetzt's solution but it didn't work; my pipelines started giving the following error.
```
##[error]Error: Error: Failed to deploy web package to App Service. Conflict (CODE: 409)
``` |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | Based on the ex48 instructions, you could create a few lists for each kind of word. Here's a sample for the first test case. The returned value is a list of tuples, so you can append to that list for each word given.
```
direction = ['north', 'south', 'east', 'west', 'down', 'up', 'left', 'right', 'back']
class Lexicon:
def scan(self, sentence):
self.sentence = sentence
self.words = sentence.split()
stuff = []
for word in self.words:
if word in direction:
stuff.append(('direction', word))
return stuff
lexicon = Lexicon()
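# Numbers and unknown words could be handled with a similar pattern inside scan(),
# e.g. (a rough sketch only):
#     try:
#         stuff.append(('number', int(word)))
#     except ValueError:
#         stuff.append(('error', word))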
```
He notes that numbers and exceptions are handled differently. | Like most people here I am new to the world of coding, and I thought I'd attach my solution below as it might help other students.
I have already seen a few more efficient approaches that I could implement. However, the code handles every use case of the exercise, and since I wrote it on my own with a beginner's mind it does not take complicated shortcuts and should be very easy for other beginners to understand.
I therefore thought it might be beneficial for someone else who is learning. Let me know what you think. Cheers!
```
class Lexicon(object):
def __init__(self):
self.sentence = []
self.dictionary = {
'north' : ('direction','north'),
'south' : ('direction','south'),
'east' : ('direction','east'),
'west' : ('direction','west'),
'down' : ('direction','down'),
'up' : ('direction','up'),
'left' : ('direction','left'),
'right' : ('direction','right'),
'back' : ('direction','back'),
'go' : ('verb','go'),
'stop' : ('verb','stop'),
'kill' : ('verb','kill'),
'eat' : ('verb', 'eat'),
'the' : ('stop','the'),
'in' : ('stop','in'),
'of' : ('stop','of'),
'from' : ('stop','from'),
'at' : ('stop','at'),
'it' : ('stop','it'),
'door' : ('noun','door'),
'bear' : ('noun','bear'),
'princess' : ('noun','princess'),
'cabinet' : ('noun','cabinet'),
}
def scan(self, input):
loaded_imput = input.split()
self.sentence.clear()
for item in loaded_imput:
try:
int(item)
number = ('number', int(item))
self.sentence.append(number)
except ValueError:
word = self.dictionary.get(item.lower(), ('error', item))
self.sentence.append(word)
return self.sentence
lexicon = Lexicon()
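# Example usage (hypothetical input, shown for illustration):
#   lexicon.scan("go north 45")
#   -> [('verb', 'go'), ('direction', 'north'), ('number', 45)]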
``` |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | This is a really cool exercise. I had to research for days and finally got it working. The other answers here don't show how to actually use a list with tuples inside like the e-book suggests, so this one does it that way. The owner's answer doesn't quite work; lexicon[word] expects an integer index, not a str.
```
lexicon = [('direction', 'north', 'south', 'east', 'west'),
('verb', 'go', 'kill', 'eat'),
('nouns', 'princess', 'bear')]
def scan():
stuff = raw_input('> ')
words = stuff.split()
pairs = []
for word in words:
if word in lexicon[0]:
pairs.append(('direction', word))
elif word in lexicon[1]:
pairs.append(('verb', word))
elif word in lexicon[2]:
pairs.append(('nouns', word))
else:
pairs.append(('error', word))
print pairs
```
Cheers! | Clearly, lexicon is another Python file in the ex48 folder.
>
>
> ```
> like: ex48
> ----lexicon.py
>
> ```
>
>
So you are importing lexicon.py from the ex48 folder.
scan is a function inside lexicon.py. |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | I wouldn't use a list to make the lexicon. You're mapping words to their types, so make a dictionary.
Here's the biggest hint that I can give without writing the entire thing:
```
lexicon = {
'north': 'directions',
'south': 'directions',
'east': 'directions',
'west': 'directions',
'go': 'verbs',
'stop': 'verbs',
'look': 'verbs',
'give': 'verbs',
'the': 'stops',
'in': 'stops',
'of': 'stops',
'from': 'stops',
'at': 'stops'
}
def scan(sentence):
words = sentence.lower().split()
pairs = []
# Iterate over `words`,
# pull each word and its corresponding type
# out of the `lexicon` dictionary and append the tuple
# to the `pairs` list
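    # One possible completion of the hint above (a sketch, not the only way):
    for word in words:
        word_type = lexicon.get(word, 'error')  # unknown words are tagged 'error'
        pairs.append((word_type, word))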
return pairs
``` | Clearly, lexicon is another Python file in the ex48 folder.
>
>
> ```
> like: ex48
> ----lexicon.py
>
> ```
>
>
So you are importing lexicon.py from the ex48 folder.
scan is a function inside lexicon.py. |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | Finally I did it!
```
lexicon = {
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
}
def scan(sentence):
words = sentence.lower().split()
pairs = []
for word in words:
word_type = lexicon[word]
tupes = (word, word_type)
pairs.append(tupes)
return pairs
``` | Like most people here I am new to the world of coding, and I thought I'd attach my solution below as it might help other students.
I have already seen a few more efficient approaches that I could implement. However, the code handles every use case of the exercise, and since I wrote it on my own with a beginner's mind it does not take complicated shortcuts and should be very easy for other beginners to understand.
I therefore thought it might be beneficial for someone else who is learning. Let me know what you think. Cheers!
```
class Lexicon(object):
def __init__(self):
self.sentence = []
self.dictionary = {
'north' : ('direction','north'),
'south' : ('direction','south'),
'east' : ('direction','east'),
'west' : ('direction','west'),
'down' : ('direction','down'),
'up' : ('direction','up'),
'left' : ('direction','left'),
'right' : ('direction','right'),
'back' : ('direction','back'),
'go' : ('verb','go'),
'stop' : ('verb','stop'),
'kill' : ('verb','kill'),
'eat' : ('verb', 'eat'),
'the' : ('stop','the'),
'in' : ('stop','in'),
'of' : ('stop','of'),
'from' : ('stop','from'),
'at' : ('stop','at'),
'it' : ('stop','it'),
'door' : ('noun','door'),
'bear' : ('noun','bear'),
'princess' : ('noun','princess'),
'cabinet' : ('noun','cabinet'),
}
def scan(self, input):
loaded_imput = input.split()
self.sentence.clear()
for item in loaded_imput:
try:
int(item)
number = ('number', int(item))
self.sentence.append(number)
except ValueError:
word = self.dictionary.get(item.lower(), ('error', item))
self.sentence.append(word)
return self.sentence
lexicon = Lexicon()
``` |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | This is a really cool exercise. I had to research for days and finally got it working. The other answers here don't show how to actually use a list with tuples inside like the e-book suggests, so this one does it that way. The owner's answer doesn't quite work; lexicon[word] expects an integer index, not a str.
```
lexicon = [('direction', 'north', 'south', 'east', 'west'),
('verb', 'go', 'kill', 'eat'),
('nouns', 'princess', 'bear')]
def scan():
stuff = raw_input('> ')
words = stuff.split()
pairs = []
for word in words:
if word in lexicon[0]:
pairs.append(('direction', word))
elif word in lexicon[1]:
pairs.append(('verb', word))
elif word in lexicon[2]:
pairs.append(('nouns', word))
else:
pairs.append(('error', word))
print pairs
```
Cheers! | This is my version of the lexicon scan for ex48. I am also a beginner in programming, and Python is my first language, so the program may not be the most efficient, but the results have been good after a lot of testing. Please feel free to improve the code.
**WARNING**
===========
If you haven't tried to do the exercise on your own yet, I encourage you to try it without looking at any examples.
**WARNING**
===========
One thing I love about programming is that every time I encounter a problem, I spend a lot of time trying different methods to solve it. I spent a few weeks creating the structure, and as a beginner it is really rewarding to learn so much instead of copying from others.
Below is my lexicon and search in one file.
```
direction = [('direction', 'north'),
('direction', 'south'),
('direction', 'east'),
('direction', 'west'),
('direction', 'up'),
('direction', 'down'),
('direction', 'left'),
('direction', 'right'),
('direction', 'back')
]
verbs = [('verb', 'go'),
('verb', 'stop'),
('verb', 'kill'),
('verb', 'eat')
]
stop_words = [('stop', 'the'),
('stop', 'in'),
('stop', 'of'),
('stop', 'from'),
('stop', 'at'),
('stop', 'it')
]
nouns = [('noun', 'door'),
('noun', 'bear'),
('noun', 'princess'),
('noun', 'cabinet')
]
library = tuple(nouns + stop_words + verbs + direction)
#below is the search method with explanation.
def convert_number(x):
try:
return int(x)
except ValueError:
return None
def scan(input):
#include uppercase input for searching. (Study Drills no.3)
lowercase = input.lower()
#element is what i want to search.
element = lowercase.split()
#orielement is the original input which have uppercase, for 'error' type
orielement = input.split()
#library is tuple of the word types from above. You can replace with your data source.
data = library
#i is used to evaluate the position of element
i = 0
#z is used to indicate the position of output, which is the data that match what i search, equals to "i".
z = 0
#create a place to store my output.
output = []
#temp is just a on/off switch. Turn off the switch when i get any match for that particular input.
temp = True
#creating a condition which evaluates the total search needed to be done and follows the sequence by +1.
while not(i == len(element)):
try:
#j is used to position the word in the library, eg 'door', 'bear', 'go', etc which exclude the word type.
j = 0
while not (j == len(data)):
#data[j][1] all the single word in library
matching = data[j][1]
#when the word match, it will save the match into the output.
if (matching == element[i]):
output.append(data[j])
#print output[z]
j += 1
z += 1
#to switch off the search for else: below and go to next input search. Otherwise they would be considered 'error'
temp = False
#else is everything that is not in the library.
else:
while (data[j][1] == data [-1][1]) and (temp == True):
#refer to convert_number, to test if the input is a number, here i use orielement which includes uppercase
convert = convert_number(orielement[i])
#a is used to save number only.
a = tuple(['number', convert])
#b is to save everything
b = tuple(['error', orielement[i]])
#convert holds the number; a[1] is used to access it. If convert is None (not a number), it won't be appended as a number.
if convert == a[1] and not(convert == None):
output.append(a)
temp = False
else:
output.append(b)
#keep the switch off to escape the while loop!
temp = False
#searching in next data
j += 1
#next word of input
i += 1
temp = True
except ValueError:
return output
else:
pass
return output
``` |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | Finally I did it!
```
lexicon = {
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
}
def scan(sentence):
words = sentence.lower().split()
pairs = []
for word in words:
word_type = lexicon[word]
tupes = (word, word_type)
pairs.append(tupes)
return pairs
``` | This is my version of the lexicon scan for ex48. I am also a beginner in programming, and Python is my first language, so the program may not be the most efficient, but the results have been good after a lot of testing. Please feel free to improve the code.
**WARNING**
===========
If you haven't tried to do the exercise on your own yet, I encourage you to try it without looking at any examples.
**WARNING**
===========
One thing I love about programming is that every time I encounter a problem, I spend a lot of time trying different methods to solve it. I spent a few weeks creating the structure, and as a beginner it is really rewarding to learn so much instead of copying from others.
Below is my lexicon and search in one file.
```
direction = [('direction', 'north'),
('direction', 'south'),
('direction', 'east'),
('direction', 'west'),
('direction', 'up'),
('direction', 'down'),
('direction', 'left'),
('direction', 'right'),
('direction', 'back')
]
verbs = [('verb', 'go'),
('verb', 'stop'),
('verb', 'kill'),
('verb', 'eat')
]
stop_words = [('stop', 'the'),
('stop', 'in'),
('stop', 'of'),
('stop', 'from'),
('stop', 'at'),
('stop', 'it')
]
nouns = [('noun', 'door'),
('noun', 'bear'),
('noun', 'princess'),
('noun', 'cabinet')
]
library = tuple(nouns + stop_words + verbs + direction)
#below is the search method with explanation.
def convert_number(x):
try:
return int(x)
except ValueError:
return None
def scan(input):
#include uppercase input for searching. (Study Drills no.3)
lowercase = input.lower()
#element is what i want to search.
element = lowercase.split()
#orielement is the original input which have uppercase, for 'error' type
orielement = input.split()
#library is tuple of the word types from above. You can replace with your data source.
data = library
#i is used to evaluate the position of element
i = 0
#z is used to indicate the position of output, which is the data that match what i search, equals to "i".
z = 0
#create a place to store my output.
output = []
#temp is just a on/off switch. Turn off the switch when i get any match for that particular input.
temp = True
#creating a condition which evaluates the total search needed to be done and follows the sequence by +1.
while not(i == len(element)):
try:
#j is used to position the word in the library, eg 'door', 'bear', 'go', etc which exclude the word type.
j = 0
while not (j == len(data)):
#data[j][1] all the single word in library
matching = data[j][1]
#when the word match, it will save the match into the output.
if (matching == element[i]):
output.append(data[j])
#print output[z]
j += 1
z += 1
#to switch off the search for else: below and go to next input search. Otherwise they would be considered 'error'
temp = False
#else is everything that is not in the library.
else:
while (data[j][1] == data [-1][1]) and (temp == True):
#refer to convert_number, to test if the input is a number, here i use orielement which includes uppercase
convert = convert_number(orielement[i])
#a is used to save number only.
a = tuple(['number', convert])
#b is to save everything
b = tuple(['error', orielement[i]])
#convert holds the number; a[1] is used to access it. If convert is None (not a number), it won't be appended as a number.
if convert == a[1] and not(convert == None):
output.append(a)
temp = False
else:
output.append(b)
#keep the switch off to escape the while loop!
temp = False
#searching in next data
j += 1
#next word of input
i += 1
temp = True
except ValueError:
return output
else:
pass
return output
``` |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | Based on the ex48 instructions, you could create a few lists for each kind of word. Here's a sample for the first test case. The returned value is a list of tuples, so you can append to that list for each word given.
```
direction = ['north', 'south', 'east', 'west', 'down', 'up', 'left', 'right', 'back']
class Lexicon:
def scan(self, sentence):
self.sentence = sentence
self.words = sentence.split()
stuff = []
for word in self.words:
if word in direction:
stuff.append(('direction', word))
return stuff
lexicon = Lexicon()
```
He notes that numbers and exceptions are handled differently. | Clearly, lexicon is another Python file in the ex48 folder.
>
>
> ```
> like: ex48
> ----lexicon.py
>
> ```
>
>
So you are importing lexicon.py from the ex48 folder.
scan is a function inside lexicon.py. |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | Like most people here I am new to the world of coding, and I thought I'd attach my solution below as it might help other students.
I have already seen a few more efficient approaches that I could implement. However, the code handles every use case of the exercise, and since I wrote it on my own with a beginner's mind it does not take complicated shortcuts and should be very easy for other beginners to understand.
I therefore thought it might be beneficial for someone else who is learning. Let me know what you think. Cheers!
```
class Lexicon(object):
def __init__(self):
self.sentence = []
self.dictionary = {
'north' : ('direction','north'),
'south' : ('direction','south'),
'east' : ('direction','east'),
'west' : ('direction','west'),
'down' : ('direction','down'),
'up' : ('direction','up'),
'left' : ('direction','left'),
'right' : ('direction','right'),
'back' : ('direction','back'),
'go' : ('verb','go'),
'stop' : ('verb','stop'),
'kill' : ('verb','kill'),
'eat' : ('verb', 'eat'),
'the' : ('stop','the'),
'in' : ('stop','in'),
'of' : ('stop','of'),
'from' : ('stop','from'),
'at' : ('stop','at'),
'it' : ('stop','it'),
'door' : ('noun','door'),
'bear' : ('noun','bear'),
'princess' : ('noun','princess'),
'cabinet' : ('noun','cabinet'),
}
def scan(self, input):
loaded_imput = input.split()
self.sentence.clear()
for item in loaded_imput:
try:
int(item)
number = ('number', int(item))
self.sentence.append(number)
except ValueError:
word = self.dictionary.get(item.lower(), ('error', item))
self.sentence.append(word)
return self.sentence
lexicon = Lexicon()
``` | This is my version of the lexicon scan for ex48. I am also a beginner in programming, and Python is my first language, so the program may not be the most efficient, but the results have been good after a lot of testing. Please feel free to improve the code.
**WARNING**
===========
If you haven't tried to do the exercise on your own yet, I encourage you to try it without looking at any examples.
**WARNING**
===========
One thing I love about programming is that every time I encounter a problem, I spend a lot of time trying different methods to solve it. I spent a few weeks creating the structure, and as a beginner it is really rewarding to learn so much instead of copying from others.
Below is my lexicon and search in one file.
```
direction = [('direction', 'north'),
('direction', 'south'),
('direction', 'east'),
('direction', 'west'),
('direction', 'up'),
('direction', 'down'),
('direction', 'left'),
('direction', 'right'),
('direction', 'back')
]
verbs = [('verb', 'go'),
('verb', 'stop'),
('verb', 'kill'),
('verb', 'eat')
]
stop_words = [('stop', 'the'),
('stop', 'in'),
('stop', 'of'),
('stop', 'from'),
('stop', 'at'),
('stop', 'it')
]
nouns = [('noun', 'door'),
('noun', 'bear'),
('noun', 'princess'),
('noun', 'cabinet')
]
library = tuple(nouns + stop_words + verbs + direction)
#below is the search method with explanation.
def convert_number(x):
try:
return int(x)
except ValueError:
return None
def scan(input):
#include uppercase input for searching. (Study Drills no.3)
lowercase = input.lower()
#element is what i want to search.
element = lowercase.split()
#orielement is the original input which have uppercase, for 'error' type
orielement = input.split()
#library is tuple of the word types from above. You can replace with your data source.
data = library
#i is used to evaluate the position of element
i = 0
#z is used to indicate the position of output, which is the data that match what i search, equals to "i".
z = 0
#create a place to store my output.
output = []
#temp is just a on/off switch. Turn off the switch when i get any match for that particular input.
temp = True
#creating a condition which evaluates the total search needed to be done and follows the sequence by +1.
while not(i == len(element)):
try:
#j is used to position the word in the library, eg 'door', 'bear', 'go', etc which exclude the word type.
j = 0
while not (j == len(data)):
#data[j][1] all the single word in library
matching = data[j][1]
#when the word match, it will save the match into the output.
if (matching == element[i]):
output.append(data[j])
#print output[z]
j += 1
z += 1
#to switch off the search for else: below and go to next input search. Otherwise they would be considered 'error'
temp = False
#else is everything that is not in the library.
else:
while (data[j][1] == data [-1][1]) and (temp == True):
#refer to convert_number, to test if the input is a number, here i use orielement which includes uppercase
convert = convert_number(orielement[i])
#a is used to save number only.
a = tuple(['number', convert])
#b is to save everything
b = tuple(['error', orielement[i]])
#convert holds the number; a[1] is used to access it. If convert is None (not a number), it won't be appended as a number.
if convert == a[1] and not(convert == None):
output.append(a)
temp = False
else:
output.append(b)
#keep the switch off to escape the while loop!
temp = False
#searching in next data
j += 1
#next word of input
i += 1
temp = True
except ValueError:
return output
else:
pass
return output
``` |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | This is a really cool exercise. I had to research for days and finally got it working. The other answers here don't show how to actually use a list with tuples inside like the e-book suggests, so this one does it that way. The owner's answer doesn't quite work; lexicon[word] expects an integer index, not a str.
```
lexicon = [('direction', 'north', 'south', 'east', 'west'),
('verb', 'go', 'kill', 'eat'),
('nouns', 'princess', 'bear')]
def scan():
stuff = raw_input('> ')
words = stuff.split()
pairs = []
for word in words:
if word in lexicon[0]:
pairs.append(('direction', word))
elif word in lexicon[1]:
pairs.append(('verb', word))
elif word in lexicon[2]:
pairs.append(('nouns', word))
else:
pairs.append(('error', word))
print pairs
```
Cheers! | Like most people here I am new to the world of coding, and I thought I'd attach my solution below as it might help other students.
I have already seen a few more efficient approaches that I could implement. However, the code handles every use case of the exercise, and since I wrote it on my own with a beginner's mind it does not take complicated shortcuts and should be very easy for other beginners to understand.
I therefore thought it might be beneficial for someone else who is learning. Let me know what you think. Cheers!
```
class Lexicon(object):
def __init__(self):
self.sentence = []
self.dictionary = {
'north' : ('direction','north'),
'south' : ('direction','south'),
'east' : ('direction','east'),
'west' : ('direction','west'),
'down' : ('direction','down'),
'up' : ('direction','up'),
'left' : ('direction','left'),
'right' : ('direction','right'),
'back' : ('direction','back'),
'go' : ('verb','go'),
'stop' : ('verb','stop'),
'kill' : ('verb','kill'),
'eat' : ('verb', 'eat'),
'the' : ('stop','the'),
'in' : ('stop','in'),
'of' : ('stop','of'),
'from' : ('stop','from'),
'at' : ('stop','at'),
'it' : ('stop','it'),
'door' : ('noun','door'),
'bear' : ('noun','bear'),
'princess' : ('noun','princess'),
'cabinet' : ('noun','cabinet'),
}
def scan(self, input):
loaded_imput = input.split()
self.sentence.clear()
for item in loaded_imput:
try:
int(item)
number = ('number', int(item))
self.sentence.append(number)
except ValueError:
word = self.dictionary.get(item.lower(), ('error', item))
self.sentence.append(word)
return self.sentence
lexicon = Lexicon()
``` |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | I wouldn't use a list to make the lexicon. You're mapping words to their types, so make a dictionary.
Here's the biggest hint that I can give without writing the entire thing:
```
lexicon = {
'north': 'directions',
'south': 'directions',
'east': 'directions',
'west': 'directions',
'go': 'verbs',
'stop': 'verbs',
'look': 'verbs',
'give': 'verbs',
'the': 'stops',
'in': 'stops',
'of': 'stops',
'from': 'stops',
'at': 'stops'
}
def scan(sentence):
words = sentence.lower().split()
pairs = []
# Iterate over `words`,
# pull each word and its corresponding type
# out of the `lexicon` dictionary and append the tuple
# to the `pairs` list
return pairs
``` | Like most people here I am new to the world of coding, and I thought I'd attach my solution below as it might help other students.
I have already seen a few more efficient approaches that I could implement. However, the code handles every use case of the exercise, and since I wrote it on my own with a beginner's mind it does not take complicated shortcuts and should be very easy for other beginners to understand.
I therefore thought it might be beneficial for someone else who is learning. Let me know what you think. Cheers!
```
class Lexicon(object):
def __init__(self):
self.sentence = []
self.dictionary = {
'north' : ('direction','north'),
'south' : ('direction','south'),
'east' : ('direction','east'),
'west' : ('direction','west'),
'down' : ('direction','down'),
'up' : ('direction','up'),
'left' : ('direction','left'),
'right' : ('direction','right'),
'back' : ('direction','back'),
'go' : ('verb','go'),
'stop' : ('verb','stop'),
'kill' : ('verb','kill'),
'eat' : ('verb', 'eat'),
'the' : ('stop','the'),
'in' : ('stop','in'),
'of' : ('stop','of'),
'from' : ('stop','from'),
'at' : ('stop','at'),
'it' : ('stop','it'),
'door' : ('noun','door'),
'bear' : ('noun','bear'),
'princess' : ('noun','princess'),
'cabinet' : ('noun','cabinet'),
}
def scan(self, input):
loaded_imput = input.split()
self.sentence.clear()
for item in loaded_imput:
try:
int(item)
number = ('number', int(item))
self.sentence.append(number)
except ValueError:
word = self.dictionary.get(item.lower(), ('error', item))
self.sentence.append(word)
return self.sentence
lexicon = Lexicon()
``` |
12,424,351 | I want to run a shell command from python and receive its output with subprocess.Popen. The problem is, when I close the process, sending Ctrl-C, I don't get any output. What am I doing wrong? Code:
```
>>> import subprocess
>>> sub = subprocess.Popen(["xinput", "test", "8"], stdout=subprocess.PIPE) #receive mouse events
>>> output = sub.communicate()[0].read()
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/subprocess.py", line 693, in communicate
stdout = self.stdout.read()
KeyboardInterrupt
>>> output
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'output' is not defined
```
Inspired by this post by Jett:
[Reading stdout from xinput test in python](https://stackoverflow.com/questions/12420999/reading-stdout-from-xinput-test-in-python) | 2012/09/14 | [
"https://Stackoverflow.com/questions/12424351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1360544/"
] | The issue here is that the `KeyboardInterrupt` is sent during the call to `communicate`. As a result, `communicate` never returns, so its output is never stored in the variable `output` and you get the `NameError` when you try to use it. One workaround would be the following:
```
import subprocess
sub = subprocess.Popen(["xinput", "test", "8"], stdout=subprocess.PIPE)
lines = [] #Need someplace to store the data as it comes
try:
for line in sub.stdout: #read one line from standard out, store it in lines
lines.append(line)
except KeyboardInterrupt: #keyboardInterrupt happened. Stop process
sub.terminate()
finally: #Join our lines into a single buffer (like `communicate`)
output = ''.join(lines)
del lines #clean `lines` out of our namespace (just because).
``` | @pythonm already explained the `NameError`.
Furthermore, you're using the output of `Popen.communicate()` conceptually wrong. It returns a 2-tuple of strings: `(stdout, stderr)`. It does not return two file-like objects. That's why your `sub.communicate()[0].read()` would fail if `communicate()` returned.
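For example, a minimal sketch of the intended usage (assuming the subprocess eventually exits on its own):
```
out, err = sub.communicate()   # blocks until the process exits
print(out)                     # `out` is already the complete stdout buffer (a str)
# `err` is None here, since stderr was not redirected with stderr=PIPE
```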
Until the subprocess returns, `communicate()` aggregates all of its stdout and stderr (given that you provided `stdout=subprocess.PIPE` and `stderr=subprocess.PIPE` to the constructor). Only after the subprocess has terminated do you have access to what `communicate()` collected during the runtime of the subprocess.
If you would like to monitor a subprocess' output in real time, then `communicate()` is the wrong method. Run the subprocess, monitor it (in a loop, for example) and interact with its `Popen.stdout` and `Popen.stderr` attributes (which are file-like objects). @mgilson's answer shows you one way to do it :) |
65,495,956 | I have searched far and wide, and have followed just about everything... I cannot figure out why this keeps happening to my Python package I've created. It's not a simple "install dependency and you're good" as it's my own project I am attempting to create.
Here's my file structure:
```
-jarvis-discord
--jarvis_discord_bot
---__init__.py
---jarvis.py
---config.py
---cogs
----__init__.py
----all the cogs are here
```
The error given:
```
++ PWD
line 3: PWD: command not found
export PYTHONPATH=
PYTHONPATH=
python3 jarvis_discord_bot/jarvis.py
Traceback (most recent call last):
File "/buddy/jarvis-discord/jarvis_discord_bot/jarvis.py", line 40, in <module>
from jarvis_discord_bot.cogs import (
ModuleNotFoundError: No module named 'jarvis_discord_bot'
```
I've tried creating a `pipenv` as well and have had no luck either. Same error as above. There's something wrong with how I'm setting up my Python environment... granted I'm also a newbie.
The weird thing, to top this all off, is that it runs locally on my own machine just fine. So I am at a complete and utter loss for what to do and could use some help and direction on where to go from here.
Thanks! | 2020/12/29 | [
"https://Stackoverflow.com/questions/65495956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13002900/"
] | If you are using relative file paths, you have to use
`from .cogs import (`
because jarvis.py can't see jarvis\_discord\_bot from one level below.
The . in front of cogs means the import is resolved relative to the current package (the one containing jarvis.py). | Figured out what the issue was!
In my run file, I had to set `PYTHONPATH` from `PWD` to the actual folder of the project. Good luck to anyone reading this in the future! |
50,151,698 | I have two tables like this:
```
table1
id(int) | desc(TEXT)
--------------------
0 | "desc1"
1 | "desc2"
table2
id(int) | table1_id(TEXT)
------------------------
0 | "0"
1 | "0;1"
```
I want to select data from table2 and replace table1\_id with the desc field from table1; when the string has a ';' separator it means I have multiple selections.
I'm able to do it for a single selection like this:
```
SELECT table1.desc
FROM table2 LEFT JOIN table1 ON table1.id = CAST(table2.table1_id as integer);
```
Output wanted with a SELECT on table2 where id = 1:
```
"desc"
------
"desc1, desc2"
```
I'm using PostgreSQL 10, Python 3.5 and SQLAlchemy.
I know how to do it by extracting the data, processing it with Python and then querying again, but I'm looking for a way to do it with one SQL query.
PS: I can't modify table2. | 2018/05/03 | [
"https://Stackoverflow.com/questions/50151698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5494686/"
] | You can convert the CSV value into an array, then join on that:
```
select string_agg(t1.descr, ',') as descr
from table2 t2
join table1 t1 on t1.id = any (string_to_array(t2.table1_id, ';')::int[])
where t2.id = 1
``` | That is really an abominable data design.
Consequently you will have to write a complicated query to get your desired result:
```
SELECT string_agg(table1."desc", ', ')
FROM table2
CROSS JOIN LATERAL regexp_split_to_table(table2.table1_id, ';') x(d)
JOIN table1 ON x.d::integer = table1.id
WHERE table2.id = 1;
string_agg
--------------
desc1, desc2
(1 row)
``` |
64,791,458 | Here is my docker-compose.yml used to create the database container.
```
version: '3.7'
services:
application:
build:
context: ./app
dockerfile: dockerfile #dockerfile-prod
depends_on:
- database_mongo
- database_neo4j
- etl_pipeline
environment:
- flask_env=dev #flask_env=prod
volumes:
- ./app:/app
ports:
- "8080:8080" #- 8080:8080
database_mongo:
image: "mongo:4.2"
expose:
- 27017
volumes:
- ./data/database/mongo:/data/db
database_neo4j:
image: neo4j:latest
expose:
- 27018
volumes:
- ./data/database/neo4j:/data
ports:
- "7474:7474" # web client
- "7687:7687" # DB default port
environment:
- NEO4J_AUTH=none
etl_pipeline:
depends_on:
- database_mongo
- database_neo4j
build:
context: ./data/etl
dockerfile: dockerfile #dockerfile-prod
volumes:
- ./data/:/data/
- ./data/etl:/app/
```
I'm trying to connect to my neo4j database with the Python driver. I have already been able to connect to MongoDB with this line:
```
mongo_client = MongoClient(host="database_mongo")
```
I'm trying to do something similar to the MongoDB connection, connecting to my neo4j instance with GraphDatabase from the neo4j driver, like this:
```
url = "{scheme}://{host_name}:{port}".format(scheme = "bolt", host_name="database_neo4j", port = 7687)
baseNeo4j = GraphDatabase.driver(url, encrypted=False)
```
or with py2neo like this
```
neo_client = Graph(host="database_neo4j")
```
However, none of this has worked yet, so I'm not sure if I'm using the right syntax to use neo4j with Docker. I've tried many things and looked around, but couldn't find the answer...
The whole error message is:
```
etl_pipeline_1 | MongoClient(host=['database_mongo:27017'], document_class=dict, tz_aware=False, connect=True)
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 929, in _connect
etl_pipeline_1 | s.connect(resolved_address)
etl_pipeline_1 | ConnectionRefusedError: [Errno 111] Connection refused
etl_pipeline_1 |
etl_pipeline_1 | During handling of the above exception, another exception occurred:
etl_pipeline_1 |
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "main.py", line 26, in <module>
etl_pipeline_1 | baseNeo4j = GraphDatabase.driver(url, encrypted=False)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 183, in driver
etl_pipeline_1 | return cls.bolt_driver(parsed.netloc, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 196, in bolt_driver
etl_pipeline_1 | return BoltDriver.open(target, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 359, in open
etl_pipeline_1 | pool = BoltPool.open(address, auth=auth, pool_config=pool_config, workspace_config=default_workspace_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in open
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in <listcomp>
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 545, in acquire
etl_pipeline_1 | return self._acquire(self.address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 409, in _acquire
etl_pipeline_1 | connection = self.opener(address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 528, in opener
etl_pipeline_1 | return Bolt.open(addr, auth=auth, timeout=timeout, routing_context=routing_context, **pool_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 198, in open
etl_pipeline_1 | keep_alive=pool_config.keep_alive,
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1049, in connect
etl_pipeline_1 | raise last_error
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1039, in connect
etl_pipeline_1 | s = _connect(resolved_address, timeout, keep_alive)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 943, in _connect
etl_pipeline_1 | raise ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error))
etl_pipeline_1 | neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv4Address(('172.29.0.2', 7687)) (reason [Errno 111] Connection refused)
``` | 2020/11/11 | [
"https://Stackoverflow.com/questions/64791458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14620901/"
] | Usually languages implement functionalities as simply as possible.
Class methods are, under the hood, just plain functions that take a pointer to the object as an argument; the object itself is just a data structure plus the functions that can operate on it.
Normally the compiler knows exactly which function should operate on the object.
However, if polymorphism is involved and a method may be overridden, the compiler doesn't know the concrete type at the call site; it may be Derived1 or Derived2.
In that case the compiler adds a VTable to the object, containing function pointers to the methods that could have been overridden.
For those overridable methods, the program then performs a lookup in this table at run time to find which function should be executed.
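To make the lookup idea concrete, here is a small, purely illustrative Python sketch of a hand-rolled "vtable" (this is not how a real compiler lays out memory; it only mimics the table-of-function-pointers mechanism described above):

```python
# Free functions playing the role of compiled method bodies.
def base_speak(obj):
    return "base implementation"

def derived_speak(obj):
    return "derived override"

# Each "class" gets a table mapping method names to function pointers.
base_vtable = {"speak": base_speak}
derived_vtable = {**base_vtable, "speak": derived_speak}  # override one slot

class Obj:
    def __init__(self, vtable):
        self.vtable = vtable              # the object carries a reference to its table

    def call(self, name):
        return self.vtable[name](self)    # run-time lookup, then indirect call

print(Obj(base_vtable).call("speak"))     # base implementation
print(Obj(derived_vtable).call("speak"))  # derived override
```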
You can see how it can be implemented by seeing how polymorphism can be implemented in C:
[How can I simulate OO-style polymorphism in C?](https://stackoverflow.com/questions/524033/how-can-i-simulate-oo-style-polymorphism-in-c) | No, it does not. Functions are class-wide. When you allocate an object in C++ it will contain space for all its attributes plus a pointer to its class's VTable, which holds pointers to its virtual methods/functions, be they from its own class or inherited from parent classes.
When you call a method on that object, you essentially perform a look-up on that VTable and the appropriate method is called. |
45,155,336 | I am running Ubuntu Desktop 16.04 on a VM and am trying to run [Volttron](https://github.com/VOLTTRON/volttron) using the standard install instructions, however I keep getting an error after the following steps:
```
sudo apt-get update
sudo apt-get install build-essential python-dev openssl libssl-dev libevent-dev git
git clone https://github.com/VOLTTRON/volttron
cd volttron
python bootstrap.py
```
My problem is with the last step `python bootstrap.py`. As soon as I get to this step, I get the error `bootstrap.py: error: refusing to run as root to prevent potential damage.` from my terminal window.
Has anyone else encountered this problem? Thoughts? | 2017/07/17 | [
"https://Stackoverflow.com/questions/45155336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8322226/"
] | I would recommend passing in the name of the value you would like to update into the handle change function, for example:
```
import React, { Component } from 'react'
import { Dropdown, Grid } from 'semantic-ui-react'
class DropdownExampleRemote extends Component {
componentWillMount() {
this.setState({
optionsMembers: [
{ key: 1, text: 'DAILY', value: 'DAILY' },
{ key: 2, text: 'MONTHLY', value: 'MONTHLY' },
{ key: 3, text: 'WEEKLY', value: 'WEEKLY' },
],
optionsDays: [
{ key: 1, text: 'SUNDAY', value: 'SUNDAY' },
{ key: 2, text: 'MONDAY', value: 'MONDAY' },
{ key: 3, text: 'TUESDAY', value: 'TUESDAY' },
],
value: '',
member: '',
day: '',
})
}
handleChange = (value, key) => {
this.setState({ [key]: value });
}
render() {
const {optionsMembers, optionsDays, value, member, day } = this.state
return (
<Grid>
<Grid.Column width={6}>
<Dropdown
selection
options={optionsMembers}
value={member}
placeholder='Select Member'
onChange={(e,{value})=>this.handleChange(value, 'member')}
/>
</Grid.Column>
<Grid.Column width={6}>
<Dropdown
selection
options={optionsDays}
value={day}
placeholder='Select Day'
onChange={(e,{value})=>this.handleChange(value, 'day')}
/>
</Grid.Column>
<Grid.Column width={4}>
<div>{member}</div>
<div>{day}</div>
</Grid.Column>
</Grid>
)
}
}
export default DropdownExampleRemote
``` | Something along these lines can maybe work for you.
```
handleChange = (propName, e) => {
  let state = Object.assign({}, this.state);
  state[propName] = e.target.value;
  this.setState(state);
}
```
You can pass in the name of the property you want to update and then use bracket notation to update that part of your state.
Hope this helps. |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | Just run this to uninstall plotly and then build it from source. That should fix the import
```
pip uninstall plotly && python -m pip install plotly
``` | That sounds like a classic dependency issue.
* Check that your pip version is using the same python version (3.6) as what you launch your script with (IE: Use `python3(.6)` to launch your script, not just `python`)
* Your logs aren't showing plotly already installed. In fact, you probably forgot a line when pasting but installing with `pip3.6 install -U plotly` should install the package if not already installed. |
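A quick way to confirm which interpreter is actually running the script, and to install into exactly that one (a generic check, not specific to this machine):

```python
# Run this with the same command you use to launch your script.
import sys
print(sys.executable)   # the Python binary actually being used
print(sys.path[:3])     # the first few places it searches for packages

# Then, from a shell, install plotly into that exact interpreter:
#   /path/shown/above -m pip install plotly
```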
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | * First of all, Make sure your Python file is NOT called `plotly.py`
but something else.
* If you are using Anaconda, open Anaconda Navigator and launch cmd
prompt (cmd.exe) from there. Then run `pip install plotly` or
`conda install -c plotly` from that terminal window.
* Or just type `pip install plotly` it will install the package and just restart the kernel hopefully you would be good to go | That sounds like a classic dependency issue.
* Check that your pip version is using the same python version (3.6) as what you launch your script with (IE: Use `python3(.6)` to launch your script, not just `python`)
* Your logs aren't showing plotly already installed. In fact, you probably forgot a line when pasting but installing with `pip3.6 install -U plotly` should install the package if not already installed. |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | Just run this to uninstall plotly and then build it from source. That should fix the import
```
pip uninstall plotly && python -m pip install plotly
``` | I could with:
```sh
conda install -c https://conda.anaconda.org/plotly plotly
``` |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | * First of all, Make sure your Python file is NOT called `plotly.py`
but something else.
* If you are using Anaconda, open Anaconda Navigator and launch cmd
prompt (cmd.exe) from there. Then run `pip install plotly` or
`conda install -c plotly` from that terminal window.
* Or just type `pip install plotly` it will install the package and just restart the kernel hopefully you would be good to go | I could with:
```sh
conda install -c https://conda.anaconda.org/plotly plotly
``` |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | Just run this to uninstall plotly and then build it from source. That should fix the import
```
pip uninstall plotly && python -m pip install plotly
``` | I did pip install plotly. It did not work.
Then, I just closed my jupyter-notebook from terminal and opened it again. It worked. Strangely restarting the kernel did not work! |
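If a plain restart is not enough, a common Jupyter-side variant of the same idea (a generic sketch, in the spirit of the `sys.prefix` answer elsewhere on this question) is to install into the interpreter the kernel itself is running on:

```python
# In a Jupyter cell: install into the kernel's own interpreter, then restart the kernel.
import sys
!{sys.executable} -m pip install plotly
```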
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | * First of all, Make sure your Python file is NOT called `plotly.py`
but something else.
* If you are using Anaconda, open Anaconda Navigator and launch cmd
prompt (cmd.exe) from there. Then run `pip install plotly` or
`conda install -c plotly` from that terminal window.
* Or just type `pip install plotly` it will install the package and just restart the kernel hopefully you would be good to go | I did pip install plotly. It did not work.
Then, I just closed my jupyter-notebook from terminal and opened it again. It worked. Strangely restarting the kernel did not work! |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | Just run this to uninstall plotly and then build it from source. That should fix the import
```
pip uninstall plotly && python -m pip install plotly
``` | If you are using Jupyter notebook, try below:
```py
import sys
!conda install --yes --prefix {sys.prefix} plotly
``` |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | Just run this to uninstall plotly and then build it from source. That should fix the import
```
pip uninstall plotly && python -m pip install plotly
``` | best way is to
1-run anaconda navigator
2-go to environment
3- select all
4-and search plotly
5- if it is not there, install it.
[](https://i.stack.imgur.com/hQF5i.png)
got my issue fixed |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | * First of all, Make sure your Python file is NOT called `plotly.py`
but something else.
* If you are using Anaconda, open Anaconda Navigator and launch cmd
prompt (cmd.exe) from there. Then run `pip install plotly` or
`conda install -c plotly` from that terminal window.
* Or just type `pip install plotly` it will install the package and just restart the kernel hopefully you would be good to go | If you are using Jupyter notebook, try below:
```py
import sys
!conda install --yes --prefix {sys.prefix} plotly
``` |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
my version of pip(3) as well as python(3) seem to be both fine
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | * First of all, Make sure your Python file is NOT called `plotly.py`
but something else.
* If you are using Anaconda, open Anaconda Navigator and launch cmd
prompt (cmd.exe) from there. Then run `pip install plotly` or
`conda install -c plotly` from that terminal window.
* Or just type `pip install plotly` it will install the package and just restart the kernel hopefully you would be good to go | best way is to
1-run anaconda navigator
2-go to environment
3- select all
4-and search plotly
5- if it is not there, install it.
[](https://i.stack.imgur.com/hQF5i.png)
got my issue fixed |
73,646,583 | In short, is there a pythonic way to write `SETTING_A = os.environ['SETTING_A']`?
I want to provide a module `environment.py` from which I can import constants that are read from environment variables.
##### Approach 1:
```
import os
try:
SETTING_A = os.environ['SETTING_A']
SETTING_B = os.environ['SETTING_B']
SETTING_C = os.environ['SETTING_C']
except KeyError as e:
raise EnvironmentError(f'env var {e} is not defined')
```
##### Approach 2
```
import os
vs = ('SETTING_A', 'SETTING_B', 'SETTING_C')
try:
for v in vs:
locals()[v] = os.environ[v]
except KeyError as e:
raise EnvironmentError(f'env var {e} is not defined')
```
Approach 1 repeats the names of the variables, approach 2 manipulates `locals` and it's harder to see what constants will be importable from the module.
Is there a best practice to this problem? | 2022/09/08 | [
"https://Stackoverflow.com/questions/73646583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10909217/"
] | You should describe the type of PersonDto:
```js
interface PersonDto {
id: string;
name: string;
country: string;
}
class Person {
private id: string;
private name: string;
private country: string;
constructor(personDto: PersonDto) {
this.id = personDto.id;
this.name = personDto.name;
this.country = personDto.country;
}
}
const data = {
"id": "1234fc8-33aa-4a39-9625-b435479e6328",
"name": "02_Aug 10:00",
"country": "UK"
};
const person = new Person(data);
console.log(person);
```
In case you are sure that all PersonDto properties are strings, you can simplify the type description:
`type PersonDto = { [key: string]: string };` | Try [`Object.assign`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) to not have to type every property.
```typescript
interface PersonDto {
id: string;
name: string;
country: string;
}
class Person {
private id: string;
private name: string;
private country: string;
constructor(personDto: PersonDto) {
Object.assign(this, personDto);
}
}
const data = {
id: "1234fc8-33aa-4a39-9625-b435479e6328",
name: "02_Aug 10:00",
country: "UK"
};
const person = new Person(data);
console.log(person);
``` |
21,890,220 | tried multiplication of 109221975\*123222821 in python 2.7 prompt in two different ways
```
Python 2.7.3 (default, Sep 26 2013, 20:08:41)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 109221975*123222821
13458639874691475L
>>> 109221975*123222821.0
1.3458639874691476e+16
>>> int(109221975.0*123222821.0)
13458639874691476L
>>> 109221975*123222821 == int(109221975.0*123222821.0)
False
>>>
```
what I am suspecting here is that there is some precision inconsistency which is causing such problem , is it possible to speculate when can inconsistency like this happen ? | 2014/02/19 | [
"https://Stackoverflow.com/questions/21890220",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1955093/"
] | Your `int` is 54 bits long. A `float` can hold only 53 significant bits, so effectively the last bit is rounded to an even value.
Internally, your float is represented as:
>
> 2225720309975242\*2-1
>
>
>
Your `int` and `float` are stored in binary like the following:
```
101111110100001000111111001000111001000001000110010011
0 10000110100 0111111010000100011111100100011100100000100011001010
```
For `float`, the first part is the **sign**, the second is the **exponent**, and the third is the **significand**. Because space is allocated for an exponent, there isn't enough room left over for all 54 significant bits.
From how I aligned the two representations, you can see the data is the same, but the `int` needs one extra bit on the right, while the `float` uses more (in this case wasted) space on the left. | Because `int` in python has infinite precision, but `float` does not. (`float` is a double precision floating point number, which has 53 bits of precision.) |
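A short interactive check (an illustrative snippet, runnable in both Python 2 and 3) makes the 53-bit cutoff visible directly:

```python
exact = 109221975 * 123222821   # arbitrary-precision int
print(exact)                    # 13458639874691475
print(exact.bit_length())       # 54 -> one bit more than a double's 53-bit significand
print(int(float(exact)))        # 13458639874691476 -> rounded to the nearest representable double
print(float(exact) == exact)    # False
```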
21,890,220 | tried multiplication of 109221975\*123222821 in python 2.7 prompt in two different ways
```
Python 2.7.3 (default, Sep 26 2013, 20:08:41)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 109221975*123222821
13458639874691475L
>>> 109221975*123222821.0
1.3458639874691476e+16
>>> int(109221975.0*123222821.0)
13458639874691476L
>>> 109221975*123222821 == int(109221975.0*123222821.0)
False
>>>
```
what I am suspecting here is that there is some precision inconsistency which is causing such problem , is it possible to speculate when can inconsistency like this happen ? | 2014/02/19 | [
"https://Stackoverflow.com/questions/21890220",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1955093/"
] | Once upon a time, there was a string named `st` that wanted to be a number. "What number should I be?" asked st.
The string's fairy godmother said: Well st, if you want to be an accurate number, a number for counting whole things, I would be an arbitrary precision integer:
```
>>> st='123456789123456789'
>>> int(st)
123456789123456789
>>> int(st)*int(st)*int(st)
1881676377434183981909562699940347954480361860897069
```
'But I also want to count partial things, like the 1/2 of the sandwich I still have!' said st. So be a float, said the fairy godmother, but know that you may lose track of a few 1/2 sandwiches after a while. In fact, after `9007199254740992` things you may start to forget a few, because you only have 53 fingers to count with when you are a float:
```
>>> float(int('1'*53,2)+1)
9007199254740992.0
>>> float(int('1'*53,2)+1)+1
9007199254740992.0
>>> int(float(int(st)))
123456789123456784
>>> int(st)-int(float(st))
5
``` | Because `int` in python has infinite precision, but `float` does not. (`float` is a double precision floating point number, which has 53 bits of precision.) |
66,395,018 | I am new to python. At the moment I am coding a game with a friend. We are currently working on a combat system; the only problem is we don't know how to update the enemy's health once damage has been dealt. The code is as follows.
```
enemy1_health = 150
broadsword_attack = 20
rusty_knife = 10.5
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife) ]
while enemy1_health > 0:
while player_health > 0:
enemy1_health = 150
print(attacks)
attackchoice = input("Choose an attack: ")
if attackchoice == ("broadsword swing"):
print (int(enemy1_health - 20))
if attackchoice == ("rusty knife jab"):
print (int(enemy1_health - 10.5))
print("you died")
quit()
print("you cleared the level")```
``` | 2021/02/27 | [
"https://Stackoverflow.com/questions/66395018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15293735/"
] | You need to change the enemy health outside of the print statement with a statement like this:
```
enemy1_health = enemy1_health - 20
```
or like this, which does the same thing:
```
enemy1_health -= 20
```
You also reset enemy1\_health every time the loop loops, remove that.
You don't define player\_health, define that.
Your loop goes forever until you die.
So your code should end up looking more like this:
```
enemy1_health = 150
broadsword_attack = 20
rusty_knife = 10.5
player_health = 100
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife)]
while enemy1_health > 0:
    print(attacks)
    attackchoice = input("Choose an attack: ")
    if attackchoice == ("broadsword swing"):
        enemy1_health -= 20
    if attackchoice == ("rusty knife jab"):
        enemy1_health -= 10.5
    print(enemy1_health)
    if player_health <= 0:
        print("you died")
        quit()
print("you cleared the level")
```
This still requires quite a bit of tweaking, it'd be a complete working game if it was like this (basically, you win if you spam broadsword attacks because they do more damage):
```
enemy1_health = 150
enemy1_attack = 10
player_health = 100
broadsword_attack = 20
rusty_knife = 10.5
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife)]
while enemy1_health > 0:
    print(attacks)
    attackchoice = input("Choose an attack: ")
    if attackchoice == ("broadsword swing"):
        enemy1_health -= broadsword_attack
    if attackchoice == ("rusty knife jab"):
        enemy1_health -= rusty_knife
    print(f'A hit! The enemy has {enemy1_health} health left.')
    if enemy1_health > 0:
        player_health -= enemy1_attack
        print(f'The enemy attacks and leaves you with {player_health} health.')
        if player_health <= 0:
            print("you died")
            quit()
print("you cleared the level")
``` | You need to change the enemy health outside the print statement.
do:
```
if attackchoice == ("rusty knife jab"):
    enemy1_health = enemy1_health - 10.5
    print(enemy1_health)
```
and you can do the same for the other attacks.
You also have the enemy health defined inside the while loop; you need to define it outside of the loop. |
66,395,018 | I am new to python. At the moment I am coding a game with a friend. We are currently working on a combat system; the only problem is we don't know how to update the enemy's health once damage has been dealt. The code is as follows.
```
enemy1_health = 150
broadsword_attack = 20
rusty_knife = 10.5
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife) ]
while enemy1_health > 0:
while player_health > 0:
enemy1_health = 150
print(attacks)
attackchoice = input("Choose an attack: ")
if attackchoice == ("broadsword swing"):
print (int(enemy1_health - 20))
if attackchoice == ("rusty knife jab"):
print (int(enemy1_health - 10.5))
print("you died")
quit()
print("you cleared the level")```
``` | 2021/02/27 | [
"https://Stackoverflow.com/questions/66395018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15293735/"
] | You have to define the health above the while loops like this :
```
broadsword_attack = 20
rusty_knife = 10.5
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife) ]
enemy1_health = 150
while enemy1_health > 0:
while player_health > 0:
print(attacks)
attackchoice = input("Choose an attack: ")
if attackchoice == ("broadsword swing"):
enemy1_health -=20
print (int(enemy1_health))
if attackchoice == ("rusty knife jab"):
enemy1_health -=10.5
print (int(enemy1_health))
print("you died")
quit()
print("you cleared the level")
```
This avoids the health being reset to 150 on every iteration of the loop, and it actually assigns the new value to the health variable instead of just printing it; in your version the enemy never loses health no matter what you do.
There is also another problem: you never decrease the player's health, which will result in an infinite loop. | You need to change the enemy health outside the print statement.
do:
```
if attackchoice == ("rusty knife jab"):
    enemy1_health = enemy1_health - 10.5
    print(enemy1_health)
```
and you can do the same for the other attacks.
You also have the enemy health defined inside the while loop; you need to define it outside of the loop. |
44,659,242 | During development of Pylint, we encountered [interesting problem related to non-dependency that may break `pylint` package](https://github.com/PyCQA/pylint/issues/1318).
Case is following:
* `python-future` had a conflicting alias to `configparser` package. [Quoting official docs](http://python-future.org/whatsnew.html#what-s-new-in-version-0-16-0-2016-10-27):
>
> This release removes the configparser package as an alias for ConfigParser on Py2 to improve compatibility with Lukasz Langa’s backported configparser package. Previously python-future and the configparser backport clashed, causing various compatibility issues. (Issues #118, #181)
>
>
>
* `python-future` itself **is not** a dependency of Pylint
What would be a standard way to enforce *if python-future is present, force it to 0.16 or later* limitation? I want to avoid defining dependency as `future>=0.16` - by doing this I'd force users to install package that they don't need and won't use in a general case. | 2017/06/20 | [
"https://Stackoverflow.com/questions/44659242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2912340/"
] | ```
kw = {}
try:
    import future
except ImportError:
    pass
else:
    kw['install_requires'] = ['future>=0.16']

setup(
    …
    **kw
)
``` | One workaround for this issue is to define this requirement only for the `all` target, so only if someone adds `pylint[all]>=1.2.3` as a requirement they will have futures installed/upgraded.
At this moment I don't know another way to "ignore or upgrade" a dependency.
Also, I would avoid adding Python code to `setup.py` in order to make it "smart"; that is a well-known distribution anti-pattern ;) |
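If the `all`-style extra mentioned above is acceptable, a declarative sketch of that idea looks roughly like this (the extra name `with-future` is made up for illustration; the pin only applies when users explicitly request that extra):

```python
from setuptools import setup

setup(
    name="pylint",
    # ... rest of the metadata ...
    extras_require={
        # only pulled in via: pip install "pylint[with-future]"
        "with-future": ["future>=0.16"],
    },
)
```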
44,659,242 | During development of Pylint, we encountered [interesting problem related to non-dependency that may break `pylint` package](https://github.com/PyCQA/pylint/issues/1318).
Case is following:
* `python-future` had a conflicting alias to `configparser` package. [Quoting official docs](http://python-future.org/whatsnew.html#what-s-new-in-version-0-16-0-2016-10-27):
>
> This release removes the configparser package as an alias for ConfigParser on Py2 to improve compatibility with Lukasz Langa’s backported configparser package. Previously python-future and the configparser backport clashed, causing various compatibility issues. (Issues #118, #181)
>
>
>
* `python-future` itself **is not** a dependency of Pylint
What would be a standard way to enforce *if python-future is present, force it to 0.16 or later* limitation? I want to avoid defining dependency as `future>=0.16` - by doing this I'd force users to install package that they don't need and won't use in a general case. | 2017/06/20 | [
"https://Stackoverflow.com/questions/44659242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2912340/"
] | ```
kw = {}
try:
    import future
except ImportError:
    pass
else:
    kw['install_requires'] = ['future>=0.16']

setup(
    …
    **kw
)
``` | There is no supported way to tell pip or setuptools that a package needs to satisfy a constraint only if installed. There might be some hacks but I imagine they'll all be fragile and likely breaking in the future versions of pip/setuptools.
Honestly, the only *good* way is to document for users, in the appropriate location in the documentation, that `future < 0.16` would break pylint.
---
Making your `setup.py` script contain conditional dependencies is something that has been strongly discouraged for some time now. Once a wheel is built, the package is installed with the same dependency information as the wheel holds - setup.py is not run on the end-user's system, only on the packager's system, which means any setup.py hack (like @phd's) would not be useful (since pylint distributes wheels). |
37,369,079 | I have a lab colorspace
[](https://i.stack.imgur.com/3pXgm.png)
And I want to "bin" the colorspace in a grid of 10x10 squares.
So the first bin might be (-110,-110) to (-100,-100) then the next one might be (-100,-110) to (-90,-100) and so on. These bins could be bin 1 and bin 2
I have seen np.digitize() but it appears that you have to pass it 1-dimensional bins.
A rudimentary approach that I have tried is this:
```
for fn in filenames:
image = color.rgb2lab(io.imread(fn))
ab = image[:,:,1:]
width,height,d = ab.shape
reshaped_ab = np.reshape(ab,(width*height,d))
print reshaped_ab.shape
images.append(reshaped_ab)
all_abs = np.vstack(images)
all_abs = shuffle(all_abs,random_state=0)
sns
df = pd.DataFrame(all_abs[:3000],columns=["a","b"])
top_a,top_b = df.max()
bottom_a,bottom_b = df.min()
range_a = top_a-bottom_a
range_b = top_b-bottom_b
corner_a = bottom_a
corner_b = bottom_b
bins = []
for i in xrange(int(range_a/10)):
for j in xrange(int(range_b/10)):
bins.append([corner_a,corner_b,corner_a+10,corner_b+10])
corner_b = bottom_b+10
corner_a = corner_a+10
```
but the "bins" that results seem kinda sketchy. For one thing there are many empty bins as the color space does have values in a square arrangement and that code pretty much just boxes off from the max and min values. Additionally, the rounding might cause issues. I am wondering if there is a better way to do this? I have heard of color histograms which count the values in each "bin". I don't need the values but the bins are I think what I am looking for here.
Ideally the bins would be an object that each have a label. So I could do bins.indices[0] and it would return the bounding box I gave it. Then also I could bin each observation, like if a new color was color = [15.342,-6.534], color.bin would return 15 or the 15th bin.
I realize this is a lot to ask for, but I think it must be a somewhat common need for people working with color spaces. So is there any python module or tool that can accomplish what I'm asking? How would you approach this? thanks! | 2016/05/21 | [
"https://Stackoverflow.com/questions/37369079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1123905/"
] | The answer is to not use SSTATE\_DUPWHITELIST for this at all. Instead, in the libftdi recipe's do\_install (or do\_install\_append, if the recipe itself doesn't define its own do\_install) you should delete the duplicate files from within ${D} and then they won't get staged and the error won't occur. | I managed to solve this problem by adding the SSTATE\_DUPWHITELIST to the bitbake recipe of the package as follows:
SSTATE\_DUPWHITELIST = "${TMPDIR}/PATH/TO/THE/FILES"
I added the absolute paths of all 6 or 7 files that had the conflict to the list. I did that because they were basically coming from the same source, so it was safe to do. Correct me if there is a better way, though.
Hope this helps someone! |
37,369,079 | I have a lab colorspace
[](https://i.stack.imgur.com/3pXgm.png)
And I want to "bin" the colorspace in a grid of 10x10 squares.
So the first bin might be (-110,-110) to (-100,-100) then the next one might be (-100,-110) to (-90,-100) and so on. These bins could be bin 1 and bin 2
I have seen np.digitize() but it appears that you have to pass it 1-dimensional bins.
A rudimentary approach that I have tried is this:
```
for fn in filenames:
image = color.rgb2lab(io.imread(fn))
ab = image[:,:,1:]
width,height,d = ab.shape
reshaped_ab = np.reshape(ab,(width*height,d))
print reshaped_ab.shape
images.append(reshaped_ab)
all_abs = np.vstack(images)
all_abs = shuffle(all_abs,random_state=0)
sns
df = pd.DataFrame(all_abs[:3000],columns=["a","b"])
top_a,top_b = df.max()
bottom_a,bottom_b = df.min()
range_a = top_a-bottom_a
range_b = top_b-bottom_b
corner_a = bottom_a
corner_b = bottom_b
bins = []
for i in xrange(int(range_a/10)):
for j in xrange(int(range_b/10)):
bins.append([corner_a,corner_b,corner_a+10,corner_b+10])
corner_b = bottom_b+10
corner_a = corner_a+10
```
but the "bins" that results seem kinda sketchy. For one thing there are many empty bins as the color space does have values in a square arrangement and that code pretty much just boxes off from the max and min values. Additionally, the rounding might cause issues. I am wondering if there is a better way to do this? I have heard of color histograms which count the values in each "bin". I don't need the values but the bins are I think what I am looking for here.
Ideally the bins would be an object that each have a label. So I could do bins.indices[0] and it would return the bounding box I gave it. Then also I could bin each observation, like if a new color was color = [15.342,-6.534], color.bin would return 15 or the 15th bin.
I realize this is a lot to ask for, but I think it must be a somewhat common need for people working with color spaces. So is there any python module or tool that can accomplish what I'm asking? How would you approach this? thanks! | 2016/05/21 | [
"https://Stackoverflow.com/questions/37369079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1123905/"
] | I got it to work by using:
SSTATE\_DUPWHITELIST = "/"
Don't forget the quotes. Here's my bb excerpt:
```
SSTATE_DUPWHITELIST = "/"
DEPENDS = ""
do_unpack() {
mkdir -pv ${S}
tar xvf ${DL_DIR}/${FILENAME}.tar -C ${S}
}
do_install() {
install -d -m 755 ${D}${includedir}
install -m 644 ${S}/${MYPATH}/inc/myHeader1.h ${D}${includedir}
install -m 644 ${S}/${MYPATH}/inc/myHeader2.h ${D}${includedir}
install -m 644 ${S}/${MYPATH}/inc/myHeader3.h ${D}${includedir}
}
``` | I managed to solve this problem by adding the SSTATE\_DUPWHITELIST to the bitbake recipe of the package as follows:
SSTATE\_DUPWHITELIST = "${TMPDIR}/PATH/TO/THE/FILES"
I added the absolute paths of all of the 6 or 7 files that had the conflict to the list. I did that because they were basically coming from the same source and it was all safe to do that. Correct me if there is a better way though.
Hope this helps someone! |
37,369,079 | I have a lab colorspace
[](https://i.stack.imgur.com/3pXgm.png)
And I want to "bin" the colorspace in a grid of 10x10 squares.
So the first bin might be (-110,-110) to (-100,-100) then the next one might be (-100,-110) to (-90,-100) and so on. These bins could be bin 1 and bin 2
I have seen np.digitize() but it appears that you have to pass it 1-dimensional bins.
A rudimentary approach that I have tried is this:
```
for fn in filenames:
image = color.rgb2lab(io.imread(fn))
ab = image[:,:,1:]
width,height,d = ab.shape
reshaped_ab = np.reshape(ab,(width*height,d))
print reshaped_ab.shape
images.append(reshaped_ab)
all_abs = np.vstack(images)
all_abs = shuffle(all_abs,random_state=0)
sns
df = pd.DataFrame(all_abs[:3000],columns=["a","b"])
top_a,top_b = df.max()
bottom_a,bottom_b = df.min()
range_a = top_a-bottom_a
range_b = top_b-bottom_b
corner_a = bottom_a
corner_b = bottom_b
bins = []
for i in xrange(int(range_a/10)):
for j in xrange(int(range_b/10)):
bins.append([corner_a,corner_b,corner_a+10,corner_b+10])
corner_b = bottom_b+10
corner_a = corner_a+10
```
but the "bins" that results seem kinda sketchy. For one thing there are many empty bins as the color space does have values in a square arrangement and that code pretty much just boxes off from the max and min values. Additionally, the rounding might cause issues. I am wondering if there is a better way to do this? I have heard of color histograms which count the values in each "bin". I don't need the values but the bins are I think what I am looking for here.
Ideally the bins would be an object that each have a label. So I could do bins.indices[0] and it would return the bounding box I gave it. Then also I could bin each observation, like if a new color was color = [15.342,-6.534], color.bin would return 15 or the 15th bin.
I realize this is a lot to ask for, but I think it must be a somewhat common need for people working with color spaces. So is there any python module or tool that can accomplish what I'm asking? How would you approach this? thanks! | 2016/05/21 | [
"https://Stackoverflow.com/questions/37369079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1123905/"
] | The answer is to not use SSTATE\_DUPWHITELIST for this at all. Instead, in the libftdi recipe's do\_install (or do\_install\_append, if the recipe itself doesn't define its own do\_install) you should delete the duplicate files from within ${D} and then they won't get staged and the error won't occur. | I got it to work by using:
SSTATE\_DUPWHITELIST = "/"
Don't forget the quotes. Here's my bb excerpt:
```
SSTATE_DUPWHITELIST = "/"
DEPENDS = ""
do_unpack() {
mkdir -pv ${S}
tar xvf ${DL_DIR}/${FILENAME}.tar -C ${S}
}
do_install() {
install -d -m 755 ${D}${includedir}
install -m 644 ${S}/${MYPATH}/inc/myHeader1.h ${D}${includedir}
install -m 644 ${S}/${MYPATH}/inc/myHeader2.h ${D}${includedir}
install -m 644 ${S}/${MYPATH}/inc/myHeader3.h ${D}${includedir}
}
``` |
70,008,841 | I was able to follow this example [1] and let my ec2 instance read from S3.
In order to write to the same bucket I thought changing line 57 [2] from `grant_read()` to `grant_read_write()`
should work.
```py
...
# Userdata executes script from S3
instance.user_data.add_execute_file_command(
file_path=local_path
)
# asset.grant_read(instance.role)
asset.grant_read_write(instance.role)
...
```
Yet the documented [3] function cannot be accessed according to the error message.
```
>> 57: Pyright: Cannot access member "grant_read_write" for type "Asset"
```
What am I missing?
---
[1] <https://github.com/aws-samples/aws-cdk-examples/tree/master/python/ec2/instance>
[2] <https://github.com/aws-samples/aws-cdk-examples/blob/master/python/ec2/instance/app.py#L57>
[3] <https://docs.aws.amazon.com/cdk/latest/guide/permissions.html#permissions_grants> | 2021/11/17 | [
"https://Stackoverflow.com/questions/70008841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172907/"
] | This is the [documentation](https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_s3_assets/Asset.html) for Asset:
>
> An asset represents a local file or directory, which is automatically
> uploaded to S3 and then can be referenced within a CDK application.
>
>
>
The method grant\_read\_write isn't provided, as it is pointless. The documentation you've linked doesn't apply here. | An asset is just a zip file that will be uploaded to the bootstrapped CDK S3 bucket, then referenced by CloudFormation when deploying.
If you have a script you want to put into an S3 bucket, you don't want to use any form of asset, because that is a zip file. You would be better suited using a boto3 command to upload it once the bucket already exists, or making it part of a CodePipeline that creates the bucket with CDK and then uploads the script in the next step.
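For illustration, a minimal boto3 sketch of that upload step (the bucket name, key and file name are made-up placeholders, not values from the question):
```
import boto3

s3 = boto3.client("s3")

# upload the script to a bucket that already exists (e.g. one created earlier by CDK)
s3.upload_file(
    Filename="configure.sh",        # local script on disk (placeholder)
    Bucket="my-existing-bucket",    # placeholder bucket name
    Key="scripts/configure.sh",     # object key inside the bucket
)
```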
the grant\_read\_write is for `aws_cdk.aws_s3.Bucket` constructs in this case. |
2,433,703 | I am running Cygwin Python version 2.5.2.
I have a three-line source file, called import.py:
```
#!/usr/bin/python
import xml.etree.ElementTree as ET
print "Success!"
```
When I execute "python import.py", it works:
```
C:\Temp>python import.py
Success!
```
When I run the python interpreter and type the commands, it works:
```
C:\Temp>python
Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import xml.etree.ElementTree as ET
>>> print "Success!"
Success!
>>>
```
But when I execute "import.py', it does not work:
```
C:\Temp>which python
/usr/bin/python
C:\Temp>import.py
Traceback (most recent call last):
File "C:\Temp\import.py", line 2, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
```
When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux.
Any ideas?
Thanks. | 2010/03/12 | [
"https://Stackoverflow.com/questions/2433703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5397/"
] | Probably py extension is connected to some other python interpreter than the one in /usr/bin/python | Try:
```
./import.py
```
Most people don't have "." in their path.
Just typing python will call the cygwin python.
import.py will likely call whichever python is associated with .py files under Windows.
You are using two different python executables. |
2,433,703 | I am running Cygwin Python version 2.5.2.
I have a three-line source file, called import.py:
```
#!/usr/bin/python
import xml.etree.ElementTree as ET
print "Success!"
```
When I execute "python import.py", it works:
```
C:\Temp>python import.py
Success!
```
When I run the python interpreter and type the commands, it works:
```
C:\Temp>python
Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import xml.etree.ElementTree as ET
>>> print "Success!"
Success!
>>>
```
But when I execute "import.py', it does not work:
```
C:\Temp>which python
/usr/bin/python
C:\Temp>import.py
Traceback (most recent call last):
File "C:\Temp\import.py", line 2, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
```
When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux.
Any ideas?
Thanks. | 2010/03/12 | [
"https://Stackoverflow.com/questions/2433703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5397/"
] | I have the feeling that
```
C:\Temp>import.py
```
uses a different interpreter. Can you try with the following scripts:
```
#!/usr/bin/env python
import sys
print sys.executable
import xml.etree.ElementTree as ET
print "Success!"
``` | Try:
```
./import.py
```
Most people don't have "." in their path.
Just typing python will call the cygwin python.
import.py will likely call whichever python is associated with .py files under Windows.
You are using two different python executables. |
2,433,703 | I am running Cygwin Python version 2.5.2.
I have a three-line source file, called import.py:
```
#!/usr/bin/python
import xml.etree.ElementTree as ET
print "Success!"
```
When I execute "python import.py", it works:
```
C:\Temp>python import.py
Success!
```
When I run the python interpreter and type the commands, it works:
```
C:\Temp>python
Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import xml.etree.ElementTree as ET
>>> print "Success!"
Success!
>>>
```
But when I execute "import.py', it does not work:
```
C:\Temp>which python
/usr/bin/python
C:\Temp>import.py
Traceback (most recent call last):
File "C:\Temp\import.py", line 2, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
```
When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux.
Any ideas?
Thanks. | 2010/03/12 | [
"https://Stackoverflow.com/questions/2433703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5397/"
] | Probably py extension is connected to some other python interpreter than the one in /usr/bin/python | Create a batch file next to your program that calls it the right way ... and I'm fairly sure you've got the problem because of an ambiguity between "windows python" (a python interpreter compiled for windows) and "cygwin python" (a python interpreter running on cygwin). |
2,433,703 | I am running Cygwin Python version 2.5.2.
I have a three-line source file, called import.py:
```
#!/usr/bin/python
import xml.etree.ElementTree as ET
print "Success!"
```
When I execute "python import.py", it works:
```
C:\Temp>python import.py
Success!
```
When I run the python interpreter and type the commands, it works:
```
C:\Temp>python
Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import xml.etree.ElementTree as ET
>>> print "Success!"
Success!
>>>
```
But when I execute "import.py', it does not work:
```
C:\Temp>which python
/usr/bin/python
C:\Temp>import.py
Traceback (most recent call last):
File "C:\Temp\import.py", line 2, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
```
When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux.
Any ideas?
Thanks. | 2010/03/12 | [
"https://Stackoverflow.com/questions/2433703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5397/"
] | I have the feeling that
```
C:\Temp>import.py
```
uses a different interpreter. Can you try with the following scripts:
```
#!/usr/bin/env python
import sys
print sys.executable
import xml.etree.ElementTree as ET
print "Success!"
``` | Probably py extension is connected to some other python interpreter than the one in /usr/bin/python |
2,433,703 | I am running Cygwin Python version 2.5.2.
I have a three-line source file, called import.py:
```
#!/usr/bin/python
import xml.etree.ElementTree as ET
print "Success!"
```
When I execute "python import.py", it works:
```
C:\Temp>python import.py
Success!
```
When I run the python interpreter and type the commands, it works:
```
C:\Temp>python
Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import xml.etree.ElementTree as ET
>>> print "Success!"
Success!
>>>
```
But when I execute "import.py', it does not work:
```
C:\Temp>which python
/usr/bin/python
C:\Temp>import.py
Traceback (most recent call last):
File "C:\Temp\import.py", line 2, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
```
When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux.
Any ideas?
Thanks. | 2010/03/12 | [
"https://Stackoverflow.com/questions/2433703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5397/"
] | I have the feeling that
```
C:\Temp>import.py
```
uses a different interpreter. Can you try with the following scripts:
```
#!/usr/bin/env python
import sys
print sys.executable
import xml.etree.ElementTree as ET
print "Success!"
``` | Create a batch file next to your program that calls it the right way ... and I'm fairly sure you've got the problem because of an ambiguity between "windows python" (a python interpreter compiled for windows) and "cygwin python" (a python interpreter running on cygwin). |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | At `{virtualenv}/lib/python2.7/site-packages/` (if not using virtualenv then `{system_dir}/lib/python2.7/dist-packages/`)
* Remove the egg file (e.g. `distribute-0.6.34-py2.7.egg`)
* If the file `easy-install.pth` contains a corresponding line, remove it (it should be a path to the source directory or to an egg file). | **Install from local**
`python setup.py install`
**Uninstall from local**
`pip uninstall mypackage` |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | I think you can open the setup.py, locate the package name, and then ask pip to uninstall it.
Assuming the name is available in a 'METADATA' variable:
```
pip uninstall $(python -c "from setup import METADATA; print METADATA['name']")
``` | **Install from local**
`python setup.py install`
**Uninstall from local**
`pip uninstall mypackage` |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | Probably you can do this as an alternative :-
1) Get the python version -
```
[linux machine]# python
Python 2.4.3 (#1, Jun 18 2012, 14:38:55)
```
-> The above command gives you the current python Version which is **2.4.3**
2) Get the installation directory of python -
```
[linux machine]# whereis python
python: /usr/bin/python /usr/bin/python2.4 /usr/lib/python2.4 /usr/local/bin/python2.5 /usr/include/python2.4 /usr/share/man/man1/python.1.gz
```
-> From above command you can get the installation directory which is - **/usr/lib/python2.4/site-packages**
3) From here you can remove the packages and python egg files
```
[linux machine]# cd /usr/lib/python2.4/site-packages
[linux machine]# rm -rf paramiko-1.12.0-py2.4.egg paramiko-1.7.7.1-py2.4.egg paramiko-1.9.0-py2.4.egg
```
This worked for me, and I was able to uninstall the package which was troubling me :) | It might be better to remove related files by using bash to read commands, like the following:
```
sudo python setup.py install --record files.txt
sudo bash -c "cat files.txt | xargs rm -rf"
``` |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | Go to your python package directory and remove your .egg file,
e.g.:
In python 2.5(ubuntu): /usr/lib/python2.5/site-packages/
In python 2.6(ubuntu): /usr/local/lib/python2.6/dist-packages/ | I think you can open the setup.py, locate the package name, and then ask pip to uninstall it.
Assuming the name is available in a 'METADATA' variable:
```
pip uninstall $(python -c "from setup import METADATA; print METADATA['name']")
``` |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | Now python gives you the choice to install `pip` during the installation (I am on Windows, and at least python does so for Windows!). Considering you had chosen to install `pip` during installation of python (you don't actually have to choose because it is default), `pip` is already installed for you. Then, type in `pip` in command prompt, you should see a help come up. You can find necessary usage instructions there. E.g. `pip list` shows you the list of installed packages. You can use
```
pip uninstall package_name
```
to uninstall any package that you don't want anymore. Read more [here (pip documentation)](https://pip.pypa.io/en/stable/quickstart/). | **Install from local**
`python setup.py install`
**Uninstall from local**
`pip uninstall mypackage` |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | For me, the following mostly works:
have pip installed, e.g.:
```
$ easy_install pip
```
Check, how is your installed package named from pip point of view:
```
$ pip freeze
```
This shall list names of all packages, you have installed (and which were detected by pip).
The name can sometimes be long; then use just the name of the package shown at the end after `#egg=`. You can also in most cases ignore the version part (whatever follows `==` or `-`).
Then uninstall the package:
```
$ pip uninstall package.name.you.have.found
```
If it asks for confirmation about removing the package, then you are a lucky guy and it will be removed.
pip shall detect all packages, which were installed by pip. It shall also detect most of the packages installed via easy\_install or setup.py, but this may in some rare cases fail.
Here is real sample from my local test with package named `ttr.rdstmc` on MS Windows.
```
$ pip freeze |grep ttr
ttr.aws.s3==0.1.1dev
ttr.aws.utils.s3==0.3.0
ttr.utcutils==0.1.1dev
$ python setup.py develop
.....
.....
Finished processing dependencies for ttr.rdstmc==0.0.1dev
$ pip freeze |grep ttr
ttr.aws.s3==0.1.1dev
ttr.aws.utils.s3==0.3.0
-e hg+https://vlcinsky@bitbucket.org/vlcinsky/ttr.rdstmc@d61a9922920c508862602f7f39e496f7b99315f0#egg=ttr.rdstmc-dev
ttr.utcutils==0.1.1dev
$ pip uninstall ttr.rdstmc
Uninstalling ttr.rdstmc:
c:\python27\lib\site-packages\ttr.rdstmc.egg-link
Proceed (y/n)? y
Successfully uninstalled ttr.rdstmc
$ pip freeze |grep ttr
ttr.aws.s3==0.1.1dev
ttr.aws.utils.s3==0.3.0
ttr.utcutils==0.1.1dev
```
Edit 2015-05-20
---------------
All what is written above still applies, anyway, there are small modifications available now.
### Install pip in python 2.7.9 and python 3.4
Recent python versions come with a package `ensurepip` allowing to install pip even when being offline:
$ python -m ensurepip --upgrade
On some systems (like Debian Jessie) this is not available (to prevent breaking system python installation).
### Using `grep` or `find`
Examples above assume you have `grep` installed. I had (at the time I had MS Windows on my machine) installed a set of linux utilities (incl. grep). Alternatively, use native MS Windows `find` or simply ignore that filtering and find the name in a bit longer list of detected python packages. | Extending on what Martin said, recording the install output and a little bash scripting does the trick quite nicely. Here's what I do...
```
# loop over each recorded path and delete it
for i in $(cat install.record); do
    sudo rm "$i"
done
```
And presto. Uninstalled. |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | First record the files you have installed. You can repeat this command, even if you have previously run `setup.py install`:
```
python setup.py install --record files.txt
```
When you want to uninstall you can just:
```
sudo rm $(cat files.txt)
```
This works because the rm command takes a whitespace-separated list of files to delete and your installation record is just such a list. | I had run "python setup.py install" at some point in the past accidentally in my global environment, and had much difficulty uninstalling. These solutions didn't help. "pip uninstall <package>" didn't work with "Can't uninstall 'splunk-appinspect'. No files were found to uninstall." "sudo pip uninstall <package>" didn't work: "Cannot uninstall requirement splunk-appinspect, not installed". I tried uninstalling pip, deleting the pip cache, searching my hard drive for the package, etc...
"pip show <package>" eventually led me to the solution: the "Location:" field was pointing to a directory, and renaming that directory caused the package to be removed from pip's list. I renamed the directory back, and it didn't reappear in pip's list, and now I can reinstall my package in a virtualenv. |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | Not exactly answering the question, but something that helps me every day:
Install your packages with
```
pip install .
```
This puts the package in `$HOME/.local`. Uninstall with
```
pip uninstall <package_name>
``` | Extending on what Martin said, recording the install output and a little bash scripting does the trick quite nicely. Here's what I do...
```
# loop over each recorded path and delete it
for i in $(cat install.record); do
    sudo rm "$i"
done
```
And presto. Uninstalled. |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | Go to your python package directory and remove your .egg file,
e.g.:
In python 2.5(ubuntu): /usr/lib/python2.5/site-packages/
In python 2.6(ubuntu): /usr/local/lib/python2.6/dist-packages/ | **Install from local**
`python setup.py install`
**Uninstall from local**
`pip uninstall mypackage` |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | Not exactly answering the question, but something that helps me every day:
Install your packages with
```
pip install .
```
This puts the package in `$HOME/.local`. Uninstall with
```
pip uninstall <package_name>
``` | It might be better to remove related files by using bash to read commands, like the following:
```
sudo python setup.py install --record files.txt
sudo bash -c "cat files.txt | xargs rm -rf"
``` |
49,093,290 | I'm trying to install Python 3 alongside 2.7 with Homebrew but am receiving an error message I can't find a resolution to.
When attempting `brew update && brew install python3` I get the following error:
```
Error: python 2.7.12_2 is already installed
To upgrade to 3.6.4_3, run `brew upgrade python`
```
I want to leave the python 2.7 installation alone so I can have both Python 2 & 3 accessible on my machine so I'm nervous that upgrading will overwrite the current 2.7 installation.
I figure I can still perform a clean side-by-side install with the package from python.org, but I want to know why I'm getting this homebrew error
`brew doctor` shows the following Warnings containing python
```
Warning: "config" scripts exist outside your system or Homebrew directories.
`./configure` scripts often look for *-config scripts to determine if
software packages are installed, and what additional flags to use when
compiling and linking.
Having additional scripts in your path can confuse software installed via
Homebrew if the config script overrides a system or Homebrew provided
script of the same name. We found the following "config" scripts:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python-config
/Library/Frameworks/Python.framework/Versions/2.7/bin/python2-config
/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7-config
Warning: Python is installed at /Library/Frameworks/Python.framework
Homebrew only supports building against the System-provided Python or a
brewed Python. In particular, Pythons installed to /Library can interfere
with other software installs.
Warning: Some installed formulae are missing dependencies.
You should `brew install` the missing dependencies:
brew install python@2
``` | 2018/03/04 | [
"https://Stackoverflow.com/questions/49093290",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3673055/"
] | You can press F9 inside [7zip](https://7zipguides.com/), you'll get two panes. In the first, you navigate to the archive you want to extract, and in the second you navigate to the folder where you want your files extracted. This will skip the temp folder step... | you can change **root** value in config/filesystems.php
```
'public' => [
'driver' => 'local',
'root' => storage_path('app/public'),
'url' => env('APP_URL').'/storage',
'visibility' => 'public',
],
``` |
69,476,449 | I was working with two instances of a python class when I realized they were using the same values. I think I have a misunderstanding of what classes are used for.
A much simpler example:
```
class C():
def __init__(self,err = []):
self.err = err
def add(self):
self.err.append(0)
a = C()
print(a.err) # []
a.add()
print(a.err) # [0]
b = C()
print(b.err) # [0]
b.add()
print(a.err) # [0,0]
print(b.err) # [0,0]
```
I don't understand why b.err starts as [0] instead of []. And why adding an element to b affects a too. | 2021/10/07 | [
"https://Stackoverflow.com/questions/69476449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11934583/"
] | The reason is here:
`def __init__(self,err = []):`
the default `err` value is stored once, on the `__init__` function of class `C`. But `err` itself is mutable, so every time you append anything to it, the stored default keeps that value, and this same default `err` list is what gets assigned to `a.err` and `b.err`:
```
a = C()
print(a.err) # a.err is err ([])
a.add()
print(a.err) # err is [0]
b = C()
print(b.err) # reused err that is [0]
b.add() # err is [0, 0]
print(a.err) # [0,0]
print(b.err) # [0,0]
```
So basically `err` inside `a` and `b` is the same
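A minimal self-contained sketch that makes the sharing visible (the class below is just a stripped-down copy of the question's `C`):
```
class C:
    def __init__(self, err=[]):      # the [] is created once, when the function is defined
        self.err = err

a, b = C(), C()
print(a.err is b.err)                        # True - both instances hold the same list
print(a.err is C.__init__.__defaults__[0])   # True - it is the function's stored default
```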
Article: <https://florimond.dev/en/posts/2018/08/python-mutable-defaults-are-the-source-of-all-evil/> | I recommend that you check the Python core language *features* first. Check the official FAQs for Python 3, particularly <https://docs.python.org/3/faq/programming.html#why-are-default-values-shared-between-objects> is what you are looking for.
According to the recommendations, you have to change your code like so
```py
from typing import List
class C():
def __init__(self,err: List = None):
self.err = [] if err is None else err
def add(self):
self.err.append(0)
a = C()
print(a.err) # []
a.add()
print(a.err) # [0]
b = C()
print(b.err) # []
b.add()
print(a.err) # [0]
print(b.err) # [0]
```
I will also link to the concept of mutability in the docs as it seems that this was the issue for OP: <https://docs.python.org/3/glossary.html#term-mutable> |
69,476,449 | I was working with two instances of a python class when I realized they were using the same values. I think I have a misunderstanding of what classes are used for.
A much simpler example:
```
class C():
def __init__(self,err = []):
self.err = err
def add(self):
self.err.append(0)
a = C()
print(a.err) # []
a.add()
print(a.err) # [0]
b = C()
print(b.err) # [0]
b.add()
print(a.err) # [0,0]
print(b.err) # [0,0]
```
I don't understand why b.err starts as [0] instead of []. And why adding an element to b affects a too. | 2021/10/07 | [
"https://Stackoverflow.com/questions/69476449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11934583/"
] | The reason is here:
`def __init__(self,err = []):`
the default `err` value is stored once, on the `__init__` function of class `C`. But `err` itself is mutable, so every time you append anything to it, the stored default keeps that value, and this same default `err` list is what gets assigned to `a.err` and `b.err`:
```
a = C()
print(a.err) # a.err is err ([])
a.add()
print(a.err) # err is [0]
b = C()
print(b.err) # reused err that is [0]
b.add() # err is [0, 0]
print(a.err) # [0,0]
print(b.err) # [0,0]
```
So basically `err` inside `a` and `b` is the same
Article: <https://florimond.dev/en/posts/2018/08/python-mutable-defaults-are-the-source-of-all-evil/> | Here we want to note that a list is a mutable data structure in Python, which means we can change it as we want. A class is a blueprint of the object,
so after creating the object each instance should work independently - that is your question. Typically that is correct, but here the issue is that we
rely on a shared default value for the constructor parameter, so it changes after we create an object and append to it. We need to keep that concept in mind.
code review given below,
```
a = C()
print(a.err) # [] #empty err list
a.add()
print(a.err) # append 0 into err list
b = C()
print(b.err) # [0] - the new object reuses the same default list, which already holds the 0 appended via a
b.add() # append another 0 to the shared err list
print(a.err) # [0,0] #err list
print(b.err) # [0,0] #err list
``` |
69,476,449 | I was working with two instances of a python class when I realized they were using the same values. I think I have a misunderstanding of what classes are used for.
A much simpler example:
```
class C():
def __init__(self,err = []):
self.err = err
def add(self):
self.err.append(0)
a = C()
print(a.err) # []
a.add()
print(a.err) # [0]
b = C()
print(b.err) # [0]
b.add()
print(a.err) # [0,0]
print(b.err) # [0,0]
```
I don't understand why b.err starts as [0] instead of []. And why adding an element to b affects a too. | 2021/10/07 | [
"https://Stackoverflow.com/questions/69476449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11934583/"
] | I recommend that you check the Python core language *features* first. Check the official FAQs for Python 3, particularly <https://docs.python.org/3/faq/programming.html#why-are-default-values-shared-between-objects> is what you are looking for.
According to the recommendations, you have to change your code like so
```py
from typing import List
class C():
def __init__(self,err: List = None):
self.err = [] if err is None else err
def add(self):
self.err.append(0)
a = C()
print(a.err) # []
a.add()
print(a.err) # [0]
b = C()
print(b.err) # []
b.add()
print(a.err) # [0]
print(b.err) # [0]
```
I will also link to the concept of mutability in the docs as it seems that this was the issue for OP: <https://docs.python.org/3/glossary.html#term-mutable> | Here we want to note that a list is a mutable data structure in Python, which means we can change it as we want. A class is a blueprint of the object,
so after creating the object each instance should work independently - that is your question. Typically that is correct, but here the issue is that we
rely on a shared default value for the constructor parameter, so it changes after we create an object and append to it. We need to keep that concept in mind.
code review given below,
```
a = C()
print(a.err) # [] #empty err list
a.add()
print(a.err) # append 0 into err list
b = C()
print(b.err) # [0] - the new object reuses the same default list, which already holds the 0 appended via a
b.add() # append another 0 to the shared err list
print(a.err) # [0,0] #err list
print(b.err) # [0,0] #err list
``` |
46,906,854 | I just started with bash and I have been stuck for some time on a simple if;then statement.
I use bash to run QIIME commands which are written in python. These commands allow me to deal with microbial DNA. From the raw dataset from the sequencing I first have to check if it matches the format that QIIME can deal with before I can proceed to the rest of the commands.
```
module load QIIME/1.9.1-foss-2016a-Python-2.7.11
echo 'checking mapping file and demultiplexing'
validate_mapping_file.py -m $PWD/map.tsv -o $PWD/mapcheck > tmp.txt
n_words=`wc -w tmp.txt`
echo "n_words:"$n_words
if [ n_words = '9 temp.txt' ];then
split_libraries_fastq.py -i $PWD/forward_reads.fastq.gz -b $PWD/barcodes.fastq.gz -m $PWD/map.tsv -o $PWD/demultiplexed
else
echo 'Error(s) in map'
exit 1
fi
```
If the map is good I expect the following output (9 words):
```
No errors or warnings were found in mapping file.
```
If it is bad (16 words):
```
Errors and/or warnings detected in mapping file. Please check the log and html file for details.
```
I want to use this output to condition the following command, split\_libraries\_fastq.py.
I tried many different versions of the if;then statement and asked for help around, but nothing seems to be working.
Does anyone have an idea why the 'then' command is not run?
Also, I run it through a cluster.
Here is the output when my map is good; the second command is not run:
```
checking mapping file and demultiplexing
n_words:9 tmp.txt
Error(s) in map
```
Thanks | 2017/10/24 | [
"https://Stackoverflow.com/questions/46906854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8759792/"
] | **Key idea :** You can add `UITapGestureRecognizer` to `UIImageView`. Setting up a `selector` which will be fired for each tap. In the `selector` you can check for the co-ordinate where the tap was done. If the co-ordinate satisfy your condition for firing up an event, you can execute your task then.
**Adding the gesture recognizer:**
```
UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleImgViewTap:)];
[singleTap setNumberOfTapsRequired:1];
[yourImgView addGestureRecognizer:singleTap];
```
**Setting up the selector:**
```
-(void)handleImgViewTap:(UITapGestureRecognizer *)gestureRecognizer
{
// this method gonna fire everytime you tap on the
// image view. you have to check does the point where
// the tap was done, satisfy your path/area condition.
CGPoint point = [gestureRecognizer locationInView:yourImgView];
// here point.x and point.y is the location of the tap
// inside your image view.
if(/*your condition goes here*/)
{
// execute your staff here.
}
}
```
Hope it helps, Happy ios coding. | Given a view (or an imageview) you should define a UIBezierPath of your shape.
Add a tap recognizer to this view, and set the same view as the recognizer delegate.
In the delegate method use UIBezierPath.contains(\_:) to know whether the tap is inside the path and decide whether to fire the tap event.
Let me know if you need code example. |
42,553,713 | Currently, I have an issue with Xcode: the process **IBDesignablesAgentCocoaTouch** freezes Xcode each time I edit a Storyboard.
So, I want to kill this process with a bash or python script by checking every x seconds whether this process is running.
I think I can use this script, but how do I run it on a timer (checking every x seconds)?
```
pid=$(ps -fe | grep 'IBDesignablesAgentCocoaTouch' | awk '{print $2}')
if [[ -n $pid ]]; then
kill $pid
else
echo "Does not exist"
fi
``` | 2017/03/02 | [
"https://Stackoverflow.com/questions/42553713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4824110/"
] | Just use a while loop,
```
while sleep 20; do
pid=$(ps -fe | grep 'IBDesignablesAgentCocoaTouch' | awk '{print $2}')
if [[ -n $pid ]]; then
kill $pid
else
echo "Does not exist"
fi
done
```
The syntax `while sleep 20; do <code>` is similar to the one showed in comments `while true; do sleep 20 <code>`, except saving a few keystrokes. | Use this **If the process is named IBDesignablesAgentCocoaTouch**:
```
kill $(pgrep -x IBDesignablesAgentCocoaTouch)
```
If the process exists it will get killed, if not nothing will happen.
`pgrep` will get PID for you.
```
#!/bin/bash
while sleep 20; do
kill $(pgrep IBDesignablesAgentCocoaTouch)
done
```
If you don't want to use sleep, you can use `cron`. |
42,553,713 | Currently, I have an issue with Xcode: the process **IBDesignablesAgentCocoaTouch** freezes Xcode each time I edit a Storyboard.
So, I want to kill this process with a bash or python script by checking every x seconds whether this process is running.
I think I can use this script, but how do I run it on a timer (checking every x seconds)?
```
pid=$(ps -fe | grep 'IBDesignablesAgentCocoaTouch' | awk '{print $2}')
if [[ -n $pid ]]; then
kill $pid
else
echo "Does not exist"
fi
``` | 2017/03/02 | [
"https://Stackoverflow.com/questions/42553713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4824110/"
] | Just use a while loop,
```
while sleep 20; do
pid=$(ps -fe | grep 'IBDesignablesAgentCocoaTouch' | awk '{print $2}')
if [[ -n $pid ]]; then
kill $pid
else
echo "Does not exist"
fi
done
```
The syntax `while sleep 20; do <code>` is similar to the one showed in comments `while true; do sleep 20 <code>`, except saving a few keystrokes. | Have you tried to make it sleep for 20 seconds?
```bash
sleep 20
``` |
44,036,372 | Could anyone tell me what files I should download and which statements I must execute in the command line to install Matplotlib?
I have Python 2.7.13 on Windows 10 64 bit.
These are the files I unzipped:

All downloaded from: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>
Commands I executed:
```
python -m pip install -U pip setuptools
python -m pip install matplotlib
python -m pip install -U pip
```
I am getting these two errors when checking if Numpy and Matplotlib are installed.
```
>>> import numpy
**Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
import numpy
File "numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: %1 no es una aplicación Win32 válida.**
>>> import matplotlib
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
import matplotlib
File "matplotlib\__init__.py", line 122, in <module>
from matplotlib.cbook import is_string_like, mplDeprecation, dedent, get_label
File "matplotlib\cbook.py", line 33, in <module>
import numpy as np
File "numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: %1 no es una aplicación Win32 válida.
``` | 2017/05/17 | [
"https://Stackoverflow.com/questions/44036372",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5513436/"
] | Instead of iterating over a simple list of strings, you can store the section along with its target element as an object, then iterate.
```
<div id="introDiv"></div>
<div id="aboutDiv"></div>
<div id="linksDiv"></div>
var sections = [
{ section: "intro", target: "introDiv" },
{ section: "about", target: "aboutDiv" },
{ section: "links", target: "linksDiv" }
];
$.each(sections, function(index, value) {
$.ajax({
url: "/data/" + index,
method: "get"
})
.then(function(result) {
$("#" + value.target).html(result);
});
});
```
You can dynamically create the element too but I made them static to illustrate the mapping. You'll also need a delegate to find the dynamically created element.
If you want the numeric id... you don't even need to know the target since it's all being created on-the-fly.
```
var sections = ["intro", "about", "links"];
$.each(sections, function(index, value) {
$.ajax({
url: "/data/" + index,
method: "get"
})
.then(function(result) {
var div = $("<div></div>").attr({ id: "id_" + index });
div.html(result);
$("#page").append(div);
});
});
```
But you can't guarantee the order of the responses -- that's the nature of the asynchronous requests. | Just set the Ajax to run synchronously, so the each loop will wait for your Ajax to finish before incrementing `counter`.
```
var counter = 1;
["intro","about","links"].each( function (index) {
var frag='<div id="id_'+counter+'"></div>\n";
$("#page").append(frag);
$.ajax({
url: "/data/"+index,
success: function (response) {
$("#id_"+counter).html(response.responseText);
},
async: false
});
    counter++;
});
``` |
44,036,372 | Could anyone tell me what files I should download and which statements I must execute in the command line to install Matplotlib?
I have Python 2.7.13 on Windows 10 64 bit.
These are the files I unzipped:

All downloaded from: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>
Commands I executed:
```
python -m pip install -U pip setuptools
python -m pip install matplotlib
python -m pip install -U pip
```
I am getting these two errors when checking if Numpy and Matplotlib are installed.
```
>>> import numpy
**Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
import numpy
File "numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: %1 no es una aplicación Win32 válida.**
>>> import matplotlib
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
import matplotlib
File "matplotlib\__init__.py", line 122, in <module>
from matplotlib.cbook import is_string_like, mplDeprecation, dedent, get_label
File "matplotlib\cbook.py", line 33, in <module>
import numpy as np
File "numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: %1 no es una aplicación Win32 válida.
``` | 2017/05/17 | [
"https://Stackoverflow.com/questions/44036372",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5513436/"
] | Instead of iterating over a simple list of strings, you can store the section along with its target element as an object, then iterate.
```
<div id="introDiv"></div>
<div id="aboutDiv"></div>
<div id="linksDiv"></div>
var sections = [
{ section: "intro", target: "introDiv" },
{ section: "about", target: "aboutDiv" },
{ section: "links", target: "linksDiv" }
];
$.each(sections, function(index, value) {
$.ajax({
url: "/data/" + index,
method: "get"
})
.then(function(result) {
$("#" + value.target).html(result);
});
});
```
You can dynamically create the element too but I made them static to illustrate the mapping. You'll also need a delegate to find the dynamically created element.
If you want the numeric id... you don't even need to know the target since it's all being created on-the-fly.
```
var sections = ["intro", "about", "links"];
$.each(sections, function(index, value) {
$.ajax({
url: "/data/" + index,
method: "get"
})
.then(function(result) {
var div = $("<div></div>").attr({ id: "id_" + index });
div.html(result);
$("#page").append(div);
});
});
```
But you can't guarantee the order of the responses -- that's the nature of the asynchronous requests. | You don't need the `counter` variable, you can use `index` which should work as you want it to since you aren't incrementing it, rather it is managed by the loop.
I've not tested it, so I'm not sure if it works as expected.
```
["intro","about","links"].each( function (index) {
var frag='<div id="id_'+(index+1)+'"></div>\n";
$("#page").append(frag);
$.ajax({
url: "/data/"+index,
success: function (response) {
$("#id_"+(index+1)).html(response.responseText);
}
    });
});
``` |
44,036,372 | Could anyone tell me what files I should download and which statements I must execute in the command line to install Matplotlib?
I have Python 2.7.13 on Windows 10 64 bit.
These are the files I unzipped:

All downloaded from: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>
Commands I executed:
```
python -m pip install -U pip setuptools
python -m pip install matplotlib
python -m pip install -U pip
```
I am getting these two errors when checking if Numpy and Matplotlib are installed.
```
>>> import numpy
**Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
import numpy
File "numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: %1 no es una aplicación Win32 válida.**
>>> import matplotlib
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
import matplotlib
File "matplotlib\__init__.py", line 122, in <module>
from matplotlib.cbook import is_string_like, mplDeprecation, dedent, get_label
File "matplotlib\cbook.py", line 33, in <module>
import numpy as np
File "numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: %1 no es una aplicación Win32 válida.
``` | 2017/05/17 | [
"https://Stackoverflow.com/questions/44036372",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5513436/"
] | Instead of iterating over a simple list of strings, you can store the section along with its target element as an object, then iterate.
```
<div id="introDiv"></div>
<div id="aboutDiv"></div>
<div id="linksDiv"></div>
var sections = [
{ section: "intro", target: "introDiv" },
{ section: "about", target: "aboutDiv" },
{ section: "links", target: "linksDiv" }
];
$.each(sections, function(index, value) {
$.ajax({
url: "/data/" + index,
method: "get"
})
.then(function(result) {
$("#" + value.target).html(result);
});
});
```
You can dynamically create the element too but I made them static to illustrate the mapping. You'll also need a delegate to find the dynamically created element.
If you want the numeric id... you don't even need to know the target since it's all being created on-the-fly.
```
var sections = ["intro", "about", "links"];
$.each(sections, function(index, value) {
$.ajax({
url: "/data/" + index,
method: "get"
})
.then(function(result) {
var div = $("<div></div>").attr({ id: "id_" + index });
div.html(result);
$("#page").append(div);
});
});
```
But you can't guarantee the order of the responses -- that's the nature of the asynchronous requests. | Ideally you would have the server send the index back as part of the ajax response. Then you could just do something like this:
```
<div id="page"></div>
...
var counter = 1;
["intro","about","links"].each( function (index) {
var frag='<div id="id_'+counter+'"></div>\n";
$("#page").append(frag);
$.ajax({
url: "/data/"+index,
data: {index: index},
success: function (response) {
$("#id_"+response.index).html(response.responseText);
}
});
    counter++;
});
```
Alternatively, instead of sending `data: {index: index}`, you could just make the server parse the index out of the URL text. |
21,123,963 | I am trying to write a primes module in python. One thing I would like to be able to write is
```
>>> primes.primesLessThan(12)
[2, 3, 5, 7, 11]
```
However, I would also like to be able to write
```
>>> primes.primesLessThan.Sundaram(12)
[2, 3, 5, 7, 11]
```
to force it to use the Sieve of Sundaram. My original idea was to make primesLessThan a class with several static methods, but since \_\_init\_\_ can't return anything, this didn't let me achieve the first example. Would this be better done as a separate module that primes imports or is there something else I missed? | 2014/01/14 | [
"https://Stackoverflow.com/questions/21123963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3195702/"
] | As a rule of thumb, if you have a class without any instance variables, an empty init method and just a bunch of static methods, then it's probably going to be simpler to organize it as a module instead.
```
#sieves module
def Sundaram(n):
return [2,3,5,7]
def Eratosthenes(n):
return [2,3,5,7]
```
And then you can use the functions from the module
```
import primes.sieves
primes.sieves.Sundaram(12)
```
Finally, python functions are first class and can be passed around as function parameters or stored in data structures. This means that if you ever need to write some code that depends on an algorithm choice, you can just pass that as a parameter.
```
def test_first_primes(algorithm):
return algorithm(10) == [2,3,5,7]
print (test_first_primes(Sundaram))
print (test_first_primes(Eratosthenes))
``` | Two ways I can think of, to get these kinds of semantics.
* Make primes a class, and then make primesLessThan a property. It would also be a class, which implements `__iter__` etc. to simulate a list, while also having some subfunctions. primesLessThan would be a constructor to that class, with the argument having a default to allow passing through (a rough sketch of this follows the list below).
* Make primes itself support `__getitem__`/`__iter__`/etc. You can still use properties (with default), but make primesLessThan just set some internal variable in the class, and then return self. This lets you do them in any order i.e. primes.Sundaram.primesLessThan(12) would work the same way as primes.primesLessThan.Sundaram(12), though, that looks strange to me.
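A rough sketch of that first option (every name here is illustrative, and the trial-division helper is only a stand-in for a real Sundaram or Eratosthenes sieve):
```
class primesLessThan(object):
    """Calling the class builds a list-like object; .Sundaram() forces an algorithm."""

    def __init__(self, n, sieve=None):
        if sieve is None:
            sieve = _trial_division          # default algorithm
        self._primes = sieve(n)

    @classmethod
    def Sundaram(cls, n):
        # placeholder: swap _trial_division for a real Sieve of Sundaram
        return cls(n, sieve=_trial_division)

    def __iter__(self):
        return iter(self._primes)

    def __repr__(self):
        return repr(self._primes)


def _trial_division(n):
    # stand-in sieve: primes below n found by trial division
    return [p for p in range(2, n) if all(p % q for q in range(2, p))]


print(primesLessThan(12))            # [2, 3, 5, 7, 11]
print(primesLessThan.Sundaram(12))   # [2, 3, 5, 7, 11]
```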
Either one of these is going to be a bit weird on the return values... you can create something that acts like a list, but it obviously won't be. You can have repr show it like a list, and you'll be able to iterate over it like a list (i.e. `for prime in primes.Sundaram(12)`), but it can't return an actual list for obvious reasons.... |
42,230,269 | Searching for an alternative as OpenCV would not provide timestamps for **live** camera stream *(on Windows)*, which are required in my computer vision algorithm, I found ffmpeg and this excellent article <https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/>
The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.
Working up the python code on Windows, I received the video frames from ffmpeg stdout, but the stderr freezes after delivering the showinfo videofilter details (timestamp) for the first frame.
I recollected seeing on ffmpeg forum somewhere that the video filters like showinfo are bypassed when redirected. Is this why the following code does not work as expected?
*Expected: It should write video frames to disk as well as print timestamp details.
Actual: It writes video files but does not get the timestamp (showinfo) details.*
Here's the code I tried:
```
import subprocess as sp
import numpy
import cv2
command = [ 'ffmpeg',
'-i', 'e:\sample.wmv',
'-pix_fmt', 'rgb24',
'-vcodec', 'rawvideo',
'-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
'-an','-sn', #-an, -sn disables audio and sub-title processing respectively
'-f', 'image2pipe', '-'] # we need to output to a pipe
pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???
for i in range(10):
raw_image = pipe.stdout.read(1280*720*3)
img_info = pipe.stderr.read(244) # 244 characters is the current output of showinfo video filter
print "showinfo output", img_info
image1 = numpy.fromstring(raw_image, dtype='uint8')
image2 = image1.reshape((720,1280,3))
# write video frame to file just to verify
videoFrameName = 'Video_Frame{0}.png'.format(i)
cv2.imwrite(videoFrameName,image2)
# throw away the data in the pipe's buffer.
pipe.stdout.flush()
pipe.stderr.flush()
```
So how to still get the frame timestamps from ffmpeg into python code so that it can be used in my computer vision algorithm... | 2017/02/14 | [
"https://Stackoverflow.com/questions/42230269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/468716/"
] | Redirecting stderr works in python.
So instead of this `pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE)`
do this `pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.STDOUT)`
We could avoid redirection altogether by adding asynchronous calls to read both of ffmpeg's standard streams (stdout and stderr). This avoids any mixing of the video frames and timestamps, and thus the error-prone separation.
So the original code, modified to use the `threading` module, would look like this:
```
# Python script to read video frames and timestamps using ffmpeg
import subprocess as sp
import threading
import matplotlib.pyplot as plt
import numpy
import cv2
ffmpeg_command = [ 'ffmpeg',
'-nostats', # do not print extra statistics
#'-debug_ts', # -debug_ts could provide timestamps avoiding showinfo filter (-vcodec copy). Need to check by providing expected fps TODO
'-r', '30', # output 30 frames per second
'-i', 'e:\sample.wmv',
'-an','-sn', #-an, -sn disables audio and sub-title processing respectively
'-pix_fmt', 'rgb24',
'-vcodec', 'rawvideo',
#'-vcodec', 'copy', # very fast!, direct copy - Note: No Filters, No Decode/Encode, no quality loss
#'-vframes', '20', # process n video frames only. For Debugging
'-vf', 'showinfo', # showinfo videofilter provides frame timestamps as pts_time
'-f', 'image2pipe', 'pipe:1' ] # outputs to stdout pipe. can also use '-' which is redirected to pipe
# seperate method to read images on stdout asynchronously
def AppendProcStdout(proc, nbytes, AppendList):
while proc.poll() is None: # continue while the process is alive
AppendList.append(proc.stdout.read(nbytes)) # read image bytes at a time
# seperate method to read image info. on stderr asynchronously
def AppendProcStderr(proc, AppendList):
while proc.poll() is None: # continue while the process is alive
try: AppendList.append(proc.stderr.next()) # read stderr until empty
except StopIteration: continue # ignore stderr empty exception and continue
if __name__ == '__main__':
# run ffmpeg command
pipe = sp.Popen(ffmpeg_command, stdout=sp.PIPE, stderr=sp.PIPE)
# 2 threads to talk with ffmpeg stdout and stderr pipes
framesList = [];
frameDetailsList = []
appendFramesThread = threading.Thread(group=None, target=AppendProcStdout, name='FramesThread', args=(pipe, 1280*720*3, framesList), kwargs=None, verbose=None) # assuming rgb video frame with size 1280*720
appendInfoThread = threading.Thread(group=None, target=AppendProcStderr, name='InfoThread', args=(pipe, frameDetailsList), kwargs=None, verbose=None)
# start threads to capture ffmpeg frames and info.
appendFramesThread.start()
appendInfoThread.start()
# wait for few seconds and close - simulating cancel
import time; time.sleep(2)
pipe.terminate()
# check if threads finished and close
appendFramesThread.join()
appendInfoThread.join()
# save an image per 30 frames to disk
savedList = []
for cnt,raw_image in enumerate(framesList):
if (cnt%30 != 0): continue
image1 = numpy.fromstring(raw_image, dtype='uint8')
image2 = image1.reshape((720,1280,3)) # assuming rgb image with size 1280 X 720
# write video frame to file just to verify
videoFrameName = 'video_frame{0}.png'.format(cnt)
cv2.imwrite(videoFrameName,image2)
savedList.append('{} {}'.format(videoFrameName, image2.shape))
print '### Results ###'
print 'Images captured: ({}) \nImages saved to disk:{}\n'.format(len(framesList), savedList) # framesList contains all the video frames got from the ffmpeg
print 'Images info captured: \n', ''.join(frameDetailsList) # this contains all the timestamp details got from the ffmpeg showinfo videofilter and some initial noise text which can be easily removed while parsing
``` | You can use [MoviePy](http://zulko.github.io/moviepy/index.html):
```
import moviepy.editor as mpy
vid = mpy.VideoFileClip('e:\\sample.wmv')
for timestamp, raw_img in vid.iter_frames(with_times=True):
# do stuff
``` |
42,230,269 | Searching for an alternative as OpenCV would not provide timestamps for **live** camera stream *(on Windows)*, which are required in my computer vision algorithm, I found ffmpeg and this excellent article <https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/>
The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.
Working up the Python code on Windows, I received the video frames from ffmpeg's stdout, but stderr freezes after delivering the showinfo video filter details (timestamp) for the first frame.
I recollected seeing on ffmpeg forum somewhere that the video filters like showinfo are bypassed when redirected. Is this why the following code does not work as expected?
*Expected: It should write video frames to disk as well as print timestamp details.
Actual: It writes video files but does not get the timestamp (showinfo) details.*
Here's the code I tried:
```
import subprocess as sp
import numpy
import cv2
command = [ 'ffmpeg',
'-i', 'e:\sample.wmv',
'-pix_fmt', 'rgb24',
'-vcodec', 'rawvideo',
'-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
'-an','-sn', #-an, -sn disables audio and sub-title processing respectively
'-f', 'image2pipe', '-'] # we need to output to a pipe
pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???
for i in range(10):
raw_image = pipe.stdout.read(1280*720*3)
img_info = pipe.stderr.read(244) # 244 characters is the current output of showinfo video filter
print "showinfo output", img_info
image1 = numpy.fromstring(raw_image, dtype='uint8')
image2 = image1.reshape((720,1280,3))
# write video frame to file just to verify
videoFrameName = 'Video_Frame{0}.png'.format(i)
cv2.imwrite(videoFrameName,image2)
# throw away the data in the pipe's buffer.
pipe.stdout.flush()
pipe.stderr.flush()
```
So how to still get the frame timestamps from ffmpeg into python code so that it can be used in my computer vision algorithm... | 2017/02/14 | [
"https://Stackoverflow.com/questions/42230269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/468716/"
] | You can use [MoviePy](http://zulko.github.io/moviepy/index.html):
```
import moviepy.editor as mpy
vid = mpy.VideoFileClip('e:\\sample.wmv')
for timestamp, raw_img in vid.iter_frames(with_times=True):
# do stuff
``` | You can try to specify the buffer size so you're sure the whole frame fits in it :
```
bufsize = w*h*3 + 100
pipe = sp.Popen(command, bufsize=bufsize, stdout = sp.PIPE, stderr = sp.PIPE)
```
with this set up, you can normally read on pipe.stdout for your frames and pipe.stderr for its info |
42,230,269 | Searching for an alternative as OpenCV would not provide timestamps for **live** camera stream *(on Windows)*, which are required in my computer vision algorithm, I found ffmpeg and this excellent article <https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/>
The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.
Working up the Python code on Windows, I received the video frames from ffmpeg's stdout, but stderr freezes after delivering the showinfo video filter details (timestamp) for the first frame.
I recollected seeing on ffmpeg forum somewhere that the video filters like showinfo are bypassed when redirected. Is this why the following code does not work as expected?
*Expected: It should write video frames to disk as well as print timestamp details.
Actual: It writes video files but does not get the timestamp (showinfo) details.*
Here's the code I tried:
```
import subprocess as sp
import numpy
import cv2
command = [ 'ffmpeg',
'-i', 'e:\sample.wmv',
'-pix_fmt', 'rgb24',
'-vcodec', 'rawvideo',
'-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
'-an','-sn', #-an, -sn disables audio and sub-title processing respectively
'-f', 'image2pipe', '-'] # we need to output to a pipe
pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???
for i in range(10):
raw_image = pipe.stdout.read(1280*720*3)
img_info = pipe.stderr.read(244) # 244 characters is the current output of showinfo video filter
print "showinfo output", img_info
image1 = numpy.fromstring(raw_image, dtype='uint8')
image2 = image1.reshape((720,1280,3))
# write video frame to file just to verify
videoFrameName = 'Video_Frame{0}.png'.format(i)
cv2.imwrite(videoFrameName,image2)
# throw away the data in the pipe's buffer.
pipe.stdout.flush()
pipe.stderr.flush()
```
So how to still get the frame timestamps from ffmpeg into python code so that it can be used in my computer vision algorithm... | 2017/02/14 | [
"https://Stackoverflow.com/questions/42230269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/468716/"
] | Redirecting stderr works in python.
So instead of this `pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE)`
do this `pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.STDOUT)`
We could avoid redirection altogether by adding asynchronous calls to read both of ffmpeg's standard streams (stdout and stderr). This avoids any mixing of the video frames and timestamps, and thus the error-prone separation.
So the original code, modified to use the `threading` module, would look like this:
```
# Python script to read video frames and timestamps using ffmpeg
import subprocess as sp
import threading
import matplotlib.pyplot as plt
import numpy
import cv2
ffmpeg_command = [ 'ffmpeg',
'-nostats', # do not print extra statistics
#'-debug_ts', # -debug_ts could provide timestamps avoiding showinfo filter (-vcodec copy). Need to check by providing expected fps TODO
'-r', '30', # output 30 frames per second
'-i', 'e:\sample.wmv',
'-an','-sn', #-an, -sn disables audio and sub-title processing respectively
'-pix_fmt', 'rgb24',
'-vcodec', 'rawvideo',
#'-vcodec', 'copy', # very fast!, direct copy - Note: No Filters, No Decode/Encode, no quality loss
#'-vframes', '20', # process n video frames only. For Debugging
'-vf', 'showinfo', # showinfo videofilter provides frame timestamps as pts_time
'-f', 'image2pipe', 'pipe:1' ] # outputs to stdout pipe. can also use '-' which is redirected to pipe
# seperate method to read images on stdout asynchronously
def AppendProcStdout(proc, nbytes, AppendList):
while proc.poll() is None: # continue while the process is alive
AppendList.append(proc.stdout.read(nbytes)) # read image bytes at a time
# seperate method to read image info. on stderr asynchronously
def AppendProcStderr(proc, AppendList):
while proc.poll() is None: # continue while the process is alive
try: AppendList.append(proc.stderr.next()) # read stderr until empty
except StopIteration: continue # ignore stderr empty exception and continue
if __name__ == '__main__':
# run ffmpeg command
pipe = sp.Popen(ffmpeg_command, stdout=sp.PIPE, stderr=sp.PIPE)
# 2 threads to talk with ffmpeg stdout and stderr pipes
framesList = [];
frameDetailsList = []
appendFramesThread = threading.Thread(group=None, target=AppendProcStdout, name='FramesThread', args=(pipe, 1280*720*3, framesList), kwargs=None, verbose=None) # assuming rgb video frame with size 1280*720
appendInfoThread = threading.Thread(group=None, target=AppendProcStderr, name='InfoThread', args=(pipe, frameDetailsList), kwargs=None, verbose=None)
# start threads to capture ffmpeg frames and info.
appendFramesThread.start()
appendInfoThread.start()
# wait for few seconds and close - simulating cancel
import time; time.sleep(2)
pipe.terminate()
# check if threads finished and close
appendFramesThread.join()
appendInfoThread.join()
# save an image per 30 frames to disk
savedList = []
for cnt,raw_image in enumerate(framesList):
if (cnt%30 != 0): continue
image1 = numpy.fromstring(raw_image, dtype='uint8')
image2 = image1.reshape((720,1280,3)) # assuming rgb image with size 1280 X 720
# write video frame to file just to verify
videoFrameName = 'video_frame{0}.png'.format(cnt)
cv2.imwrite(videoFrameName,image2)
savedList.append('{} {}'.format(videoFrameName, image2.shape))
print '### Results ###'
print 'Images captured: ({}) \nImages saved to disk:{}\n'.format(len(framesList), savedList) # framesList contains all the video frames got from the ffmpeg
print 'Images info captured: \n', ''.join(frameDetailsList) # this contains all the timestamp details got from the ffmpeg showinfo videofilter and some initial noise text which can be easily removed while parsing
``` | You can try to specify the buffer size so you're sure the whole frame fits in it :
```
bufsize = w*h*3 + 100
pipe = sp.Popen(command, bufsize=bufsize, stdout = sp.PIPE, stderr = sp.PIPE)
```
with this set up, you can normally read on pipe.stdout for your frames and pipe.stderr for its info |
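Once the showinfo lines have been collected (for example in `frameDetailsList` from the threaded version above), the per-frame timestamps can be pulled out with a small regular expression. This is a hedged sketch; the `pts_time:` token is assumed from showinfo's usual output format:
```
import re

PTS_RE = re.compile(r'pts_time:\s*([0-9.]+)')

def extract_timestamps(showinfo_lines):
    """Return the pts_time values found in showinfo stderr lines."""
    stamps = []
    for line in showinfo_lines:
        match = PTS_RE.search(line)
        if match:
            stamps.append(float(match.group(1)))
    return stamps

# e.g. extract_timestamps(frameDetailsList) after the threaded capture above
```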
14,981,069 | How can I use [Brython](https://www.brython.info/) to compile Python to Javascript? I want to do this on my computer, so I can the run Javascript with nodejs, eg.
```
$ python hello.py
Hello world
$ brython hello.py -o hello.js
$ node hello.js
Hello world
```
The examples on the Brython website only explain how do this in the browser <http://www.brython.info/index_en.html> | 2013/02/20 | [
"https://Stackoverflow.com/questions/14981069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/284795/"
] | It seems they are very browser oriented; there is no command-line option out of the box.
You can try to use their code yourself from node.js; perhaps it will work easily. It seems the `$py2js(src, module)` function does the actual conversion, so maybe you can just run it with the Python code string as the first parameter.
Another option is to use pyjs: <http://pyjs.org/> which does something similar and has a command-line tool to do the conversion. | Brython has a console that runs in the browser, but not a compiler. It is meant for you to either import your Python scripts into the HTML file, or write your Python code into the HTML file. See pyjs if you want a conversion tool that runs before the page loads. |
14,981,069 | How can I use [Brython](https://www.brython.info/) to compile Python to Javascript? I want to do this on my computer, so I can the run Javascript with nodejs, eg.
```
$ python hello.py
Hello world
$ brython hello.py -o hello.js
$ node hello.js
Hello world
```
The examples on the Brython website only explain how do this in the browser <http://www.brython.info/index_en.html> | 2013/02/20 | [
"https://Stackoverflow.com/questions/14981069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/284795/"
] | It seems they are very browser oriented; there is no command-line option out of the box.
You can try to use their code yourself from node.js; perhaps it will work easily. It seems the `$py2js(src, module)` function does the actual conversion, so maybe you can just run it with the Python code string as the first parameter.
Another option is to use pyjs: <http://pyjs.org/> which does something similar and has a command-line tool to do the conversion. | It is possible to compile Python code to JavaScript and load it afterwards using an import statement. See [brython:ticket:222](https://github.com/brython-dev/brython/issues/222) for further details. You'll have to load the Brython JS lib in advance because, in the end, Python semantics are quite different from JavaScript's. You can include compiled .pyc.js code in .vfs.js files in order to speed up module import times.
Disclaimer: I'm a committer of the Brython project. |
14,981,069 | How can I use [Brython](https://www.brython.info/) to compile Python to Javascript? I want to do this on my computer, so I can the run Javascript with nodejs, eg.
```
$ python hello.py
Hello world
$ brython hello.py -o hello.js
$ node hello.js
Hello world
```
The examples on the Brython website only explain how do this in the browser <http://www.brython.info/index_en.html> | 2013/02/20 | [
"https://Stackoverflow.com/questions/14981069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/284795/"
] | It is possible to compile Python code to JavaScript and load it afterwards using an import statement. See [brython:ticket:222](https://github.com/brython-dev/brython/issues/222) for further details. You'll have to load the Brython JS lib in advance because, in the end, Python semantics are quite different from JavaScript's. You can include compiled .pyc.js code in .vfs.js files in order to speed up module import times.
Disclaimer: I'm a committer of the Brython project. | Brython has a console that runs in the browser, but not a compiler. It is meant for you to either import your Python scripts into the HTML file, or write your Python code into the HTML file. See pyjs if you want a conversion tool that runs before the page loads. |
41,460,013 | ```
#!/usr/bin/env python2.7
import vobject
abfile='/foo/bar/directory/file.vcf' #ab stands for address book
ablist = []
with open(abfile) as source_file:
for vcard in vobject.readComponents(source_file):
ablist.append(vcard)
print ablist[0]==ablist[1]
```
The above code should return True but it does not because the vcards are considered different even though they are the same. One of the ultimate objectives is to find a way to remove duplicates from the vcard file. Bonus points: Is there a way to make the comparison compatible with using one of the fast ways to uniqify a list in Python such as:
```
set(ablist)
```
to remove duplicates? (e.g. convert the vcards to strings somehow...). In the code above len(set(ablist)) returns 2 and not 1 as expected...
In contrast, if instead of comparing the whole vcard we compare one component of it as in:
```
print ablist[0].fn==ablist[1].fn
```
then we do see the expected behavior and receive True as response...
Here is the file contents used in the test (with only two identical vcards):
```
BEGIN:VCARD
VERSION:3.0
FN:Foo_bar1
N:;Foo_bar1;;;
EMAIL;TYPE=INTERNET:foobar1@foo.bar.com
END:VCARD
BEGIN:VCARD
VERSION:3.0
FN:Foo_bar1
N:;Foo_bar1;;;
EMAIL;TYPE=INTERNET:foobar1@foo.bar.com
END:VCARD
``` | 2017/01/04 | [
"https://Stackoverflow.com/questions/41460013",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5965670/"
] | @Brian Barcelona, concerning your answer, just to let you know, instead of:
```
ablist = []
with open(abfile) as source_file:
for vcard in vobject.readComponents(source_file):
ablist.append(vcard)
```
You could do:
```
with open(abfile) as source_file:
ablist = list(vobject.readComponents(source_file))
```
By the way, I have looked in the source code of this module and your solution is not guaranteed to work because different components of a vcard could be the same but not in the same order. I think the best way is for you to check each relevant component yourself. | I have found the following will work - the insight is to "serialize()" the vcard:
```
#!/usr/bin/env python2.7
import vobject
abfile='/foo/bar/directory/file.vcf' #ab stands for address book
ablist = []
with open(abfile) as source_file:
for vcard in vobject.readComponents(source_file):
ablist.append(vcard)
print ablist[0].serialize()==ablist[1].serialize()
```
However, there should be a better way to do this... any help would be most welcomed! |
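Combining the two suggestions above, here is a hedged sketch of removing duplicates by using `serialize()` as a hashable key, plus a component-wise comparison helper; the field names `fn` and `email` are assumptions about the cards at hand:
```
def same_card(a, b, fields=('fn', 'email')):
    # compare only the components you care about, as suggested above
    return all(
        getattr(a, f, None) is not None
        and getattr(b, f, None) is not None
        and getattr(a, f).value == getattr(b, f).value
        for f in fields
    )

seen = set()
unique_cards = []
for card in ablist:
    key = card.serialize()          # stable string form of the whole card
    if key not in seen:
        seen.add(key)
        unique_cards.append(card)
```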
32,652,485 | I'm trying to convert a date string into a date object in Python.
This is what I have so far:
```
old_date = '01 April 1986'
new_date = datetime.strptime(old_date,'%d %M %Y')
print new_date
```
But I get the following error.
>
> ValueError: time data '01 April 1986' does not match format '%d %M %Y'
>
>
>
Any guess? | 2015/09/18 | [
"https://Stackoverflow.com/questions/32652485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2728494/"
] | `%M` parses *minutes*, a numeric value, not a month. Your date specifies the month as `'April'`, so use `%B` to parse a *named* month:
```
>>> from datetime import datetime
>>> old_date = '01 April 1986'
>>> datetime.strptime(old_date,'%d %B %Y')
datetime.datetime(1986, 4, 1, 0, 0)
```
From the [*`strftime()` and `strptime()` Behavior* section](https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior):
>
> `%B`
>
> Month as locale’s full name.
>
> January, February, ..., December (en\_US);
>
> Januar, Februar, ..., Dezember (de\_DE)
>
>
> `%M`
>
> Minute as a zero-padded decimal number.
>
> 00, 01, ..., 59
>
>
> | You can first guess the date format the string is using and then convert it using that recognised format.
I wrote a simple date_tools utility that you can find at <https://github.com/henin/date_tools/>
### Installation: pip install date-tools
### Usage:
> from date_tools import date_guesser
> from datetime import datetime
>
> old_date = '01 April 1986'
> date_format = date_guesser.guess_date_format(old_date)
> new_date = datetime.strptime(old_date, date_format)
> print(new_date)
> |
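A widely used alternative for guessing date formats is the third-party python-dateutil package; a minimal sketch (not from the answers above):
```
from dateutil import parser

old_date = '01 April 1986'
new_date = parser.parse(old_date)
print(new_date)   # 1986-04-01 00:00:00
```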
64,934,782 | I am trying to read a JSON file, but it gives the error below.
*Data reference:
<https://github.com/ankitgoel1602/data-science/blob/master/json-data/level_1.json>
<https://github.com/ankitgoel1602/data-science/blob/master/json-data/multiple_levels.json>*
Code
```
with open("multiple_levels.json", 'r') as j:
contents = json.loads(j.read())
```
Error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-14-0fce326c8851> in <module>
1 with open("multiple_levels.json", 'r') as j:
----> 2 contents = json.loads(j.read())
~\AppData\Local\Continuum\anaconda3\lib\json\__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
346 parse_int is None and parse_float is None and
347 parse_constant is None and object_pairs_hook is None and not kw):
--> 348 return _default_decoder.decode(s)
349 if cls is None:
350 cls = JSONDecoder
~\AppData\Local\Continuum\anaconda3\lib\json\decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
~\AppData\Local\Continuum\anaconda3\lib\json\decoder.py in raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 7 column 1 (char 6)
``` | 2020/11/20 | [
"https://Stackoverflow.com/questions/64934782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5866905/"
] | You will want to use something like `map` instead
This is a simply change to your code:
```
formatedCharcters = data.results.map(character => {
``` | I am not sure that I completely understand your question, but here is one way you could achieve the result you are probably looking for. I have kept the forEach loop in case there is a specific reason for keeping it:
```
// Json data example
function getCharacters() {
const data = {
info: {
count: 671,
pages: 34,
next: 'https://rickandmortyapi.com/api/character?page=2',
prev: null,
},
results: [
{
id: 1,
name: 'Rick Sanchez',
status: 'Alive',
species: 'Human',
type: '',
gender: 'Male',
origin: {
name: 'Earth (C-137)',
url: 'https://rickandmortyapi.com/api/location/1',
},
location: {
name: 'Earth (Replacement Dimension)',
url: 'https://rickandmortyapi.com/api/location/20',
},
image: 'https://rickandmortyapi.com/api/character/avatar/1.jpeg',
episode: [
'https://rickandmortyapi.com/api/episode/1',
'https://rickandmortyapi.com/api/episode/2',
],
url: 'https://rickandmortyapi.com/api/character/1',
created: '2017-11-04T18:48:46.250Z'
},
{
id: 2,
name: 'second name',
status: 'Alive',
species: 'Human',
type: '',
gender: 'Female',
origin: {
name: 'Mars???',
url: 'sample-url.com/sample/example',
},
location: {
name: 'Mars??? (Replacement Dimension)',
url: 'sample-url.com/sample/example',
},
image: 'sample-url.com/sample/example',
episode: [
'sample-url.com/sample/example',
'sample-url.com/sample/example',
],
url: 'sample-url.com/sample/example',
created: '2019-12-04T11:48:46.250Z'
}
]
}
// here is the problem
const formattedCharacters = data.results;
const character_array = [];
formattedCharacters.forEach(character=>{
//here instead of returning multiple times, just push value into an array
character_array.push({
id: character.id,
name: character.name,
status: character.status,
species: character.species,
gender: character.gender,
location: character.location.name,
image: character.image
});
})
return character_array;
}
const characters = getCharacters();
// therefore:
const character_1 = characters[0];
console.log(character_1);
```
The above would produce an array of all the elements inside of `data.results` with the values you need.
Hope that helped, AlphaHowl. |
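For the original JSON question itself (the answers above address a different, JavaScript problem): an error like "Expecting value: line 7 column 1" often means the file holds several JSON documents back to back. A hedged sketch for reading such a file with `json.JSONDecoder.raw_decode`:
```
import json

def load_concatenated_json(path):
    decoder = json.JSONDecoder()
    with open(path) as f:
        text = f.read()
    objects, idx = [], 0
    while idx < len(text):
        if text[idx] in ' \t\r\n':      # skip whitespace between documents
            idx += 1
            continue
        obj, end = decoder.raw_decode(text, idx)
        objects.append(obj)
        idx = end
    return objects

contents = load_concatenated_json("multiple_levels.json")
```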
64,934,782 | I am trying to read a JSON file, but it gives the error below.
*Data reference:
<https://github.com/ankitgoel1602/data-science/blob/master/json-data/level_1.json>
<https://github.com/ankitgoel1602/data-science/blob/master/json-data/multiple_levels.json>*
Code
```
with open("multiple_levels.json", 'r') as j:
contents = json.loads(j.read())
```
Error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-14-0fce326c8851> in <module>
1 with open("multiple_levels.json", 'r') as j:
----> 2 contents = json.loads(j.read())
~\AppData\Local\Continuum\anaconda3\lib\json\__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
346 parse_int is None and parse_float is None and
347 parse_constant is None and object_pairs_hook is None and not kw):
--> 348 return _default_decoder.decode(s)
349 if cls is None:
350 cls = JSONDecoder
~\AppData\Local\Continuum\anaconda3\lib\json\decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
~\AppData\Local\Continuum\anaconda3\lib\json\decoder.py in raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 7 column 1 (char 6)
``` | 2020/11/20 | [
"https://Stackoverflow.com/questions/64934782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5866905/"
] | Here's a guess of what you're trying to achieve. I think you're trying to map data into objects with a forEach loop. Sadly this is not possible with forEach , but rather with the map function instead. Let me know if this is what you wanted. I am willing to edit my answer depending on any other details.
```js
const results = [{
id: 1,
name: 'Rick Sanchez',
status: 'Alive',
species: 'Human',
type: '',
gender: 'Male',
origin: {
name: 'Earth (C-137)',
url: 'https://rickandmortyapi.com/api/location/1',
},
location: {
name: 'Earth (Replacement Dimension)',
url: 'https://rickandmortyapi.com/api/location/20',
},
image: 'https://rickandmortyapi.com/api/character/avatar/1.jpeg',
episode: [
'https://rickandmortyapi.com/api/episode/1',
'https://rickandmortyapi.com/api/episode/2',
],
url: 'https://rickandmortyapi.com/api/character/1',
created: '2017-11-04T18:48:46.250Z',
},
{
id: 2,
name: 'Morty Smith',
status: 'Alive',
species: 'Human',
type: '',
gender: 'Male',
origin: {
name: 'Earth (C-137)',
url: 'https://rickandmortyapi.com/api/location/1',
},
location: {
name: 'Earth (Replacement Dimension)',
url: 'https://rickandmortyapi.com/api/location/20',
},
image: 'https://rickandmortyapi.com/api/character/avatar/2.jpeg',
episode: [
'https://rickandmortyapi.com/api/episode/1',
'https://rickandmortyapi.com/api/episode/2',
],
url: 'https://rickandmortyapi.com/api/character/1',
created: '2017-11-04T18:48:46.250Z',
}]
function getCharacters() {
const charachters = results.map(character => {
return {
id: character.id,
name: character.name,
status: character.status,
species: character.species,
gender: character.gender,
location: character.location.name,
image: character.image,
};
});
return charachters;
}
console.log(getCharacters());
``` | I am not sure that I completely understand your question, but here is one way you could achieve the result you are probably looking for. I have kept the forEach loop in case there is a specific reason for keeping it:
```
// Json data example
function getCharacters() {
const data = {
info: {
count: 671,
pages: 34,
next: 'https://rickandmortyapi.com/api/character?page=2',
prev: null,
},
results: [
{
id: 1,
name: 'Rick Sanchez',
status: 'Alive',
species: 'Human',
type: '',
gender: 'Male',
origin: {
name: 'Earth (C-137)',
url: 'https://rickandmortyapi.com/api/location/1',
},
location: {
name: 'Earth (Replacement Dimension)',
url: 'https://rickandmortyapi.com/api/location/20',
},
image: 'https://rickandmortyapi.com/api/character/avatar/1.jpeg',
episode: [
'https://rickandmortyapi.com/api/episode/1',
'https://rickandmortyapi.com/api/episode/2',
],
url: 'https://rickandmortyapi.com/api/character/1',
created: '2017-11-04T18:48:46.250Z'
},
{
id: 2,
name: 'second name',
status: 'Alive',
species: 'Human',
type: '',
gender: 'Female',
origin: {
name: 'Mars???',
url: 'sample-url.com/sample/example',
},
location: {
name: 'Mars??? (Replacement Dimension)',
url: 'sample-url.com/sample/example',
},
image: 'sample-url.com/sample/example',
episode: [
'sample-url.com/sample/example',
'sample-url.com/sample/example',
],
url: 'sample-url.com/sample/example',
created: '2019-12-04T11:48:46.250Z'
}
]
}
// here is the problem
const formattedCharacters = data.results;
const character_array = [];
formattedCharacters.forEach(character=>{
//here instead of returning multiple times, just push value into an array
character_array.push({
id: character.id,
name: character.name,
status: character.status,
species: character.species,
gender: character.gender,
location: character.location.name,
image: character.image
});
})
return character_array;
}
const characters = getCharacters();
// therefore:
const character_1 = characters[0];
console.log(character_1);
```
The above would produce an array of all the elements inside of `data.results` with the values you need.
Hope that helped, AlphaHowl. |
10,049,054 | I want it to give 'Number of donuts: many' when the argument is more than 10 and 'Number of donuts: ' when the argument is count<10. But it always gives me 'Number of donuts: many'
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if count < 10:
x = str(count)
else:
x = 'many'
print 'Number of donuts: ' + x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` | 2012/04/06 | [
"https://Stackoverflow.com/questions/10049054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318217/"
] | `count` is a string. Try `count = int(count)` | You should convert count to an int before comparing it to 10:
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if int(count) < 10:
x = count
else:
x = 'many'
print 'Number of donuts: %s' % x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` |
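As an alternative sketch (not from the answers above), argparse can do both the int conversion and the error handling for the command-line argument:
```
import argparse

def donuts(count):
    return 'Number of donuts: ' + (str(count) if count < 10 else 'many')

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('count', type=int)   # argparse rejects non-integers
    args = parser.parse_args()
    print(donuts(args.count))
```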
10,049,054 | I want it to give 'Number of donuts: many' when the argument is more than 10 and 'Number of donuts: ' when the argument is count<10. But it always gives me 'Number of donuts: many'
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if count < 10:
x = str(count)
else:
x = 'many'
print 'Number of donuts: ' + x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` | 2012/04/06 | [
"https://Stackoverflow.com/questions/10049054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318217/"
] | `sys.argv` will only contain strings, even if you provide integers as arguments, so for example if you called this as `python donuts.py 4`, then `sys.argv[1]` would be `'4'` and you would attempt the comparison `'4' < 10` when you actually want `4 < 10`.
So first, try to convert `sys.argv[1]` to an int:
```
def main():
donuts(int(sys.argv[1]))
```
You may want to add some error handling as well, in case an argument is not provided or it is not an integer:
```
def main():
try:
donuts(int(sys.argv[1]))
except IndexError:
print 'Missing argument'
except ValueError:
print 'Invalid argument'
``` | You should convert count to an int before comparing it to 10:
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if int(count) < 10:
x = count
else:
x = 'many'
print 'Number of donuts: %s' % x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` |
10,049,054 | I want it to give 'Number of donuts: many' when the argument is more than 10 and 'Number of donuts: ' when the argument is count<10. But it always gives me 'Number of donuts: many'
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if count < 10:
x = str(count)
else:
x = 'many'
print 'Number of donuts: ' + x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` | 2012/04/06 | [
"https://Stackoverflow.com/questions/10049054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318217/"
] | `count` is a string. Try `count = int(count)` | Try converting the `count` parameter to an int for the comparison:
```
if int (count) < 10:
x = str (count)
else:
x = 'many'
```
Because you're passing in a string argument for the `count` parameter, the comparison is failing. |
10,049,054 | I want it to give 'Number of donuts: many' when the argument is more than 10 and 'Number of donuts: ' when the argument is count<10. But it always gives me 'Number of donuts: many'
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if count < 10:
x = str(count)
else:
x = 'many'
print 'Number of donuts: ' + x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` | 2012/04/06 | [
"https://Stackoverflow.com/questions/10049054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318217/"
] | `sys.argv` will only contain strings, even if you provide integers as arguments, so for example if you called this as `python donuts.py 4`, then `sys.argv[1]` would be `'4'` and you would attempt the comparison `'4' < 10` when you actually want `4 < 10`.
So first, try to convert `sys.argv[1]` to an int:
```
def main():
donuts(int(sys.argv[1]))
```
You may want to add some error handling as well, in case an argument is not provided or it is not an integer:
```
def main():
try:
donuts(int(sys.argv[1]))
except IndexError:
print 'Missing argument'
except ValueError:
print 'Invalid argument'
``` | `count` is a string. Try `count = int(count)` |
10,049,054 | I want it to give 'Number of donuts: many' when the argument is more than 10 and 'Number of donuts: ' when the argument is count<10. But it always gives me 'Number of donuts: many'
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if count < 10:
x = str(count)
else:
x = 'many'
print 'Number of donuts: ' + x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` | 2012/04/06 | [
"https://Stackoverflow.com/questions/10049054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318217/"
] | `count` is a string. Try `count = int(count)` | You don't take the input in as an integer to begin with.
Change:
```
def main():
donuts(sys.argv[1])
```
to:
```
def main():
donuts(int(sys.argv[1]))
```
You will run into an issue if they put in a string or decimal instead of a numerical value, though. |
10,049,054 | I want it to give 'Number of donuts: many' when the argument is more than 10 and 'Number of donuts: ' when the argument is count<10. But it always gives me 'Number of donuts: many'
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if count < 10:
x = str(count)
else:
x = 'many'
print 'Number of donuts: ' + x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` | 2012/04/06 | [
"https://Stackoverflow.com/questions/10049054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318217/"
] | `sys.argv` will only contain strings, even if you provide integers as arguments, so for example if you called this as `python donuts.py 4`, then `sys.argv[1]` would be `'4'` and you would attempt the comparison `'4' < 10` when you actually want `4 < 10`.
So first, try to convert `sys.argv[1]` to an int:
```
def main():
donuts(int(sys.argv[1]))
```
You may want to add some error handling as well, in case an argument is not provided or it is not an integer:
```
def main():
try:
donuts(int(sys.argv[1]))
except IndexError:
print 'Missing argument'
except ValueError:
print 'Invalid argument'
``` | Try converting the `count` parameter to an int for the comparison:
```
if int (count) < 10:
x = str (count)
else:
x = 'many'
```
Because you're passing in a string argument for the `count` parameter, the comparison is failing. |
10,049,054 | I want it to give 'Number of donuts: many' when the argument is more than 10 and 'Number of donuts: ' when the argument is count<10. But it always gives me 'Number of donuts: many'
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if count < 10:
x = str(count)
else:
x = 'many'
print 'Number of donuts: ' + x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` | 2012/04/06 | [
"https://Stackoverflow.com/questions/10049054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318217/"
] | `sys.argv` will only contain strings, even if you provide integers as arguments, so for example if you called this as `python donuts.py 4`, then `sys.argv[1]` would be `'4'` and you would attempt the comparison `'4' < 10` when you actually want `4 < 10`.
So first, try to convert `sys.argv[1]` to an int:
```
def main():
donuts(int(sys.argv[1]))
```
You may want to add some error handling as well, in case an argument is not provided or it is not an integer:
```
def main():
try:
donuts(int(sys.argv[1]))
except IndexError:
print 'Missing argument'
except ValueError:
print 'Invalid argument'
``` | You don't take the input in as an integer to begin with.
Change:
```
def main():
donuts(sys.argv[1])
```
to:
```
def main():
donuts(int(sys.argv[1]))
```
You will run into an issue if they put in a string or decimal instead of a numerical value, though. |
28,656,559 | I am trying to build the \_pjsua C extension on Windows, using Visual Studio 2012.
I downloaded the source code from here - <http://www.pjsip.org/download.htm>.
I have Python27 installed, and have added the **C:\Python27\include** and the **C:\Python27\libs** directories to the project **include** and **library** directories.
I followed the instructions here - <https://trac.pjsip.org/repos/wiki/Python_SIP/Build_Install>.
In the **Microsoft Windows with Visual Studio** under **Step 1: Building the C Extension** its says:
```
Visual Studio 2005:
1. Open pjproject-vs8.sln from the PJSIP distribution directory.
2. Select either Debug or Release from the build configuration
Note: the Python module does not support other build configurations.
3. In Visual Studio, right click python_pjsua project from the Solution Explorer panel, and select Build from the pop-up menu.
Note: the python_pjsua project is not built by default if you build the solution, hence it needs to be built manually by right-clicking and select Build from the pop-up menu.
4. The _pjsua.pyd Python module will be placed in pjsip-apps\lib directory.
or in case of debug, it will be _pjsua_d.pyd
```
In step 3 (building the python\_pjsua project) I get error
```
pjsua error lnk1181 cannot open input file python24.lib
```
in the **C:/Python27/libs** directory I have the file **python27.lib**.
Does this C extension work only with Python 2.4 (python24)?
thanks in advance | 2015/02/22 | [
"https://Stackoverflow.com/questions/28656559",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1662033/"
] | No, it is not so.
You can use a simple hack:
**Copy python27.lib** and rename it **to python24.lib**, then place it in the **C:/Python27/libs** folder. Now you can build your extension, then run the **python setup-vc.py install** command in cmd. | The right solution for this is:
1. Open the python\_pjsua property pages (right click->Properties);
2. Linker->Input->Additional Dependencies.
3. Change python24.lib to python27.lib (or python24\_d.lib to python27\_d.lib if debugging).
It should work and compile with no problem. |
33,326,193 | I need help finding a way to calculate the total cost of items when the price changes once the item count goes above a certain number, in Python 3.5.
For example,
the first 6 items cost $8 each, and after that each item costs $5.
How can I achieve this without using an `if` statement and loop? | 2015/10/25 | [
"https://Stackoverflow.com/questions/33326193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5426865/"
] | I would agree with the replies to this post concerning MCVE.
As for an answer to the question (to get the grader to accept your answer), remember that when inheriting the (Parent) `class Person` for (child) `class USResident`, (Parent) `class Person` will need to be initialized in (child) `class USResident` with:
`Person.__init__(self, name)`
So the code that gave me a correct answer was:
```
class USResident(Person):
"""
A Person who resides in the US.
"""
def __init__(self, name, status):
"""
Initializes a Person object. A USResident object inherits
from Person and has one additional attribute:
status: a string, one of "citizen", "legal_resident", "illegal_resident"
Raises a ValueError if status is not one of those 3 strings
"""
Person.__init__(self, name)
if status != 'citizen' and status != 'legal_resident' and \
status != 'illegal_resident':
raise ValueError()
else:
self.status = status
def getStatus(self):
"""
Returns the status
"""
return self.status
```
The final exam is over but you can go to Final Exam Code Graders in the sidebar of the course to check this code.
I started this course late so I just got to this question and I too was perplexed as to why I wasn't getting the "correct" output as well (for upwards of an hour!).
For those of you not in the course, here's a picture:
[](https://i.stack.imgur.com/lq9Fy.png)
The course, for those who are interested, is "Introduction to Computer Science and Programming Using Python", or 6.00.1x, from [edX.org](https://www.edx.org/course/introduction-computer-science-mitx-6-00-1x-6) .
Unfortunately, only enrolled persons can access the code grader.
Cheers! | Actually it is very simple; it just tests whether you can use a constant in the class.
Something like `STATUS = ("c", "i", "l")`, and then raise the `ValueError` if the condition fails. |
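For the pricing question itself (the responses above address a different exercise), a hedged sketch that avoids both the `if` statement and the loop by using min/max:
```
def total_cost(n_items, base_price=8, discount_price=5, threshold=6):
    # first `threshold` items at base_price, the rest at discount_price
    return (min(n_items, threshold) * base_price
            + max(n_items - threshold, 0) * discount_price)

print(total_cost(4))    # 4 * 8 = 32
print(total_cost(10))   # 6 * 8 + 4 * 5 = 68
```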
42,689,852 | I'm trying to using the [Azure Python SDK](https://github.com/Azure/azure-sdk-for-python) to drive some server configuration management, but I'm having difficulty working out how I'm supposed to use the API to upload and configure SSL certificates.
I can successfully interrogate my Azure account to discover the App Services that are available with the `WebSiteManagementClient`, and I can interrogate and manipulate DNS configurations using the `DnsManagementClient`.
I am also able to manually add an SSL certificate to an Azure App Service using [the instructions on the Azure website](https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-configure-ssl-certificate).
However, it isn't at all clear to me what API endpoints I should be using to install a custom SSL certificate.
If I've got a `WebSiteManagementClient` named `client`, then I can see that:
* `client.certificates.get_certificate()` allows me to get a specific certificate by name - but `client.certificates` doesn't appear to have an API to list all available certificates.
* `client.certificates.create_or_update_certificate()` allows me to presumably idempotently create/update a certificate - but it requires a `CertificateEnvelope` argument, and I can't see where that object should be created.
* Assuming I manually upload a certificate, I can't work out what API endpoint I would use to install that certificate on a site. There are calls to `get_site_host_name_bindings` and `delete_site_host_name_binding`, but no obvious API to *create* the binding; there are dozens of calls to `configure_...` and `create_or_update_...`, but neither the naming of the API endpoints nor the API documentation is in any way illuminating as to which calls should be used.
Can anyone point me in the right direction? What Python API calls do I need to make to upload a certificate obtained from a third party, and install that certificate on an AppService under a specific domain?
Addendum
========
Here's some sample code, based on suggestions from @peter-pan-msft:
```
creds = ServicePrincipalCredentials(
client_id=UUID('<client>'),
secret='<secret>',
tenant=UUID('<tenant>'),
resource='https://vault.azure.net'
)
kv = KeyVaultClient(
credentials=creds
)
KEY_VAULT_URI = 'https://<vault>.vault.azure.net/'
with open('example.pfx', 'rb') as f:
data = f.read()
# Try to get the certificates
for cert in kv.get_certificates(KEY_VAULT_URI):
print(cert)
# or...
kv.import_certificate(KEY_VAULT_URI, 'cert name', data, 'password')
```
This code raises:
```
KeyVaultErrorException: Operation returned an invalid status code 'Forbidden'
```
The values for the credentials have worked for other operations, including getting and creating keys in the key store. If I modify the credentials to be known bad values, I get:
```
KeyVaultErrorException: Operation returned an invalid status code 'Unauthorized'
``` | 2017/03/09 | [
"https://Stackoverflow.com/questions/42689852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218383/"
] | If you follow the [App Service walkthrough for importing certificates from Key Vault](https://learn.microsoft.com/azure/app-service/configure-ssl-certificate#import-a-certificate-from-key-vault), it'll tell you that your app needs read permissions to access certificates from the vault. But to initially import your certificate to Key Vault as you're doing, you'll need to grant your service principal certificate import permissions as well. Trying to import a certificate without import permissions will yield a "Forbidden" error like the one you're seeing.
There are also new packages for working with Key Vault in Python that replace `azure-keyvault`:
* [azure-keyvault-certificates](https://pypi.org/project/azure-keyvault-certificates/) [(Migration guide)](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/keyvault/azure-keyvault-certificates/migration_guide.md)
* [azure-keyvault-keys](https://pypi.org/project/azure-keyvault-keys/) [(Migration guide)](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/keyvault/azure-keyvault-keys/migration_guide.md)
* [azure-keyvault-secrets](https://pypi.org/project/azure-keyvault-secrets/) [(Migration guide)](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/keyvault/azure-keyvault-secrets/migration_guide.md)
[azure-identity](https://pypi.org/project/azure-identity/) is the package that should be used with these for authentication.
Here's an example of importing a certificate using `azure-keyvault-certificates`:
```
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient
KEY_VAULT_URI = 'https://<vault>.vault.azure.net/'
credential = DefaultAzureCredential()
client = CertificateClient(KEY_VAULT_URI, credential)
with open('example.pfx', 'rb') as f:
data = f.read()
client.import_certificate("cert-name", data.encode(), password="password")
```
You can provide the same credentials that you used for `ServicePrincipalCredentials` by setting environment variables corresponding to the `client_id`, `secret`, and `tenant`:
```
export AZURE_CLIENT_ID="<client>"
export AZURE_CLIENT_SECRET="<secret>"
export AZURE_TENANT_ID="<tenant>"
```
(I work on the Azure SDK in Python) | According to your description, based on my understanding, I think you want to upload a certificate and use it on Azure App Service.
In my experience with the Azure Python SDK, there does not seem to be a Python API for directly uploading a certificate to Azure App Service. However, there is a workaround: import the certificate into Azure Key Vault and use it from Azure App Service. For more details, please see the documents listed below.
1. The [`Import Certificate`](https://learn.microsoft.com/en-us/rest/api/keyvault/importcertificate) REST API of Key Vault. The related Azure Python API is the method `import_certificate` from [here](https://github.com/Azure/azure-sdk-for-python/blob/61d49db3e3cc3d4821e823ce811f82b44a734b2a/azure-keyvault/azure/keyvault/key_vault_client.py#L173); you can refer to the [reference](http://azure-sdk-for-python.readthedocs.io/en/latest/sample_azure-keyvault.html) for Key Vault to learn how to use it.
2. There are two documents about using a Key Vault certificate from an Azure WebApp: [Use Azure Key Vault from a Web Application](https://learn.microsoft.com/en-us/azure/key-vault/key-vault-use-from-web-application) & [Deploying Azure Web App Certificate through Key Vault](https://blogs.msdn.microsoft.com/appserviceteam/2016/05/24/deploying-azure-web-app-certificate-through-key-vault/). The [`Create Or Update`](https://learn.microsoft.com/en-us/rest/api/appservice/certificates#Certificates_CreateOrUpdate) REST API of Certificates on Azure App Service is used for the deployment, and the related Python API is [`create_or_update`](https://github.com/Azure/azure-sdk-for-python/blob/00678eb1cff3053077374dd527b6f564fd0fbb34/azure-mgmt-web/azure/mgmt/web/operations/certificates_operations.py#L236); for its usage, please refer to [here](http://azure-sdk-for-python.readthedocs.io/en/latest/resourcemanagementapps.html).
Hope it helps.
---
As the Azure Python SDK reference for Key Vault says about [`Access Policies`](http://azure-sdk-for-python.readthedocs.io/en/latest/sample_azure-keyvault.html#access-policies):
>
> **Access policies**
>
>
> Some operations require the correct access policies for your credentials.
>
>
> If you get an “Unauthorized” error, please add the correct access policies to this credentials using the Azure Portal, the Azure CLI or the Key Vault Management SDK itself
>
>
>
Here are the steps for setting access policies for certificate operations via the [Azure CLI](https://learn.microsoft.com/en-us/azure/xplat-cli-install).
1. Get the Azure AD service principal for your application with the command `azure ad sp show --search <your-application-display-name>`, then copy the `Service Principal Names` (spn) value like `xxxx-xxxx-xxxxx-xxxx-xxxx`.
2. Set the policy for certificate operations with the command `azure keyvault set-policy brucechen --spn <your-application-spn> --perms-to-certificates <perms-to-certificates, such as [\"all\"]>`. The explanation of `<perms-to-certificates>` is below.
>
> JSON-encoded array of strings representing certificate operations; each string can be one of [all, get, list, delete, create, import, update, managecontacts, getissuers, listissuers, setissuers, deleteissuer
>
>
> |
63,482,435 | From the table below, I want to pull records with ID 1 and ID 3.
```
ID Status assigned
1 low yes
1 High no
2 low no
3 high yes
3 low yes
```
Please let me know how this can be done in Python. | 2020/08/19 | [
"https://Stackoverflow.com/questions/63482435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10214628/"
] | You can target the `lang` attribute on the blockquote tag and add a `direction` rule:
```
blockquote[lang="ar"] {
direction: rtl;
}
```
```css
blockquote {
background-color: #f4f7fc;
font-size: 20px;
color: #191514;
line-height: 1.7;
position: relative;
padding: 50px 30px 30px 115px;
font-family: 'Poppins', sans-serif;
clear: both;
margin: 40px 0;
overflow: hidden;
}
blockquote[lang="ar"] {
direction: rtl;
}
blockquote p {
margin-bottom: 0 !important;
}
blockquote cite {
font-style: normal;
display: block;
color: #9b6f45;
font-weight: 700;
font-size: 16px;
margin-top: 11px;
}
blockquote:before {
content: '\f10d';
font-family: "FontAwesome";
color: #d5aa6d;
font-size: 28px;
position: absolute;
left: 22px;
top: 10px;
font-style: normal;
background-image: -webkit-gradient(linear, left top, left bottom, from(#d5aa6d), to(#9b6f45));
background-image: -webkit-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -moz-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -ms-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -o-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: linear-gradient(top, #d5aa6d, #9b6f45);
filter: progid:DXImageTransform.Microsoft.gradient(startColorStr='#d5aa6d', endColorStr='#9b6f45');
background-color: transparent;
background-clip: text;
-moz-background-clip: text;
-webkit-background-clip: text;
text-fill-color: transparent;
-moz-text-fill-color: transparent;
-webkit-text-fill-color: transparent;
z-index: 2;
}
blockquote[lang="ar"]:before {
content: '\f10e';
right: 22px;
left: auto;
}
blockquote:after {
content: '\f10e';
font-family: "FontAwesome";
color: #d5aa6d;
font-size: 28px;
position: absolute;
right: 22px;
bottom: 10px;
font-style: normal;
background-image: -webkit-gradient(linear, left top, left bottom, from(#d5aa6d), to(#9b6f45));
background-image: -webkit-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -moz-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -ms-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -o-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: linear-gradient(top, #d5aa6d, #9b6f45);
filter: progid:DXImageTransform.Microsoft.gradient(startColorStr='#d5aa6d', endColorStr='#9b6f45');
background-color: transparent;
background-clip: text;
-moz-background-clip: text;
-webkit-background-clip: text;
text-fill-color: transparent;
-moz-text-fill-color: transparent;
-webkit-text-fill-color: transparent;
z-index: 2;
}
blockquote[lang="ar"]:after {
content: '\f10d';
right: auto;
left: 22px;
}
```
```html
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.14.0/css/all.min.css" integrity="sha512-1PKOgIY59xJ8Co8+NE6FZ+LOAZKjy+KY8iq0G4B3CyeY6wYHN3yt9PW0XpSriVlkMXe40PTKnXrLnZ9+fkDaog==" crossorigin="anonymous" />
<blockquote lang="en">
<ul>
<li>This is in english</li>
</ul>
</blockquote>
<blockquote lang="ar">
<ul>
<li>هذا باللغة العربية</li>
</ul>
</blockquote>
``` | Add a class to the blockquote element, and set that class's `direction` style to rtl. |
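For the original pandas-style question (the responses above are about CSS), a hedged sketch, assuming the intent is to keep every row whose ID appears with assigned == 'yes':
```
import pandas as pd

df = pd.DataFrame({
    'ID':       [1, 1, 2, 3, 3],
    'Status':   ['low', 'High', 'low', 'high', 'low'],
    'assigned': ['yes', 'no', 'no', 'yes', 'yes'],
})

wanted_ids = df.loc[df['assigned'] == 'yes', 'ID'].unique()  # IDs 1 and 3
result = df[df['ID'].isin(wanted_ids)]
print(result)
```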
73,581,339 | I want to show status every second in a very slow loop in python code, e.g.
```
from time import sleep

for i in range(100):
    sleep(1000000)  # think of this as a very slow job
    # I want to show status in the console every second
    # to know whether the job has stopped or not
```
The output image is, e.g.
```
$ python somejob.py
> 2022-09-02 13:04:10 | Status: running...
```
and the output updates every second, e.g.
```
$ python somejob.py
> 2022-09-02 13:04:11 | Status: running...
```
```
$ python somejob.py
> 2022-09-02 13:04:12 | Status: running...
```
```
$ python somejob.py
> 2022-09-02 13:04:13 | Status: running...
```
Any idea will be helpful. Thx!!! | 2022/09/02 | [
"https://Stackoverflow.com/questions/73581339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6766052/"
] | I think what you're looking for is something like the tqdm library: [github repo](https://github.com/tqdm/tqdm)
for example
```
from tqdm import tqdm
for i in tqdm(range(1000)):
continue # do something complex here
``` | You may use the [rich module](https://pypi.org/project/rich/) to display a progress bar:
```
import time
from rich.progress import track
for i in track(range(100)):
time.sleep(0.5)
```
Here's a screenshot within the run:
[](https://i.stack.imgur.com/hrEhp.png) |
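Both answers above show progress bars; if the literal ask is a timestamped status line every second while the slow job runs, a hedged sketch using a daemon thread:
```
import threading
import time
from datetime import datetime

def heartbeat(stop_event, interval=1.0):
    # print a status line every `interval` seconds until asked to stop
    while not stop_event.is_set():
        print('{:%Y-%m-%d %H:%M:%S} | Status: running...'.format(datetime.now()))
        stop_event.wait(interval)

stop = threading.Event()
threading.Thread(target=heartbeat, args=(stop,), daemon=True).start()

for i in range(100):
    time.sleep(5)          # stand-in for the very slow job

stop.set()
```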
64,327,172 | I am running a django app with a postgreSQL database and I am trying to send a very large dictionary (consisting of time-series data) to the database.
My goal is to write my data into the DB as fast as possible. I am using the library requests to send the data via an API-call (built with django REST):
My API-view is simple:
```
@api_view(["POST"])
def CreateDummy(request):
for elem, ts in request.data['time_series'].items():
TimeSeries.objects.create(data_json=ts)
msg = {"detail": "Created successfully"}
return Response(msg, status=status.HTTP_201_CREATED)
```
`request.data['time_series']` is a huge dictionary structured like this:
```
{Building1: {1:123, 2: 345, 4:567 .... 31536000: 2345}, .... Building30: {..... }}
```
That means I have **30 keys with 30 values, where each value is a dict with 31536000 elements.**
My API request looks like this (where data is my dictionary described above):
```
payload = {
"time_series": data,
}
requests.request(
"post", url=endpoint, json=payload
)
```
The code saves the time-series data to a jsonb-field in the backend. Now that works if I only loop over the first 4 elements of the dictionary. I can get that data in in about 1minute. But when I loop over the whole dict, my development server shuts down. I guess it's because the memory is insufficient. I get a `requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))`. Is the whole dict saved to memory before it starts iterating? I doubt it because I read that in python3 looping with `.items()` returns an iterator and is the preferred way to do this.
Is there a better way to deal with massive dicts in django/python? Should I loop through half of it and then through the other half? Or is there a faster way? Maybe using `pandas`? Or maybe sending the data differently? I guess I am looking for the most performant way to do this.
Happy to provide more code if needed.
Any help, hints or guides are very much appreciated! Thanks in advance
EDIT2: I think it is not my RAM usage or the size of the dict. I still have 5GiB of RAM left when the server shuts down. ~~And the size of the dict is 1176bytes~~ *Dict is much larger, see comments*
EDIT3: I can't even print the huge dict. It also shuts down then
EDIT4: When I split the data up and send it not all at once, the server can handle it. But when I try to query it back the server breaks again. It breaks on my production server (nginx AWS RDS setup) and it breaks on my local dev server. I am pretty sure it's because Django can't handle queries that big with my current setup. But how could I solve this?
EDIT5: So what I am looking for is a two part solution. One for the creation of the data and one for the querying of the data. The creation of the data I described above. But even if I get all that data into the database, I will still have problems getting it out again.
I tried this by creating the data not all together but every time-series on its own. So let's assume I have this huge data in my DB and I try to query it back. All time-series objects belong to a network so I tried this like so:
```
class TimeSeriesByTypeAndCreationMethod(ListAPIView):
"""Query time-series in specific network."""
serializer_class = TimeSeriesSerializer
def get_queryset(self):
"""Query time-series
Query by name of network, type of data, creation method and
source.
"""
network = self.kwargs["name_network"]
if TimeSeries.objects.filter(
network_element__network__name=network,
).exists():
time_series = TimeSeries.objects.filter(
network_element__network__name=network,
)
return time_series
else:
raise NotFound()
```
But the query breaks the server just like the data creation did before. I think this is also too much data to load. I thought I could use raw SQL to avoid breaking the server... Or is there also a better way?
EDIT6: Relevant models:
```
class TimeSeries(models.Model):
TYPE_DATA_CHOICES = [
....many choices...
]
CREATION_METHOD_CHOICES = [
....many choices...
]
description = models.CharField(
max_length=120,
null=True,
blank=True,
)
network_element = models.ForeignKey(
Building,
on_delete=models.CASCADE,
null=True,
blank=True,
)
type_data = models.CharField(
null=True,
blank=True,
max_length=30,
choices=TYPE_DATA_CHOICES,
)
creation_method = models.CharField(
null=True,
blank=True,
max_length=30,
choices=CREATION_METHOD_CHOICES,
)
source = models.CharField(
null=True,
blank=True,
max_length=300
)
data_json = JSONField(
help_text="Data for time series in JSON format. Valid JSON expected."
)
creation_date = models.DateTimeField(auto_now=True, null=True, blank=True)
def __str__(self):
return f"{self.creation_method}:{self.type_data}"
class Building(models.Model):
USAGE_CHOICES = [
...
]
name = models.CharField(
max_length=120,
null=True,
blank=True,
)
street = models.CharField(
max_length=120,
null=True,
blank=True,
)
house_number = models.CharField(
max_length=20,
null=True,
blank=True,
)
zip_code = models.CharField(
max_length=5,
null=True,
blank=True,
)
city = models.CharField(
max_length=120,
null=True,
blank=True,
)
usage = models.CharField(
max_length=120,
choices=USAGE_CHOICES,
null=True,
blank=True,
)
.....many more fields....
``` | 2020/10/13 | [
"https://Stackoverflow.com/questions/64327172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9893391/"
] | You can solve your issues using two techniques.
Data Creation
-------------
Use bulk\_create to insert a large number of records, if SQL error happens due to large query size etc then provide the `batch_size` in `bulk_create`.
```
records = []
for elem, ts in request.data['time_series'].items():
records.append(
TimeSeries(data_json=ts)
)
# setting batch size to 1000
TimeSeries.objects.bulk_create(records, batch_size=1000)
```
There are some caveats with bulk\_create: for example, it will not send signals. See more in the [Doc](https://docs.djangoproject.com/en/3.1/ref/models/querysets/#bulk-create)
Data Retrieval
--------------
Configure rest framework to use pagination **default configuration**
```
REST_FRAMEWORK = {
'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination',
'PAGE_SIZE': 100
}
```
For custom configuration use
```
class TimeSeriesResultsSetPagination(PageNumberPagination):
page_size = 50
page_size_query_param = 'page_size'
max_page_size = 10000
class BillingRecordsView(generics.ListAPIView):
serializer_class = TimeSeriesSerializer
pagination_class = TimeSeriesResultsSetPagination
def get_queryset(self):
"""Query time-series
Query by name of network, type of data, creation method and
source.
"""
network = self.kwargs["name_network"]
if TimeSeries.objects.filter(
network_element__network__name=network,
).exists():
time_series = TimeSeries.objects.filter(
network_element__network__name=network,
)
return time_series
else:
raise NotFound()
```
See other techniques for pagination at <https://www.django-rest-framework.org/api-guide/pagination/> | @micromegas while your solution is theoretically correct, calling create() many times in a loop is, I believe, what causes the ConnectionError exception.
try to refactor to something like:
```
big_data_holder = []
for elem, ts in request.data['time_series'].items():
big_data_holder.append(
TimeSeries(data_json=ts)
)
# examine the structure
print(big_data_holder)
TimeSeries.objects.bulk_create(big_data_holder)
```
Please check for some downsides of this method:
[Django Docs bulk\_create](https://docs.djangoproject.com/en/3.1/ref/models/querysets/#bulk-create) |
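As a rough sketch of how the creation side could be chunked so that neither memory nor a single bulk query blows up, assuming `request.data['time_series']` is the building-to-series dict described in the question (the batch size of 1000 is an arbitrary starting point, not a recommendation):

```
def iter_batches(time_series, batch_size=1000):
    """Yield lists of unsaved TimeSeries objects, batch_size at a time."""
    batch = []
    for building, series in time_series.items():
        batch.append(TimeSeries(data_json=series))
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

for batch in iter_batches(request.data['time_series']):
    TimeSeries.objects.bulk_create(batch)
```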
46,145,221 | What is the difference between `os.path.getsize(path)` and `os.stat`? Which one is best to use in Python 3? When do we use them? And why do we have two solutions for the same thing?
I found [this](https://stackoverflow.com/questions/18962166/python-os-statfile-name-st-size-versus-os-path-getsizefile-name) answer but I couldn't understand what this quote means:
>
> From this, it seems pretty clear that there is no reason to expect the two approaches to behave differently (except perhaps due to the different structures of the loops in your code)
>
>
>
Specifically, why do we have two approaches, and what is the difference between them? | 2017/09/10 | [
"https://Stackoverflow.com/questions/46145221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4958447/"
] | `stat` is a POSIX system call (available on Linux, Unix and even Windows) which returns a bunch of information (size, type, protection bits...)
Python has to call it at some point to get the size ([and it does](https://stackoverflow.com/questions/18962166/python-os-statfile-name-st-size-versus-os-path-getsizefile-name)), but there's no system call to get *only* the size.
So they're the same performance-wise (`os.stat` may be marginally faster, since `getsize` adds one extra function call, but that is not I/O related). It's just that `os.path.getsize` is simpler to write.
That said, to be able to call `os.path.getsize` you have to make sure that the path is actually a *file*. When called on a directory, `getsize` returns some value (tested on Windows) which is probably related to the size of the node, so you have to use `os.path.isfile` first: another call to `os.stat`.
In the end, if you want to maximize performance, you have to use `os.stat`, check the returned info to see if the path is a file, then use the `st_size` field. That way you're calling `stat` only once.
If you're using `os.walk` to scan the directory, you're exposed to more hidden `stat` calls, so look into `os.scandir` (Python 3.5).
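A small sketch of that idea: one `stat` per directory entry via `os.scandir`, with the cached result serving both the file check and the size on most platforms:

```
import os

def file_sizes(directory):
    """Map file name -> size in bytes, one stat per entry."""
    sizes = {}
    for entry in os.scandir(directory):
        if entry.is_file():  # uses cached information where the OS provides it
            sizes[entry.name] = entry.stat().st_size
    return sizes

print(file_sizes("."))
```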
Related:
* [Faster way to find large files with Python?](https://stackoverflow.com/questions/46144952/faster-way-to-find-large-files-with-python/46145070#46145070)
* [Python os.stat(file\_name).st\_size versus os.path.getsize(file\_name)](https://stackoverflow.com/questions/18962166/python-os-statfile-name-st-size-versus-os-path-getsizefile-name) looks like a duplicate but the question (and answer) is different | The answer you are linking to shows that one simply calls the other:
```
def getsize(filename):
"""Return the size of a file, reported by os.stat()."""
return os.stat(filename).st_size
```
So fundamentally, both functions use `os.stat`.
Why? Probably because they had similar needs in two different packages, `path` and `stat`, and didn't want to duplicate code. |
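A quick check that the two really do agree (any existing file path will do):

```
import os

path = "example.txt"  # any existing file
print(os.path.getsize(path))   # e.g. 1234
print(os.stat(path).st_size)   # same number, via the underlying stat call
```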
68,856,582 | Is there a similar substituite to `.exit()` and `sys.exit()` that stops the program from running **but without terminating python entirely**?
Here's something similar to what I want to achieve:
```
import random
my_num = random.uniform(0, 1)
if my_num > 0.9:
# stop the code here
# some other huge blocks of codes
```
Here's why I think I need to find such a command/function:
1. I want the code to run automatically so definitely not "Ctrl+C"
2. I don't want python to terminate because I want to check other previously defined variables
3. I think `else` does not work well because there will be a huge amount of other code after the condition check, and other .py files will be run by `os.system()`
4. Of course, force-triggering an error would do the job, but is that the only way?
"https://Stackoverflow.com/questions/68856582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14610650/"
] | When you run your script, use the `-i` option. Then call `sys.exit()` where you want to stop.
```
python3 -i myscript.py
```
```py
if my_num > 0.9:
sys.exit()
```
Python won't actually exit when the `-i` option is used. It will instead place you at the REPL prompt.
---
The next best method, if you can't use the `-i` option, is to enter an emulated REPL provided by the `code` module.
```py
import sys
import code
import random
import readline
while True:
my_num = random.uniform(0, 1)
if my_num > 0.9:
console = code.InteractiveConsole(globals())
console.interact(banner="You are now in Python REPL. ^D exits.",
exitmsg="Bye!")
break
```
That will start a REPL that is not the built-in one, but one written in Python itself. | If you don't want to terminate the code, you can tell python to "sleep":
```
import random
import time
my_num = random.uniform(0, 1)
if my_num > 0.9:
time.sleep(50) #==== 50 seconds. Use any number.
``` |
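Another standard-library option in the same spirit, for Python 3.7+: drop into the debugger at that point, inspect your variables, then continue (`c`) or quit (`q`). A minimal sketch:

```
import random

my_num = random.uniform(0, 1)
if my_num > 0.9:
    breakpoint()  # opens pdb here; previously defined variables are inspectable

# the rest of the script runs once you continue from the debugger
print("carrying on with the other blocks of code")
```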
15,661,841 | Is there any video tutorial or book from where I can learn python web programming in django platform in Eclipse(pydev).Please Help | 2013/03/27 | [
"https://Stackoverflow.com/questions/15661841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2183898/"
] | Try [The Django Book](http://www.djangobook.com/en/2.0/index.html), or start with the [tutorial](https://docs.djangoproject.com/en/1.5/intro/tutorial01/). | <http://pydev.org/manual_adv_django.html> should get you started. If you're new to eclipse, I would find a tutorial on that first as they have a lot of their own lingo. |
15,661,841 | Is there any video tutorial or book from where I can learn python web programming in django platform in Eclipse(pydev).Please Help | 2013/03/27 | [
"https://Stackoverflow.com/questions/15661841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2183898/"
] | If you insist on starting with Eclipse, [this series is a good starting point](https://www.youtube.com/watch?v=o1STjuSTKcU), I guess. | <http://pydev.org/manual_adv_django.html> should get you started. If you're new to Eclipse, I would find a tutorial on that first as they have a lot of their own lingo.
38,219,216 | I'm using `python` to crawl a webpage and save it. And the code works properly. But when I open the web page it just shows the website name i.e., **<http://www.indiabix.com>** and not the actual content.
You can just go to the website and save one of its pages, **NOT** the homepage but other pages like **<http://www.indiabix.com/database/questions-and-answers/>**. And when you open it, the page just shows this
[](https://i.stack.imgur.com/iw4w7.png)
and not this
[](https://i.stack.imgur.com/xPspu.png)
The code I've written is simple
```
def writeToFile(link, name, title):
response = urllib2.urlopen(link)
webContent = response.read()
f = open(name + '/' + title, 'w')
f.write(webContent)
f.close
```
You just pass the link, directory name and title of file.
I have checked in Chrome, Firefox and Safari and all show the same output. How can I resolve this issue to display the entire saved page fully.
Thank you. | 2016/07/06 | [
"https://Stackoverflow.com/questions/38219216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3620992/"
] | in this version
```
implementation 'com.github.PhilJay:MPAndroidChart:v3.0.3'
```
try it
```
public class MainActivity extends AppCompatActivity {
private LineChart lc;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initView();
initData();
}
public int ran() {
Random ran = new Random();
int i = ran.nextInt(199);
return i;
}
public int ran2() {
Random ran = new Random();
int i = ran.nextInt(49);
return i;
}
public void initData() {
lc.setExtraOffsets(12,50,24,0); //padding
setDescription("two lines example");
lc.animateXY(500, 0);
setLegend();
setYAxis();
setXAxis();
setChartData();
}
public void setLegend() {
Legend legend = lc.getLegend();
legend.setForm(Legend.LegendForm.LINE);
legend.setFormSize(20);
legend.setTextSize(20f);
legend.setFormLineWidth(1);
legend.setHorizontalAlignment(Legend.LegendHorizontalAlignment.CENTER);
legend.setTextColor(Color.BLACK);
}
public void setDescription(String descriptionStr) {
Description description = new Description();
description.setText(descriptionStr);
WindowManager wm = (WindowManager) getSystemService(Context.WINDOW_SERVICE);
DisplayMetrics outMetrics = new DisplayMetrics();
wm.getDefaultDisplay().getMetrics(outMetrics);
Paint paint = new Paint();
paint.setTextSize(20);
float x = outMetrics.widthPixels - Utils.convertDpToPixel(12);
float y = Utils.calcTextHeight(paint, descriptionStr) + Utils.convertDpToPixel(12);
description.setPosition(x, y);
lc.setDescription(description);
}
public void setYAxis() {
final YAxis yAxisLeft = lc.getAxisLeft();
yAxisLeft.setAxisMaximum(200);
yAxisLeft.setAxisMinimum(0);
yAxisLeft.setGranularity(10);
yAxisLeft.setTextSize(12f);
yAxisLeft.setTextColor(Color.BLACK);
yAxisLeft.setValueFormatter(new IAxisValueFormatter() {
@Override
public String getFormattedValue(float value, AxisBase axis) {
return value == yAxisLeft.getAxisMinimum() ? (int) value + "" : (int) value +"";
}
});
lc.getAxisRight().setEnabled(false);
}
public void setXAxis() {
XAxis xAxis = lc.getXAxis();
xAxis.setPosition(XAxis.XAxisPosition.BOTTOM);
xAxis.setDrawGridLines(false);
xAxis.setLabelCount(20);
xAxis.setTextColor(Color.BLACK);
xAxis.setTextSize(12f);
xAxis.setGranularity(1);
xAxis.setAxisMinimum(0);
xAxis.setAxisMaximum(100);
xAxis.setValueFormatter(new IAxisValueFormatter() {
@Override
public String getFormattedValue(float value, AxisBase axis) {
return value == 0 ? "example" : (int) value + "";
}
});
}
public void setChartData() {
List<Entry> yVals1 = new ArrayList<>();
for (int i = 0; i < 100; i++) {
int j = ran();
yVals1.add(new Entry(1 + i,j));
}
List<Entry> yVals2 = new ArrayList<>();
for (int i = 0; i < 100; i++) {
int j = ran2();
yVals2.add(new Entry(1 + i,j));
}
LineDataSet lineDataSet1 = new LineDataSet(yVals1, "ex1");
lineDataSet1.setValueTextSize(20);
lineDataSet1.setDrawCircleHole(true);
lineDataSet1.setColor(Color.MAGENTA);
lineDataSet1.setMode(LineDataSet.Mode.LINEAR);
lineDataSet1.setDrawCircles(true);
lineDataSet1.setCubicIntensity(0.15f);
lineDataSet1.setCircleColor(Color.MAGENTA);
lineDataSet1.setLineWidth(1);
LineDataSet lineDataSet2 = new LineDataSet(yVals2, "ex2");
lineDataSet2.setValueTextSize(20);
lineDataSet2.setDrawCircleHole(true);
lineDataSet2.setColor(Color.BLUE);
lineDataSet2.setMode(LineDataSet.Mode.LINEAR);
lineDataSet2.setDrawCircles(true);
lineDataSet2.setCubicIntensity(0.15f);
lineDataSet2.setCircleColor(Color.BLUE);
lineDataSet2.setLineWidth(1);
.
.
.
ArrayList<ILineDataSet> dataSets = new ArrayList<ILineDataSet>();
dataSets.add(lineDataSet1);
dataSets.add(lineDataSet2);
LineData lineData = new LineData(dataSets);
lc.setVisibleXRangeMaximum(5);
lc.setScaleXEnabled(true);
lc.setData(lineData);
}
```
And it looks like this:
 | Version 3.0 is initialized like so:
```
LineChart lineChart = new LineChart(context);
lineChart.setMinimumHeight(ToolBox.dpToPixels(context, 300));
lineChart.setMinimumWidth(ToolBox.getScreenWidth());
ArrayList<Entry> yVals = new ArrayList<>();
for(int i = 0; i < frigbot.getEquipment().getTemperatures().size(); i++)
{
Temperature temperature = frigbot.getEquipment().getTemperatures().get(i);
yVals.add(new Entry(
i, temperature.getValue().floatValue()
));
}
LineDataSet dataSet = new LineDataSet(yVals, "graph name");
dataSet.setMode(LineDataSet.Mode.CUBIC_BEZIER);
dataSet.setCubicIntensity(0.2f);
LineData data = new LineData(dataSet);
lineChart.setData(data);
```
It appears we can't specify custom horizontal labels; LineChart itself will automatically generate the horizontal and vertical axis labelling. |
71,153,492 | I'm having multiple errors while running this VGG training code (code and errors shown below). I don't know if it's because of my dataset or something else.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics.pairwise import cosine_similarity
import os
import scipy
train_directory = 'sign_data/train' #To be changed
test_directory = 'sign_data/test' #To be changed
train_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range = 0.1,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.1
)
train_generator = train_datagen.flow_from_directory(
train_directory,
target_size = (224, 224),
color_mode = 'rgb',
shuffle = True,
batch_size=32
)
test_datagen = ImageDataGenerator(
rescale = 1./255,
)
test_generator = test_datagen.flow_from_directory(
test_directory,
target_size = (224, 224),
color_mode = 'rgb',
shuffle = True,
batch_size=32
)
from tensorflow.keras.applications.vgg16 import VGG16
vgg_basemodel = VGG16(include_top=True)
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)
vgg_model = tf.keras.Sequential(vgg_basemodel.layers[:-1])
vgg_model.add(tf.keras.layers.Dense(10, activation = 'softmax'))
# Freezing original layers
for layer in vgg_model.layers[:-1]:
layer.trainable = False
vgg_model.compile(loss='categorical_crossentropy',
optimizer=tf.keras.optimizers.SGD(momentum=0.9, learning_rate=0.001, decay=0.01),
metrics=['accuracy'])
history = vgg_model.fit(train_generator,
epochs=30,
batch_size=64,
validation_data=test_generator,
callbacks=[early_stopping])
# finetuning with all layers set trainable
for layer in vgg_model.layers:
layer.trainable = True
vgg_model.compile(loss='categorical_crossentropy',
optimizer=tf.keras.optimizers.SGD(momentum=0.9, lr=0.0001),
metrics=['accuracy'])
history2 = vgg_model.fit(train_generator,
epochs=5,
batch_size=64,
validation_data=test_generator,
callbacks=[early_stopping])
vgg_model.save('saved_models/vgg_finetuned_model')
```
First error: Invalid Argument Error
```
InvalidArgumentError Traceback (most recent call last)
<ipython-input-13-292bf57ef59f> in <module>()
14 batch_size=64,
15 validation_data=test_generator,
---> 16 callbacks=[early_stopping])
17
18 # finetuning with all layers set trainable
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
53 ctx.ensure_initialized()
54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
```
Second Error: Graph Execution Error
```
InvalidArgumentError: Graph execution error:
Detected at node 'categorical_crossentropy/softmax_cross_entropy_with_logits' defined at (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance
app.start()
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start
self.io_loop.start()
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
handle._run()
File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 452, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 481, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 431, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-292bf57ef59f>", line 16, in <module>
callbacks=[early_stopping])
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 919, in compute_loss
y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1790, in categorical_crossentropy
y_true, y_pred, from_logits=from_logits, axis=axis)
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 5099, in categorical_crossentropy
labels=target, logits=output, axis=axis)
Node: 'categorical_crossentropy/softmax_cross_entropy_with_logits'
logits and labels must be broadcastable: logits_size=[32,10] labels_size=[32,128]
[[{{node categorical_crossentropy/softmax_cross_entropy_with_logits}}]] [Op:__inference_train_function_11227]
```
I'm running this on google colaboratory. Is there a module that I should install? Or is it purely an error on the code itself? | 2022/02/17 | [
"https://Stackoverflow.com/questions/71153492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15336528/"
] | I faced the same error and tried everything without success, but it turns out you have to make the number of **folders** (classes) in the **dataset** the SAME as the number of units in `Dense`.
I don't know if this will solve your specific bug or not but try this with your code:
```
vgg_model.add(tf.keras.layers.Dense(10, activation = 'softmax'))
```
Replace `10` with the number of folders (classes) in your training dataset. | Check the image size. The size defined in model.add(.., input_shape=(100,100,3)) should be the same as the **target_size=(100,100) in train_generator.**
Also check whether the number of neurons in the last Dense layer equals the number of output classes.
By the way, there is no need to install any other module; it is an error in the code. |
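A sketch tying these suggestions back to the question's own generators: the `labels_size=[32,128]` in the error means `flow_from_directory` found 128 class folders, so the final layer has to match that count rather than the hard-coded 10 (this assumes the `train_generator` and `vgg_basemodel` defined in the question):

```
num_classes = train_generator.num_classes  # classes discovered by flow_from_directory

vgg_model = tf.keras.Sequential(vgg_basemodel.layers[:-1])
vgg_model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
```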
71,153,492 | I'm having multiple errors while running this VGG training code (code and errors shown below). I don't know if it's because of my dataset or something else.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics.pairwise import cosine_similarity
import os
import scipy
train_directory = 'sign_data/train' #To be changed
test_directory = 'sign_data/test' #To be changed
train_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range = 0.1,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.1
)
train_generator = train_datagen.flow_from_directory(
train_directory,
target_size = (224, 224),
color_mode = 'rgb',
shuffle = True,
batch_size=32
)
test_datagen = ImageDataGenerator(
rescale = 1./255,
)
test_generator = test_datagen.flow_from_directory(
test_directory,
target_size = (224, 224),
color_mode = 'rgb',
shuffle = True,
batch_size=32
)
from tensorflow.keras.applications.vgg16 import VGG16
vgg_basemodel = VGG16(include_top=True)
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)
vgg_model = tf.keras.Sequential(vgg_basemodel.layers[:-1])
vgg_model.add(tf.keras.layers.Dense(10, activation = 'softmax'))
# Freezing original layers
for layer in vgg_model.layers[:-1]:
layer.trainable = False
vgg_model.compile(loss='categorical_crossentropy',
optimizer=tf.keras.optimizers.SGD(momentum=0.9, learning_rate=0.001, decay=0.01),
metrics=['accuracy'])
history = vgg_model.fit(train_generator,
epochs=30,
batch_size=64,
validation_data=test_generator,
callbacks=[early_stopping])
# finetuning with all layers set trainable
for layer in vgg_model.layers:
layer.trainable = True
vgg_model.compile(loss='categorical_crossentropy',
optimizer=tf.keras.optimizers.SGD(momentum=0.9, lr=0.0001),
metrics=['accuracy'])
history2 = vgg_model.fit(train_generator,
epochs=5,
batch_size=64,
validation_data=test_generator,
callbacks=[early_stopping])
vgg_model.save('saved_models/vgg_finetuned_model')
```
First error: Invalid Argument Error
```
InvalidArgumentError Traceback (most recent call last)
<ipython-input-13-292bf57ef59f> in <module>()
14 batch_size=64,
15 validation_data=test_generator,
---> 16 callbacks=[early_stopping])
17
18 # finetuning with all layers set trainable
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
53 ctx.ensure_initialized()
54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
```
Second Error: Graph Execution Error
```
InvalidArgumentError: Graph execution error:
Detected at node 'categorical_crossentropy/softmax_cross_entropy_with_logits' defined at (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance
app.start()
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start
self.io_loop.start()
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
handle._run()
File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 452, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 481, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 431, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-292bf57ef59f>", line 16, in <module>
callbacks=[early_stopping])
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 919, in compute_loss
y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1790, in categorical_crossentropy
y_true, y_pred, from_logits=from_logits, axis=axis)
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 5099, in categorical_crossentropy
labels=target, logits=output, axis=axis)
Node: 'categorical_crossentropy/softmax_cross_entropy_with_logits'
logits and labels must be broadcastable: logits_size=[32,10] labels_size=[32,128]
[[{{node categorical_crossentropy/softmax_cross_entropy_with_logits}}]] [Op:__inference_train_function_11227]
```
I'm running this on google colaboratory. Is there a module that I should install? Or is it purely an error on the code itself? | 2022/02/17 | [
"https://Stackoverflow.com/questions/71153492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15336528/"
] | I faced the same error and tried everything without success, but it turns out you have to make the number of **folders** (classes) in the **dataset** the SAME as the number of units in `Dense`.
I don't know if this will solve your specific bug or not but try this with your code:
```
vgg_model.add(tf.keras.layers.Dense(10, activation = 'softmax'))
```
Replace `10` with the number of folders (classes) in your training dataset. | In my case, the reason was incompatible shapes: my model takes inputs of shape [batch_size, 784], but the data had shape [batch_size, 28, 28, 1], so I fixed it with tf.reshape(x, [-1]). |
71,153,492 | I'm having multiple errors while running this VGG training code (code and errors shown below). I don't know if it's because of my dataset or something else.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics.pairwise import cosine_similarity
import os
import scipy
train_directory = 'sign_data/train' #To be changed
test_directory = 'sign_data/test' #To be changed
train_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range = 0.1,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.1
)
train_generator = train_datagen.flow_from_directory(
train_directory,
target_size = (224, 224),
color_mode = 'rgb',
shuffle = True,
batch_size=32
)
test_datagen = ImageDataGenerator(
rescale = 1./255,
)
test_generator = test_datagen.flow_from_directory(
test_directory,
target_size = (224, 224),
color_mode = 'rgb',
shuffle = True,
batch_size=32
)
from tensorflow.keras.applications.vgg16 import VGG16
vgg_basemodel = VGG16(include_top=True)
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)
vgg_model = tf.keras.Sequential(vgg_basemodel.layers[:-1])
vgg_model.add(tf.keras.layers.Dense(10, activation = 'softmax'))
# Freezing original layers
for layer in vgg_model.layers[:-1]:
layer.trainable = False
vgg_model.compile(loss='categorical_crossentropy',
optimizer=tf.keras.optimizers.SGD(momentum=0.9, learning_rate=0.001, decay=0.01),
metrics=['accuracy'])
history = vgg_model.fit(train_generator,
epochs=30,
batch_size=64,
validation_data=test_generator,
callbacks=[early_stopping])
# finetuning with all layers set trainable
for layer in vgg_model.layers:
layer.trainable = True
vgg_model.compile(loss='categorical_crossentropy',
optimizer=tf.keras.optimizers.SGD(momentum=0.9, lr=0.0001),
metrics=['accuracy'])
history2 = vgg_model.fit(train_generator,
epochs=5,
batch_size=64,
validation_data=test_generator,
callbacks=[early_stopping])
vgg_model.save('saved_models/vgg_finetuned_model')
```
First error: Invalid Argument Error
```
InvalidArgumentError Traceback (most recent call last)
<ipython-input-13-292bf57ef59f> in <module>()
14 batch_size=64,
15 validation_data=test_generator,
---> 16 callbacks=[early_stopping])
17
18 # finetuning with all layers set trainable
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
53 ctx.ensure_initialized()
54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
```
Second Error: Graph Execution Error
```
InvalidArgumentError: Graph execution error:
Detected at node 'categorical_crossentropy/softmax_cross_entropy_with_logits' defined at (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance
app.start()
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start
self.io_loop.start()
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
handle._run()
File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 452, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 481, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 431, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-292bf57ef59f>", line 16, in <module>
callbacks=[early_stopping])
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 919, in compute_loss
y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1790, in categorical_crossentropy
y_true, y_pred, from_logits=from_logits, axis=axis)
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 5099, in categorical_crossentropy
labels=target, logits=output, axis=axis)
Node: 'categorical_crossentropy/softmax_cross_entropy_with_logits'
logits and labels must be broadcastable: logits_size=[32,10] labels_size=[32,128]
[[{{node categorical_crossentropy/softmax_cross_entropy_with_logits}}]] [Op:__inference_train_function_11227]
```
I'm running this on google colaboratory. Is there a module that I should install? Or is it purely an error on the code itself? | 2022/02/17 | [
"https://Stackoverflow.com/questions/71153492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15336528/"
] | In my case, the reason was incompatible shapes: my model takes inputs of shape [batch_size, 784], but the data had shape [batch_size, 28, 28, 1], so I fixed it with tf.reshape(x, [-1]). | Check the image size. The size defined in model.add(.., input_shape=(100,100,3)) should be the same as the **target_size=(100,100) in train_generator.**
Also check whether the number of neurons in the last Dense layer equals the number of output classes.
By the way, there is no need to install any other module; it is an error in the code. |
20,893,752 | I started trying to make a script to send emails using Python, but nothing worked. I eventually got to the point where I just started copying and pasting email scripts and filling in my info. Still nothing worked. So I eventually just got rid of everything except this:
```
#!/usr/bin/python
import smtplib
```
This still did not work. Can someone explain to me why this doesn't work? I'm sure it's really simple. I'm using Mac OS X 10.9 if that makes a difference. Here is my error:
```
Traceback (most recent call last):
File "the_email.py", line 2, in <module>
import smtplib
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/smtplib.py", line 46, in <module>
import email.utils
ImportError: No module named utils
``` | 2014/01/02 | [
"https://Stackoverflow.com/questions/20893752",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2402862/"
] | Change the name of your script from `email.py` to something else. It is interfering with the Python standard library module of the same name, `email`. | Read this: [Syntax: python smtplib not working in script](https://stackoverflow.com/questions/14102113/syntax-python-smtplib-not-working-in-script)
A user says that you have to remove email.py from the folder. |
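A quick way to confirm the shadowing before renaming anything is to check which `email` module Python actually imports when run from the script's folder; a path pointing at your own `email.py` (or `email.pyc`) rather than the standard library confirms the clash:

```
import email
print(email.__file__)
```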
28,262,400 | I am changing the original post to a memory leak, as I have observed that the Cassandra Python driver does not release sessions from memory, and during heavy inserts it eats up all the memory (thus crashing Cassandra, as there is not enough room left for GC).
This was raised earlier, but I see the issue in the latest drivers as well.
<https://github.com/datastax/python-driver/pull/131>
```
In [2]: cassandra.__version__
Out[2]: '2.1.4'
class SimpleClient(object):
session = None
def connect(self, nodes):
cluster = Cluster(nodes)
metadata = cluster.metadata
self.session = cluster.connect()
logging.info('Connected to cluster: ' + metadata.cluster_name)
for host in metadata.all_hosts():
logging.info('Datacenter: %s; Host: %s; Rack: %s', host.datacenter, host.address, host.rack)
print ("Datacenter: %s; Host: %s; Rack: %s"%(host.datacenter, host.address, host.rack))
def close(self):
self.session.cluster.shutdown()
logging.info('Connection closed.')
def main():
logging.basicConfig()
client = SimpleClient()
client.connect(['127.0.0.1'])
client.close()
if __name__ == "__main__":
count = 0
while count != 1:
main()
time.sleep(1)
```
If anyone has found a solution, please share. | 2015/02/01 | [
"https://Stackoverflow.com/questions/28262400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4460263/"
] | Calling `id.Hex()` will return a string representation of the `bson.ObjectId`.
This is also the default behavior if you try to marshal a `bson.ObjectId` to a JSON string. | Something like this ought to work; see the [playground](https://play.golang.org/p/1LG1NlFEK-).
Just reference the fields off the dot `.` in your template:
```
{{ .Name }} {{ .Food }}
<a href="/remove/{{ .Id }}">Remove me</a>
``` |
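Coming back to the Cassandra question itself, the usual mitigation is to create the `Cluster` and `Session` once and reuse them for the lifetime of the process instead of reconnecting in a loop. A minimal sketch based on the question's code (the keyspace/table in the INSERT is hypothetical, not from the question):

```
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])  # build once, at process start
session = cluster.connect()

def insert_rows(rows):
    # reuse the same session for every request instead of reconnecting
    for key, value in rows:
        session.execute(
            "INSERT INTO my_keyspace.my_table (id, value) VALUES (%s, %s)",
            (key, value),
        )

# ... use insert_rows() as often as needed ...
cluster.shutdown()  # shut down exactly once, when the process exits
```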
28,262,400 | I am changing the original post to a memory leak, as I have observed that the Cassandra Python driver does not release sessions from memory, and during heavy inserts it eats up all the memory (thus crashing Cassandra, as there is not enough room left for GC).
This was raised earlier, but I see the issue in the latest drivers as well.
<https://github.com/datastax/python-driver/pull/131>
```
In [2]: cassandra.__version__
Out[2]: '2.1.4'
class SimpleClient(object):
session = None
def connect(self, nodes):
cluster = Cluster(nodes)
metadata = cluster.metadata
self.session = cluster.connect()
logging.info('Connected to cluster: ' + metadata.cluster_name)
for host in metadata.all_hosts():
logging.info('Datacenter: %s; Host: %s; Rack: %s', host.datacenter, host.address, host.rack)
print ("Datacenter: %s; Host: %s; Rack: %s"%(host.datacenter, host.address, host.rack))
def close(self):
self.session.cluster.shutdown()
logging.info('Connection closed.')
def main():
logging.basicConfig()
client = SimpleClient()
client.connect(['127.0.0.1'])
client.close()
if __name__ == "__main__":
count = 0
while count != 1:
main()
time.sleep(1)
```
If anyone has found a solution, please share. | 2015/02/01 | [
"https://Stackoverflow.com/questions/28262400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4460263/"
] | The [bson.ObjectId](http://gopkg.in/mgo.v2/bson#ObjectId) type offers a [Hex](http://gopkg.in/mgo.v2/bson#ObjectId.Hex) method that will return the hex representation you are looking for, and the [template](http://golang.org/pkg/html/template) package allows one to call arbitrary methods on values you have at hand, so there's no need to store that value in duplicate anywhere else as a string.
This would work, for example:
```
<a href="/remove/{{ .Id.Hex }}">Remove me</a>
``` | Something like this ought to work; see the [playground](https://play.golang.org/p/1LG1NlFEK-).
Just reference the fields off the dot `.` in your template:
```
{{ .Name }} {{ .Food }}
<a href="/remove/{{ .Id }}">Remove me</a>
``` |
28,262,400 | I am changing the original post to a memory leak, as I have observed that the Cassandra Python driver does not release sessions from memory, and during heavy inserts it eats up all the memory (thus crashing Cassandra, as there is not enough room left for GC).
This was raised earlier, but I see the issue in the latest drivers as well.
<https://github.com/datastax/python-driver/pull/131>
```
In [2]: cassandra.__version__
Out[2]: '2.1.4'
class SimpleClient(object):
session = None
def connect(self, nodes):
cluster = Cluster(nodes)
metadata = cluster.metadata
self.session = cluster.connect()
logging.info('Connected to cluster: ' + metadata.cluster_name)
for host in metadata.all_hosts():
logging.info('Datacenter: %s; Host: %s; Rack: %s', host.datacenter, host.address, host.rack)
print ("Datacenter: %s; Host: %s; Rack: %s"%(host.datacenter, host.address, host.rack))
def close(self):
self.session.cluster.shutdown()
logging.info('Connection closed.')
def main():
logging.basicConfig()
client = SimpleClient()
client.connect(['127.0.0.1'])
client.close()
if __name__ == "__main__":
count = 0
while count != 1:
main()
time.sleep(1)
```
If anyone has found a solution, please share. | 2015/02/01 | [
"https://Stackoverflow.com/questions/28262400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4460263/"
] | The [bson.ObjectId](http://gopkg.in/mgo.v2/bson#ObjectId) type offers a [Hex](http://gopkg.in/mgo.v2/bson#ObjectId.Hex) method that will return the hex representation you are looking for, and the [template](http://golang.org/pkg/html/template) package allows one to call arbitrary methods on values you have at hand, so there's no need to store that value in duplicate anywhere else as a string.
This would work, for example:
```
<a href="/remove/{{ .Id.Hex }}">Remove me</a>
``` | Calling `id.Hex()` will return a string representation of the `bson.ObjectId`.
This is also the default behavior if you try to marshal a `bson.ObjectId` to a JSON string. |