| qid (int64, 46k–74.7M) | question (string, 54–37.8k chars) | date (string, 10 chars) | metadata (sequence of 3) | response_j (string, 17–26k chars) | response_k (string, 26–26k chars) |
---|---|---|---|---|---|
33,511,259 | **How to find the majority votes for a list that can contain -1s, 1s and 0s?**
For example, given a list of:
```
x = [-1, -1, -1, -1, 0]
```
The majority is `-1`, so the output should be `-1`
Another example, given a list of:
```
x = [1, 1, 1, 0, 0, -1]
```
The majority vote would be `1`
And when we have a tie, the majority vote should return 0, e.g.:
```
x = [1, 1, 1, -1, -1, -1]
```
This should also return zero:
```
x = [1, 1, 0, 0, -1, -1]
```
The simplest way to get the majority vote seems to be to sum the list and check whether the result is negative, positive, or 0.
```
>>> x = [-1, -1, -1, -1, 0]
>>> sum(x) # So majority -> 0
-4
>>> x = [-1, 1, 1, 1, 0]
>>> sum(x) # So majority -> 1
2
>>> x = [-1, -1, 1, 1, 0]
>>> sum(x) # So majority is tied, i.e. -> 0
0
```
After the sum, I could do this check to get the majority vote, i.e.:
```
>>> x = [-1, 1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
1
>>> x = [-1, -1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
0
```
But as noted previously, it's ugly: [Python putting an if-elif-else statement on one line](https://stackoverflow.com/questions/14029245/python-putting-an-if-elif-else-statement-on-one-line) and not pythonic.
So the solution seems to be
```
>>> x = [-1, -1, 1, 1, 0]
>>> if sum(x) == 0:
... majority = 0
... else:
... majority = -1 if sum(x) < 0 else 1
...
>>> majority
0
```
---
EDITED
======
But there are cases where `sum()` won't work, e.g. @RobertB's example:
```
>>> x = [-1, -1, 0, 0, 0, 0]
>>> sum(x)
-2
```
But in this case the majority vote should be 0!! | 2015/11/03 | [
"https://Stackoverflow.com/questions/33511259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] | I am assuming that votes for 0 count as votes. So `sum` is not a reasonable option.
Try a Counter:
```
>>> from collections import Counter
>>> x = Counter([-1,-1,-1, 1,1,1,1,0,0,0,0,0,0,0,0])
>>> x
Counter({0: 8, 1: 4, -1: 3})
>>> x.most_common(1)
[(0, 8)]
>>> x.most_common(1)[0][0]
0
```
So you could write code like:
```
from collections import Counter
def find_majority(votes):
vote_count = Counter(votes)
top_two = vote_count.most_common(2)
if len(top_two)>1 and top_two[0][1] == top_two[1][1]:
# It is a tie
return 0
return top_two[0][0]
>>> find_majority([1,1,-1,-1,0]) # It is a tie
0
>>> find_majority([1,1,1,1, -1,-1,-1,0])
1
>>> find_majority([-1,-1,0,0,0]) # Votes for zero win
0
>>> find_majority(['a','a','b',]) # Totally not asked for, but would work
'a'
``` | You could use [statistics.mode](https://docs.python.org/3/library/statistics.html#statistics.mode) if you are using Python >= 3.4, catching a `StatisticsError` for when there is no unique mode:
```
from statistics import mode, StatisticsError
def majority(l):
try:
return mode(l)
except StatisticsError:
return 0
```
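A quick sanity check of this wrapper against the question's examples. (Caveat: from Python 3.8 onward, `mode` no longer raises on multimodal data — it returns the first mode encountered — so the tie-returns-0 behavior only holds on 3.4–3.7.)

```python
from statistics import mode, StatisticsError

def majority(l):
    try:
        return mode(l)
    except StatisticsError:
        return 0

print(majority([-1, -1, -1, -1, 0]))  # -1
print(majority([]))                   # 0 (empty data always raises StatisticsError)
```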
The [statistics](https://hg.python.org/cpython/file/3.5/Lib/statistics.py) implementation itself uses a Counter dict:
```
import collections
def _counts(data):
# Generate a table of sorted (value, frequency) pairs.
table = collections.Counter(iter(data)).most_common()
if not table:
return table
# Extract the values with the highest frequency.
maxfreq = table[0][1]
for i in range(1, len(table)):
if table[i][1] != maxfreq:
table = table[:i]
break
return table
def mode(data):
"""Return the most common data point from discrete or nominal data.
``mode`` assumes discrete data, and returns a single value. This is the
standard treatment of the mode as commonly taught in schools:
>>> mode([1, 1, 2, 3, 3, 3, 3, 4])
3
This also works with nominal (non-numeric) data:
>>> mode(["red", "blue", "blue", "red", "green", "red", "red"])
'red'
If there is not exactly one most common value, ``mode`` will raise
StatisticsError.
"""
# Generate a table of sorted (value, frequency) pairs.
table = _counts(data)
if len(table) == 1:
return table[0][0]
elif table:
raise StatisticsError(
'no unique mode; found %d equally common values' % len(table)
)
else:
raise StatisticsError('no mode for empty data')
```
Another way using a Counter and catching an empty list:
```
from collections import Counter

def majority(l):
    cn = Counter(l).most_common(2)
    return 0 if len(cn) > 1 and cn[0][1] == cn[1][1] else next(iter(cn), [0])[0]
``` |
33,511,259 | **How to find the majority votes for a list that can contain -1s, 1s and 0s?**
For example, given a list of:
```
x = [-1, -1, -1, -1, 0]
```
The majority is `-1`, so the output should be `-1`
Another example, given a list of:
```
x = [1, 1, 1, 0, 0, -1]
```
The majority vote would be `1`
And when we have a tie, the majority vote should return 0, e.g.:
```
x = [1, 1, 1, -1, -1, -1]
```
This should also return zero:
```
x = [1, 1, 0, 0, -1, -1]
```
The simplest way to get the majority vote seems to be to sum the list and check whether the result is negative, positive, or 0.
```
>>> x = [-1, -1, -1, -1, 0]
>>> sum(x) # So majority -> 0
-4
>>> x = [-1, 1, 1, 1, 0]
>>> sum(x) # So majority -> 1
2
>>> x = [-1, -1, 1, 1, 0]
>>> sum(x) # So majority is tied, i.e. -> 0
0
```
After the sum, I could do this check to get the majority vote, i.e.:
```
>>> x = [-1, 1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
1
>>> x = [-1, -1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
0
```
But as noted previously, it's ugly: [Python putting an if-elif-else statement on one line](https://stackoverflow.com/questions/14029245/python-putting-an-if-elif-else-statement-on-one-line) and not pythonic.
So the solution seems to be
```
>>> x = [-1, -1, 1, 1, 0]
>>> if sum(x) == 0:
... majority = 0
... else:
... majority = -1 if sum(x) < 0 else 1
...
>>> majority
0
```
---
EDITED
======
But there are cases where `sum()` won't work, e.g. @RobertB's example:
```
>>> x = [-1, -1, 0, 0, 0, 0]
>>> sum(x)
-2
```
But in this case the majority vote should be 0!! | 2015/11/03 | [
"https://Stackoverflow.com/questions/33511259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] | ```
# These are your actual votes
votes = [-1, -1, -1, -1, 0]
# These are the options on the ballot
ballot = (-1, 0, 1)

def majority(votes, ballot=(-1, 0, 1)):
    # This is to initialize your counters
    counters = {x: 0 for x in ballot}
    # Count the number of votes
    for vote in votes:
        counters[vote] += 1
    # Note: sorted() returns a new list; list.sort() sorts in place and returns None
    results = sorted(counters.values())
    if results[-1] == results[-2]:
        # Return 0 if there's a tie
        return 0
    # Return your winning vote if there isn't a tie
    return max(counters, key=counters.get)
``` | ```
import numpy as np

def fn(vote):
    # count negative and positive votes; the rest are zeros
    n = vote[np.where(vote < 0)].size
    p = vote[np.where(vote > 0)].size
    ret = np.sign(p - n)
    z = vote.size - p - n
    if z >= max(p, n):
        ret = 0
    return ret

# some test cases
print(fn(np.array([-1, -1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])))
print(fn(np.array([-1, -1, -1, 1, 1, 1, 0, 0])))
print(fn(np.array([0, 0, 0, 1, 1, 1])))
print(fn(np.array([1, 1, 1, 1, -1, -1, -1, 0])))
print(fn(np.array([-1, -1, -1, -1, 1, 0])))
``` |
33,511,259 | **How to find the majority votes for a list that can contain -1s, 1s and 0s?**
For example, given a list of:
```
x = [-1, -1, -1, -1, 0]
```
The majority is `-1`, so the output should be `-1`
Another example, given a list of:
```
x = [1, 1, 1, 0, 0, -1]
```
The majority vote would be `1`
And when we have a tie, the majority vote should return 0, e.g.:
```
x = [1, 1, 1, -1, -1, -1]
```
This should also return zero:
```
x = [1, 1, 0, 0, -1, -1]
```
The simplest way to get the majority vote seems to be to sum the list and check whether the result is negative, positive, or 0.
```
>>> x = [-1, -1, -1, -1, 0]
>>> sum(x) # So majority -> 0
-4
>>> x = [-1, 1, 1, 1, 0]
>>> sum(x) # So majority -> 1
2
>>> x = [-1, -1, 1, 1, 0]
>>> sum(x) # So majority is tied, i.e. -> 0
0
```
After the sum, I could do this check to get the majority vote, i.e.:
```
>>> x = [-1, 1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
1
>>> x = [-1, -1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
0
```
But as noted previously, it's ugly: [Python putting an if-elif-else statement on one line](https://stackoverflow.com/questions/14029245/python-putting-an-if-elif-else-statement-on-one-line) and not pythonic.
So the solution seems to be
```
>>> x = [-1, -1, 1, 1, 0]
>>> if sum(x) == 0:
... majority = 0
... else:
... majority = -1 if sum(x) < 0 else 1
...
>>> majority
0
```
---
EDITED
======
But there are cases where `sum()` won't work, e.g. @RobertB's example:
```
>>> x = [-1, -1, 0, 0, 0, 0]
>>> sum(x)
-2
```
But in this case the majority vote should be 0!! | 2015/11/03 | [
"https://Stackoverflow.com/questions/33511259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] | I am assuming that votes for 0 count as votes. So `sum` is not a reasonable option.
Try a Counter:
```
>>> from collections import Counter
>>> x = Counter([-1,-1,-1, 1,1,1,1,0,0,0,0,0,0,0,0])
>>> x
Counter({0: 8, 1: 4, -1: 3})
>>> x.most_common(1)
[(0, 8)]
>>> x.most_common(1)[0][0]
0
```
So you could write code like:
```
from collections import Counter
def find_majority(votes):
vote_count = Counter(votes)
top_two = vote_count.most_common(2)
if len(top_two)>1 and top_two[0][1] == top_two[1][1]:
# It is a tie
return 0
return top_two[0][0]
>>> find_majority([1,1,-1,-1,0]) # It is a tie
0
>>> find_majority([1,1,1,1, -1,-1,-1,0])
1
>>> find_majority([-1,-1,0,0,0]) # Votes for zero win
0
>>> find_majority(['a','a','b',]) # Totally not asked for, but would work
'a'
``` | ```
# These are your actual votes
votes = [-1, -1, -1, -1, 0]
# These are the options on the ballot
ballot = (-1, 0, 1)

def majority(votes, ballot=(-1, 0, 1)):
    # This is to initialize your counters
    counters = {x: 0 for x in ballot}
    # Count the number of votes
    for vote in votes:
        counters[vote] += 1
    # Note: sorted() returns a new list; list.sort() sorts in place and returns None
    results = sorted(counters.values())
    if results[-1] == results[-2]:
        # Return 0 if there's a tie
        return 0
    # Return your winning vote if there isn't a tie
    return max(counters, key=counters.get)
``` |
33,511,259 | **How to find the majority votes for a list that can contain -1s, 1s and 0s?**
For example, given a list of:
```
x = [-1, -1, -1, -1, 0]
```
The majority is `-1`, so the output should be `-1`
Another example, given a list of:
```
x = [1, 1, 1, 0, 0, -1]
```
The majority vote would be `1`
And when we have a tie, the majority vote should return 0, e.g.:
```
x = [1, 1, 1, -1, -1, -1]
```
This should also return zero:
```
x = [1, 1, 0, 0, -1, -1]
```
The simplest way to get the majority vote seems to be to sum the list and check whether the result is negative, positive, or 0.
```
>>> x = [-1, -1, -1, -1, 0]
>>> sum(x) # So majority -> 0
-4
>>> x = [-1, 1, 1, 1, 0]
>>> sum(x) # So majority -> 1
2
>>> x = [-1, -1, 1, 1, 0]
>>> sum(x) # So majority is tied, i.e. -> 0
0
```
After the sum, I could do this check to get the majority vote, i.e.:
```
>>> x = [-1, 1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
1
>>> x = [-1, -1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
0
```
But as noted previously, it's ugly: [Python putting an if-elif-else statement on one line](https://stackoverflow.com/questions/14029245/python-putting-an-if-elif-else-statement-on-one-line) and not pythonic.
So the solution seems to be
```
>>> x = [-1, -1, 1, 1, 0]
>>> if sum(x) == 0:
... majority = 0
... else:
... majority = -1 if sum(x) < 0 else 1
...
>>> majority
0
```
---
EDITED
======
But there are cases where `sum()` won't work, e.g. @RobertB's example:
```
>>> x = [-1, -1, 0, 0, 0, 0]
>>> sum(x)
-2
```
But in this case the majority vote should be 0!! | 2015/11/03 | [
"https://Stackoverflow.com/questions/33511259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] | You could use [statistics.mode](https://docs.python.org/3/library/statistics.html#statistics.mode) if you are using Python >= 3.4, catching a `StatisticsError` for when there is no unique mode:
```
from statistics import mode, StatisticsError
def majority(l):
try:
return mode(l)
except StatisticsError:
return 0
```
The [statistics](https://hg.python.org/cpython/file/3.5/Lib/statistics.py) implementation itself uses a Counter dict:
```
import collections
def _counts(data):
# Generate a table of sorted (value, frequency) pairs.
table = collections.Counter(iter(data)).most_common()
if not table:
return table
# Extract the values with the highest frequency.
maxfreq = table[0][1]
for i in range(1, len(table)):
if table[i][1] != maxfreq:
table = table[:i]
break
return table
def mode(data):
"""Return the most common data point from discrete or nominal data.
``mode`` assumes discrete data, and returns a single value. This is the
standard treatment of the mode as commonly taught in schools:
>>> mode([1, 1, 2, 3, 3, 3, 3, 4])
3
This also works with nominal (non-numeric) data:
>>> mode(["red", "blue", "blue", "red", "green", "red", "red"])
'red'
If there is not exactly one most common value, ``mode`` will raise
StatisticsError.
"""
# Generate a table of sorted (value, frequency) pairs.
table = _counts(data)
if len(table) == 1:
return table[0][0]
elif table:
raise StatisticsError(
'no unique mode; found %d equally common values' % len(table)
)
else:
raise StatisticsError('no mode for empty data')
```
Another way using a Counter and catching an empty list:
```
from collections import Counter

def majority(l):
    cn = Counter(l).most_common(2)
    return 0 if len(cn) > 1 and cn[0][1] == cn[1][1] else next(iter(cn), [0])[0]
``` | ```
from collections import Counter
result = Counter(votes).most_common(2)
result = 0 if result[0][1] == result[1][1] else result[0][0]
```
Error handling for empty `votes` lists or `votes` lists with a set cardinality of 1 is trivial and left as an exercise for the reader. |
7,692,121 | I saw [this question](https://stackoverflow.com/questions/4978738/is-there-a-python-equivalent-of-the-c-null-coalescing-operator) but it uses the ?? operator as a null check, I want to use it as a bool true/false test.
I have this code in Python:
```
if self.trait == self.spouse.trait:
    trait = self.trait
else:
    trait = defaultTrait
```
In C# I could write this as:
```
trait = this.trait == this.spouse.trait ? this.trait : defaultTrait;
```
Is there a similar way to do this in Python? | 2011/10/07 | [
"https://Stackoverflow.com/questions/7692121",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/116286/"
] | Yes, you can write:
```
trait = self.trait if self.trait == self.spouse.trait else defaultTrait
```
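The same construct in a small, self-contained form (the names below are illustrative, not from the question's class):

```python
default_trait = "neutral"

def resolve_trait(own, spouse):
    # conditional expression: value_if_true if condition else value_if_false
    return own if own == spouse else default_trait

print(resolve_trait("kind", "kind"))    # kind
print(resolve_trait("kind", "grumpy"))  # neutral
```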
This is called a [Conditional Expression](http://docs.python.org/reference/expressions.html#conditional-expressions) in Python. | On the null-coalescing operator in C#, what you have in the question isn't a correct usage. That would fail at compile time.
In C#, the correct way to write what you're attempting would be this:
```
trait = this.trait == this.spouse.trait ? this.trait : defaultTrait;
```
Null coalesce in C# returns the first value that isn't null in a chain of values (or null if there are no non-null values). For example, what you'd write in C# to return the first non-null trait or a default trait if all the others were null is actually this:
```
trait = this.spouse.trait ?? this.trait ?? defaultTrait;
``` |
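For completeness, the closest Python analogue of such a null-coalescing chain is `or` — with the caveat that `or` skips *all* falsy values (empty strings, `0`, empty lists), not just `None`:

```python
default_trait = "neutral"
spouse_trait = None
own_trait = "kind"

# `or` returns the first truthy operand, similar to chaining ?? in C#
trait = spouse_trait or own_trait or default_trait
print(trait)  # kind
```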
42,871,090 | As the title says, is there a way to change the default pip to pip2.7
When I run `sudo which pip`, I get `/usr/local/bin/pip`
When I run `sudo pip -V`, I get `pip 1.5.6 from /usr/lib/python3/dist-packages (python 3.4)`
If there is no problem at all with this mixed version, please do tell. If there is a problem with downloading dependencies from different pip versions, how can I change to pip2.7?
I know I can `pip2.7 install somePackage` but I don't like it. I feel I could forget to do this at any point.
Other info: Ubuntu 15.10 | 2017/03/18 | [
"https://Stackoverflow.com/questions/42871090",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2400585/"
] | * You can use `alias pip='pip2.7'`. Put this in your `.bashrc` file (if you're using bash; if you use zsh, it should be `.zshrc`).
By the way, note that the `sudo` command switches the current user (to `root` by default), so if you run pip via `sudo`, you may need to put the alias in `/root/.bashrc` as well.
* Or you can make a link
```
ln -s /usr/local/bin/pip2.7 /usr/local/bin/pip
```
You can also try `virtualenv`; in my opinion it's the best choice for managing multiple versions. | A very intuitive and straightforward method is to just modify the settings in `/usr/local/bin/pip`. You don't need aliases or symbolic links. For mine:
1. Check the info:
===================
```
lerner@lerner:~/$ pip -V
```
>
> `pip 1.5.4 from /usr/lib/python3/dist-packages (python 3.4)`
>
>
>
```
lerner@lerner:~/$ pip2 -V
```
>
> `pip 9.0.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)`
>
>
>
```
lerner@lerner:~/$ whereis pip
```
>
>
> ```
> pip: /usr/local/bin/pip3.4 /usr/local/bin/pip2.7 /usr/local/bin/pip
>
> ```
>
>
2. Change the setting:
======================
Change python3 to python2, being careful with the version numbers (1.5.4 becomes 9.0.1 everywhere). I changed the pip file to this:
```
lerner@lerner:~/$ sudo vim /usr/local/bin/pip
```
>
>
> ```
> #!/usr/bin/python2
> # EASY-INSTALL-ENTRY-SCRIPT: 'pip==9.0.1','console_scripts','pip'
> __requires__ = 'pip==9.0.1'
> import sys
> from pkg_resources import load_entry_point
>
> if __name__ == '__main__':
> sys.exit(
> load_entry_point('pip==9.0.1', 'console_scripts', 'pip')()
> )
>
> ```
>
>
3. Now save and check:
======================
```
lerner@lerner:~/$ pip -V
```
>
> `pip 9.0.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)`
>
>
>
Done. |
42,871,090 | As the title says, is there a way to change the default pip to pip2.7
When I run `sudo which pip`, I get `/usr/local/bin/pip`
When I run `sudo pip -V`, I get `pip 1.5.6 from /usr/lib/python3/dist-packages (python 3.4)`
If there is no problem at all with this mixed version, please do tell. If there is a problem with downloading dependencies from different pip versions, how can I change to pip2.7?
I know I can `pip2.7 install somePackage` but I don't like it. I feel I could forget to do this at any point.
Other info: Ubuntu 15.10 | 2017/03/18 | [
"https://Stackoverflow.com/questions/42871090",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2400585/"
] | **Concise Answer**
1. Locate pip:
```
$ which pip
/usr/local/bin/pip
```
2. List all pips in location learned above:
```
$ ls /usr/local/bin/pip*
/usr/local/bin/pip /usr/local/bin/pip2.7 /usr/local/bin/pip3.5
/usr/local/bin/pip2 /usr/local/bin/pip3
```
3. Select which one should be your default, e.g. `/usr/local/bin/pip2.7`, and copy it over `pip`:
```
$ sudo cp /usr/local/bin/pip2.7 /usr/local/bin/pip
```
---
Verify:
```
$ pip -V
pip 10.0.1 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
``` | * You can use `alias pip='pip2.7'`. Put this in your `.bashrc` file (if you're using bash; if you use zsh, it should be `.zshrc`).
By the way, note that the `sudo` command switches the current user (to `root` by default), so if you run pip via `sudo`, you may need to put the alias in `/root/.bashrc` as well.
* Or you can make a link
```
ln -s /usr/local/bin/pip2.7 /usr/local/bin/pip
```
You can also try `virtualenv`; in my opinion it's the best choice for managing multiple versions. |
42,871,090 | As the title says, is there a way to change the default pip to pip2.7
When I run `sudo which pip`, I get `/usr/local/bin/pip`
When I run `sudo pip -V`, I get `pip 1.5.6 from /usr/lib/python3/dist-packages (python 3.4)`
If there is no problem at all with this mixed version, please do tell. If there is a problem with downloading dependencies from different pip versions, how can I change to pip2.7?
I know I can `pip2.7 install somePackage` but I don't like it. I feel I could forget to do this at any point.
Other info: Ubuntu 15.10 | 2017/03/18 | [
"https://Stackoverflow.com/questions/42871090",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2400585/"
] | **Concise Answer**
1. Locate pip:
```
$ which pip
/usr/local/bin/pip
```
2. List all pips in location learned above:
```
$ ls /usr/local/bin/pip*
/usr/local/bin/pip /usr/local/bin/pip2.7 /usr/local/bin/pip3.5
/usr/local/bin/pip2 /usr/local/bin/pip3
```
3. Select which one should be your default, e.g. `/usr/local/bin/pip2.7`, and copy it over `pip`:
```
$ sudo cp /usr/local/bin/pip2.7 /usr/local/bin/pip
```
---
Verify:
```
$ pip -V
pip 10.0.1 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
``` | A very intuitive and straightforward method is to just modify the settings in `/usr/local/bin/pip`. You don't need aliases or symbolic links. For mine:
1. Check the info:
===================
```
lerner@lerner:~/$ pip -V
```
>
> `pip 1.5.4 from /usr/lib/python3/dist-packages (python 3.4)`
>
>
>
```
lerner@lerner:~/$ pip2 -V
```
>
> `pip 9.0.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)`
>
>
>
```
lerner@lerner:~/$ whereis pip
```
>
>
> ```
> pip: /usr/local/bin/pip3.4 /usr/local/bin/pip2.7 /usr/local/bin/pip
>
> ```
>
>
2. Change the setting:
======================
Change python3 to python2, being careful with the version numbers (1.5.4 becomes 9.0.1 everywhere). I changed the pip file to this:
```
lerner@lerner:~/$ sudo vim /usr/local/bin/pip
```
>
>
> ```
> #!/usr/bin/python2
> # EASY-INSTALL-ENTRY-SCRIPT: 'pip==9.0.1','console_scripts','pip'
> __requires__ = 'pip==9.0.1'
> import sys
> from pkg_resources import load_entry_point
>
> if __name__ == '__main__':
> sys.exit(
> load_entry_point('pip==9.0.1', 'console_scripts', 'pip')()
> )
>
> ```
>
>
3. Now save and check:
======================
```
lerner@lerner:~/$ pip -V
```
>
> `pip 9.0.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)`
>
>
>
Done. |
67,948,945 | I want to force the Huggingface transformer (BERT) to make use of CUDA.
nvidia-smi showed that all my CPU cores were maxed out during the code execution, but my GPU was at 0% utilization. Unfortunately, I'm new to the Huggingface library as well as PyTorch and don't know where to place the CUDA attributes `device = cuda:0` or `.to(cuda:0)`.
The code below is basically a customized part from [german sentiment BERT working example](https://huggingface.co/oliverguhr/german-sentiment-bert)
```
class SentimentModel_t(pt.nn.Module):
    def __init__(self, model_name: str = "oliverguhr/german-sentiment-bert"):
        DEVICE = "cuda:0" if pt.cuda.is_available() else "cpu"
        print(DEVICE)
        super(SentimentModel_t, self).__init__()
        self.model = AutoModelForSequenceClassification.from_pretrained(model_name).to(DEVICE)
        self.tokenizer = BertTokenizerFast.from_pretrained(model_name)

    def predict_sentiment(self, texts: List[str]) -> List[str]:
        texts = [self.clean_text(text) for text in texts]
        # add_special_tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model
        input_ids = self.tokenizer.batch_encode_plus(texts, padding=True, add_special_tokens=True, truncation=True, max_length=self.tokenizer.max_len_single_sentence)
        input_ids = pt.tensor(input_ids["input_ids"])
        with pt.no_grad():
            logits = self.model(input_ids)
        label_ids = pt.argmax(logits[0], axis=1)
        labels = [self.model.config.id2label[label_id] for label_id in label_ids.tolist()]
        return labels
```
EDIT: After applying the suggestions of @KonstantinosKokos (see edited code above) I got a
```
RuntimeError: Input, output and indices must be on the current device
```
pointing to
```
with pt.no_grad():
logits = self.model(input_ids)
```
The full error code can be obtained down below:
```
<ipython-input-15-b843edd87a1a> in predict_sentiment(self, texts)
23
24 with pt.no_grad():
---> 25 logits = self.model(input_ids)
26
27 label_ids = pt.argmax(logits[0], axis=1)
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1364 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1365
-> 1366 outputs = self.bert(
1367 input_ids,
1368 attention_mask=attention_mask,
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict)
859 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
860
--> 861 embedding_output = self.embeddings(
862 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
863 )
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
196
197 if inputs_embeds is None:
--> 198 inputs_embeds = self.word_embeddings(input_ids)
199 token_type_embeddings = self.token_type_embeddings(token_type_ids)
200
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input)
122
123 def forward(self, input: Tensor) -> Tensor:
--> 124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
126 self.norm_type, self.scale_grad_by_freq, self.sparse)
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
``` | 2021/06/12 | [
"https://Stackoverflow.com/questions/67948945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15445597/"
] | You can make the entire class inherit `torch.nn.Module` like so:
```
class SentimentModel_t(torch.nn.Module):
    def __init__(self, ...):
        super(SentimentModel_t, self).__init__()
        ...
```
Upon initializing your model you can then call `.to(device)` to cast it to the device of your choice, like so:
```
sentiment_model = SentimentModel_t(...)
sentiment_model.to('cuda')
```
The `.to()` recursively applies to all submodules of the class, `model` being one of them (hugging face model inherit `torch.nn.Module`, thus providing an implementation for `to()`).
Note that this makes choosing the device in `__init__()` redundant: it's now an external context that you can switch to and from easily.
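As an aside on the `RuntimeError` in the question's edit: the input tensors must live on the same device as the model's weights. A minimal tensor-only sketch of the idea (plain PyTorch, no transformers needed to illustrate it):

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# The module's weights and the input tensor must be on the same device;
# forgetting the second .to(device) triggers the device-mismatch error on CUDA
layer = torch.nn.Linear(4, 2).to(device)
x = torch.randn(3, 4).to(device)

with torch.no_grad():
    out = layer(x)
print(out.shape)  # torch.Size([3, 2])
```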
---
Alternatively, you can hardcode the device by casting the contained BERT model directly into cuda (less elegant):
```
class SentimentModel_t():
    def __init__(self, ...):
        DEVICE = "cuda:0" if pt.cuda.is_available() else "cpu"
        print(DEVICE)
        self.model = AutoModelForSequenceClassification.from_pretrained(model_name).to(DEVICE)
``` | I am a bit late to the party. The python package that I wrote already uses your GPU. You can have a look at the [code to see how it was implemented](https://github.com/oliverguhr/german-sentiment-lib/blob/master/germansentiment/sentimentmodel.py)
Just install the package:
```
pip install germansentiment
```
and run the code:
```
from germansentiment import SentimentModel
model = SentimentModel()
texts = [
"Mit keinem guten Ergebniss","Das ist gar nicht mal so gut",
"Total awesome!","nicht so schlecht wie erwartet",
"Der Test verlief positiv.","Sie fährt ein grünes Auto."]
result = model.predict_sentiment(texts)
print(result)
```
**Important:** If you write your own code to use the model, you need to run the preprocessing code as well. Otherwise the results can be off. |
39,502,345 | I have two columns in a pandas dataframe that are supposed to be identical. Each column has many NaN values. I would like to compare the columns, producing a 3rd column containing True / False values; *True* when the columns match, *False* when they do not.
This is what I have tried:
```
df['new_column'] = (df['column_one'] == df['column_two'])
```
The above works for the numbers, but not the NaN values.
I know I could replace the NaNs with a value that doesn't make sense to be in each row (for my data this could be -9999), and then remove it later when I'm ready to echo out the comparison results, however I was wondering if there was a more pythonic method I was overlooking. | 2016/09/15 | [
"https://Stackoverflow.com/questions/39502345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762492/"
] | Or you could just use the `equals` method:
```
df['new_column'] = df['column_one'].equals(df['column_two'])
```
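A small illustration of what this assigns (the column values below are made up): `equals` compares the whole Series — treating NaNs in the same location as equal — and returns a single boolean, which then broadcasts to every row:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "column_one": [1.0, np.nan, 3.0],
    "column_two": [1.0, np.nan, 3.0],
})

# One boolean for the whole Series; NaNs in matching positions count as equal
same = df["column_one"].equals(df["column_two"])
print(same)  # True

df["new_column"] = same  # the single value is broadcast to every row
print(df["new_column"].tolist())  # [True, True, True]
```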
It is a batteries-included approach, and will work no matter the `dtype` or the content of the cells. You can also put it in a loop, if you want. | To my understanding, Pandas considers NaNs different in element-wise equality and inequality comparison methods, while it treats NaNs in the same location as equal when comparing entire Pandas objects (Series, DataFrame, Panel).
>
> NaN values are considered different (i.e. NaN != NaN). - [source](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eq.html)
>
>
>
**Element-wise equality assertion [`.eq()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eq.html#pandas.DataFrame.eq)**
Compare the values of 2 columns for each row individually. This will return a Series of assertions.
*Option 1*: Chain the [`.eq()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eq.html#pandas.DataFrame.eq) method with [`.fillna()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html).
```py
df['new_column'] = df['column_one'].fillna('-').eq(df['column_two'].fillna('-'))
```
*Option 2*: Replace the NaN assertions afterwards using [`.loc()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html) and [`.isna()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isna.html).
```py
df['new_column'] = df['column_one'].eq(df['column_two'])
df.loc[df['column_one'].isna() & df['column_two'].isna(), 'new_column'] = True
```
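Putting Option 2 together on a tiny frame (the values are made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "column_one": [1.0, np.nan, 3.0],
    "column_two": [1.0, np.nan, 4.0],
})

# Element-wise comparison first; NaN == NaN is False at this point
df["new_column"] = df["column_one"].eq(df["column_two"])
# Then patch rows where both sides are NaN back to True
df.loc[df["column_one"].isna() & df["column_two"].isna(), "new_column"] = True

print(df["new_column"].tolist())  # [True, True, False]
```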
Note that both options are non-destructive regarding the source data in *column\_one* and *column\_two*. It is also worth having a look at the [working with missing data](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html) guide in the Pandas docs.
**Object-wise equality assertion [`.equals()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.equals.html)**
Compare Pandas objects (Series, DataFrame, Panel) as a whole, interpreting all rows and their order as a single value. This will return a single boolean value (scalar).
```
df['column_one'].equals(df['column_two'])
``` |
39,502,345 | I have two columns in a pandas dataframe that are supposed to be identical. Each column has many NaN values. I would like to compare the columns, producing a 3rd column containing True / False values; *True* when the columns match, *False* when they do not.
This is what I have tried:
```
df['new_column'] = (df['column_one'] == df['column_two'])
```
The above works for the numbers, but not the NaN values.
I know I could replace the NaNs with a value that doesn't make sense to be in each row (for my data this could be -9999), and then remove it later when I'm ready to echo out the comparison results, however I was wondering if there was a more pythonic method I was overlooking. | 2016/09/15 | [
"https://Stackoverflow.com/questions/39502345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762492/"
] | Or you could just use the `equals` method:
```
df['new_column'] = df['column_one'].equals(df['column_two'])
```
It is a batteries included approach, and will work no matter the `dtype` or the content of the cells. You can also put it in a loop, if you want. | You can use a loop like the one below; it works irrespective of whether your dataframe contains NaNs, as long as both columns have the same format:
```
def Check(df):
    if df['column_one'] == df['column_two']:
        return "True"
    else:
        return "False"

df['result'] = df.apply(Check, axis=1)
df
``` |
39,502,345 | I have two columns in a pandas dataframe that are supposed to be identical. Each column has many NaN values. I would like to compare the columns, producing a 3rd column containing True / False values; *True* when the columns match, *False* when they do not.
This is what I have tried:
```
df['new_column'] = (df['column_one'] == df['column_two'])
```
The above works for the numbers, but not the NaN values.
I know I could replace the NaNs with a value that doesn't make sense to be in each row (for my data this could be -9999), and then remove it later when I'm ready to echo out the comparison results, however I was wondering if there was a more pythonic method I was overlooking. | 2016/09/15 | [
"https://Stackoverflow.com/questions/39502345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762492/"
] | Or you could just use the `equals` method:
```
df['new_column'] = df['column_one'].equals(df['column_two'])
```
It is a batteries included approach, and will work no matter the `dtype` or the content of the cells. You can also put it in a loop, if you want. | Working also for None values.
```
df['are_equal'] = df['a'].eq(df['b'])
```
result df:[](https://i.stack.imgur.com/FujYb.png) |
65,583,958 | I've a Python program as follows:
```
class a:
    def __init__(self, n):
        self.n = n

    def __del__(self, n):
        print('dest', self.n, n)

def b():
    d = a('d')
    c = a('c')
    d.__del__(8)

b()
```
Here, I have given a parameter `n` in `__del__()` just to clear my doubt. Its output :
```
$ python des.py
dest d 8
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
```
In classical programming languages like C++ we can't give parameters for the destructor. To know if it is applicable for python too, I've executed this program. Why does the interpreter allow the parameter `n` to be given as a parameter for the destructor? How can I specify value for that `n` then? As a try to give an argument for `__del__()` and it goes fine. But without it how can I specify the value for `n`? | 2021/01/05 | [
"https://Stackoverflow.com/questions/65583958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You cannot. Pre-defined dunder methods (methods with leading and trailing double underscores) like `__del__` have a fixed signature.
If you define them with another signature, then when Python calls them through the non-dunder interface (`del`, `len`, ...), the number of arguments is wrong and the call fails.
To pass `n` to `del`, you'll have to define it as an object member. | Python objects become a candidate for garbage collection when there are no more references to them (CPython tracks this by reference counting), so you do not need to create such a destructor.
If you want to add optional arguments to a method, it's common to set them to `None` or an empty tuple `()`:
```
def other_del(self, x=None):
    ...
``` |
65,583,958 | I've a Python program as follows:
```
class a:
    def __init__(self, n):
        self.n = n

    def __del__(self, n):
        print('dest', self.n, n)

def b():
    d = a('d')
    c = a('c')
    d.__del__(8)

b()
```
Here, I have given a parameter `n` in `__del__()` just to clear my doubt. Its output :
```
$ python des.py
dest d 8
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
```
In classical programming languages like C++ we can't give parameters for the destructor. To know if it is applicable for python too, I've executed this program. Why does the interpreter allow the parameter `n` to be given as a parameter for the destructor? How can I specify value for that `n` then? As a try to give an argument for `__del__()` and it goes fine. But without it how can I specify the value for `n`? | 2021/01/05 | [
"https://Stackoverflow.com/questions/65583958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can *define* the `__del__` method with an argument, as you've shown. And if you call the method yourself, you can pass in a value, just like you can with any other method. But when the interpreter calls `__del__` itself, it's not going to pass anything, and there will be an exception raised.
However, because `__del__` methods are often called in precarious situations, like when the interpreter is shutting down, Python doesn't stop everything if one raises an exception. Instead, it just prints out that it's ignoring the exception and keeps doing what it was doing already. This is why you see two "Exception ignored" messages, as your `d` and `c` objects get cleaned up.
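A tiny sketch of that "Exception ignored" behaviour — an error raised inside `__del__` is reported but never propagates to the caller (class name is made up):

```python
class Noisy:
    def __del__(self):
        # Raising here does not crash the program; the interpreter
        # just reports "Exception ignored in: ..." on stderr.
        raise RuntimeError("boom")

n = Noisy()
del n                    # finalizer runs, the exception is swallowed
print("still running")   # execution continues normally
```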
It's unclear to me what you *actually* want your `__del__` method to do with the `n` value you were passing in. Your example was a trivial case; usually there's nothing useful you can do there. Indeed, it's only rarely a good idea to write a `__del__` method for a class. There are better ways of doing resource allocation, like the context manager protocol (which uses `__enter__` and `__exit__` methods). | You cannot. Pre-defined dunder methods (methods with leading and trailing double underscores) like `__del__` have a fixed signature.
If you define them with another signature, then when Python calls them through the non-dunder interface (`del`, `len`, ...), the number of arguments is wrong and the call fails.
To pass `n` to `del`, you'll have to define it as an object member. |
65,583,958 | I've a Python program as follows:
```
class a:
    def __init__(self, n):
        self.n = n

    def __del__(self, n):
        print('dest', self.n, n)

def b():
    d = a('d')
    c = a('c')
    d.__del__(8)

b()
```
Here, I have given a parameter `n` in `__del__()` just to clear my doubt. Its output :
```
$ python des.py
dest d 8
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
```
In classical programming languages like C++ we can't give parameters for the destructor. To know if it is applicable for python too, I've executed this program. Why does the interpreter allow the parameter `n` to be given as a parameter for the destructor? How can I specify value for that `n` then? As a try to give an argument for `__del__()` and it goes fine. But without it how can I specify the value for `n`? | 2021/01/05 | [
"https://Stackoverflow.com/questions/65583958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can *define* the `__del__` method with an argument, as you've shown. And if you call the method yourself, you can pass in a value, just like you can with any other method. But when the interpreter calls `__del__` itself, it's not going to pass anything, and there will be an exception raised.
However, because `__del__` methods are often called in precarious situations, like when the interpreter is shutting down, Python doesn't stop everything if one raises an exception. Instead, it just prints out that it's ignoring the exception and keeps doing what it was doing already. This is why you see two "Exception ignored" messages, as your `d` and `c` objects get cleaned up.
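Keeping `__del__` at its fixed signature and storing the extra value on the instance in `__init__` avoids those ignored exceptions entirely — a minimal sketch:

```python
class A:
    def __init__(self, n):
        self.n = n            # stash the value the finalizer will need

    def __del__(self):        # fixed signature: only self
        print('dest', self.n)

a = A('d')
del a                         # the interpreter calls __del__() itself
```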
It's unclear to me what you *actually* want your `__del__` method to do with the `n` value you were passing in. Your example was a trivial case; usually there's nothing useful you can do there. Indeed, it's only rarely a good idea to write a `__del__` method for a class. There are better ways of doing resource allocation, like the context manager protocol (which uses `__enter__` and `__exit__` methods). | Python objects become a candidate for garbage collection when there are no more references to them (CPython tracks this by reference counting), so you do not need to create such a destructor.
If you want to add optional arguments to a method, it's common to set them to `None` or an empty tuple `()`:
```
def other_del(self, x=None):
    ...
``` |
18,485,044 | It's not under the supported libraries here:
<https://developers.google.com/api-client-library/python/reference/supported_apis>
Is it just not available with Python? If not, what language is it available for? | 2013/08/28 | [
"https://Stackoverflow.com/questions/18485044",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2721465/"
] | Andre's answer points you at a correct place to reference the API. Since your question was python specific, allow me to show you a basic approach to building your submitted search URL in python. This example will get you all the way to search content in just a few minutes after you sign up for Google's free API key.
```
ACCESS_TOKEN = <Get one of these following the directions on the places page>
import urllib
def build_URL(search_text='', types_text=''):
    base_url = 'https://maps.googleapis.com/maps/api/place/textsearch/json' # Can change json to xml to change output type
    key_string = '?key='+ACCESS_TOKEN # First thing after the base_url starts with ? instead of &
    query_string = '&query='+urllib.quote(search_text)
    sensor_string = '&sensor=false' # Presumably you are not getting location from device GPS
    type_string = ''
    if types_text != '':
        type_string = '&types='+urllib.quote(types_text) # More on types: https://developers.google.com/places/documentation/supported_types
    url = base_url+key_string+query_string+sensor_string+type_string
    return url
print(build_URL(search_text='Your search string here'))
```
This code will build and print a URL searching for whatever you put in the last line replacing "Your search string here". You need to build one of those URLs for each search. In this case I've printed it so you can copy and paste it into your browser address bar, which will give you a return (in the browser) of a JSON text object the same as you will get when your program submits that URL. I recommend using the python **requests** library to get that within your program and you can do that simply by taking the returned URL and doing this:
```
response = requests.get(url)
```
Next up you need to parse the returned response JSON, which you can do by converting it with the **json** library (look for [json.loads](http://docs.python.org/2/library/json.html) for example). After running that response through json.loads you will have a nice python dictionary with all your results. You can also paste that return (e.g. from the browser or a saved file) into an [online JSON viewer](http://www.jsoneditoronline.org/) to understand the structure while you write code to access the dictionary that comes out of json.loads.
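For instance, once you have the response body as text (here a hypothetical canned body instead of a live `requests.get(url).text`, so no API key is needed), `json.loads` turns it into a plain dictionary:

```python
import json

# Hypothetical response body shaped like the Places text-search JSON
body = '{"status": "OK", "results": [{"name": "Cafe Example", "formatted_address": "1 Example St"}]}'

data = json.loads(body)                      # dict with "status" and "results" keys
names = [r["name"] for r in data["results"]]
print(data["status"], names)                 # OK ['Cafe Example']
```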
Please feel free to post more questions if part of this isn't clear. | Somebody has written a wrapper for the API: <https://github.com/slimkrazy/python-google-places>
Basically it's just HTTP with JSON responses. It's easier to access through JavaScript but it's just as easy to use `urllib` and the `json` library to connect to the API. |
18,485,044 | It's not under the supported libraries here:
<https://developers.google.com/api-client-library/python/reference/supported_apis>
Is it just not available with Python? If not, what language is it available for? | 2013/08/28 | [
"https://Stackoverflow.com/questions/18485044",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2721465/"
] | Andre's answer points you at a correct place to reference the API. Since your question was python specific, allow me to show you a basic approach to building your submitted search URL in python. This example will get you all the way to search content in just a few minutes after you sign up for Google's free API key.
```
ACCESS_TOKEN = <Get one of these following the directions on the places page>
import urllib
def build_URL(search_text='', types_text=''):
    base_url = 'https://maps.googleapis.com/maps/api/place/textsearch/json' # Can change json to xml to change output type
    key_string = '?key='+ACCESS_TOKEN # First thing after the base_url starts with ? instead of &
    query_string = '&query='+urllib.quote(search_text)
    sensor_string = '&sensor=false' # Presumably you are not getting location from device GPS
    type_string = ''
    if types_text != '':
        type_string = '&types='+urllib.quote(types_text) # More on types: https://developers.google.com/places/documentation/supported_types
    url = base_url+key_string+query_string+sensor_string+type_string
    return url
print(build_URL(search_text='Your search string here'))
```
This code will build and print a URL searching for whatever you put in the last line replacing "Your search string here". You need to build one of those URLs for each search. In this case I've printed it so you can copy and paste it into your browser address bar, which will give you a return (in the browser) of a JSON text object the same as you will get when your program submits that URL. I recommend using the python **requests** library to get that within your program and you can do that simply by taking the returned URL and doing this:
```
response = requests.get(url)
```
Next up you need to parse the returned response JSON, which you can do by converting it with the **json** library (look for [json.loads](http://docs.python.org/2/library/json.html) for example). After running that response through json.loads you will have a nice python dictionary with all your results. You can also paste that return (e.g. from the browser or a saved file) into an [online JSON viewer](http://www.jsoneditoronline.org/) to understand the structure while you write code to access the dictionary that comes out of json.loads.
Please feel free to post more questions if part of this isn't clear. | Ezekiel's answer worked great for me and all of the credit goes to him. I had to change his code in order for it to work with python3. Below is the code I used:
```
def build_URL(search_text='', types_text=''):
    base_url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
    key_string = '?key=' + ACCESS_TOKEN
    query_string = '&query=' + urllib.parse.quote(search_text)
    type_string = ''
    if types_text != '':
        type_string = '&types='+urllib.parse.quote(types_text)
    url = base_url+key_string+query_string+type_string
    return url
```
The changes were urllib.quote was changed to urllib.parse.quote and sensor was removed because google is deprecating it. |
18,485,044 | It's not under the supported libraries here:
<https://developers.google.com/api-client-library/python/reference/supported_apis>
Is it just not available with Python? If not, what language is it available for? | 2013/08/28 | [
"https://Stackoverflow.com/questions/18485044",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2721465/"
] | Somebody has written a wrapper for the API: <https://github.com/slimkrazy/python-google-places>
Basically it's just HTTP with JSON responses. It's easier to access through JavaScript but it's just as easy to use `urllib` and the `json` library to connect to the API. | Ezekiel's answer worked great for me and all of the credit goes to him. I had to change his code in order for it to work with python3. Below is the code I used:
```
def build_URL(search_text='', types_text=''):
    base_url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
    key_string = '?key=' + ACCESS_TOKEN
    query_string = '&query=' + urllib.parse.quote(search_text)
    type_string = ''
    if types_text != '':
        type_string = '&types='+urllib.parse.quote(types_text)
    url = base_url+key_string+query_string+type_string
    return url
```
The changes were urllib.quote was changed to urllib.parse.quote and sensor was removed because google is deprecating it. |
37,659,072 | I'm new with python and I have to sort by date a voluminous file text with lot of line like these:
```
CCC!LL!EEEE!EW050034!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!GH676589!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!IJ6758004!2016-04-01T04:39:54.000Z!7!1!1!1
```
Can someone help me please ?
Thank you all ! | 2016/06/06 | [
"https://Stackoverflow.com/questions/37659072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4989650/"
] | Have you considered using the \*nix [`sort`](http://linux.die.net/man/1/sort) program? In raw terms, it'll probably be faster than most Python scripts.
Use `-t \!` to specify that columns are separated by a `!` char, `-k n` to specify the field, where `n` is the field number, and `-o outputfile` if you want to output the result to a new file.
Example:
```
sort -t \! -k 5 -o sorted.txt input.txt
```
Will sort `input.txt` on its 5th field, and output the result to `sorted.txt` | I would like to convert the time to a timestamp, then sort.
First, convert the data to a list:
```
rawData = '''CCC!LL!EEEE!EW050034!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!GH676589!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!IJ6758004!2016-04-01T04:39:54.000Z!7!1!1!1'''
a = rawData.split('\n')
>>> import dateutil.parser,time
>>> sorted(a,key= lambda line:time.mktime(dateutil.parser.parse(line.split('!')[4]).timetuple()))
['CCC!LL!EEEE!EW050034!2016-04-01T04:39:54.000Z!7!1!1!1 ', ' CCC!LL!EEEE!GH676589!2016-04-01T04:39:54.000Z!7!1!1!1', ' CCC!LL!EEEE!IJ6758004!2016-04-01T04:39:54.000Z!7!1!1!1']
``` |
37,659,072 | I'm new with python and I have to sort by date a voluminous file text with lot of line like these:
```
CCC!LL!EEEE!EW050034!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!GH676589!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!IJ6758004!2016-04-01T04:39:54.000Z!7!1!1!1
```
Can someone help me please ?
Thank you all ! | 2016/06/06 | [
"https://Stackoverflow.com/questions/37659072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4989650/"
] | Have you considered using the \*nix [`sort`](http://linux.die.net/man/1/sort) program? In raw terms, it'll probably be faster than most Python scripts.
Use `-t \!` to specify that columns are separated by a `!` char, `-k n` to specify the field, where `n` is the field number, and `-o outputfile` if you want to output the result to a new file.
Example:
```
sort -t \! -k 5 -o sorted.txt input.txt
```
Will sort `input.txt` on its 5th field, and output the result to `sorted.txt` | Take a look at the regular expression module; I've used it a couple of times and it looks pretty simple to do what you want with this module
<https://docs.python.org/2/library/re.html> Here is the docs but try googling for regular expression python examples to make it more clear, good luck. |
42,620,323 | I am trying to parse many files found in a directory, however using multiprocessing slows my program.
```
# Calling my parsing function from Client.
L = getParsedFiles('/home/tony/Lab/slicedFiles') <--- 1000 .txt files found here.
combined ~100MB
```
Following this example from python documentation:
```
from multiprocessing import Pool
def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))
```
I've written this piece of code:
```
from multiprocessing import Pool
from api.ttypes import *
import gc
import os
def _parse(pathToFile):
    myList = []
    with open(pathToFile) as f:
        for line in f:
            s = line.split()
            x, y = [int(v) for v in s]
            obj = CoresetPoint(x, y)
            gc.disable()
            myList.append(obj)
            gc.enable()
    return Points(myList)

def getParsedFiles(pathToFile):
    myList = []
    p = Pool(2)
    for filename in os.listdir(pathToFile):
        if filename.endswith(".txt"):
            myList.append(filename)
    return p.map(_parse, myList)
```
I followed the example, put all the names of the files that end with a `.txt` in a list, then created Pools, and mapped them to my function. Then I want to return a list of objects. Each object holds the parsed data of a file. However it amazes me that I got the following results:
```
#Pool 32 ---> ~162(s)
#Pool 16 ---> ~150(s)
#Pool 12 ---> ~142(s)
#Pool 2 ---> ~130(s)
```
**Graph:**
[](https://i.stack.imgur.com/wVsZg.png)
Machine specification:
```none
62.8 GiB RAM
Intel® Core™ i7-6850K CPU @ 3.60GHz × 12
```
What am I missing here ?
Thanks in advance! | 2017/03/06 | [
"https://Stackoverflow.com/questions/42620323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6530695/"
] | Looks like you're [I/O bound](https://en.wikipedia.org/wiki/I/O_bound):
>
> In computer science, I/O bound refers to a condition in which the time it takes to complete a computation is determined principally by the period spent waiting for input/output operations to be completed. This is the opposite of a task being CPU bound. This circumstance arises when the rate at which data is requested is slower than the rate it is consumed or, in other words, more time is spent requesting data than processing it.
>
>
>
You probably need to have your main thread do the reading and add the data to the pool when a subprocess becomes available. This will be different to using `map`.
As you are processing a line at a time, and the inputs are split, you can use [**`fileinput`**](https://docs.python.org/2/library/fileinput.html) to iterate over lines of multiple files, and map to a function processing lines instead of files:
Passing one line at a time might be too slow, so we can ask map to pass chunks, and can adjust until we find a sweet-spot. Our function parses chunks of lines:
```
def _parse_coreset_points(lines):
    return Points([_parse_coreset_point(line) for line in lines])

def _parse_coreset_point(line):
    s = line.split()
    x, y = [int(v) for v in s]
    return CoresetPoint(x, y)
```
And our main function:
```
import fileinput
def getParsedFiles(directory):
    pool = Pool(2)
    txts = [filename for filename in os.listdir(directory)
            if filename.endswith(".txt")]
    return pool.imap(_parse_coreset_points, fileinput.input(txts), chunksize=100)
``` | In general it is never a good idea to read from the same physical (spinning) hard disk from different threads simultaneously, because every switch causes an extra delay of around 10ms to position the read head of the hard disk (would be different on SSD).
As @peter-wood already said, it is better to have one thread reading in the data, and have other threads processing that data.
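A toy sketch of that single-reader / multiple-worker split, using a thread-safe queue (simplified to plain tuples and made-up function names; real code would size the worker pool and batch the queue items):

```python
import queue
import threading

def reader(paths, q):
    # Single reader: stream lines from all files into the queue
    for p in paths:
        with open(p) as f:
            for line in f:
                q.put(line)
    q.put(None)  # sentinel: no more data

def worker(q, out):
    # Worker: parse lines into (x, y) tuples until the sentinel arrives
    while True:
        line = q.get()
        if line is None:
            q.put(None)  # let any other workers see the sentinel too
            break
        out.append(tuple(int(v) for v in line.split()))
```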
Also, to really test the difference, I think you should do the test with some bigger files. For example: current hard disks should be able to read around 100MB/sec. So reading the data of a 100kB file in one go would take 1ms, while positioning the read head to the beginning of that file would take 10ms.
On the other hand, looking at your numbers (assuming those are for a single loop) it is hard to believe that being I/O bound is the only problem here. Total data is 100MB, which should take 1 second to read from disk plus some overhead, but your program takes 130 seconds. I don't know if that number is with the files cold on disk, or an average of multiple tests where the data is already cached by the OS (with 62 GB of RAM all that data should be cached the second time) - it would be interesting to see both numbers.
So there has to be something else. Let's take a closer look at your loop:
```
for line in f:
    s = line.split()
    x, y = [int(v) for v in s]
    obj = CoresetPoint(x, y)
    gc.disable()
    myList.append(obj)
    gc.enable()
```
While I don't know Python, my guess would be that the `gc` calls are the problem here. They are called for every line read from disk. I don't know how expensive those calls are (or what if `gc.enable()` triggers a garbage collection for example) and why they would be needed around `append(obj)` only, but there might be other problems because this is multithreading:
Assuming the `gc` object is global (i.e. not thread local) you could have something like this:
```
thread 1 : gc.disable()
# switch to thread 2
thread 2 : gc.disable()
thread 2 : myList.append(obj)
thread 2 : gc.enable()
# gc now enabled!
# switch back to thread 1 (or one of the other threads)
thread 1 : myList.append(obj)
thread 1 : gc.enable()
```
And if the number of threads <= number of cores, there wouldn't even be any switching, they would all be calling this at the same time.
Also, if the `gc` object is thread safe (it would be worse if it isn't), it would have to do some locking in order to safely alter its internal state, which would force all other threads to wait.
For example, `gc.disable()` would look something like this:
```
def disable():
    lock()  # all other threads are blocked for gc calls now
    alter internal data
    unlock()
```
And because `gc.disable()` and `gc.enable()` are called in a tight loop, this will hurt performance when using multiple threads.
So it would be better to remove those calls, or place them at the beginning and end of your program if they are really needed (or only disable `gc` at the beginning, no need to do `gc` right before quitting the program).
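For example, a sketch of hoisting the `gc` calls out of the hot loop (plain tuples stand in for the `CoresetPoint` objects from the question):

```python
import gc

def parse_lines(lines):
    points = []
    gc.disable()                 # one call before the loop, not one per line
    try:
        for line in lines:
            x, y = (int(v) for v in line.split())
            points.append((x, y))
    finally:
        gc.enable()              # always re-enabled, even if parsing fails
    return points

print(parse_lines(["1 2", "3 4"]))  # [(1, 2), (3, 4)]
```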
Depending on the way Python copies or moves objects, it might also be slightly better to use `myList.append(CoresetPoint(x, y))`.
So it would be interesting to test the same on one 100MB file with one thread and without the `gc` calls.
If the processing takes longer than the reading (i.e. not I/O bound), use one thread to read the data in a buffer (should take 1 or 2 seconds on one 100MB file if not already cached), and multiple threads to process the data (but still without those `gc` calls in that tight loop).
You don't have to split the data into multiple files in order to be able to use threads. Just let them process different parts of the same file (even with the 14GB file). |
42,620,323 | I am trying to parse many files found in a directory, however using multiprocessing slows my program.
```
# Calling my parsing function from Client.
L = getParsedFiles('/home/tony/Lab/slicedFiles') <--- 1000 .txt files found here.
combined ~100MB
```
Following this example from python documentation:
```
from multiprocessing import Pool
def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))
```
I've written this piece of code:
```
from multiprocessing import Pool
from api.ttypes import *
import gc
import os
def _parse(pathToFile):
    myList = []
    with open(pathToFile) as f:
        for line in f:
            s = line.split()
            x, y = [int(v) for v in s]
            obj = CoresetPoint(x, y)
            gc.disable()
            myList.append(obj)
            gc.enable()
    return Points(myList)

def getParsedFiles(pathToFile):
    myList = []
    p = Pool(2)
    for filename in os.listdir(pathToFile):
        if filename.endswith(".txt"):
            myList.append(filename)
    return p.map(_parse, myList)
```
I followed the example, put all the names of the files that end with a `.txt` in a list, then created Pools, and mapped them to my function. Then I want to return a list of objects. Each object holds the parsed data of a file. However it amazes me that I got the following results:
```
#Pool 32 ---> ~162(s)
#Pool 16 ---> ~150(s)
#Pool 12 ---> ~142(s)
#Pool 2 ---> ~130(s)
```
**Graph:**
[](https://i.stack.imgur.com/wVsZg.png)
Machine specification:
```none
62.8 GiB RAM
Intel® Core™ i7-6850K CPU @ 3.60GHz × 12
```
What am I missing here ?
Thanks in advance! | 2017/03/06 | [
"https://Stackoverflow.com/questions/42620323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6530695/"
] | Looks like you're [I/O bound](https://en.wikipedia.org/wiki/I/O_bound):
>
> In computer science, I/O bound refers to a condition in which the time it takes to complete a computation is determined principally by the period spent waiting for input/output operations to be completed. This is the opposite of a task being CPU bound. This circumstance arises when the rate at which data is requested is slower than the rate it is consumed or, in other words, more time is spent requesting data than processing it.
>
>
>
You probably need to have your main thread do the reading and add the data to the pool when a subprocess becomes available. This will be different to using `map`.
As you are processing a line at a time, and the inputs are split, you can use [**`fileinput`**](https://docs.python.org/2/library/fileinput.html) to iterate over lines of multiple files, and map to a function processing lines instead of files:
Passing one line at a time might be too slow, so we can ask map to pass chunks, and can adjust until we find a sweet-spot. Our function parses chunks of lines:
```
def _parse_coreset_points(lines):
    return Points([_parse_coreset_point(line) for line in lines])

def _parse_coreset_point(line):
    s = line.split()
    x, y = [int(v) for v in s]
    return CoresetPoint(x, y)
```
And our main function:
```
import fileinput
def getParsedFiles(directory):
    pool = Pool(2)
    txts = [filename for filename in os.listdir(directory)
            if filename.endswith(".txt")]
    return pool.imap(_parse_coreset_points, fileinput.input(txts), chunksize=100)
``` | A copy-paste snippet, for people who come from Google and don't like reading.
The example is for JSON reading; just replace `__single_json_loader` with a loader for another file type.
```
from multiprocessing import Pool
from typing import Callable, Any, Iterable
import os
import json
def parallel_file_read(existing_file_paths: Iterable[str], map_lambda: Callable[[str], Any]):
    result = {p: None for p in existing_file_paths}
    pool = Pool()
    for i, (temp_result, path) in enumerate(zip(pool.imap(map_lambda, existing_file_paths), result.keys())):
        result[path] = temp_result
    pool.close()
    pool.join()
    return result

def __single_json_loader(f_path: str):
    with open(f_path, "r") as f:
        return json.load(f)

def parallel_json_read(existing_file_paths: Iterable[str]):
    combined_result = parallel_file_read(existing_file_paths, __single_json_loader)
    return combined_result
```
And usage
```
if __name__ == "__main__":
    def main():
        directory_path = r"/path/to/my/file/directory"
        assert os.path.isdir(directory_path)
        d: os.DirEntry
        all_files_names = [f for f in os.listdir(directory_path)]
        all_files_paths = [os.path.join(directory_path, f_name) for f_name in all_files_names]
        assert(all(os.path.isfile(p) for p in all_files_paths))
        combined_result = parallel_json_read(all_files_paths)

    main()
```
Very straightforward to replace the JSON reader with any other reader, and you're done. |
42,620,323 | I am trying to parse many files found in a directory, however using multiprocessing slows my program.
```
# Calling my parsing function from Client.
L = getParsedFiles('/home/tony/Lab/slicedFiles') <--- 1000 .txt files found here.
combined ~100MB
```
Following this example from python documentation:
```
from multiprocessing import Pool
def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))
```
I've written this piece of code:
```
from multiprocessing import Pool
from api.ttypes import *
import gc
import os
def _parse(pathToFile):
myList = []
with open(pathToFile) as f:
for line in f:
s = line.split()
x, y = [int(v) for v in s]
obj = CoresetPoint(x, y)
gc.disable()
myList.append(obj)
gc.enable()
return Points(myList)
def getParsedFiles(pathToFile):
myList = []
p = Pool(2)
for filename in os.listdir(pathToFile):
if filename.endswith(".txt"):
myList.append(filename)
    return p.map(_parse, myList)
```
I followed the example, put all the names of the files that end with a `.txt` in a list, then created Pools, and mapped them to my function. Then I want to return a list of objects. Each object holds the parsed data of a file. However it amazes me that I got the following results:
```
#Pool 32 ---> ~162(s)
#Pool 16 ---> ~150(s)
#Pool 12 ---> ~142(s)
#Pool 2 ---> ~130(s)
```
**Graph:**
[](https://i.stack.imgur.com/wVsZg.png)
Machine specification:
```none
62.8 GiB RAM
Intel® Core™ i7-6850K CPU @ 3.60GHz × 12
```
What am I missing here?
Thanks in advance! | 2017/03/06 | [
"https://Stackoverflow.com/questions/42620323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6530695/"
] | In general it is never a good idea to read from the same physical (spinning) hard disk from different threads simultaneously, because every switch causes an extra delay of around 10ms to position the read head of the hard disk (would be different on SSD).
As @peter-wood already said, it is better to have one thread reading in the data, and have other threads processing that data.
Also, to really test the difference, I think you should do the test with some bigger files. For example: current hard disks should be able to read around 100MB/sec. So reading the data of a 100kB file in one go would take 1ms, while positioning the read head to the beginning of that file would take 10ms.
On the other hand, looking at your numbers (assuming those are for a single loop) it is hard to believe that being I/O bound is the only problem here. Total data is 100MB, which should take 1 second to read from disk plus some overhead, but your program takes 130 seconds. I don't know if that number is with the files cold on disk, or an average of multiple tests where the data is already cached by the OS (with 62 GB of RAM all that data should be cached the second time) - it would be interesting to see both numbers.
So there has to be something else. Let's take a closer look at your loop:
```
for line in f:
s = line.split()
x, y = [int(v) for v in s]
obj = CoresetPoint(x, y)
gc.disable()
myList.append(obj)
gc.enable()
```
While I don't know Python, my guess would be that the `gc` calls are the problem here. They are called for every line read from disk. I don't know how expensive those calls are (or what if `gc.enable()` triggers a garbage collection for example) and why they would be needed around `append(obj)` only, but there might be other problems because this is multithreading:
Assuming the `gc` object is global (i.e. not thread local) you could have something like this:
```
thread 1 : gc.disable()
# switch to thread 2
thread 2 : gc.disable()
thread 2 : myList.append(obj)
thread 2 : gc.enable()
# gc now enabled!
# switch back to thread 1 (or one of the other threads)
thread 1 : myList.append(obj)
thread 1 : gc.enable()
```
And if the number of threads <= number of cores, there wouldn't even be any switching, they would all be calling this at the same time.
Also, if the `gc` object is thread safe (it would be worse if it isn't) it would have to do some locking in order to safely alter its internal state, which would force all other threads to wait.
For example, `gc.disable()` would look something like this:
```
def disable()
lock() # all other threads are blocked for gc calls now
alter internal data
unlock()
```
And because `gc.disable()` and `gc.enable()` are called in a tight loop, this will hurt performance when using multiple threads.
So it would be better to remove those calls, or place them at the beginning and end of your program if they are really needed (or only disable `gc` at the beginning, no need to do `gc` right before quitting the program).
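A sketch of that placement: toggle the collector once around the whole parse loop instead of once per appended object (the input lines here are stand-ins for lines read from a file):

```python
import gc

lines = ["1 2", "3 4", "5 6"]  # stand-in for lines read from the file
points = []

gc.disable()  # once, before the tight loop
try:
    for line in lines:
        x, y = (int(v) for v in line.split())
        points.append((x, y))
finally:
    gc.enable()  # once, after the loop, even if parsing fails

print(points)  # [(1, 2), (3, 4), (5, 6)]
```

The `try`/`finally` guarantees the collector is re-enabled even if a malformed line raises.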
Depending on the way Python copies or moves objects, it might also be slightly better to use `myList.append(CoresetPoint(x, y))`.
So it would be interesting to test the same on one 100MB file with one thread and without the `gc` calls.
If the processing takes longer than the reading (i.e. not I/O bound), use one thread to read the data in a buffer (should take 1 or 2 seconds on one 100MB file if not already cached), and multiple threads to process the data (but still without those `gc` calls in that tight loop).
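A minimal sketch of that split, one thread touching the disk and several workers parsing, with a `queue.Queue` as the buffer (the file contents are made up for the demo):

```python
import os
import queue
import tempfile
import threading

def reader(path, q, n_workers):
    # the only thread that touches the disk
    with open(path) as f:
        for line in f:
            q.put(line)
    for _ in range(n_workers):  # one sentinel per worker
        q.put(None)

def worker(q, out, lock):
    while True:
        line = q.get()
        if line is None:
            break
        x, y = (int(v) for v in line.split())
        with lock:
            out.append((x, y))

# demo input, standing in for one of the .txt files
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("1 2\n3 4\n5 6\n")

q = queue.Queue(maxsize=1000)
out, lock = [], threading.Lock()
workers = [threading.Thread(target=worker, args=(q, out, lock)) for _ in range(2)]
for w in workers:
    w.start()
reader(path, q, len(workers))
for w in workers:
    w.join()
os.unlink(path)
print(sorted(out))  # [(1, 2), (3, 4), (5, 6)]
```

The bounded queue keeps the reader from racing far ahead of the workers, so memory use stays flat even on large files.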
You don't have to split the data into multiple files in order to be able to use threads. Just let them process different parts of the same file (even with the 14GB file). | A copy-paste snippet, for people who come from Google and don't like reading
Example is for json reading, just replace `__single_json_loader` with another file type to work with that.
```
from multiprocessing import Pool
from typing import Callable, Any, Iterable
import os
import json
def parallel_file_read(existing_file_paths: Iterable[str], map_lambda: Callable[[str], Any]):
result = {p: None for p in existing_file_paths}
pool = Pool()
for i, (temp_result, path) in enumerate(zip(pool.imap(map_lambda, existing_file_paths), result.keys())):
result[path] = temp_result
pool.close()
pool.join()
return result
def __single_json_loader(f_path: str):
with open(f_path, "r") as f:
return json.load(f)
def parallel_json_read(existing_file_paths: Iterable[str]):
combined_result = parallel_file_read(existing_file_paths, __single_json_loader)
return combined_result
```
And usage
```
if __name__ == "__main__":
def main():
directory_path = r"/path/to/my/file/directory"
assert os.path.isdir(directory_path)
d: os.DirEntry
all_files_names = [f for f in os.listdir(directory_path)]
all_files_paths = [os.path.join(directory_path, f_name) for f_name in all_files_names]
assert(all(os.path.isfile(p) for p in all_files_paths))
combined_result = parallel_json_read(all_files_paths)
main()
```
Very straightforward to replace the json reader with any other reader, and you're done.
56,465,109 | I am looking for an example of using python multiprocessing (i.e. a process-pool/threadpool, job queue etc.) with hylang. | 2019/06/05 | [
"https://Stackoverflow.com/questions/56465109",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7740698/"
] | The first example from the [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html) documentation can be literally translated to Hy like so:
```
(import multiprocessing [Pool])
(defn f [x]
(* x x))
(when (= __name__ "__main__")
(with [p (Pool 5)]
(print (.map p f [1 2 3]))))
``` | Note that a straightforward translation runs into a problem on macOS (which is not officially supported, but mostly works anyway): Hy sets `sys.executable` to the Hy interpreter, and `multiprocessing` relies on that value to start up new processes. You can work around that particular problem by calling `(multiprocessing.set_executable hy.sys_executable)`, but then it will fail to parse the file containing the Hy code itself, which it does again for some reason in the child process. So there doesn't seem to be a good solution for using multiprocessing with Hy running natively on a Mac.
Which is why we have Docker, I suppose. |
38,217,594 | [Distinguishable objects into distinguishable boxes](https://math.stackexchange.com/questions/468824/distinguishable-objects-into-distinguishable-boxes?rq=1)
It is very similar to this question posted.
I'm trying to get python code for this question.
Note that although it is similar, there is a key difference:
A bucket can be empty, while the other buckets contain all the items. Even this case will be considered as a separate case.
for example:
Consider I have 3 items A,B,C and 3 buckets B1, B2, B3
The table below will show the expected result:
```
B1 B2 B3
(A,B,C) () ()
() (A,B,C) ()
() () (A,B,C)
(A) (B) (C)
(A) (C) (B)
(B) (A) (C)
(B) (C) (A)
(C) (B) (A)
(C) (A) (B)
(A,B) (C) ()
(A,B) () (C)
(B,C) (A) ()
(B,C) () (A)
(A,C) (B) ()
(A,C) () (B)
() (A,B) (C)
(C) (A,B) ()
() (B,C) (A)
(A) (B,C) ()
() (A,C) (B)
(B) (A,C) ()
() (C) (A,B)
(C) () (A,B)
() (A) (B,C)
(A) () (B,C)
() (B) (A,C)
(B) () (A,C)
Length is 27.
```
```
>>def make_sets(items, num_of_baskets=3):
pass
>>make_sets(('A', 'B', 'C', 'D', 'E'), 3)
```
I'm expecting the output of the function to give me these combinations in the form of a list of lists of tuples. To repeat: the number of items is variable and the number of buckets is variable too.
\*\* Please provide python code for the make\_sets function.
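On the combinatorics: each item independently lands in one of the buckets, so with n items and k buckets there are k\*\*n assignments (3\*\*3 = 27, matching the table above). One possible sketch of `make_sets`, built on `itertools.product`:

```python
from itertools import product

def make_sets(items, num_of_baskets=3):
    # every item picks a basket index; product enumerates all k**n choices
    results = []
    for assignment in product(range(num_of_baskets), repeat=len(items)):
        baskets = [tuple(it for it, b in zip(items, assignment) if b == i)
                   for i in range(num_of_baskets)]
        results.append(baskets)
    return results

combos = make_sets(("A", "B", "C"), 3)
print(len(combos))  # 27
print(combos[0])    # [('A', 'B', 'C'), (), ()]
```

For 5 items in 3 buckets this gives 3\*\*5 = 243 combinations.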
If someone can explain the math combinatorics. I'd greatly appreciate that too. I spent more than 2 days on this problem without reaching a definite solution. | 2016/07/06 | [
"https://Stackoverflow.com/questions/38217594",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6055596/"
] | I think there is no way to combine more than one language in one editor.
Please refer to this link.
<https://www.tinymce.com/docs/configure/localization/#language>
TinyMCE is made for simplicity and ease of use. If you want more than one language pointing to one ID, please adjust your database design. | Actually, you can now add languages in TinyMCE by downloading different language packs and integrating them with your editor.
<https://www.tiny.cloud/docs/configure/localization/>
Here you will find the list of available language packs and how to use them.
50,311,713 | Hello, I'm trying to make a Python script to loop text and toggle through it. I'm able to get Python to toggle through the text once, but what I can't get it to do is to keep toggling through the text. After it toggles through the text once I get this message:
```
Traceback (most recent call last):
  File "test.py", line 24, in <module>
    hello()
  File "test.py", line 22, in hello
    hello()
TypeError: 'str' object is not callable
```
```
import time, sys, os
from colorama import init
from termcolor import colored
def hello():
os.system('cls')
init()
hello = '''Hello!'''
print(colored(hello,'green',))
time.sleep(1)
os.system('cls')
print(colored(hello,'blue',))
time.sleep(1)
os.system('cls')
print(colored(hello,'yellow',))
time.sleep(1)
os.system('cls')
hello()
hello()
``` | 2018/05/13 | [
"https://Stackoverflow.com/questions/50311713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9394080/"
] | >
> Is this not redundant??
>
>
>
Maybe it is redundant for instance methods and constructors.
It isn't redundant for static methods or class initialization pseudo-methods.
---
It is also possible that the (supposedly) redundant reference gets optimized away by the JIT compiler. (Or maybe it isn't optimized away ... because they have concluded that the redundancy leads to faster execution *on average*.) Or maybe the actual implementation of the JVM1 is just different.
Bear in mind that the JVM spec is describing an idealized stack frame. The actual implementation may be different ... provided that it *behaves* the way that the spec says it should.
---
On @EJP's point on normativeness, the only normative references for Java are the JLS and JVM specifications, and the Javadoc for the class library. You can also consult the source code of the JVM itself. The specifications say what *should* happen, and the code (in a sense) says what *does* happen. An article you might find in a published paper or a web article is not normative, and may well be incorrect or out of date.
---
1 - The actual implementation may vary from one version to the next, or between vendors. Furthermore, I have heard of a JVM implementation where a bytecode rewriter transformed from standard bytecodes to another abstract machine language at class load time. It wasn't a great idea from a performance perspective ... but it was certainly within the spirit of the JVM spec. | >
> The stack frame will contain the "current class constant pool reference" and also it will have the reference to the object in heap which in turn will also point to the class data. Is this not redundant??
>
>
>
You missed the precondition of that statement, or you misquoted it, or it was just plainly wrong where you saw it.
The "reference to the object in heap" is only added for non-static method, and it refers to the hidden `this` parameter.
As it says in section "[Local Variables Array](http://blog.jamesdbloom.com/JVMInternals.html#local_variables_array)":
>
> The array of local variables contains all the variables used during the execution of the method, including a reference to `this`, all method parameters and other locally defined variables. For class methods (i.e. static methods) the method parameters start from zero, however, **for instance method the zero slot is reserved for `this`**.
>
>
>
So, for static methods, there is no redundancy.
Could the constant pool reference be eliminated when `this` is present? Yes, but then there would need to be a different way to locate the constant pool reference, requiring different bytecode instructions, so that would be a different kind of redundancy.
Always having the constant pool reference available in a well-known location in the stack frame, simplifies the bytecode logic. |
50,311,713 | Hello, I'm trying to make a Python script to loop text and toggle through it. I'm able to get Python to toggle through the text once, but what I can't get it to do is to keep toggling through the text. After it toggles through the text once I get this message:
```
Traceback (most recent call last):
  File "test.py", line 24, in <module>
    hello()
  File "test.py", line 22, in hello
    hello()
TypeError: 'str' object is not callable
```
```
import time, sys, os
from colorama import init
from termcolor import colored
def hello():
os.system('cls')
init()
hello = '''Hello!'''
print(colored(hello,'green',))
time.sleep(1)
os.system('cls')
print(colored(hello,'blue',))
time.sleep(1)
os.system('cls')
print(colored(hello,'yellow',))
time.sleep(1)
os.system('cls')
hello()
hello()
``` | 2018/05/13 | [
"https://Stackoverflow.com/questions/50311713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9394080/"
] | >
> Is this not redundant??
>
>
>
Maybe it is redundant for instance methods and constructors.
It isn't redundant for static methods or class initialization pseudo-methods.
---
It is also possible that the (supposedly) redundant reference gets optimized away by the JIT compiler. (Or maybe it isn't optimized away ... because they have concluded that the redundancy leads to faster execution *on average*.) Or maybe the actual implementation of the JVM1 is just different.
Bear in mind that the JVM spec is describing an idealized stack frame. The actual implementation may be different ... provided that it *behaves* the way that the spec says it should.
---
On @EJP's point on normativeness, the only normative references for Java are the JLS and JVM specifications, and the Javadoc for the class library. You can also consult the source code of the JVM itself. The specifications say what *should* happen, and the code (in a sense) says what *does* happen. An article you might find in a published paper or a web article is not normative, and may well be incorrect or out of date.
---
1 - The actual implementation may vary from one version to the next, or between vendors. Furthermore, I have heard of a JVM implementation where a bytecode rewriter transformed from standard bytecodes to another abstract machine language at class load time. It wasn't a great idea from a performance perspective ... but it was certainly within the spirit of the JVM spec. | There are two points here. First, there are `static` methods which are invoked without a `this` reference. Second, the actual class of an object instance is not necessarily the declaring class of the method whose code we are actually executing. The purpose of the constant pool reference is to enable resolving of symbolic references and loading of constants referenced by the code. In both cases, we need the constant pool of the class containing the currently executed code, even if the method might be inherited by the actual class of the `this` reference (in case of a `private` method invoked by another inherited method, we have a method invoked with a `this` instance of a class which formally does not even inherit the method).
It might even be the case that the currently executed code is contained in an interface, so we never have instances of it, but still a class file with a constant pool which must be available when executing the code. This does not only apply to Java 8 and newer, which allow `static` and `default` methods in interfaces; earlier versions also might need to execute the `<clinit>` method of an interface to initialize its `static` fields.
By the way, even if an instance method is invoked with an object reference associated with `this` in its first local variable, there is no requirement for the bytecode instructions to keep it there. If not needed, it might get overwritten by an arbitrary value, reusing the variable slot for other purposes. This does not preclude that subsequent instructions need the constant pool, which, as said, does not need to belong to the actual class of `this` anyway.
Of course, that pool reference is a logical construct anyway. Implementations may transform the code to use a shared pool or not to need a pool at all when all references have been resolved already, etc. After inlining, code may not even have a dedicated stack frame anymore. |
50,311,713 | Hello, I'm trying to make a Python script to loop text and toggle through it. I'm able to get Python to toggle through the text once, but what I can't get it to do is to keep toggling through the text. After it toggles through the text once I get this message:
```
Traceback (most recent call last):
  File "test.py", line 24, in <module>
    hello()
  File "test.py", line 22, in hello
    hello()
TypeError: 'str' object is not callable
```
```
import time, sys, os
from colorama import init
from termcolor import colored
def hello():
os.system('cls')
init()
hello = '''Hello!'''
print(colored(hello,'green',))
time.sleep(1)
os.system('cls')
print(colored(hello,'blue',))
time.sleep(1)
os.system('cls')
print(colored(hello,'yellow',))
time.sleep(1)
os.system('cls')
hello()
hello()
``` | 2018/05/13 | [
"https://Stackoverflow.com/questions/50311713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9394080/"
] | >
> The stack frame will contain the "current class constant pool reference" and also it will have the reference to the object in heap which in turn will also point to the class data. Is this not redundant??
>
>
>
You missed the precondition of that statement, or you misquoted it, or it was just plainly wrong where you saw it.
The "reference to the object in heap" is only added for non-static method, and it refers to the hidden `this` parameter.
As it says in section "[Local Variables Array](http://blog.jamesdbloom.com/JVMInternals.html#local_variables_array)":
>
> The array of local variables contains all the variables used during the execution of the method, including a reference to `this`, all method parameters and other locally defined variables. For class methods (i.e. static methods) the method parameters start from zero, however, **for instance method the zero slot is reserved for `this`**.
>
>
>
So, for static methods, there is no redundancy.
Could the constant pool reference be eliminated when `this` is present? Yes, but then there would need to be a different way to locate the constant pool reference, requiring different bytecode instructions, so that would be a different kind of redundancy.
Always having the constant pool reference available in a well-known location in the stack frame, simplifies the bytecode logic. | There are two points here. First, there are `static` methods which are invoked without a `this` reference. Second, the actual class of an object instance is not necessarily the declaring class of the method whose code we are actually executing. The purpose of the constant pool reference is to enable resolving of symbolic references and loading of constants referenced by the code. In both cases, we need the constant pool of the class containing the currently executed code, even if the method might be inherited by the actual class of the `this` reference (in case of a `private` method invoked by another inherited method, we have a method invoked with a `this` instance of a class which formally does not even inherit the method).
It might even be the case that the currently executed code is contained in an interface, so we never have instances of it, but still a class file with a constant pool which must be available when executing the code. This does not only apply to Java 8 and newer, which allow `static` and `default` methods in interfaces; earlier versions also might need to execute the `<clinit>` method of an interface to initialize its `static` fields.
By the way, even if an instance method is invoked with an object reference associated with `this` in its first local variable, there is no requirement for the bytecode instructions to keep it there. If not needed, it might get overwritten by an arbitrary value, reusing the variable slot for other purposes. This does not preclude that subsequent instructions need the constant pool, which, as said, does not need to belong to the actual class of `this` anyway.
Of course, that pool reference is a logical construct anyway. Implementations may transform the code to use a shared pool or not to need a pool at all when all references have been resolved already, etc. After inlining, code may not even have a dedicated stack frame anymore. |
69,416,562 | I have this simple csv:
```
date,count
2020-07-09,144.0
2020-07-10,143.5
2020-07-12,145.5
2020-07-13,144.5
2020-07-14,146.0
2020-07-20,145.5
2020-07-21,146.0
2020-07-24,145.5
2020-07-28,143.0
2020-08-05,146.0
2020-08-10,147.0
2020-08-11,147.5
2020-08-14,146.5
2020-09-01,143.5
2020-09-02,143.0
2020-09-09,144.5
2020-09-10,143.5
2020-09-25,144.0
2021-09-21,132.4
2021-09-23,131.2
2021-09-25,131.0
2021-09-26,130.8
2021-09-27,130.6
2021-09-28,128.4
2021-09-30,126.8
2021-10-02,126.2
```
If I copy it into excel and scatter plot it, it looks like this
[](https://i.stack.imgur.com/ZNrCN.png)
This is correct; there should be a big gap in the middle (look carefully at the data, it jumps from 2020 to 2021)
However if I do this in python:
```
import matplotlib.pyplot as plt
import pandas as pd
data = pd.read_csv('data.csv')
data.plot.scatter('date', 'count')
plt.show()
```
It looks like this:
[](https://i.stack.imgur.com/e872e.png)
It evenly spaces them and the gap is gone. How do I stop that behavior? I tried to do
```
plt.xticks = data.date
```
But that didn't do anything different. | 2021/10/02 | [
"https://Stackoverflow.com/questions/69416562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7503046/"
] | I made some research and found this: [how to close server on ctrl+c when in no-daemon](https://github.com/Unitech/pm2/issues/2833#issuecomment-298560152)
```sh
pm2 kill && pm2 start ecosystem.json --only dev --no-daemon
```
It works if you run pm2 alone, but you are running 2 programs together, so try the script below:
```json
{
"scripts": {
"dev": "yarn pm2:del && yarn pm2:dev && yarn wp:dev && yarn pm2:del"
}
}
```
**How does it work?**
* first, kill all pm2 daemons
* start a pm2 daemon
* start webpack
* finally, kill all pm2 daemons again; this runs when you press `CTRL + C` | I've created a `dev.sh` script:
```
#!/bin/bash
yarn pm2:del
yarn pm2:dev
yarn wp:dev
yarn pm2:del
```
And run it using `yarn dev`:
```
"scripts": {
"dev": "sh ./scripts/dev.sh",
"pm2:dev": "pm2 start ecosystem.config.js --only dev",
"pm2:del": "pm2 delete all || exit 0",
"wp:dev": "webpack --mode=development --watch"
}
``` |
61,819,993 | I'm trying to run a Python script from a (windows/c#) background process. I'm successfully getting python.exe to run with the script file, but it's erroring out on the first line, "import pandas as pd". The exact error I'm getting from stderr is...
Traceback (most recent call last):
File "predictX.py", line 1, in
import pandas as pd
ModuleNotFoundError: No module named 'pandas'
When I run the script from an anaconda prompt, it runs fine. I copied the "Path" environment variable from the anaconda prompt and replicated that in my background process. Might there be any other environment variables it's looking for? Any other thoughts?
Thanks!! -- Curt | 2020/05/15 | [
"https://Stackoverflow.com/questions/61819993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13507069/"
] | You should install it on your machine before using it.
```
$ pip install pandas
```
Then it should work fine. If not, try uninstalling and reinstalling it.
[EDIT] Anaconda is a Python distribution that includes more modules than the original Python installer. That is why the script can run under Anaconda, but not with the original Python interpreter. | Pilot error...
Apparently there are at least two python.exe files on my computer. I changed the path to reflect the one under the Anaconda folder and everything came right up. |
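When the same script works in an Anaconda prompt but fails elsewhere, printing the interpreter location and module search path from both environments quickly shows which `python.exe` is actually running:

```python
import sys

# run this from both environments and compare the output
print(sys.executable)       # the interpreter actually being used
print(sys.version)          # its version
for entry in sys.path:      # where it looks for modules like pandas
    print(entry)
```

If the background process prints a different `sys.executable` than the Anaconda prompt, that is the interpreter missing pandas.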
14,974,659 | Please bear with me as I'm new to Python/Django/Unix in general.
I'm learning how to use different `settings.py` files for local and production environments. The following is from the section on the `--settings` option in [the official Django docs page on `django-admin.py`](https://docs.djangoproject.com/en/1.5/ref/django-admin/),
>
> --settings Example usage:
>
>
> django-admin.py syncdb --settings=mysite.settings
>
>
>
My project is structured as following:
```
mysite
L manage.py
L mysite
L __init__.py
L local.py
L urls.py
L production.py
L wsgi.py
```
However when I run the following command from the parent `mysite` directory,
>
> $ django-admin.py runserver --settings=mysite.local
>
>
>
I get the following error:
```
File "/Users/testuser/.virtualenvs/djdev/lib/python2.7/site-packages/django/conf/__init__.py", line 95, in __init__
raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'mysite.local' (Is it on sys.path?): No module named mysite.local
```
From what I gathered on various articles on the web, I think I need to add my project directory path to the `PYTHONPATH` variable in bash profile. Is this the right way to go?
EDIT: changed the slash to dot, but same error persists. | 2013/02/20 | [
"https://Stackoverflow.com/questions/14974659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/312462/"
] | the `--settings` flag takes a dotted Python path, not a relative path on your filesystem.
Meaning `--settings=mysite/local` should actually be `--settings=mysite.local`. If your current working directory is your project root when you run `django-admin`, then you shouldn't have to touch your `PYTHONPATH`. | You have to replace `/` with `.`
```
$ django-admin.py runserver --settings=mysite.local
```
You can update PYTHONPATH in the `manage.py` too. Inside `if __name__ == "__main__":` add the following.
```
import sys
sys.path.append(additional_path)
``` |
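The dotted form works because Django simply imports the settings module by name. A quick illustration of the difference, using the standard-library `json.decoder` module as a stand-in for a settings module:

```python
import importlib

# a dotted path is an import path, not a filesystem path
mod = importlib.import_module("json.decoder")   # like --settings=mysite.local
print(mod.__name__)                             # json.decoder

err = None
try:
    importlib.import_module("json/decoder")     # like --settings=mysite/local
except ModuleNotFoundError as exc:
    err = exc
print("slash form fails:", err is not None)     # slash form fails: True
```

That is also why the current working directory matters: the project package has to be importable from `sys.path`.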
22,429,004 | I have multiple forms in a html file, which all call the same python cgi script. For example:
```
<html>
<body>
<form method="POST" name="form1" action="script.cgi" enctype="multipart/form-data">
....
</form>
...
<form method="POST" name="form2" action="script.cgi" enctype="multipart/form-data">
...
</form>
...
</body>
</html>
```
And in my cgi script I do the following:
```
#!/usr/bin/python
import os
import cgi
print "content-type: text/html; charset=utf-8\n\n"
form = cgi.FieldStorage()
...
```
I am unable to get the data from the second from. I have tried to call FieldStorage multiple times, but that did not seem to work. So my question is how do I access different forms in the same cgi script? | 2014/03/15 | [
"https://Stackoverflow.com/questions/22429004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2415118/"
] | You cannot. The browser submits one form, or the other, but not both.
If you need data from both forms, merge the forms into one `<form>` tag instead. | First, `FieldStorage()` consumes standard input, so it should only be instantiated once.
Second, only the data in the submitted form is sent to the server. The other forms may
as well not exist.
So while you can use the same cgi script to process both forms, if you need process both forms at the same time, as Martijn suggested, merge the forms into one `<form>`. |
46,511,011 | The question has racked my brains
There are 26 underscores representing the English alphabet in sequence. A guess string such as `kj____r___________________` means that the letters a, b and g should be substituted by the letters k, j and r respectively, while all the other letters are not substituted.
How do I do this? How can Python tell which underscore corresponds to which letter of the alphabet?
I thought I could use `str.replace` to do this, but it's more difficult than I thought.
thanks | 2017/10/01 | [
"https://Stackoverflow.com/questions/46511011",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You could use `str.translate`:
```
In [8]: from string import ascii_lowercase
In [9]: text.translate({ord(l): l if g == '_' else g for g, l in zip(guess, ascii_lowercase)})
Out[9]: 'i km jen .'
```
This maps elements of `string.ascii_lowercase` to elements of `guess` (by position). If an element of `guess` is the underscore, the corresponding letter from `ascii_lowercase` is used instead. | If you had a list of the alphabet, then the list of underscores, enter a for loop and then just compare the two values, appending to a list if it does or doesn’t |
46,511,011 | The question has racked my brains
There are 26 underscores representing the English alphabet in sequence. A guess string such as `kj____r___________________` means that the letters a, b and g should be substituted by the letters k, j and r respectively, while all the other letters are not substituted.
How do I do this? How can Python tell which underscore corresponds to which letter of the alphabet?
I thought I could use `str.replace` to do this, but it's more difficult than I thought.
thanks | 2017/10/01 | [
"https://Stackoverflow.com/questions/46511011",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You could use `str.translate`:
```
In [8]: from string import ascii_lowercase
In [9]: text.translate({ord(l): l if g == '_' else g for g, l in zip(guess, ascii_lowercase)})
Out[9]: 'i km jen .'
```
This maps elements of `string.ascii_lowercase` to elements of `guess` (by position). If an element of `guess` is the underscore, the corresponding letter from `ascii_lowercase` is used instead. | ```
>>> text = "i am ben ."
>>> guess = "kj____r___________________"
>>> d = dict()
>>> for i in xrange(len(guess)):
... if(guess[i] != "_"):
... d[chr(i+97)] = guess[i]
...
>>> d
{'a': 'k', 'b': 'j', 'g': 'r'}
>>> text_list = list(text)
>>> text_list
['i', ' ', 'a', 'm', ' ', 'b', 'e', 'n', ' ', '.']
>>> for i in xrange(len(text_list)):
... if(text_list[i] in d):
... text_list[i] = d.get(text_list[i])
...
>>> text_list
['i', ' ', 'k', 'm', ' ', 'j', 'e', 'n', ' ', '.']
>>> text_final = "".join(text_list)
>>> text_final
'i km jen .'
>>>
``` |
46,511,011 | The question has racked my brains
There are 26 underscores representing the English alphabet in sequence. A guess string such as `kj____r___________________` means that the letters a, b and g should be substituted by the letters k, j and r respectively, while all the other letters are not substituted.
How do I do this? How can Python tell which underscore corresponds to which letter of the alphabet?
I thought I could use `str.replace` to do this, but it's more difficult than I thought.
thanks | 2017/10/01 | [
"https://Stackoverflow.com/questions/46511011",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You could use `str.translate`:
```
In [8]: from string import ascii_lowercase
In [9]: text.translate({ord(l): l if g == '_' else g for g, l in zip(guess, ascii_lowercase)})
Out[9]: 'i km jen .'
```
This maps elements of `string.ascii_lowercase` to elements of `guess` (by position). If an element of `guess` is the underscore, the corresponding letter from `ascii_lowercase` is used instead. | Zip them:
```
import string
zipped = list(zip(string.ascii_lowercase, "kj____r___________________"))  # list() so the pairs print and can be reused in Python 3
print(zipped)
# [('a', 'k'), ('b', 'j'), ('c', '_'), ('d', '_'), ('e', '_'), ('f', '_'), ('g', 'r'), ('h', '_'), ('i', '_'), ('j', '_'), ('k', '_'), ('l', '_'), ('m', '_'), ('n', '_'), ('o', '_'), ('p', '_'), ('q', '_'), ('r', '_'), ('s', '_'), ('t', '_'), ('u', '_'), ('v', '_'), ('w', '_'), ('x', '_'), ('y', '_'), ('z', '_')]
```
Convert it into a dict:
```
dict_ = dict(zipped)
print(dict_)
# {'a': 'k', 'b': 'j', 'c': '_', 'd': '_', 'e': '_', 'f': '_', 'g': 'r', 'h': '_', 'i': '_', 'j': '_', 'k': '_', 'l': '_', 'm': '_', 'n': '_', 'o': '_', 'p': '_', 'q': '_', 'r': '_', 's': '_', 't': '_', 'u': '_', 'v': '_', 'w': '_', 'x': '_', 'y': '_', 'z': '_'}
```
And then use a for loop for the substitution:
```
inp = "I am Ben."
result = ""
for letter in inp:
if letter in dict_:
        if dict_[letter] != "_":
result += dict_[letter]
continue
result += letter
```
It all combined:
```
def sub(text, criteria):
import string
dict_ = dict(zip(string.ascii_lowercase, criteria))
result = ""
for letter in text:
if letter in dict_:
if dict_[letter] != "_":
result += dict_[letter]
continue
result += letter
return result
>>> sub("I am Ben. abg abg", "kj____r___________________")
'I km Ben. kjr kjr'
``` |
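For comparison (my own sketch, not part of the answer above), Python 3's `str.maketrans` collapses the same letter-by-letter mapping into a single translation table:

```python
import string

guess = "kj____r___________________"
# hidden letters ("_") are simply left out of the table, so they pass through
table = str.maketrans({a: g for a, g in zip(string.ascii_lowercase, guess) if g != "_"})
print("i am ben .".translate(table))  # i km jen .
```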
46,511,011 | The question has racked my brains
There are 26 underscores representing the English alphabet in sequence.
It means that the letters a, b and g should be substituted by the letters k, j and r respectively, while all the other letters are not substituted.
How do I do this? How can python detect that each underscore corresponds to an English letter?
I thought I could use `str.replace` to do this but it's more difficult than I thought.
thanks | 2017/10/01 | [
"https://Stackoverflow.com/questions/46511011",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You could use `str.translate`:
```
In [8]: from string import ascii_lowercase
In [9]: text.translate({ord(l): l if g == '_' else g for g, l in zip(guess, ascii_lowercase)})
Out[9]: 'i km jen .'
```
This maps elements of `string.ascii_lowercase` to elements of `guess` (by position). If an element of `guess` is the underscore, the corresponding letter from `ascii_lowercase` is used instead. | Let's solve your issue step by step without making it too much complicated:
>
> First step :
>
>
>
So the first step is gathering the data, which the user provides or you already have:
Suppose you have one list of a-z alphabets and another list of underscores and replacement letters:
If you don't have them, let's gather the data:
a-z alphabet list :
```
alphabet_list=list(map(chr,range(97,123)))
```
it will give :
```
>>> print(alphabet_list)
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
```
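As a side note (my addition, not part of the original answer), the standard library already ships this sequence, so the `chr`/`range` construction can be cross-checked against it:

```python
import string

alphabet_list = list(map(chr, range(97, 123)))
# string.ascii_lowercase is exactly the same 26 letters
print(alphabet_list == list(string.ascii_lowercase))  # True
```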
next if you don't have underscore list :
```
underscore_list=[]
for i in range(1,27):
    underscore_list.append("_")
```
and let's modify this list a little to make it like yours:
```
modified_underscore_list=['k', 'j', '_', '_', '_', '_', 'g', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_']
```
>
> Second step:
>
>
>
Now you have everything needed to solve the problem, so let's work on the second step:
We have to map each element of the first list to the corresponding element of the other list and save the result in dictionary format.
(Remember that dict keys can't be duplicated, but values can be.)
So let's iterate over list one and list two and save the output in dictionary format.
To iterate over both lists at once we will use the built-in zip function:
```
final_output={} #we will save our iteration output in this dict
for i,j in zip(alphabet_list,modified_underscore_list):
final_output[i]=j
```
we can see the dict now :
```
{'p': '_', 'k': '_', 'w': '_', 't': '_', 'i': '_', 'c': '_', 'b': 'j', 'j': '_', 'a': 'k', 's': '_', 'g': 'g', 'x': '_', 'm': '_', 'l': '_', 'h': '_', 'o': '_', 'd': '_', 'n': '_', 'y': '_', 'r': '_', 'e': '_', 'u': '_', 'f': '_', 'v': '_', 'q': '_', 'z': '_'}
```
Now we have the mapped data.
Move to the third and last step:
>
> Third Step :
>
>
>
Now ask the user for input and check whether each character of the user's string is in our final dictionary; if yes, replace only "a", "b" and "g" with the values of those keys from our dictionary:
```
ask_input=str(input("enter string"))
ask=list(ask_input)
for i,j in enumerate(ask):
if j in final_output:
if j=="a" or j=="b" or j=="g":
ask[i]=final_output.get(j)
print("".join(ask))
```
So our full code would be :
```
alphabet_list=list(map(chr,range(97,123)))
modified_underscore_list=['k', 'j', '_', '_', '_', '_', 'g', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_', '_']
final_output={}
for i,j in zip(alphabet_list,modified_underscore_list):
final_output[i]=j
ask_input=str(input("enter string"))
ask=list(ask_input)
for i,j in enumerate(ask):
if j in final_output:
if j=="a" or j=="b" or j=="g":
ask[i]=final_output.get(j)
print("".join(ask))
``` |
73,646,972 | I am using the following function to estimate the Gaussian window rolling average of my timeseries. Though it works great from small size averaging windows, it crushes (or gets extremely slow) for larger averaging windows.
```
def norm_factor_Gauss_window(s, dt):
numer = np.arange(-3*s, 3*s+dt, dt)
multiplic_fac = np.exp(-(numer)**2/(2*s**2))
norm_factor = np.sum(multiplic_fac)
window = len(multiplic_fac)
return window, multiplic_fac, norm_factor
# Create dataframe for MRE
aa = np.sin(np.linspace(0,2*np.pi,1000000))+0.15*np.random.rand(1000000)
df = pd.DataFrame({'x':aa})
hmany = 10
dt = 1 # ['seconds']
s = hmany*dt # Define averaging window size ['s']
# Estimate multip factor, normalizatoon factor etc
window, multiplic_fac, norm_factor= norm_factor_Gauss_window(s, dt)
# averaged timeseries
res2 =(1/norm_factor)*df.x.rolling(window, center=True).apply(lambda x: (x * multiplic_fac).sum(), raw=True, engine='numba', engine_kwargs= {'nopython': True, 'parallel': True} , args=None, kwargs=None)
#Plot
plt.plot(df.x[0:2000])
plt.plot(res2[0:2000])
```
I am aware that people usually speed up moving average operations using convolve(e.g., [How to calculate rolling / moving average using python + NumPy / SciPy?](https://stackoverflow.com/questions/14313510/how-to-calculate-rolling-moving-average-using-python-numpy-scipy))
Would it be possible to use convolve here somehow to fix this issue? Also, are there any other suggestion that would help me speed up the operation for large averaging windows? | 2022/09/08 | [
"https://Stackoverflow.com/questions/73646972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15353940/"
] | Using [numba njit decorator](https://numba.pydata.org/numba-doc/latest/user/parallel.html?highlight=njit) on `norm_factor_Gauss_window` function on my pc I get a **10x** speed up (from 10µs to 1µs) on the execution time of this function.
```
import numba as nb
import numpy as np
@nb.njit(nogil=True)
def norm_factor_Gauss_window(s, dt):
numer = np.arange(-3*s, 3*s+dt, dt)
multiplic_fac = np.exp(-(numer)**2/(2*s**2))
norm_factor = np.sum(multiplic_fac)
window = len(multiplic_fac)
return window, multiplic_fac, norm_factor
```
This is not a big improvement when compared with the total execution time, which depends heavily on the rolling mean (900ms on my pc). With some adjustments I was able to get to 650ms (**-25%** execution time) by removing the keyword `'parallel'`, as in this case there is nothing that can be parallelized with this approach, as evidenced by the warning `'NumbaPerformanceWarning'`. I also removed the other keywords, as they are the default values.
```
df.x.rolling(window, center=True).apply(lambda x: (x * multiplic_fac).sum(),
raw=True, engine='numba')
``` | I was able to drastically improve the speed of this code using the following:
```
from scipy import signal
def norm_factor_Gauss_window(s, dt):
numer = np.arange(-3*s, 3*s+dt, dt)
multiplic_fac = np.exp(-(numer)**2/(2*s**2))
norm_factor = np.sum(multiplic_fac)
window = len(multiplic_fac)
return window, multiplic_fac, norm_factor
# Create dataframe for MRE
aa = np.sin(np.linspace(0,2*np.pi,1000000))+0.15*np.random.rand(1000000)
df = pd.DataFrame({'x':aa})
hmany = 10
dt = 1 # ['seconds']
s = hmany*dt # Define averaging window size ['s']
# Estimate multip factor, normalizatoon factor etc
window, multiplic_fac, norm_factor= norm_factor_Gauss_window(s, dt)
# averaged timeseries
res2 = (1/norm_factor)*signal.fftconvolve(df.x.values, multiplic_fac[::-1], 'same')
#Plot
plt.plot(df.x[0:2000])
plt.plot(res2[0:2000])
``` |
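As a sanity check on the `fftconvolve` swap (my own sketch, not from the answers above): for a symmetric Gaussian window like the one used here, the FFT-based convolution matches a direct convolution to floating-point precision, which is why the replacement is safe while being much faster for large windows:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.random(500)

s, dt = 2.0, 1.0                           # small window, same shape as in the question
numer = np.arange(-3 * s, 3 * s + dt, dt)
kernel = np.exp(-numer**2 / (2 * s**2))

direct = np.convolve(x, kernel, mode='same')       # O(n*k) sliding sum
fast = signal.fftconvolve(x, kernel, mode='same')  # O(n log n) via FFT
print(np.allclose(direct, fast))  # True
```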
8,765,568 | I am trying to make a windows executable from a python script that uses matplotlib and it seems that I am getting a common error.
>
> File "run.py", line 29, in
> import matplotlib.pyplot as plt File "matplotlib\pyplot.pyc", line 95, in File "matplotlib\backends\_\_init\_\_.pyc", line
> 25, in pylab\_setup ImportError: No module named backend\_tkagg
>
>
>
The problem is that I couldn't find a solution while googling all over the internet.
Here is my `setup.py`
```
from distutils.core import setup
import matplotlib
import py2exe
matplotlib.use('TkAgg')
setup(data_files=matplotlib.get_py2exe_datafiles(),console=['run.py'])
``` | 2012/01/06 | [
"https://Stackoverflow.com/questions/8765568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/842785/"
] | First, the easy question, is that backend installed? On my Fedora system I had to install it separately from the base matplotlib.
At a Python console can you:
```
>>> import matplotlib.backends.backend_tkagg
```
If that works, then force py2exe to include it. In your config:
```
opts = {
'py2exe': { "includes" : ["matplotlib.backends.backend_tkagg"] }
}
``` | If you are using py2exe it doesn't handle .egg formatted Python modules. If you used easy\_install to install the trouble module then you might only have the .egg version. See the py2exe site for more info on how to fix it.
<http://www.py2exe.org/index.cgi/ExeWithEggs> |
8,765,568 | I am trying to make a windows executable from a python script that uses matplotlib and it seems that I am getting a common error.
>
> File "run.py", line 29, in
> import matplotlib.pyplot as plt File "matplotlib\pyplot.pyc", line 95, in File "matplotlib\backends\_\_init\_\_.pyc", line
> 25, in pylab\_setup ImportError: No module named backend\_tkagg
>
>
>
The problem is that I couldn't find a solution while googling all over the internet.
Here is my `setup.py`
```
from distutils.core import setup
import matplotlib
import py2exe
matplotlib.use('TkAgg')
setup(data_files=matplotlib.get_py2exe_datafiles(),console=['run.py'])
``` | 2012/01/06 | [
"https://Stackoverflow.com/questions/8765568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/842785/"
] | First, the easy question, is that backend installed? On my Fedora system I had to install it separately from the base matplotlib.
At a Python console can you:
```
>>> import matplotlib.backends.backend_tkagg
```
If that works, then force py2exe to include it. In your config:
```
opts = {
'py2exe': { "includes" : ["matplotlib.backends.backend_tkagg"] }
}
``` | This works well
from distutils.core import setup
import py2exe, sys, os
import matplotlib
sys.setrecursionlimit(12000)
sys.argv.append('py2exe')
setup(
options = {
"py2exe" : {
"bundle_files":3,
"compressed":True,
"includes" : ["matplotlib.backends.backend_tkagg"]
}
},
windows = [{"script": "script.py"}],
zipfile = None,
data_files = matplotlib.get_py2exe_datafiles(),
) |
8,765,568 | I am trying to make a windows executable from a python script that uses matplotlib and it seems that I am getting a common error.
>
> File "run.py", line 29, in
> import matplotlib.pyplot as plt File "matplotlib\pyplot.pyc", line 95, in File "matplotlib\backends\_\_init\_\_.pyc", line
> 25, in pylab\_setup ImportError: No module named backend\_tkagg
>
>
>
The problem is that I couldn't find a solution while googling all over the internet.
Here is my `setup.py`
```
from distutils.core import setup
import matplotlib
import py2exe
matplotlib.use('TkAgg')
setup(data_files=matplotlib.get_py2exe_datafiles(),console=['run.py'])
``` | 2012/01/06 | [
"https://Stackoverflow.com/questions/8765568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/842785/"
] | First, the easy question, is that backend installed? On my Fedora system I had to install it separately from the base matplotlib.
At a Python console can you:
```
>>> import matplotlib.backends.backend_tkagg
```
If that works, then force py2exe to include it. In your config:
```
opts = {
'py2exe': { "includes" : ["matplotlib.backends.backend_tkagg"] }
}
``` | Run the following command to install the backend\_tkagg
For centos -- **sudo yum install python-matplotlib-tk**
This should work. |
8,765,568 | I am trying to make a windows executable from a python script that uses matplotlib and it seems that I am getting a common error.
>
> File "run.py", line 29, in
> import matplotlib.pyplot as plt File "matplotlib\pyplot.pyc", line 95, in File "matplotlib\backends\_\_init\_\_.pyc", line
> 25, in pylab\_setup ImportError: No module named backend\_tkagg
>
>
>
The problem is that I couldn't find a solution while googling all over the internet.
Here is my `setup.py`
```
from distutils.core import setup
import matplotlib
import py2exe
matplotlib.use('TkAgg')
setup(data_files=matplotlib.get_py2exe_datafiles(),console=['run.py'])
``` | 2012/01/06 | [
"https://Stackoverflow.com/questions/8765568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/842785/"
] | If you are using py2exe it doesn't handle .egg formatted Python modules. If you used easy\_install to install the trouble module then you might only have the .egg version. See the py2exe site for more info on how to fix it.
<http://www.py2exe.org/index.cgi/ExeWithEggs> | This works well
from distutils.core import setup
import py2exe, sys, os
import matplotlib
sys.setrecursionlimit(12000)
sys.argv.append('py2exe')
setup(
options = {
"py2exe" : {
"bundle_files":3,
"compressed":True,
"includes" : ["matplotlib.backends.backend_tkagg"]
}
},
windows = [{"script": "script.py"}],
zipfile = None,
data_files = matplotlib.get_py2exe_datafiles(),
) |
8,765,568 | I am trying to make a windows executable from a python script that uses matplotlib and it seems that I am getting a common error.
>
> File "run.py", line 29, in
> import matplotlib.pyplot as plt File "matplotlib\pyplot.pyc", line 95, in File "matplotlib\backends\_\_init\_\_.pyc", line
> 25, in pylab\_setup ImportError: No module named backend\_tkagg
>
>
>
The problem is that I couldn't find a solution while googling all over the internet.
Here is my `setup.py`
```
from distutils.core import setup
import matplotlib
import py2exe
matplotlib.use('TkAgg')
setup(data_files=matplotlib.get_py2exe_datafiles(),console=['run.py'])
``` | 2012/01/06 | [
"https://Stackoverflow.com/questions/8765568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/842785/"
] | If you are using py2exe it doesn't handle .egg formatted Python modules. If you used easy\_install to install the trouble module then you might only have the .egg version. See the py2exe site for more info on how to fix it.
<http://www.py2exe.org/index.cgi/ExeWithEggs> | Run the following command to install the backend\_tkagg
For centos -- **sudo yum install python-matplotlib-tk**
This should work. |
46,006,513 | I'm trying to evaluate the accuracy of an algorithm that segments regions in 3D MRI Volumes (Brain). I've been using Dice, Jaccard, FPR, TNR, Precision... etc but I've only done this pixelwise (I.E. FNs= number of false neg pixels). Is there a python package (or pseudo code) out there to do this at the lesion level? For example, calculate TPs as number of lesions (3d disconnected objects in grd trth) detected by my algorithm? This way the size of the lesion doesn't play as much of an effect on the accuracy metrics. | 2017/09/01 | [
"https://Stackoverflow.com/questions/46006513",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7914014/"
] | You could use scipy's [`label`](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.measurements.label.html) to find connected components in an image:
```
from scipy.ndimage.measurements import label
label_pred, numobj_pred = label(my_predictions)
label_true, numobj_true = label(my_groundtruth)
```
And then compare them using the metric of your choice.
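For instance (a hedged sketch of mine, not from the answer), a ground-truth lesion can be counted as a true positive when any of its labelled voxels is predicted positive, which makes the count independent of lesion size:

```python
import numpy as np
from scipy.ndimage import label

truth = np.zeros((10, 10), dtype=int)
truth[1:3, 1:3] = 1        # ground-truth lesion A
truth[6:9, 6:9] = 1        # ground-truth lesion B
pred = np.zeros_like(truth)
pred[2, 2] = 1             # the detector fires inside lesion A only

lab_true, n_true = label(truth)
# a lesion is detected (TP) if any of its voxels overlaps a prediction
tp = sum(int(pred[lab_true == i].any()) for i in range(1, n_true + 1))
fn = n_true - tp
print(tp, fn)  # 1 1
```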
PS: Or scikit-image's, with a demo [here](http://www.scipy-lectures.org/packages/scikit-image/auto_examples/plot_labels.html). | Here is the code I ended up writing to do this task. Please let me know if anyone sees any errors.
```
import math
import numpy as np
from scipy.ndimage.measurements import label, center_of_mass

def distance(p1, p2, dim):
if dim==2: return math.sqrt((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2)
elif dim==3: return math.sqrt((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2+ (p2[2] - p1[2])**2)
    else: print('error')
def closest(true_cntrd,pred_pts,dim):
dist_list=[]
for pred_pt in pred_pts:
dist_list.append( distance(true_cntrd, pred_pt,dim) )
min_idx = np.argmin(dist_list)
return pred_pts[min_idx],min_idx
def eval_disconnected(y_true,y_pred,dim):
y_pred=y_pred>0.5
label_pred, numobj_pred = label(y_pred)
label_true, numobj_true = label(y_true)
true_labels,pred_labels=np.arange(numobj_true+1)[1:],np.arange(numobj_pred+1)[1:]
true_centroids=center_of_mass(y_true,label_true,true_labels)
pred_centroids=center_of_mass(y_pred,label_pred,pred_labels)
if len(pred_labels)==0:
TP,FN,FP=0,len(true_centroids),0
return TP,FN,FP
true_lbl_hit_list=[]
pred_lbl_hit_list=[]
for (cntr_true,lbl_t) in zip(true_centroids,np.arange(numobj_true+1)[1:]):
closest_pred_cntr,idx = closest(cntr_true,pred_centroids,dim)
closest_pred_cntr=tuple(int(coor) for coor in closest_pred_cntr)
if label_true[closest_pred_cntr]==lbl_t:
true_lbl_hit_list.append(lbl_t)
pred_lbl_hit_list.append(pred_labels[idx] )
pred_lbl_miss_list = [pred_lbl for pred_lbl in pred_labels if not(pred_lbl in pred_lbl_hit_list)]
true_lbl_miss_list = [true_lbl for true_lbl in true_labels if not(true_lbl in true_lbl_hit_list)]
TP=len(true_lbl_hit_list) # all the grd truth labels that were predicted
FN=len(true_lbl_miss_list) # all the grd trth labels that were missed
FP=len(pred_lbl_miss_list) # all of the predicted labels that didn't hit
return TP,FN,FP
``` |
67,959,301 | I want to print the time exactly every minute
```
import time
from datetime import datetime
while True:
time.sleep(1)
now = datetime.now()
current_datetime = now.strftime("%d-%m-%Y %H:%M:%S")
if current_datetime==today.strftime("%d-%m-%Y") + "09:15:00":
sec = 60
time.sleep(sec)
print("time : ", current_datetime)
```
I am trying to achieve these steps.
1. Start running the code at or before 09 am.
2. check if exactly 09.15 am today
3. print the time
4. Run after exactly 1 min and print time.
Output :
```
'2021-06-14 09:15:00+05:30'
'2021-06-14 09:16:00+05:30'
'2021-06-14 09:17:00+05:30'
'2021-06-14 09:18:00+05:30'
'2021-06-14 09:19:00+05:30'
'2021-06-14 09:20:00+05:30'
```
and so on till '2021-06-14 14:30:00+05:30'
What is the best pythonic way to do this? | 2021/06/13 | [
"https://Stackoverflow.com/questions/67959301",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778942/"
] | No. They are different things. Auto-incremented columns in MySQL are not guaranteed to be gapless. Gaps can occur for multiple reasons. The most common are:
* Concurrent transactions.
* Deletion.
It sounds like you have a unique identifier in Java which is either redundant or an item of data. If the latter, then add it as an additional column.
More likely, though, you might want to reconsider your design, so there is only one auto-incremented value for a given record. I would recommend using the one in the database, because that would apply regardless of how inserts are made into the database. | It isn't compulsory to create a unique id field in the database. You can instead change the table like-->
```
CREATE TABLE companies (
`COMPANYID` int NOT NULL,
`NAME` varchar(200) DEFAULT NULL,
`EMAIL` varchar(200) DEFAULT NULL,
`PASSWORD` varchar(200) DEFAULT NULL,
PRIMARY KEY (`COMPANYID`)
);
```
Since you are auto-incrementing the same value twice, it will create some problems.
your ID column will be like this-->
```
Id|
---
2 |
---
4 |
--
6 |
--
8 |
```
it will increment the values twice |
39,852,963 | I have the following list of tuples already sorted, with "sorted" in python:
```
L = [("1","blaabal"),
("1.2","bbalab"),
("10","ejej"),
("11.1","aaua"),
("12.1","ehjej"),
("12.2 (c)", "ekeke"),
("12.2 (d)", "qwerty"),
("2.1","baala"),
("3","yuio"),
("4","poku"),
("5.2","qsdfg")]
```
My problem is, as you can notice, that at first it is good, though after "12.2 (d)" the list restarts at "2.1". I don't know how to solve this problem.
Thanks | 2016/10/04 | [
"https://Stackoverflow.com/questions/39852963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6726377/"
] | Since the first element in each tuple is a string, Python is performing lexicographic sorting in which all strings that start with `'1'` come before strings that start with a `'2'`.
To get the sorting you desire, you'll want to treat the first entry *as a `float`* instead of a string.
We can use `sorted` along with a custom sorting function which converts the first entry to a float prior to sorting. It also keeps the second tuple element to handle the case when you may have non-unique first entries.
```
result = sorted(L, key = lambda x: (float(x[0].split()[0]), x[1]))
# [('1', 'blaabal'), ('1.2', 'bbalab'), ('2.1', 'baala'), ('3', 'yuio'), ('4', 'poku'), ('5.2', 'qsdfg'), ('10', 'ejej'), ('11.1', 'aaua'), ('12.1', 'ehjej'), ('12.2 (c)', 'ekeke'), ('12.2 (d)', 'qwerty')]
```
I had to add in a `x[0].split()[0]` so that we split the first tuple element at the space and only grab the first pieces since some have values such as `'12.2 (d)'` and we only want the `'12.2'`.
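As a quick illustration (my own snippet), the split strips the parenthesised suffix before the float conversion:

```python
entries = ["12.2 (c)", "3", "11.1"]
# keep only the numeric part before the first space, then convert
keys = [float(s.split()[0]) for s in entries]
print(keys)  # [12.2, 3.0, 11.1]
```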
If the second part of that first element that we've discarded matters, then you could use a sorting function similar to the following which breaks that first element into pieces and converts just the first piece to a float and leaves the rest as strings.
```
def sorter(value):
parts = value[0].split()
# Convert the first part to a number and leave all other parts as strings
parts[0] = float(parts[0]);
return (parts, value[1])
result = sorted(L, key = sorter)
``` | The first value of your tuples are strings, and are being sorted in lexicographic order. If you want them to remain strings, sort with
```
sorted(l, key = lambda x: float(x[0]))
``` |
39,852,963 | I have the following list of tuples already sorted, with "sorted" in python:
```
L = [("1","blaabal"),
("1.2","bbalab"),
("10","ejej"),
("11.1","aaua"),
("12.1","ehjej"),
("12.2 (c)", "ekeke"),
("12.2 (d)", "qwerty"),
("2.1","baala"),
("3","yuio"),
("4","poku"),
("5.2","qsdfg")]
```
My problem is, as you can notice, that at first it is good, though after "12.2 (d)" the list restarts at "2.1". I don't know how to solve this problem.
Thanks | 2016/10/04 | [
"https://Stackoverflow.com/questions/39852963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6726377/"
] | There's a package made specifically for your case called [`natsort`](https://pypi.python.org/pypi/natsort):
```
>>> from natsort import natsorted
>>> L = [('1', 'blaabal'), ('4', 'poku'), ('12.2 (c)', 'ekeke'), ('12.1', 'ehjej')]
>>> natsorted(L)
[('1', 'blaabal'), ('4', 'poku'), ('12.1', 'ehjej'), ('12.2 (c)', 'ekeke')]
``` | The first value of your tuples are strings, and are being sorted in lexicographic order. If you want them to remain strings, sort with
```
sorted(l, key = lambda x: float(x[0]))
``` |
21,699,251 | I got a function that calls an exec in a **node.js** server. I'm really lost about getting the stdout back. This is the function:
```
function callPythonFile(args) {
out = null
var exec = require('child_process').exec,
child;
child = exec("../Prácticas/python/Taylor.py 'sin(w)' -10 10 0 10",
function (error, stdout, stderr) {
console.log('stderr: ' + stderr)
if (error !== null)
console.log('exec error: ' + error);
out = stdout
})
return out
}
```
When I call `console.log(stdout)` I actually get an output. But when I try to print the output outside the function, it is always null. I can't really see how I can get it | 2014/02/11 | [
"https://Stackoverflow.com/questions/21699251",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/742560/"
] | Because you return from the function before the exec is finished and the callback is executed.
Exec in this case is asynchronous and unfortunately there is no synchronous exec in node.js in the last version (0.10.x).
There are two ways to do what you are trying to do.
Wait until the exec is done
---------------------------
```
var exec = require('child_process').exec;
function callPythonFile (args, callback) {
  exec("../Prácticas/python/Taylor.py 'sin(w)' -10 10 0 10",
    function (error, stdout, stderr) {
      if (error !== null)
        return callback(error);
      callback(null, stdout);
});
}
//then you call the function like this:
callPythonFile(args , function (err, out) {
console.log('output is', out);
});
```
You will see this pattern a lot in node.js, instead of returning something you have to pass a callback.
Return a ChildProcess object
----------------------------
The exec function returns a [ChildProcess](http://nodejs.org/api/child_process.html#child_process_class_childprocess) object which is basically an EventEmitter and has two important properties `stdout` and `stderr`:
```
var exec = require('child_process').exec,
function callPythonFile (args) {
return exec("../Prácticas/python/Taylor.py 'sin(w)' -10 10 0 10");
}
//then you call the function like this:
var proc = callPythonFile(args)
proc.stdout.on('data', function (data) {
//do something with data
});
proc.on('error', function (err) {
//handle the error
});
```
The interesting thing is that stdout and stderr are streams, so you can basically `pipe` to files, http responses, etc. and there are plenty of modules to handle streams. This is an http server that always call the process and reply with the stdout of the process:
```
var http = require('http');
http.createServer(function (req, res) {
callPythonFile(args).stdout.pipe(res);
}).listen(8080);
``` | Have a look here about the `exec`: [nodejs doc](http://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback).
The callback function does not really return anything. So if you want to "return" the output, why don't you just read the stream and return the resulting string ([nodejs doc](http://nodejs.org/api/stream.html#stream_readable_read_size))? |
3,289,330 | I have 5 python cgi pages. I can navigate from one page to another. All pages get their data from the same database table just that they use different queries.
The problem is that the application as a whole is slow. Though they connect to the same database, each page creates a new handle every time I visit it and handles are not shared by the pages.
I want to improve performance.
Can I do that by setting up sessions for the user?
Suggestions/Advices are welcome.
Thanks | 2010/07/20 | [
"https://Stackoverflow.com/questions/3289330",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343409/"
] | cgi requires a new interpreter to start up for each request, and then all the resources such as db connections have to be acquired and released.
[fastcgi](http://en.wikipedia.org/wiki/FastCGI) or [wsgi](http://en.wikipedia.org/wiki/Wsgi) improve performance by allowing you to keep running the same process between requests | Django and Pylons are both frameworks that solve this problem quite nicely, namely by abstracting the DB-frontend integration. They are worth considering. |
24,863,576 | I have a python script that has a \_\_main\_\_ statement and takes all its values as parameters.
I want to import and use it in my own script.
Actually I can import but don't know how to use it.
As you see below, \_\_main\_\_ is a bit complicated and rewriting it will take time because I don't even know what most of the code means.
I want to know if there is any way to import and use the code as a function.
```
import os
import sys
import time
import base64
from urllib2 import urlopen
from urllib2 import Request
from urllib2 import HTTPError
from urllib import urlencode
from urllib import quote
from exceptions import Exception
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email.mime.application import MIMEApplication
from email.encoders import encode_noop
from api_util import json2python, python2json
class MalformedResponse(Exception):
pass
class RequestError(Exception):
pass
class Client(object):
default_url = 'http://nova.astrometry.net/api/'
def __init__(self,
apiurl = default_url):
self.session = None
self.apiurl = apiurl
def get_url(self, service):
return self.apiurl + service
def send_request(self, service, args={}, file_args=None):
'''
service: string
args: dict
'''
if self.session is not None:
args.update({ 'session' : self.session })
print 'Python:', args
json = python2json(args)
print 'Sending json:', json
url = self.get_url(service)
print 'Sending to URL:', url
# If we're sending a file, format a multipart/form-data
if file_args is not None:
m1 = MIMEBase('text', 'plain')
m1.add_header('Content-disposition', 'form-data; name="request-json"')
m1.set_payload(json)
m2 = MIMEApplication(file_args[1],'octet-stream',encode_noop)
m2.add_header('Content-disposition',
'form-data; name="file"; filename="%s"' % file_args[0])
#msg.add_header('Content-Disposition', 'attachment',
# filename='bud.gif')
#msg.add_header('Content-Disposition', 'attachment',
# filename=('iso-8859-1', '', 'FuSballer.ppt'))
mp = MIMEMultipart('form-data', None, [m1, m2])
# Makie a custom generator to format it the way we need.
from cStringIO import StringIO
from email.generator import Generator
class MyGenerator(Generator):
def __init__(self, fp, root=True):
Generator.__init__(self, fp, mangle_from_=False,
maxheaderlen=0)
self.root = root
def _write_headers(self, msg):
# We don't want to write the top-level headers;
# they go into Request(headers) instead.
if self.root:
return
# We need to use \r\n line-terminator, but Generator
# doesn't provide the flexibility to override, so we
# have to copy-n-paste-n-modify.
for h, v in msg.items():
print >> self._fp, ('%s: %s\r\n' % (h,v)),
# A blank line always separates headers from body
print >> self._fp, '\r\n',
# The _write_multipart method calls "clone" for the
# subparts. We hijack that, setting root=False
def clone(self, fp):
return MyGenerator(fp, root=False)
fp = StringIO()
g = MyGenerator(fp)
g.flatten(mp)
data = fp.getvalue()
headers = {'Content-type': mp.get('Content-type')}
if False:
print 'Sending headers:'
print ' ', headers
print 'Sending data:'
print data[:1024].replace('\n', '\\n\n').replace('\r', '\\r')
if len(data) > 1024:
print '...'
print data[-256:].replace('\n', '\\n\n').replace('\r', '\\r')
print
else:
# Else send x-www-form-encoded
data = {'request-json': json}
print 'Sending form data:', data
data = urlencode(data)
print 'Sending data:', data
headers = {}
request = Request(url=url, headers=headers, data=data)
try:
f = urlopen(request)
txt = f.read()
print 'Got json:', txt
result = json2python(txt)
print 'Got result:', result
stat = result.get('status')
print 'Got status:', stat
if stat == 'error':
errstr = result.get('errormessage', '(none)')
raise RequestError('server error message: ' + errstr)
return result
except HTTPError, e:
print 'HTTPError', e
txt = e.read()
open('err.html', 'wb').write(txt)
print 'Wrote error text to err.html'
def login(self, apikey):
args = { 'apikey' : apikey }
result = self.send_request('login', args)
sess = result.get('session')
print 'Got session:', sess
if not sess:
raise RequestError('no session in result')
self.session = sess
def _get_upload_args(self, **kwargs):
args = {}
for key,default,typ in [('allow_commercial_use', 'd', str),
('allow_modifications', 'd', str),
('publicly_visible', 'y', str),
('scale_units', None, str),
('scale_type', None, str),
('scale_lower', None, float),
('scale_upper', None, float),
('scale_est', None, float),
('scale_err', None, float),
('center_ra', None, float),
('center_dec', None, float),
('radius', None, float),
('downsample_factor', None, int),
('tweak_order', None, int),
('crpix_center', None, bool),
# image_width, image_height
]:
if key in kwargs:
val = kwargs.pop(key)
val = typ(val)
args.update({key: val})
elif default is not None:
args.update({key: default})
print 'Upload args:', args
return args
def url_upload(self, url, **kwargs):
args = dict(url=url)
args.update(self._get_upload_args(**kwargs))
result = self.send_request('url_upload', args)
return result
def upload(self, fn, **kwargs):
args = self._get_upload_args(**kwargs)
try:
f = open(fn, 'rb')
result = self.send_request('upload', args, (fn, f.read()))
return result
except IOError:
print 'File %s does not exist' % fn
raise
def submission_images(self, subid):
result = self.send_request('submission_images', {'subid':subid})
return result.get('image_ids')
def overlay_plot(self, service, outfn, wcsfn, wcsext=0):
from astrometry.util import util as anutil
wcs = anutil.Tan(wcsfn, wcsext)
params = dict(crval1 = wcs.crval[0], crval2 = wcs.crval[1],
crpix1 = wcs.crpix[0], crpix2 = wcs.crpix[1],
cd11 = wcs.cd[0], cd12 = wcs.cd[1],
cd21 = wcs.cd[2], cd22 = wcs.cd[3],
imagew = wcs.imagew, imageh = wcs.imageh)
result = self.send_request(service, {'wcs':params})
print 'Result status:', result['status']
plotdata = result['plot']
plotdata = base64.b64decode(plotdata)
open(outfn, 'wb').write(plotdata)
print 'Wrote', outfn
def sdss_plot(self, outfn, wcsfn, wcsext=0):
return self.overlay_plot('sdss_image_for_wcs', outfn,
wcsfn, wcsext)
def galex_plot(self, outfn, wcsfn, wcsext=0):
return self.overlay_plot('galex_image_for_wcs', outfn,
wcsfn, wcsext)
def myjobs(self):
result = self.send_request('myjobs/')
return result['jobs']
def job_status(self, job_id, justdict=False):
result = self.send_request('jobs/%s' % job_id)
if justdict:
return result
stat = result.get('status')
if stat == 'success':
result = self.send_request('jobs/%s/calibration' % job_id)
print 'Calibration:', result
result = self.send_request('jobs/%s/tags' % job_id)
print 'Tags:', result
result = self.send_request('jobs/%s/machine_tags' % job_id)
print 'Machine Tags:', result
result = self.send_request('jobs/%s/objects_in_field' % job_id)
print 'Objects in field:', result
result = self.send_request('jobs/%s/annotations' % job_id)
print 'Annotations:', result
result = self.send_request('jobs/%s/info' % job_id)
print 'Calibration:', result
return stat
def sub_status(self, sub_id, justdict=False):
result = self.send_request('submissions/%s' % sub_id)
if justdict:
return result
return result.get('status')
def jobs_by_tag(self, tag, exact):
exact_option = 'exact=yes' if exact else ''
result = self.send_request(
'jobs_by_tag?query=%s&%s' % (quote(tag.strip()), exact_option),
{},
)
return result
if __name__ == '__main__':
import optparse
parser = optparse.OptionParser()
parser.add_option('--server', dest='server', default=Client.default_url,
help='Set server base URL (eg, %default)')
parser.add_option('--apikey', '-k', dest='apikey',
help='API key for Astrometry.net web service; if not given will check AN_API_KEY environment variable')
parser.add_option('--upload', '-u', dest='upload', help='Upload a file')
parser.add_option('--wait', '-w', dest='wait', action='store_true', help='After submitting, monitor job status')
parser.add_option('--wcs', dest='wcs', help='Download resulting wcs.fits file, saving to given filename; implies --wait if --urlupload or --upload')
parser.add_option('--kmz', dest='kmz', help='Download resulting kmz file, saving to given filename; implies --wait if --urlupload or --upload')
parser.add_option('--urlupload', '-U', dest='upload_url', help='Upload a file at specified url')
parser.add_option('--scale-units', dest='scale_units',
choices=('arcsecperpix', 'arcminwidth', 'degwidth', 'focalmm'), help='Units for scale estimate')
#parser.add_option('--scale-type', dest='scale_type',
# choices=('ul', 'ev'), help='Scale bounds: lower/upper or estimate/error')
parser.add_option('--scale-lower', dest='scale_lower', type=float, help='Scale lower-bound')
parser.add_option('--scale-upper', dest='scale_upper', type=float, help='Scale upper-bound')
parser.add_option('--scale-est', dest='scale_est', type=float, help='Scale estimate')
parser.add_option('--scale-err', dest='scale_err', type=float, help='Scale estimate error (in PERCENT), eg "10" if you estimate can be off by 10%')
parser.add_option('--ra', dest='center_ra', type=float, help='RA center')
parser.add_option('--dec', dest='center_dec', type=float, help='Dec center')
parser.add_option('--radius', dest='radius', type=float, help='Search radius around RA,Dec center')
parser.add_option('--downsample', dest='downsample_factor', type=int, help='Downsample image by this factor')
parser.add_option('--parity', dest='parity', choices=('0','1'), help='Parity (flip) of image')
parser.add_option('--tweak-order', dest='tweak_order', type=int, help='SIP distortion order (default: 2)')
parser.add_option('--crpix-center', dest='crpix_center', action='store_true', default=None, help='Set reference point to center of image?')
parser.add_option('--sdss', dest='sdss_wcs', nargs=2, help='Plot SDSS image for the given WCS file; write plot to given PNG filename')
parser.add_option('--galex', dest='galex_wcs', nargs=2, help='Plot GALEX image for the given WCS file; write plot to given PNG filename')
parser.add_option('--substatus', '-s', dest='sub_id', help='Get status of a submission')
parser.add_option('--jobstatus', '-j', dest='job_id', help='Get status of a job')
parser.add_option('--jobs', '-J', dest='myjobs', action='store_true', help='Get all my jobs')
parser.add_option('--jobsbyexacttag', '-T', dest='jobs_by_exact_tag', help='Get a list of jobs associated with a given tag--exact match')
parser.add_option('--jobsbytag', '-t', dest='jobs_by_tag', help='Get a list of jobs associated with a given tag')
parser.add_option( '--private', '-p',
dest='public',
action='store_const',
const='n',
default='y',
help='Hide this submission from other users')
parser.add_option('--allow_mod_sa','-m',
dest='allow_mod',
action='store_const',
const='sa',
default='d',
help='Select license to allow derivative works of submission, but only if shared under same conditions of original license')
parser.add_option('--no_mod','-M',
dest='allow_mod',
action='store_const',
const='n',
default='d',
help='Select license to disallow derivative works of submission')
parser.add_option('--no_commercial','-c',
dest='allow_commercial',
action='store_const',
const='n',
default='d',
help='Select license to disallow commercial use of submission')
opt,args = parser.parse_args()
if opt.apikey is None:
# try the environment
opt.apikey = os.environ.get('AN_API_KEY', None)
if opt.apikey is None:
parser.print_help()
print
print 'You must either specify --apikey or set AN_API_KEY'
sys.exit(-1)
args = {}
args['apiurl'] = opt.server
c = Client(**args)
c.login(opt.apikey)
if opt.upload or opt.upload_url:
if opt.wcs or opt.kmz:
opt.wait = True
kwargs = dict(
allow_commercial_use=opt.allow_commercial,
allow_modifications=opt.allow_mod,
publicly_visible=opt.public)
if opt.scale_lower and opt.scale_upper:
kwargs.update(scale_lower=opt.scale_lower,
scale_upper=opt.scale_upper,
scale_type='ul')
elif opt.scale_est and opt.scale_err:
kwargs.update(scale_est=opt.scale_est,
scale_err=opt.scale_err,
scale_type='ev')
elif opt.scale_lower or opt.scale_upper:
kwargs.update(scale_type='ul')
if opt.scale_lower:
kwargs.update(scale_lower=opt.scale_lower)
if opt.scale_upper:
kwargs.update(scale_upper=opt.scale_upper)
for key in ['scale_units', 'center_ra', 'center_dec', 'radius',
'downsample_factor', 'tweak_order', 'crpix_center',]:
if getattr(opt, key) is not None:
kwargs[key] = getattr(opt, key)
if opt.parity is not None:
kwargs.update(parity=int(opt.parity))
if opt.upload:
upres = c.upload(opt.upload, **kwargs)
if opt.upload_url:
upres = c.url_upload(opt.upload_url, **kwargs)
stat = upres['status']
if stat != 'success':
print 'Upload failed: status', stat
print upres
sys.exit(-1)
opt.sub_id = upres['subid']
if opt.wait:
if opt.job_id is None:
if opt.sub_id is None:
print "Can't --wait without a submission id or job id!"
sys.exit(-1)
while True:
stat = c.sub_status(opt.sub_id, justdict=True)
print 'Got status:', stat
jobs = stat.get('jobs', [])
if len(jobs):
for j in jobs:
if j is not None:
break
if j is not None:
print 'Selecting job id', j
opt.job_id = j
break
time.sleep(5)
success = False
while True:
stat = c.job_status(opt.job_id, justdict=True)
print 'Got job status:', stat
if stat.get('status','') in ['success']:
success = (stat['status'] == 'success')
break
time.sleep(5)
if success:
c.job_status(opt.job_id)
# result = c.send_request('jobs/%s/calibration' % opt.job_id)
# print 'Calibration:', result
# result = c.send_request('jobs/%s/tags' % opt.job_id)
# print 'Tags:', result
# result = c.send_request('jobs/%s/machine_tags' % opt.job_id)
# print 'Machine Tags:', result
# result = c.send_request('jobs/%s/objects_in_field' % opt.job_id)
# print 'Objects in field:', result
#result = c.send_request('jobs/%s/annotations' % opt.job_id)
#print 'Annotations:', result
retrieveurls = []
if opt.wcs:
# We don't need the API for this, just construct URL
url = opt.server.replace('/api/', '/wcs_file/%i' % opt.job_id)
retrieveurls.append((url, opt.wcs))
if opt.kmz:
url = opt.server.replace('/api/', '/kml_file/%i/' % opt.job_id)
retrieveurls.append((url, opt.kmz))
for url,fn in retrieveurls:
print 'Retrieving file from', url, 'to', fn
f = urlopen(url)
txt = f.read()
w = open(fn, 'wb')
w.write(txt)
w.close()
print 'Wrote to', fn
opt.job_id = None
opt.sub_id = None
if opt.sdss_wcs:
(wcsfn, outfn) = opt.sdss_wcs
c.sdss_plot(outfn, wcsfn)
if opt.galex_wcs:
(wcsfn, outfn) = opt.galex_wcs
c.galex_plot(outfn, wcsfn)
if opt.sub_id:
print c.sub_status(opt.sub_id)
if opt.job_id:
print c.job_status(opt.job_id)
#result = c.send_request('jobs/%s/annotations' % opt.job_id)
#print 'Annotations:', result
if opt.jobs_by_tag:
tag = opt.jobs_by_tag
print c.jobs_by_tag(tag, None)
if opt.jobs_by_exact_tag:
tag = opt.jobs_by_exact_tag
print c.jobs_by_tag(tag, 'yes')
if opt.myjobs:
jobs = c.myjobs()
print jobs
#print c.submission_images(1)
``` | 2014/07/21 | [
"https://Stackoverflow.com/questions/24863576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2681662/"
] | No, there is no clean way to do so. When the module is imported, its code is executed and all global variables are set as attributes on the module object. So if part of the code is not executed at all (because it is guarded by the `__main__` condition), there is no clean way to get access to that code. You can, however, run the module's code with a substituted `__name__`, but that's very hackish.
You should refactor this module, move the whole `__main__` part into a function, and call it like this:
```
def main():
do_everything()
if __name__ == '__main__':
main()
```
This way consumer apps will be able to run the code without having to run it in a separate process. | By what you're saying, you want to call a function in the script that is importing the module, so try:
```
import __main__
__main__.myfunc()
``` |
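The first answer above notes that running a module with a substituted `__name__` is possible but hackish. For completeness, here is a minimal self-contained sketch of that route using the standard-library `runpy` module; the small script written to a temporary directory is a hypothetical stand-in, not the astrometry client from the question:

```python
import os
import runpy
import sys
import tempfile

# Hypothetical stand-in for a module whose useful work is guarded by __main__.
script = """\
import sys

def main(args):
    return 'processed:' + ','.join(args)

if __name__ == '__main__':
    result = main(sys.argv[1:])
"""

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'other_script.py')
    with open(path, 'w') as f:
        f.write(script)
    saved_argv = sys.argv
    sys.argv = [path, 'param1', 'param2']  # fake a command line for the script
    try:
        # run_name='__main__' makes the guarded block execute; run_path
        # returns the script's resulting globals as a dict.
        globs = runpy.run_path(path, run_name='__main__')
    finally:
        sys.argv = saved_argv  # always restore the real argv

print(globs['result'])  # -> processed:param1,param2
```

Mutating `sys.argv` around the call is exactly the kind of global fiddling that makes this approach hackish; refactoring the target script into a `main()` function, as suggested above, avoids it.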
24,863,576 | I have a Python script that has a \_\_main\_\_ statement and takes all its values as parameters.
I want to import it and use it in my own script.
I can actually import it, but I don't know how to use it.
As you can see below, \_\_main\_\_ is a bit complicated, and rewriting it will take time because I don't even know what most of the code means.
Is there any way to import the code and use it as a function?
```
import os
import sys
import time
import base64
from urllib2 import urlopen
from urllib2 import Request
from urllib2 import HTTPError
from urllib import urlencode
from urllib import quote
from exceptions import Exception
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email.mime.application import MIMEApplication
from email.encoders import encode_noop
from api_util import json2python, python2json
class MalformedResponse(Exception):
pass
class RequestError(Exception):
pass
class Client(object):
default_url = 'http://nova.astrometry.net/api/'
def __init__(self,
apiurl = default_url):
self.session = None
self.apiurl = apiurl
def get_url(self, service):
return self.apiurl + service
def send_request(self, service, args={}, file_args=None):
'''
service: string
args: dict
'''
if self.session is not None:
args.update({ 'session' : self.session })
print 'Python:', args
json = python2json(args)
print 'Sending json:', json
url = self.get_url(service)
print 'Sending to URL:', url
# If we're sending a file, format a multipart/form-data
if file_args is not None:
m1 = MIMEBase('text', 'plain')
m1.add_header('Content-disposition', 'form-data; name="request-json"')
m1.set_payload(json)
m2 = MIMEApplication(file_args[1],'octet-stream',encode_noop)
m2.add_header('Content-disposition',
'form-data; name="file"; filename="%s"' % file_args[0])
#msg.add_header('Content-Disposition', 'attachment',
# filename='bud.gif')
#msg.add_header('Content-Disposition', 'attachment',
# filename=('iso-8859-1', '', 'FuSballer.ppt'))
mp = MIMEMultipart('form-data', None, [m1, m2])
            # Make a custom generator to format it the way we need.
from cStringIO import StringIO
from email.generator import Generator
class MyGenerator(Generator):
def __init__(self, fp, root=True):
Generator.__init__(self, fp, mangle_from_=False,
maxheaderlen=0)
self.root = root
def _write_headers(self, msg):
# We don't want to write the top-level headers;
# they go into Request(headers) instead.
if self.root:
return
# We need to use \r\n line-terminator, but Generator
# doesn't provide the flexibility to override, so we
# have to copy-n-paste-n-modify.
for h, v in msg.items():
print >> self._fp, ('%s: %s\r\n' % (h,v)),
# A blank line always separates headers from body
print >> self._fp, '\r\n',
# The _write_multipart method calls "clone" for the
# subparts. We hijack that, setting root=False
def clone(self, fp):
return MyGenerator(fp, root=False)
fp = StringIO()
g = MyGenerator(fp)
g.flatten(mp)
data = fp.getvalue()
headers = {'Content-type': mp.get('Content-type')}
if False:
print 'Sending headers:'
print ' ', headers
print 'Sending data:'
print data[:1024].replace('\n', '\\n\n').replace('\r', '\\r')
if len(data) > 1024:
print '...'
print data[-256:].replace('\n', '\\n\n').replace('\r', '\\r')
print
else:
# Else send x-www-form-encoded
data = {'request-json': json}
print 'Sending form data:', data
data = urlencode(data)
print 'Sending data:', data
headers = {}
request = Request(url=url, headers=headers, data=data)
try:
f = urlopen(request)
txt = f.read()
print 'Got json:', txt
result = json2python(txt)
print 'Got result:', result
stat = result.get('status')
print 'Got status:', stat
if stat == 'error':
errstr = result.get('errormessage', '(none)')
raise RequestError('server error message: ' + errstr)
return result
except HTTPError, e:
print 'HTTPError', e
txt = e.read()
open('err.html', 'wb').write(txt)
print 'Wrote error text to err.html'
def login(self, apikey):
args = { 'apikey' : apikey }
result = self.send_request('login', args)
sess = result.get('session')
print 'Got session:', sess
if not sess:
raise RequestError('no session in result')
self.session = sess
def _get_upload_args(self, **kwargs):
args = {}
for key,default,typ in [('allow_commercial_use', 'd', str),
('allow_modifications', 'd', str),
('publicly_visible', 'y', str),
('scale_units', None, str),
('scale_type', None, str),
('scale_lower', None, float),
('scale_upper', None, float),
('scale_est', None, float),
('scale_err', None, float),
('center_ra', None, float),
('center_dec', None, float),
('radius', None, float),
('downsample_factor', None, int),
('tweak_order', None, int),
('crpix_center', None, bool),
# image_width, image_height
]:
if key in kwargs:
val = kwargs.pop(key)
val = typ(val)
args.update({key: val})
elif default is not None:
args.update({key: default})
print 'Upload args:', args
return args
def url_upload(self, url, **kwargs):
args = dict(url=url)
args.update(self._get_upload_args(**kwargs))
result = self.send_request('url_upload', args)
return result
def upload(self, fn, **kwargs):
args = self._get_upload_args(**kwargs)
try:
f = open(fn, 'rb')
result = self.send_request('upload', args, (fn, f.read()))
return result
except IOError:
print 'File %s does not exist' % fn
raise
def submission_images(self, subid):
result = self.send_request('submission_images', {'subid':subid})
return result.get('image_ids')
def overlay_plot(self, service, outfn, wcsfn, wcsext=0):
from astrometry.util import util as anutil
wcs = anutil.Tan(wcsfn, wcsext)
params = dict(crval1 = wcs.crval[0], crval2 = wcs.crval[1],
crpix1 = wcs.crpix[0], crpix2 = wcs.crpix[1],
cd11 = wcs.cd[0], cd12 = wcs.cd[1],
cd21 = wcs.cd[2], cd22 = wcs.cd[3],
imagew = wcs.imagew, imageh = wcs.imageh)
result = self.send_request(service, {'wcs':params})
print 'Result status:', result['status']
plotdata = result['plot']
plotdata = base64.b64decode(plotdata)
open(outfn, 'wb').write(plotdata)
print 'Wrote', outfn
def sdss_plot(self, outfn, wcsfn, wcsext=0):
return self.overlay_plot('sdss_image_for_wcs', outfn,
wcsfn, wcsext)
def galex_plot(self, outfn, wcsfn, wcsext=0):
return self.overlay_plot('galex_image_for_wcs', outfn,
wcsfn, wcsext)
def myjobs(self):
result = self.send_request('myjobs/')
return result['jobs']
def job_status(self, job_id, justdict=False):
result = self.send_request('jobs/%s' % job_id)
if justdict:
return result
stat = result.get('status')
if stat == 'success':
result = self.send_request('jobs/%s/calibration' % job_id)
print 'Calibration:', result
result = self.send_request('jobs/%s/tags' % job_id)
print 'Tags:', result
result = self.send_request('jobs/%s/machine_tags' % job_id)
print 'Machine Tags:', result
result = self.send_request('jobs/%s/objects_in_field' % job_id)
print 'Objects in field:', result
result = self.send_request('jobs/%s/annotations' % job_id)
print 'Annotations:', result
result = self.send_request('jobs/%s/info' % job_id)
print 'Calibration:', result
return stat
def sub_status(self, sub_id, justdict=False):
result = self.send_request('submissions/%s' % sub_id)
if justdict:
return result
return result.get('status')
def jobs_by_tag(self, tag, exact):
exact_option = 'exact=yes' if exact else ''
result = self.send_request(
'jobs_by_tag?query=%s&%s' % (quote(tag.strip()), exact_option),
{},
)
return result
if __name__ == '__main__':
import optparse
parser = optparse.OptionParser()
parser.add_option('--server', dest='server', default=Client.default_url,
help='Set server base URL (eg, %default)')
parser.add_option('--apikey', '-k', dest='apikey',
help='API key for Astrometry.net web service; if not given will check AN_API_KEY environment variable')
parser.add_option('--upload', '-u', dest='upload', help='Upload a file')
parser.add_option('--wait', '-w', dest='wait', action='store_true', help='After submitting, monitor job status')
parser.add_option('--wcs', dest='wcs', help='Download resulting wcs.fits file, saving to given filename; implies --wait if --urlupload or --upload')
parser.add_option('--kmz', dest='kmz', help='Download resulting kmz file, saving to given filename; implies --wait if --urlupload or --upload')
parser.add_option('--urlupload', '-U', dest='upload_url', help='Upload a file at specified url')
parser.add_option('--scale-units', dest='scale_units',
choices=('arcsecperpix', 'arcminwidth', 'degwidth', 'focalmm'), help='Units for scale estimate')
#parser.add_option('--scale-type', dest='scale_type',
# choices=('ul', 'ev'), help='Scale bounds: lower/upper or estimate/error')
parser.add_option('--scale-lower', dest='scale_lower', type=float, help='Scale lower-bound')
parser.add_option('--scale-upper', dest='scale_upper', type=float, help='Scale upper-bound')
parser.add_option('--scale-est', dest='scale_est', type=float, help='Scale estimate')
parser.add_option('--scale-err', dest='scale_err', type=float, help='Scale estimate error (in PERCENT), eg "10" if you estimate can be off by 10%')
parser.add_option('--ra', dest='center_ra', type=float, help='RA center')
parser.add_option('--dec', dest='center_dec', type=float, help='Dec center')
parser.add_option('--radius', dest='radius', type=float, help='Search radius around RA,Dec center')
parser.add_option('--downsample', dest='downsample_factor', type=int, help='Downsample image by this factor')
parser.add_option('--parity', dest='parity', choices=('0','1'), help='Parity (flip) of image')
parser.add_option('--tweak-order', dest='tweak_order', type=int, help='SIP distortion order (default: 2)')
parser.add_option('--crpix-center', dest='crpix_center', action='store_true', default=None, help='Set reference point to center of image?')
parser.add_option('--sdss', dest='sdss_wcs', nargs=2, help='Plot SDSS image for the given WCS file; write plot to given PNG filename')
parser.add_option('--galex', dest='galex_wcs', nargs=2, help='Plot GALEX image for the given WCS file; write plot to given PNG filename')
parser.add_option('--substatus', '-s', dest='sub_id', help='Get status of a submission')
parser.add_option('--jobstatus', '-j', dest='job_id', help='Get status of a job')
parser.add_option('--jobs', '-J', dest='myjobs', action='store_true', help='Get all my jobs')
parser.add_option('--jobsbyexacttag', '-T', dest='jobs_by_exact_tag', help='Get a list of jobs associated with a given tag--exact match')
parser.add_option('--jobsbytag', '-t', dest='jobs_by_tag', help='Get a list of jobs associated with a given tag')
parser.add_option( '--private', '-p',
dest='public',
action='store_const',
const='n',
default='y',
help='Hide this submission from other users')
parser.add_option('--allow_mod_sa','-m',
dest='allow_mod',
action='store_const',
const='sa',
default='d',
help='Select license to allow derivative works of submission, but only if shared under same conditions of original license')
parser.add_option('--no_mod','-M',
dest='allow_mod',
action='store_const',
const='n',
default='d',
help='Select license to disallow derivative works of submission')
parser.add_option('--no_commercial','-c',
dest='allow_commercial',
action='store_const',
const='n',
default='d',
help='Select license to disallow commercial use of submission')
opt,args = parser.parse_args()
if opt.apikey is None:
# try the environment
opt.apikey = os.environ.get('AN_API_KEY', None)
if opt.apikey is None:
parser.print_help()
print
print 'You must either specify --apikey or set AN_API_KEY'
sys.exit(-1)
args = {}
args['apiurl'] = opt.server
c = Client(**args)
c.login(opt.apikey)
if opt.upload or opt.upload_url:
if opt.wcs or opt.kmz:
opt.wait = True
kwargs = dict(
allow_commercial_use=opt.allow_commercial,
allow_modifications=opt.allow_mod,
publicly_visible=opt.public)
if opt.scale_lower and opt.scale_upper:
kwargs.update(scale_lower=opt.scale_lower,
scale_upper=opt.scale_upper,
scale_type='ul')
elif opt.scale_est and opt.scale_err:
kwargs.update(scale_est=opt.scale_est,
scale_err=opt.scale_err,
scale_type='ev')
elif opt.scale_lower or opt.scale_upper:
kwargs.update(scale_type='ul')
if opt.scale_lower:
kwargs.update(scale_lower=opt.scale_lower)
if opt.scale_upper:
kwargs.update(scale_upper=opt.scale_upper)
for key in ['scale_units', 'center_ra', 'center_dec', 'radius',
'downsample_factor', 'tweak_order', 'crpix_center',]:
if getattr(opt, key) is not None:
kwargs[key] = getattr(opt, key)
if opt.parity is not None:
kwargs.update(parity=int(opt.parity))
if opt.upload:
upres = c.upload(opt.upload, **kwargs)
if opt.upload_url:
upres = c.url_upload(opt.upload_url, **kwargs)
stat = upres['status']
if stat != 'success':
print 'Upload failed: status', stat
print upres
sys.exit(-1)
opt.sub_id = upres['subid']
if opt.wait:
if opt.job_id is None:
if opt.sub_id is None:
print "Can't --wait without a submission id or job id!"
sys.exit(-1)
while True:
stat = c.sub_status(opt.sub_id, justdict=True)
print 'Got status:', stat
jobs = stat.get('jobs', [])
if len(jobs):
for j in jobs:
if j is not None:
break
if j is not None:
print 'Selecting job id', j
opt.job_id = j
break
time.sleep(5)
success = False
while True:
stat = c.job_status(opt.job_id, justdict=True)
print 'Got job status:', stat
if stat.get('status','') in ['success']:
success = (stat['status'] == 'success')
break
time.sleep(5)
if success:
c.job_status(opt.job_id)
# result = c.send_request('jobs/%s/calibration' % opt.job_id)
# print 'Calibration:', result
# result = c.send_request('jobs/%s/tags' % opt.job_id)
# print 'Tags:', result
# result = c.send_request('jobs/%s/machine_tags' % opt.job_id)
# print 'Machine Tags:', result
# result = c.send_request('jobs/%s/objects_in_field' % opt.job_id)
# print 'Objects in field:', result
#result = c.send_request('jobs/%s/annotations' % opt.job_id)
#print 'Annotations:', result
retrieveurls = []
if opt.wcs:
# We don't need the API for this, just construct URL
url = opt.server.replace('/api/', '/wcs_file/%i' % opt.job_id)
retrieveurls.append((url, opt.wcs))
if opt.kmz:
url = opt.server.replace('/api/', '/kml_file/%i/' % opt.job_id)
retrieveurls.append((url, opt.kmz))
for url,fn in retrieveurls:
print 'Retrieving file from', url, 'to', fn
f = urlopen(url)
txt = f.read()
w = open(fn, 'wb')
w.write(txt)
w.close()
print 'Wrote to', fn
opt.job_id = None
opt.sub_id = None
if opt.sdss_wcs:
(wcsfn, outfn) = opt.sdss_wcs
c.sdss_plot(outfn, wcsfn)
if opt.galex_wcs:
(wcsfn, outfn) = opt.galex_wcs
c.galex_plot(outfn, wcsfn)
if opt.sub_id:
print c.sub_status(opt.sub_id)
if opt.job_id:
print c.job_status(opt.job_id)
#result = c.send_request('jobs/%s/annotations' % opt.job_id)
#print 'Annotations:', result
if opt.jobs_by_tag:
tag = opt.jobs_by_tag
print c.jobs_by_tag(tag, None)
if opt.jobs_by_exact_tag:
tag = opt.jobs_by_exact_tag
print c.jobs_by_tag(tag, 'yes')
if opt.myjobs:
jobs = c.myjobs()
print jobs
#print c.submission_images(1)
``` | 2014/07/21 | [
"https://Stackoverflow.com/questions/24863576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2681662/"
] | No, there is no clean way to do so. When the module is imported, its code is executed and all global variables are set as attributes on the module object. So if part of the code is not executed at all (because it is guarded by the `__main__` condition), there is no clean way to get access to that code. You can, however, run the module's code with a substituted `__name__`, but that's very hackish.
You should refactor this module, move the whole `__main__` part into a function, and call it like this:
```
def main():
do_everything()
if __name__ == '__main__':
main()
```
This way consumer apps will be able to run the code without having to run it in a separate process. | Use the ***runpy*** module from the [Python 3 Standard Library](https://docs.python.org/3/library/runpy.html).
Note that data can be passed to and from the called script:
```py
# top.py
import runpy
import sys
sys.argv += ["another parameter"]
module_globals_dict = runpy.run_path("other_script.py",
init_globals = globals(), run_name="__main__")
print(module_globals_dict["return_value"])
```
```py
# other_script.py
# Note we did not load sys module, it gets passed to this script
script_name = sys.argv[0]
print(f"Script {script_name} loaded")
if __name__ == "__main__":
params = sys.argv[1:]
print(f"Script {script_name} run with params: {params}")
return_value = f"{script_name} Done"
``` |
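If the refactoring recommended in the first answer is applied to the script in the question, one practical refinement is to let `main()` take an explicit argument list and pass it straight to the option parser, so importing code never has to touch `sys.argv`. A minimal sketch (hypothetical, reproducing only two of the original script's options):

```python
import optparse  # the question's script is optparse-based

def main(argv=None):
    parser = optparse.OptionParser()
    # Only two of the original script's many options, for illustration.
    parser.add_option('--apikey', '-k', dest='apikey')
    parser.add_option('--jobs', '-J', dest='myjobs', action='store_true')
    opt, args = parser.parse_args(argv)  # argv=None falls back to sys.argv[1:]
    # ... the rest of the original __main__ body would go here ...
    return opt

if __name__ == '__main__':
    main([])  # in real command-line use: main(), reading the actual argv

# An importing script can now drive it in-process with a synthetic argv:
opt = main(['-k', 'FAKEKEY', '--jobs'])
print(opt.apikey, opt.myjobs)  # -> FAKEKEY True
```

This keeps the command-line behaviour intact while giving consumers an in-process entry point, which is what both answers are ultimately working around.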
24,863,576 | I have a Python script that has a \_\_main\_\_ statement and takes all its values as parameters.
I want to import it and use it in my own script.
I can actually import it, but I don't know how to use it.
As you can see below, \_\_main\_\_ is a bit complicated, and rewriting it will take time because I don't even know what most of the code means.
Is there any way to import the code and use it as a function?
```
import os
import sys
import time
import base64
from urllib2 import urlopen
from urllib2 import Request
from urllib2 import HTTPError
from urllib import urlencode
from urllib import quote
from exceptions import Exception
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email.mime.application import MIMEApplication
from email.encoders import encode_noop
from api_util import json2python, python2json
class MalformedResponse(Exception):
pass
class RequestError(Exception):
pass
class Client(object):
default_url = 'http://nova.astrometry.net/api/'
def __init__(self,
apiurl = default_url):
self.session = None
self.apiurl = apiurl
def get_url(self, service):
return self.apiurl + service
def send_request(self, service, args={}, file_args=None):
'''
service: string
args: dict
'''
if self.session is not None:
args.update({ 'session' : self.session })
print 'Python:', args
json = python2json(args)
print 'Sending json:', json
url = self.get_url(service)
print 'Sending to URL:', url
# If we're sending a file, format a multipart/form-data
if file_args is not None:
m1 = MIMEBase('text', 'plain')
m1.add_header('Content-disposition', 'form-data; name="request-json"')
m1.set_payload(json)
m2 = MIMEApplication(file_args[1],'octet-stream',encode_noop)
m2.add_header('Content-disposition',
'form-data; name="file"; filename="%s"' % file_args[0])
#msg.add_header('Content-Disposition', 'attachment',
# filename='bud.gif')
#msg.add_header('Content-Disposition', 'attachment',
# filename=('iso-8859-1', '', 'FuSballer.ppt'))
mp = MIMEMultipart('form-data', None, [m1, m2])
            # Make a custom generator to format it the way we need.
from cStringIO import StringIO
from email.generator import Generator
class MyGenerator(Generator):
def __init__(self, fp, root=True):
Generator.__init__(self, fp, mangle_from_=False,
maxheaderlen=0)
self.root = root
def _write_headers(self, msg):
                    # We don't want to write the top-level headers;
                    # they go into Request(headers) instead.
                    if self.root:
                        return
                    # We need to use \r\n line-terminator, but Generator
                    # doesn't provide the flexibility to override, so we
                    # have to copy-n-paste-n-modify.
                    for h, v in msg.items():
                        print >> self._fp, ('%s: %s\r\n' % (h,v)),
                    # A blank line always separates headers from body
                    print >> self._fp, '\r\n',

                # The _write_multipart method calls "clone" for the
                # subparts. We hijack that, setting root=False
                def clone(self, fp):
                    return MyGenerator(fp, root=False)

            fp = StringIO()
            g = MyGenerator(fp)
            g.flatten(mp)
            data = fp.getvalue()
            headers = {'Content-type': mp.get('Content-type')}

            if False:
                print 'Sending headers:'
                print ' ', headers
                print 'Sending data:'
                print data[:1024].replace('\n', '\\n\n').replace('\r', '\\r')
                if len(data) > 1024:
                    print '...'
                    print data[-256:].replace('\n', '\\n\n').replace('\r', '\\r')
                print
        else:
            # Else send x-www-form-encoded
            data = {'request-json': json}
            print 'Sending form data:', data
            data = urlencode(data)
            print 'Sending data:', data
            headers = {}

        request = Request(url=url, headers=headers, data=data)

        try:
            f = urlopen(request)
            txt = f.read()
            print 'Got json:', txt
            result = json2python(txt)
            print 'Got result:', result
            stat = result.get('status')
            print 'Got status:', stat
            if stat == 'error':
                errstr = result.get('errormessage', '(none)')
                raise RequestError('server error message: ' + errstr)
            return result
        except HTTPError, e:
            print 'HTTPError', e
            txt = e.read()
            open('err.html', 'wb').write(txt)
            print 'Wrote error text to err.html'

    def login(self, apikey):
        args = { 'apikey' : apikey }
        result = self.send_request('login', args)
        sess = result.get('session')
        print 'Got session:', sess
        if not sess:
            raise RequestError('no session in result')
        self.session = sess

    def _get_upload_args(self, **kwargs):
        args = {}
        for key,default,typ in [('allow_commercial_use', 'd', str),
                                ('allow_modifications', 'd', str),
                                ('publicly_visible', 'y', str),
                                ('scale_units', None, str),
                                ('scale_type', None, str),
                                ('scale_lower', None, float),
                                ('scale_upper', None, float),
                                ('scale_est', None, float),
                                ('scale_err', None, float),
                                ('center_ra', None, float),
                                ('center_dec', None, float),
                                ('radius', None, float),
                                ('downsample_factor', None, int),
                                ('tweak_order', None, int),
                                ('crpix_center', None, bool),
                                # image_width, image_height
                                ]:
            if key in kwargs:
                val = kwargs.pop(key)
                val = typ(val)
                args.update({key: val})
            elif default is not None:
                args.update({key: default})
        print 'Upload args:', args
        return args

    def url_upload(self, url, **kwargs):
        args = dict(url=url)
        args.update(self._get_upload_args(**kwargs))
        result = self.send_request('url_upload', args)
        return result

    def upload(self, fn, **kwargs):
        args = self._get_upload_args(**kwargs)
        try:
            f = open(fn, 'rb')
            result = self.send_request('upload', args, (fn, f.read()))
            return result
        except IOError:
            print 'File %s does not exist' % fn
            raise

    def submission_images(self, subid):
        result = self.send_request('submission_images', {'subid':subid})
        return result.get('image_ids')

    def overlay_plot(self, service, outfn, wcsfn, wcsext=0):
        from astrometry.util import util as anutil
        wcs = anutil.Tan(wcsfn, wcsext)
        params = dict(crval1 = wcs.crval[0], crval2 = wcs.crval[1],
                      crpix1 = wcs.crpix[0], crpix2 = wcs.crpix[1],
                      cd11 = wcs.cd[0], cd12 = wcs.cd[1],
                      cd21 = wcs.cd[2], cd22 = wcs.cd[3],
                      imagew = wcs.imagew, imageh = wcs.imageh)
        result = self.send_request(service, {'wcs':params})
        print 'Result status:', result['status']
        plotdata = result['plot']
        plotdata = base64.b64decode(plotdata)
        open(outfn, 'wb').write(plotdata)
        print 'Wrote', outfn

    def sdss_plot(self, outfn, wcsfn, wcsext=0):
        return self.overlay_plot('sdss_image_for_wcs', outfn,
                                 wcsfn, wcsext)

    def galex_plot(self, outfn, wcsfn, wcsext=0):
        return self.overlay_plot('galex_image_for_wcs', outfn,
                                 wcsfn, wcsext)

    def myjobs(self):
        result = self.send_request('myjobs/')
        return result['jobs']

    def job_status(self, job_id, justdict=False):
        result = self.send_request('jobs/%s' % job_id)
        if justdict:
            return result
        stat = result.get('status')
        if stat == 'success':
            result = self.send_request('jobs/%s/calibration' % job_id)
            print 'Calibration:', result
            result = self.send_request('jobs/%s/tags' % job_id)
            print 'Tags:', result
            result = self.send_request('jobs/%s/machine_tags' % job_id)
            print 'Machine Tags:', result
            result = self.send_request('jobs/%s/objects_in_field' % job_id)
            print 'Objects in field:', result
            result = self.send_request('jobs/%s/annotations' % job_id)
            print 'Annotations:', result
            result = self.send_request('jobs/%s/info' % job_id)
            print 'Calibration:', result
        return stat

    def sub_status(self, sub_id, justdict=False):
        result = self.send_request('submissions/%s' % sub_id)
        if justdict:
            return result
        return result.get('status')

    def jobs_by_tag(self, tag, exact):
        exact_option = 'exact=yes' if exact else ''
        result = self.send_request(
            'jobs_by_tag?query=%s&%s' % (quote(tag.strip()), exact_option),
            {},
        )
        return result


if __name__ == '__main__':
    import optparse
    parser = optparse.OptionParser()
    parser.add_option('--server', dest='server', default=Client.default_url,
                      help='Set server base URL (eg, %default)')
    parser.add_option('--apikey', '-k', dest='apikey',
                      help='API key for Astrometry.net web service; if not given will check AN_API_KEY environment variable')
    parser.add_option('--upload', '-u', dest='upload', help='Upload a file')
    parser.add_option('--wait', '-w', dest='wait', action='store_true', help='After submitting, monitor job status')
    parser.add_option('--wcs', dest='wcs', help='Download resulting wcs.fits file, saving to given filename; implies --wait if --urlupload or --upload')
    parser.add_option('--kmz', dest='kmz', help='Download resulting kmz file, saving to given filename; implies --wait if --urlupload or --upload')
    parser.add_option('--urlupload', '-U', dest='upload_url', help='Upload a file at specified url')
    parser.add_option('--scale-units', dest='scale_units',
                      choices=('arcsecperpix', 'arcminwidth', 'degwidth', 'focalmm'), help='Units for scale estimate')
    #parser.add_option('--scale-type', dest='scale_type',
    #                  choices=('ul', 'ev'), help='Scale bounds: lower/upper or estimate/error')
    parser.add_option('--scale-lower', dest='scale_lower', type=float, help='Scale lower-bound')
    parser.add_option('--scale-upper', dest='scale_upper', type=float, help='Scale upper-bound')
    parser.add_option('--scale-est', dest='scale_est', type=float, help='Scale estimate')
    parser.add_option('--scale-err', dest='scale_err', type=float, help='Scale estimate error (in PERCENT), eg "10" if you estimate can be off by 10%')
    parser.add_option('--ra', dest='center_ra', type=float, help='RA center')
    parser.add_option('--dec', dest='center_dec', type=float, help='Dec center')
    parser.add_option('--radius', dest='radius', type=float, help='Search radius around RA,Dec center')
    parser.add_option('--downsample', dest='downsample_factor', type=int, help='Downsample image by this factor')
    parser.add_option('--parity', dest='parity', choices=('0','1'), help='Parity (flip) of image')
    parser.add_option('--tweak-order', dest='tweak_order', type=int, help='SIP distortion order (default: 2)')
    parser.add_option('--crpix-center', dest='crpix_center', action='store_true', default=None, help='Set reference point to center of image?')
    parser.add_option('--sdss', dest='sdss_wcs', nargs=2, help='Plot SDSS image for the given WCS file; write plot to given PNG filename')
    parser.add_option('--galex', dest='galex_wcs', nargs=2, help='Plot GALEX image for the given WCS file; write plot to given PNG filename')
    parser.add_option('--substatus', '-s', dest='sub_id', help='Get status of a submission')
    parser.add_option('--jobstatus', '-j', dest='job_id', help='Get status of a job')
    parser.add_option('--jobs', '-J', dest='myjobs', action='store_true', help='Get all my jobs')
    parser.add_option('--jobsbyexacttag', '-T', dest='jobs_by_exact_tag', help='Get a list of jobs associated with a given tag--exact match')
    parser.add_option('--jobsbytag', '-t', dest='jobs_by_tag', help='Get a list of jobs associated with a given tag')
    parser.add_option( '--private', '-p',
                       dest='public',
                       action='store_const',
                       const='n',
                       default='y',
                       help='Hide this submission from other users')
    parser.add_option('--allow_mod_sa','-m',
                      dest='allow_mod',
                      action='store_const',
                      const='sa',
                      default='d',
                      help='Select license to allow derivative works of submission, but only if shared under same conditions of original license')
    parser.add_option('--no_mod','-M',
                      dest='allow_mod',
                      action='store_const',
                      const='n',
                      default='d',
                      help='Select license to disallow derivative works of submission')
    parser.add_option('--no_commercial','-c',
                      dest='allow_commercial',
                      action='store_const',
                      const='n',
                      default='d',
                      help='Select license to disallow commercial use of submission')
    opt,args = parser.parse_args()

    if opt.apikey is None:
        # try the environment
        opt.apikey = os.environ.get('AN_API_KEY', None)
    if opt.apikey is None:
        parser.print_help()
        print
        print 'You must either specify --apikey or set AN_API_KEY'
        sys.exit(-1)

    args = {}
    args['apiurl'] = opt.server
    c = Client(**args)
    c.login(opt.apikey)

    if opt.upload or opt.upload_url:
        if opt.wcs or opt.kmz:
            opt.wait = True

        kwargs = dict(
            allow_commercial_use=opt.allow_commercial,
            allow_modifications=opt.allow_mod,
            publicly_visible=opt.public)
        if opt.scale_lower and opt.scale_upper:
            kwargs.update(scale_lower=opt.scale_lower,
                          scale_upper=opt.scale_upper,
                          scale_type='ul')
        elif opt.scale_est and opt.scale_err:
            kwargs.update(scale_est=opt.scale_est,
                          scale_err=opt.scale_err,
                          scale_type='ev')
        elif opt.scale_lower or opt.scale_upper:
            kwargs.update(scale_type='ul')
            if opt.scale_lower:
                kwargs.update(scale_lower=opt.scale_lower)
            if opt.scale_upper:
                kwargs.update(scale_upper=opt.scale_upper)

        for key in ['scale_units', 'center_ra', 'center_dec', 'radius',
                    'downsample_factor', 'tweak_order', 'crpix_center',]:
            if getattr(opt, key) is not None:
                kwargs[key] = getattr(opt, key)
        if opt.parity is not None:
            kwargs.update(parity=int(opt.parity))

        if opt.upload:
            upres = c.upload(opt.upload, **kwargs)
        if opt.upload_url:
            upres = c.url_upload(opt.upload_url, **kwargs)

        stat = upres['status']
        if stat != 'success':
            print 'Upload failed: status', stat
            print upres
            sys.exit(-1)

        opt.sub_id = upres['subid']

    if opt.wait:
        if opt.job_id is None:
            if opt.sub_id is None:
                print "Can't --wait without a submission id or job id!"
                sys.exit(-1)

            while True:
                stat = c.sub_status(opt.sub_id, justdict=True)
                print 'Got status:', stat
                jobs = stat.get('jobs', [])
                if len(jobs):
                    for j in jobs:
                        if j is not None:
                            break
                    if j is not None:
                        print 'Selecting job id', j
                        opt.job_id = j
                        break
                time.sleep(5)

        success = False
        while True:
            stat = c.job_status(opt.job_id, justdict=True)
            print 'Got job status:', stat
            if stat.get('status','') in ['success']:
                success = (stat['status'] == 'success')
                break
            time.sleep(5)

        if success:
            c.job_status(opt.job_id)
            # result = c.send_request('jobs/%s/calibration' % opt.job_id)
            # print 'Calibration:', result
            # result = c.send_request('jobs/%s/tags' % opt.job_id)
            # print 'Tags:', result
            # result = c.send_request('jobs/%s/machine_tags' % opt.job_id)
            # print 'Machine Tags:', result
            # result = c.send_request('jobs/%s/objects_in_field' % opt.job_id)
            # print 'Objects in field:', result
            #result = c.send_request('jobs/%s/annotations' % opt.job_id)
            #print 'Annotations:', result

            retrieveurls = []
            if opt.wcs:
                # We don't need the API for this, just construct URL
                url = opt.server.replace('/api/', '/wcs_file/%i' % opt.job_id)
                retrieveurls.append((url, opt.wcs))
            if opt.kmz:
                url = opt.server.replace('/api/', '/kml_file/%i/' % opt.job_id)
                retrieveurls.append((url, opt.kmz))

            for url,fn in retrieveurls:
                print 'Retrieving file from', url, 'to', fn
                f = urlopen(url)
                txt = f.read()
                w = open(fn, 'wb')
                w.write(txt)
                w.close()
                print 'Wrote to', fn

        opt.job_id = None
        opt.sub_id = None

    if opt.sdss_wcs:
        (wcsfn, outfn) = opt.sdss_wcs
        c.sdss_plot(outfn, wcsfn)
    if opt.galex_wcs:
        (wcsfn, outfn) = opt.galex_wcs
        c.galex_plot(outfn, wcsfn)

    if opt.sub_id:
        print c.sub_status(opt.sub_id)
    if opt.job_id:
        print c.job_status(opt.job_id)
        #result = c.send_request('jobs/%s/annotations' % opt.job_id)
        #print 'Annotations:', result

    if opt.jobs_by_tag:
        tag = opt.jobs_by_tag
        print c.jobs_by_tag(tag, None)
    if opt.jobs_by_exact_tag:
        tag = opt.jobs_by_exact_tag
        print c.jobs_by_tag(tag, 'yes')

    if opt.myjobs:
        jobs = c.myjobs()
        print jobs

    #print c.submission_images(1)
``` | 2014/07/21 | [
"https://Stackoverflow.com/questions/24863576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2681662/"
] | Use the ***runpy*** module in the [Python 3 Standard Library](https://docs.python.org/3/library/runpy.html)
Note that data can be passed both to and from the called script.
```py
# top.py
import runpy
import sys
sys.argv += ["another parameter"]
module_globals_dict = runpy.run_path("other_script.py",
                                     init_globals = globals(), run_name="__main__")
print(module_globals_dict["return_value"])
```
```py
# other_script.py
# Note we did not load sys module, it gets passed to this script
script_name = sys.argv[0]
print(f"Script {script_name} loaded")
if __name__ == "__main__":
params = sys.argv[1:]
print(f"Script {script_name} run with params: {params}")
return_value = f"{script_name} Done"
``` | By what you're saying, you want to call a function in the script that is importing the module, so try:
```
import __main__
__main__.myfunc()
``` |
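
As a self-contained sketch of the `runpy` approach discussed above (the temp-file script, its `result` variable, and the argument values are illustrative assumptions, not part of either answer):

```python
import os
import runpy
import sys
import tempfile

# Write a tiny "other script" to disk so the sketch runs anywhere;
# in a real project this would simply be other_script.py.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("result = sum(int(a) for a in sys.argv[1:])\n")

sys.argv = [path, "2", "3"]
# init_globals passes `sys` into the called script; the returned dict of
# module globals carries data back to the caller.
globs = runpy.run_path(path, init_globals={"sys": sys}, run_name="__main__")
total = globs["result"]
os.unlink(path)
print(total)  # → 5
```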
43,754,065 | I want to get the shade value of each circle from an image.
1. I try to detect circles using `HoughCircle`.
2. I get the center of each circle.
3. I put the text (the circle numbers) in a circle.
4. I set the pixel subset to obtain the shading values and calculate the averaged shading values.
5. I want to get the results of circle number, the coordinates of the center, and averaged shading values in CSV format.
But in the 3rd step, the circle numbers were randomly assigned, so it's hard to find a given circle's number.
How can I number circles in a sequence?
[](https://i.stack.imgur.com/w823U.jpg)
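
One way to make the numbering deterministic (a hedged sketch, not part of the original post): sort the detected `(x, y, r)` triples into reading order before labeling them. The sample coordinates and the `row_height` row-gap value below are made-up assumptions:

```python
# Hypothetical (x, y, r) detections, as HoughCircles might return after
# rounding; the values are invented for illustration.
circles = [(120, 40, 15), (30, 38, 15), (80, 120, 15), (20, 118, 15)]

row_height = 50  # assumed approximate vertical spacing between circle rows

# Bucket by row (y // row_height) first, then left-to-right by x, so the
# numbering follows top-to-bottom, left-to-right reading order.
ordered = sorted(circles, key=lambda c: (c[1] // row_height, c[0]))

for number, (x, y, r) in enumerate(ordered, start=1):
    print(number, (x, y))
```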
```
# USAGE
# python detect_circles.py --image images/simple.png
# import the necessary packages
import numpy as np
import argparse
import cv2
import csv
# define a function ROI that calculates the average value in a specified sample size
def ROI(img, x, y, sample_size):
    Each_circle = img[y-sample_size:y+sample_size, x-sample_size:x+sample_size]
    average_values = np.mean(Each_circle)
    return average_values

# open the csv file named circles_value
circles_values = open('circles_value.csv', 'w')

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())

# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(args["image"])
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 50, 100, 1, 1, 20, 30)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    number = 1
    font = cv2.FONT_HERSHEY_SIMPLEX
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        number = str(number)
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 10, y - 10), (x + 10, y + 10), (0, 128, 255), -1)
        # number each circle, but its result shows irregular pattern
        cv2.putText(output, number, (x, y), font, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
        # get the average value in specified sample size (20 x 20)
        sample_average_value = ROI(output, x, y, 20)
        # write the csv file with number, (x,y), and average pixel value
        circles_values.write(number + ',' + str(x) + ',' + str(y) + ',' + str(sample_average_value) + '\n')
        number = int(number)
        number += 1

# show the output image
cv2.namedWindow("image", cv2.WINDOW_NORMAL)
cv2.imshow("image", output)
cv2.waitKey(0)

# close the csv file
circles_values.close()
``` | 2017/05/03 | [
"https://Stackoverflow.com/questions/43754065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7955795/"
] | I can't answer completely, because it depends entirely on what's in `$HashVariable`.
The easiest way to tell what's in there is:
```
use Data::Dumper;
print Dumper $HashVariable;
```
Assuming this is a hash *reference* - which it would be, if `print $HashVariable` gives `HASH(0xdeadbeef)` as an output.
So this *should* work:
```
#!/usr/bin/env perl
use strict;
use warnings;
my $HashVariable = { somekey => 'somevalue' };
foreach my $key ( keys %$HashVariable ) {
print $key, " => ", $HashVariable->{$key},"\n";
}
```
The only mistake you're making is that `$HashVariable{$key}` won't work - you need to dereference, because as it stands it refers to `%HashVariable` not `$HashVariable` which are two completely different things.
Otherwise - if it's not entering the loop - it may mean that `keys %$HashVariable` isn't returning anything. Which is why that `Dumper` test would be useful - is there any chance you're either not populating it correctly, or you're *writing* to `%HashVariable` instead.
E.g.:
```
my %HashVariable;
$HashVariable{'test'} = "foo";
``` | There's an obvious problem here, but it wouldn't cause the behaviour that you are seeing.
You think that you have a hash reference in `$HashVariable` and that sounds correct given the `HASH(0xd1007d0)` output that you see when you print it.
But setting up a hash reference and running your code, gives slightly strange results:
```
my $HashVariable = {
    foo => 1,
    bar => 2,
    baz => 3,
};

foreach my $var (keys %{$HashVariable}) {
    print "In the loop \n";
    print "$var and $HashVariable{$var}\n";
}
```
The output I get is:
```
In the loop
baz and
In the loop
bar and
In the loop
foo and
```
Notice that the values aren't being printed out. That's because of the problem I mentioned above. Adding `use strict` to the program (which you should always do) tells us what the problem is.
```
Global symbol "%HashVariable" requires explicit package name (did you forget to declare "my %HashVariable"?) at hash line 14.
Execution of hash aborted due to compilation errors.
```
You are using `$HashVariable{$var}` to look up a key in your hash. That would be correct if you had a hash called `%HashVariable`, but you don't - you have a hash reference called `$HashVariable` (note the `$` instead of `%`). To look up a key from a hash reference, you need to use a dereferencing arrow - `$HashVariable->{$var}`.
Fixing that, your program works as expected.
```
use strict;
use warnings;
my $HashVariable = {
    foo => 1,
    bar => 2,
    baz => 3,
};

foreach my $var (keys %{$HashVariable}) {
    print "In the loop \n";
    print "$var and $HashVariable->{$var}\n";
}
```
And I see:
```
In the loop
bar and 2
In the loop
foo and 1
In the loop
baz and 3
```
The only way that you could get the results you describe (the `HASH(0xd1007d0)` output but no iterations of the loop) is if you have a hash reference but the hash has no keys.
So (as I said in a comment) we need to see how your hash reference is created. |
37,096,806 | I have landed in quite a unique problem. I created the model **1.** 'message', used it for a while, then I changed it to **2.** 'messages', and after that changed it back to **3.** 'message', but this time with many changes in the model fields.
As I got to know afterwards, Django migrations runs into some problems while renaming models, and some problems arose in my migrations. Although I had run all migrations in the right way, while running the 3rd migration for message I faced a few problems that I fixed manually. Now when I ran a migration for changes in other models, I found that this migration is still dependent on the 2nd migration of the messages. However, the fields for which it was dependent on the 2nd migration were actually created in the third migration.
The traceback i am getting:
```
ValueError: Lookup failed for model referenced by field activities.Enquiry.message_fk: chat.Message
```
and:
```
Applying contacts.0002_mailsend...Traceback (most recent call last):
File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/apps/config.py", line 163, in get_model
return self.models[model_name.lower()]
KeyError: 'message'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/db/migrations/state.py", line 84, in render
model = self.apps.get_model(lookup_model[0], lookup_model[1])
File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/apps/registry.py", line 202, in get_model
return self.get_app_config(app_label).get_model(model_name.lower())
File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/apps/config.py", line 166, in get_model
"App '%s' doesn't have a '%s' model." % (self.label, model_name))
LookupError: App 'chat' doesn't have a 'message' model.
```
What I want to ask is whether I should manually edit the dependencies in the migration file to change it from migration 2 to migration 3 in messages.
PS: using django 1.7.2 | 2016/05/08 | [
"https://Stackoverflow.com/questions/37096806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4510252/"
] | Normally, You should not edit them manually.
Once you start editing them, you will land into cyclic dependencies problems and if you do not remember what changes you made, your entire migrations will be messed up.
What you can do is revert back migrations if you do not have any data to lose. If you are deleting migrations, you should take extra precaution just to ensure that in the migration table no entry remains which points towards nonexistent migrations. (I would suggest not to delete migrations manually as it might get complicated.)
Only if you have analyzed the migration files and have a clear idea of exactly where the problem occurred should you think of editing the migration file, and don't do it unless you can handle it.
In you case, yes the problem might have generated due to renaming and as you say while running a migration you landed into some problem which you fixed manually, it might have happened that the process would have been stuck in between and it created some problem. You can change the dependency and run `makemigrations`. If there is a circular dependency, it will come directly, then you should revert back the change. Or otherwise, just do a little more analysis and remove the cyclic dependency issue by editing a few more files. (keep backup) If you are lucky or you understand migrations deeply, you might end up with success. | No, I don't think so, you are better off deleting the migration files after the last successful migrations and running it again. |
37,096,806 | I have landed in quite a unique problem. I created the model **1.** 'message', used it for a while, then I changed it to **2.** 'messages', and after that changed it back to **3.** 'message', but this time with many changes in the model fields.
As I got to know afterwards, Django migrations runs into some problems while renaming models, and some problems arose in my migrations. Although I had run all migrations in the right way, while running the 3rd migration for message I faced a few problems that I fixed manually. Now when I ran a migration for changes in other models, I found that this migration is still dependent on the 2nd migration of the messages. However, the fields for which it was dependent on the 2nd migration were actually created in the third migration.
The traceback i am getting:
```
ValueError: Lookup failed for model referenced by field activities.Enquiry.message_fk: chat.Message
```
and:
```
Applying contacts.0002_mailsend...Traceback (most recent call last):
File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/apps/config.py", line 163, in get_model
return self.models[model_name.lower()]
KeyError: 'message'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/db/migrations/state.py", line 84, in render
model = self.apps.get_model(lookup_model[0], lookup_model[1])
File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/apps/registry.py", line 202, in get_model
return self.get_app_config(app_label).get_model(model_name.lower())
File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/apps/config.py", line 166, in get_model
"App '%s' doesn't have a '%s' model." % (self.label, model_name))
LookupError: App 'chat' doesn't have a 'message' model.
```
What I want to ask is whether I should manually edit the dependencies in the migration file to change it from migration 2 to migration 3 in messages.
PS: using django 1.7.2 | 2016/05/08 | [
"https://Stackoverflow.com/questions/37096806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4510252/"
] | Normally, You should not edit them manually.
Once you start editing them, you will land into cyclic dependencies problems and if you do not remember what changes you made, your entire migrations will be messed up.
What you can do is revert back migrations if you do not have any data to lose. If you are deleting migrations, you should take extra precaution just to ensure that in the migration table no entry remains which points towards nonexistent migrations. (I would suggest not to delete migrations manually as it might get complicated.)
Only if you have analyzed the migration files and have a clear idea of exactly where the problem occurred should you think of editing the migration file, and don't do it unless you can handle it.
In you case, yes the problem might have generated due to renaming and as you say while running a migration you landed into some problem which you fixed manually, it might have happened that the process would have been stuck in between and it created some problem. You can change the dependency and run `makemigrations`. If there is a circular dependency, it will come directly, then you should revert back the change. Or otherwise, just do a little more analysis and remove the cyclic dependency issue by editing a few more files. (keep backup) If you are lucky or you understand migrations deeply, you might end up with success. | Having gone through the migration management process in different companies, I think it's fine to edit migrations if you know what you are doing. Actually, in many cases you will have to edit existing migrations file or even create new file just to implement a particular change. Few points to be taken care here:
1. Understand and Maintain the sequence of operations being done.
2. Be aware of dependencies
3. Test it before pushing to staging and production |
57,060,964 | I am using `sklearn` modules to find the best fitting models and model parameters. However, I have an unexpected Index error down below:
```
> IndexError                               Traceback (most recent call last)
> <ipython-input-38-ea3f99e30226> in <module>
>      22             s = mean_squared_error(y[ts], best_m.predict(X[ts]))
>      23             cv[i].append(s)
> ---> 24     print(np.mean(cv, 1))
>
> IndexError: tuple index out of range
```
What I want to do is find the best-fitting regressor and its parameters, but I get the above error. I looked into SO and tried [this solution](https://stackoverflow.com/questions/20296188/indexerror-tuple-index-out-of-range-python), but the same error still comes up. Any idea how to fix this bug? Can anyone point out why this error is happening?
**my code**:
```
import warnings

import numpy as np

from sklearn.model_selection import KFold, GridSearchCV
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from xgboost.sklearn import XGBRegressor
from sklearn.datasets import make_regression
models = [SVR(), RandomForestRegressor(), LinearRegression(), Ridge(), Lasso(), XGBRegressor()]
params = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}]
X, y = make_regression(n_samples=10000, n_features=20)
with warnings.catch_warnings():
    warnings.filterwarnings("ignore")
    cv = [[] for _ in range(len(models))]
    fold = KFold(5, shuffle=False)
    for tr, ts in fold.split(X):
        for i, (model, param) in enumerate(zip(models, params)):
            best_m = GridSearchCV(model, param)
            best_m.fit(X[tr], y[tr])
            s = mean_squared_error(y[ts], best_m.predict(X[ts]))
            cv[i].append(s)
print(np.mean(cv, 1))
```
**desired output**:
If there is a way to fix the above error, I expect to pick the best-fitting models with their parameters, then use them for estimation. Any ideas to improve the above attempt? Thanks | 2019/07/16 | [
"https://Stackoverflow.com/questions/57060964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7302169/"
] | The root cause of your issue is that, while you ask for the evaluation of 6 models in `GridSearchCV`, you provide parameters only for the first 2 ones:
```
models = [SVR(), RandomForestRegressor(), LinearRegression(), Ridge(), Lasso(), XGBRegressor()]
params = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}]
```
The result of `enumerate(zip(models, params))` in this setting, i.e:
```
for i, (model, param) in enumerate(zip(models, params)):
    print((model, param))
```
is
```
(SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma='auto',
kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False), {'C': [0.01, 1]})
(RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=None, verbose=0, warm_start=False), {'n_estimators': [10, 20]})
```
i.e the last 4 models are simply ignored, so you get empty entries for them in `cv`:
```
print(cv)
# result:
[[5950.6018771284835, 5987.293514740653, 6055.368320208183, 6099.316091619069, 6146.478702335218], [3625.3243553665975, 3301.3552182952058, 3404.3321983193728, 3521.5160621260898, 3561.254684271113], [], [], [], []]
```
which causes the downstream error when trying to get the `np.mean(cv, 1)`.
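
A minimal reproduction of that failure mode, detached from the models (the exact exception type depends on the NumPy version, so it is only checked for, not named):

```python
import numpy as np

# Ragged rows: the last "models" never received a score.
cv = [[1.0, 2.0], [3.0, 4.0], [], []]

try:
    np.mean(cv, 1)
    failed = False
except Exception:
    failed = True

print(failed)  # → True
```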
The solution, as already correctly pointed out by Psi in their answer, is to go for empty dictionaries in the models in which you actually **don't** perform any CV search; omitting the `XGBRegressor` (have not installed it), here are the results:
```
models = [SVR(), RandomForestRegressor(), LinearRegression(), Ridge(), Lasso()]
params2 = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}, {}, {}, {}]
cv = [[] for _ in range(len(models))]
fold = KFold(5,shuffle=False)
for tr, ts in fold.split(X):
    for i, (model, param) in enumerate(zip(models, params2)):
        best_m = GridSearchCV(model, param)
        best_m.fit(X[tr], y[tr])
        s = mean_squared_error(y[ts], best_m.predict(X[ts]))
        cv[i].append(s)
```
where `print(cv)` gives:
```
[[4048.660483326826, 3973.984055352062, 3847.7215568088545, 3907.0566348092684, 3820.0517432992765], [1037.9378737329769, 1025.237441119364, 1016.549294695313, 993.7083268195154, 963.8115632611381], [2.2948917095935095e-26, 1.971022007799432e-26, 4.1583774042712844e-26, 2.0229469068846665e-25, 1.9295075684919642e-26], [0.0003350178681602639, 0.0003297411022124562, 0.00030834076832371557, 0.0003355298330301431, 0.00032049282437794516], [10.372789356303688, 10.137748082073076, 10.136028304131141, 10.499159069700834, 9.80779910439471]]
```
and `print(np.mean(cv, 1))` works OK, giving:
```
[3.91949489e+03 1.00744890e+03 6.11665355e-26 3.25824479e-04
1.01907048e+01]
```
So, in your case, you should indeed change `params` to:
```
params = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}, {}, {}, {}, {}]
```
as already suggested by Psi. | When you define
```
cv = [[] for _ in range(len(models))]
```
it has an empty list for each model.
In the loop, however, you go over `enumerate(zip(models, params))`, which has only **two** elements, since your `params` list has two elements (because `list(zip(x,y))` [has length](https://docs.python.org/3.3/library/functions.html#zip) equal to `min(len(x), len(y))`).
Hence, you get an `IndexError` because some of the lists in `cv` are empty (all but the first two) when you calculate the mean with `np.mean`.
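
The truncation is easy to see in isolation (a minimal illustration using stand-in names for the six models and two parameter grids):

```python
# Stand-ins for the six models and the two parameter grids in the question.
models = ["SVR", "RandomForest", "Linear", "Ridge", "Lasso", "XGB"]
params = [{"C": [0.01, 1]}, {"n_estimators": [10, 20]}]

# zip stops at the shorter input, so only two (model, param) pairs remain.
pairs = list(zip(models, params))
print(len(pairs))  # → 2
```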
**Solution:**
If you don't need to use `GridSearchCV` on the remaining models you may just extend the `params` list with empty dictionaries:
```
params = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}, {}, {}, {}, {}]
``` |
31,387,660 | How can I use the Kivy framework in the Qpython3 (Python 3.2 for Android) app?
I know that the Qpython (Python 2.7 for Android) app supports this framework.
pip_console doesn't install Kivy; I get an error when I try to install it. Please help me. | 2015/07/13 | [
"https://Stackoverflow.com/questions/31387660",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5111676/"
] | ```
Session["email"] = email;
```
This will store the value between response and postback. Let me know if this is what you were looking for. | **TempData** can work for you. Another option is to store it in a hidden field and receive it back on POST, but you should be aware that "bad users" can modify that (via browser developer tools, for example). |
31,387,660 | How can I use the Kivy framework in the Qpython3 (Python 3.2 for Android) app?
I know that the Qpython (Python 2.7 for Android) app supports this framework.
pip_console doesn't install Kivy; I get an error when I try to install it. Please help me. | 2015/07/13 | [
"https://Stackoverflow.com/questions/31387660",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5111676/"
] | ```
Session["email"] = email;
```
This will store the value between response and postback. Let me know if this is what you were looking for. | **If you need values that were rendered with the view to arrive in the POST**
You should add that value to the Model (`Profile` in your case).
**If you need values that were saved with POST to load in the next view**
If you are using the same action (`return View()`), just preload the model.
```
DoStuff();
return View(new Model { MyValue = "foo" });
```
If you are redirecting to another action, use persisted state, and redirect using appropriate route parameters.
```
var transactionId = SaveThings();
return RedirectToAction("Confirmation", new {transactionId});
```
And load the values you need depending on how you persisted the information. If the information is OK to stick in route parameters, even simpler. |
7,391,689 | Here is what I can read in the python subprocess module documentation:
```
Replacing shell pipeline
output=`dmesg | grep hda`
==>
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
The p1.stdout.close() call after starting the p2 is important in order for p1
to receive a SIGPIPE if p2 exits before p1.
```
I don't really understand why we have to close p1.stdout after having created p2.
When exactly is p1.stdout.close() executed?
What happens if p2 never ends?
What happens if neither p1 nor p2 ends? | 2011/09/12 | [
"https://Stackoverflow.com/questions/7391689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | From [Wikipedia](http://en.wikipedia.org/wiki/SIGPIPE), **SIGPIPE** is the signal sent to a process when it attempts to write to a pipe without a process connected to the other end.
When you first create `p1` using `stdout=PIPE`, there is one process connected to the pipe, which is your Python process, and you can read the output using `p1.stdout`.
When you create `p2` using `stdin=p1.stdout` there are now two processes connected to the pipe `p1.stdout`.
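For illustration, here is a minimal, POSIX-only sketch of the same pattern using `yes` and `head` in place of `dmesg | grep` (not the original commands):

```python
import subprocess
from subprocess import PIPE

# "yes" writes lines forever; "head -n 1" exits after reading one line
p1 = subprocess.Popen(["yes"], stdout=PIPE)
p2 = subprocess.Popen(["head", "-n", "1"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # drop the parent's handle so p2 is the pipe's only reader
out = p2.communicate()[0]
rc = p1.wait()  # "yes" is killed by SIGPIPE once "head" exits (rc is negative)
```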
Generally when you are running processes in a pipeline you want all processes to end when any of the processes end. For this to happen automatically you need to close `p1.stdout` so `p2.stdin` is the only process attached to that pipe, this way if `p2` ends and `p1` writes additional data to stdout, it will receive a SIGPIPE since there are no longer any processes attached to that pipe. | OK I see.
p1.stdout is closed from my python script but remains open in p2, and then p1 and p2 communicate together.
Except if p2 is already closed, then p1 receives a SIGPIPE.
Am I correct? |
46,517,814 | sudo python yantest.py 255,255,0
```
who = sys.argv[1]
print sys.argv[1]
print who
print 'Number of arguments:', len(sys.argv), 'arguments.'
print 'Argument List:', str(sys.argv)
yanon(strip, Color(who))
```
output from above is
```
255,255,0
255,255,0
Number of arguments: 2 arguments.
Argument List: ['yantest.py', '255,255,0']
Traceback (most recent call last):
File "yantest.py", line 46, in <module>
yanon(strip, Color(who))
TypeError: Color() takes at least 3 arguments (1 given)
Segmentation fault
```
How do I use the variable "who" inside the Color function?
I've tried ('who') and ("who"), neither of which works either. | 2017/10/01 | [
"https://Stackoverflow.com/questions/46517814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7509061/"
] | The problem with your implementation is that it does not distinguish original numbers from the squares that you have previously added.
First, since you are doing this recursively, you don't need a `for` loop. Each invocation needs to take care of the initial value of the list alone.
Next, `add(n)` adds the number at the end, while your example shows adding numbers immediately after the original value. Therefore, you should use `num.add(1, hold)`, and skip two initial numbers when making a recursive call.
Here is how the fixed method should look:
```
public static int sumOfSquares(List<Integer> num) {
if (num.isEmpty()) {
return 0;
}
// Deal with only the initial element
int hold= num.get(0)*num.get(0);
// Insert at position 1, right after the squared number
num.add(1, hold);
// Truncate two initial numbers, the value and its square:
return num.get(1) + sumOfSquares(num.subList(2, num.size()));
}
```
[Demo.](https://ideone.com/ByylCV) | There are two ways to safely add (or remove) elements to a list while iterating it:
1. Iterate backwards over the list, so that the indexes of the upcoming elements don't shift.
2. Use an [`Iterator`](https://docs.oracle.com/javase/9/docs/api/java/util/Iterator.html) or [`ListIterator`](https://docs.oracle.com/javase/9/docs/api/java/util/ListIterator.html).
You can fix your code using either strategy, but I recommend a `ListIterator` for readable code.
```
import java.util.ListIterator;
public static void insertSquares(List<Integer> num) {
ListIterator<Integer> iter = num.listIterator();
while (iter.hasNext()) {
int value = iter.next();
iter.add(value * value);
}
}
```
Then, move the summing code into a separate method so that the recursion doesn't interfere with the inserting of squares into the list. Your recursive solution will work, but an iterative solution would be more efficient for Java. |
45,765,946 | I'm using some objects in python with dynamic properties, all with numbers and strings. Also I created a simple method to make a copy of an object. One of the property is a list, but I don't need it to be deep copied. This method seems to work fine, but I found an odd problem. This piece of code shows it:
```
#!/usr/bin/env python3
# class used for the example
class test(object):
def copy(self):
retval = test()
# just create a new, empty object, and populate it with
# my defined properties
for element in dir(self):
if element.startswith("_"):
continue
setattr(retval, element, getattr(self, element))
return retval
test1 = test()
# here I dynamically create an attribute (called "type") in this object
setattr(test1, "type", "A TEST VALUE")
# this print shows "A TEST VALUE", as expected
print(test1.type)
# Let's copy test1 as test2
test2 = test1.copy()
# this print shows also "A TEST VALUE", as expected
print(test2.type)
test2.type = "ANOTHER VALUE"
# this print shows "ANOTHER VALUE", as expected
print(test2.type)
# Let's copy test2 as test3
test3 = test2.copy()
# this print shows "A TEST VALUE", but "ANOTHER VALUE" was expected
print(test3.type)
```
Where is my conceptual error?
Thanks. | 2017/08/18 | [
"https://Stackoverflow.com/questions/45765946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502508/"
] | Your `copy()` method copied the `copy` method (*not* the function from the class) from `test1`, which means that `self` in `test2.copy()` is still `test1`. | If you take a look at `dir(test1)`, you'll see that one of the elements is `'copy'`. In other words, you're not just copying the `type` attribute.
**You're copying the `copy` method.**
`test2` gets `test2.copy` set to `test1.copy`, a bound method that will copy `test1`.
Don't use `dir` for this. Look at the instance's `__dict__`, which only contains instance-specific data. |
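A sketch of what a `__dict__`-based `copy()` could look like for the question's class:

```python
class Test(object):
    def copy(self):
        # create a bare instance and copy instance attributes only;
        # dir() would also have picked up the bound copy method itself
        new = type(self).__new__(type(self))
        new.__dict__.update(self.__dict__)
        return new

t1 = Test()
t1.type = "A TEST VALUE"
t2 = t1.copy()
t2.type = "ANOTHER VALUE"
t3 = t2.copy()  # copies t2, not t1
```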
4,834,538 | ```
import os
import sys
os.environ['DJANGO_SETTINGS_MODULE'] = "trade.settings"
from trade.turkey.models import *
d = DemoRecs.objects.all()
d.delete()
```
When I run this, it imports fine if I leave out the `d.delete()` line. It's erroring on that line. Why? If I comment that out, everything is cool. I can insert. I can update. But when I have that line everything screws up.
The traceback is:
```
d.delete()
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 447, in delete
obj._collect_sub_objects(seen_objs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py", line 585, in _collect_sub_objects
for related in self._meta.get_all_related_objects():
File "/usr/local/lib/python2.6/dist-packages/django/db/models/options.py", line 347, in get_all_related_objects
self._fill_related_objects_cache()
File "/usr/local/lib/python2.6/dist-packages/django/db/models/options.py", line 374, in _fill_related_objects_cache
for klass in get_models():
File "/usr/local/lib/python2.6/dist-packages/django/db/models/loading.py", line 167, in get_models
self._populate()
File "/usr/local/lib/python2.6/dist-packages/django/db/models/loading.py", line 61, in _populate
self.load_app(app_name, True)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/loading.py", line 76, in load_app
app_module = import_module(app_name)
File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
ImportError: No module named turkey
``` | 2011/01/29 | [
"https://Stackoverflow.com/questions/4834538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/179736/"
] | The directory for the `trade` project is missing from `sys.path`. | Try adding "trade" to the pythonpath...
```
import os.path
import sys
_pypath = os.path.realpath(os.path.dirname(__file__) + '/trade')
sys.path.append(_pypath)
``` |
50,809,052 | So in python, if I want to make an if statement I need to do something like this (where a,b,c are conditions):
```
if(a)
x=1
elsif(b)
x=1
elseif(c)
x=1
```
is there a way to simply do something like:
```
if(a or b or c)
x=1
```
this would save a huge amount of time, but it doesn't evaluate. | 2018/06/12 | [
"https://Stackoverflow.com/questions/50809052",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9928114/"
] | Turns out, whatever the issue was internally, it was actually triggered by this library in my `build.gradle` file:
```
implementation "com.github.bigfishcat.android:svg-android:2.0.8"
```
How a library cause this, I do not know. Everything builds fine now though. | apply plugin: 'com.android.application'
**apply plugin: 'kotlin-android'**
**apply plugin: 'kotlin-android-extensions'**
android {
```
compileSdkVersion 26
defaultConfig {
applicationId "com.example.admin.myapplication"
minSdkVersion 15
targetSdkVersion 26
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
```
}
dependencies {
```
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation"org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version"
implementation 'com.android.support:appcompat-v7:26.1.0'
implementation 'com.android.support.constraint:constraint-layout:1.1.0'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.2'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
implementation 'com.android.support:recyclerview-v7:26.1.0'
```
}
my android studio version is 3.1.1, working properly
remove all kotlin libraries from your gradle
and put
implementation "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin\_version"
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
and in the top-level build.gradle (your\_app\_name) put
buildscript {
```
ext.kotlin_version = '1.2.30'
repositories {
google()
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.1.1'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
```
} |
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | I've written a PHP port based on the C code, optimized for step sizes -1 and 1:
```
function get_indices($length, $step, &$start, &$end, &$size)
{
if (is_null($start)) {
$start = $step < 0 ? $length - 1 : 0;
} else {
if ($start < 0) {
$start += $length;
if ($start < 0) {
$start = $step < 0 ? -1 : 0;
}
} elseif ($start >= $length) {
$start = $step < 0 ? $length - 1 : $length;
}
}
if (is_null($end)) {
$end = $step < 0 ? -1 : $length;
} else {
if ($end < 0) {
$end += $length;
if ($end < 0) {
$end = $step < 0 ? - 1 : 0;
}
} elseif ($end >= $length) {
$end = $step < 0 ? $length - 1 : $length;
}
}
if (($step < 0 && $end >= $start) || ($step > 0 && $start >= $end)) {
$size = 0;
} elseif ($step < 0) {
$size = ($end - $start + 1) / $step + 1;
} else {
$size = ($end - $start - 1) / $step + 1;
}
}
function mySlice($L, $start = NULL, $end = NULL, $step = 1)
{
if (!$step) {
return false; // could throw exception too
}
$length = count($L);
get_indices($length, $step, $start, $end, $size);
// optimize default step
if ($step == 1) {
// apply native array_slice()
return array_slice($L, $start, $size);
} elseif ($step == -1) {
// negative step needs an array reversal first
// with range translation
return array_slice(array_reverse($L), $length - $start - 1, $size);
} else {
// standard fallback
$r = array();
for ($i = $start; $step < 0 ? $i > $end : $i < $end; $i += $step) {
$r[] = $L[$i];
}
return $r;
}
}
``` | I can't say there's no bug in the code, but it passed your test program :)
```
def mySlice(L, start=None, stop=None, step=None):
ret = []
le = len(L)
if step is None: step = 1
if step > 0: #this situation might be easier
if start is None:
start = 0
else:
if start < 0: start += le
if start < 0: start = 0
if start > le: start = le
if stop is None:
stop = le
else:
if stop < 0: stop += le
if stop < 0: stop = 0
if stop > le: stop = le
else:
if start is None:
start = le-1
else:
if start < 0: start += le
if start < 0: start = -1
if start >= le: start = le-1
if stop is None:
stop = -1 #stop is not 0 because we need L[0]
else:
if stop < 0: stop += le
if stop < 0: stop = -1
if stop >= le: stop = le
#(stop-start)*step>0 to make sure 2 things:
#1: step != 0
#2: iteration will end
while start != stop and (stop-start)*step > 0 and start >=0 and start < le:
ret.append( L[start] )
start += step
return ret
``` |
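One way to sanity-check such a hand-rolled implementation is against CPython's own `slice.indices()`, which performs the standard endpoint clamping:

```python
L = list(range(10))

# a few representative (start, stop, step) triples, including None defaults
for args in [(-100, 100, 2), (None, None, -1), (7, 2, -2), (3, None, None)]:
    s = slice(*args)
    start, stop, step = s.indices(len(L))  # clamped the way Python clamps slices
    assert [L[i] for i in range(start, stop, step)] == L[s]
```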
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | Here's a straight port of the C code:
```
def adjust_endpoint(length, endpoint, step):
if endpoint < 0:
endpoint += length
if endpoint < 0:
endpoint = -1 if step < 0 else 0
elif endpoint >= length:
endpoint = length - 1 if step < 0 else length
return endpoint
def adjust_slice(length, start, stop, step):
if step is None:
step = 1
elif step == 0:
raise ValueError("step cannot be 0")
if start is None:
start = length - 1 if step < 0 else 0
else:
start = adjust_endpoint(length, start, step)
if stop is None:
stop = -1 if step < 0 else length
else:
stop = adjust_endpoint(length, stop, step)
return start, stop, step
def slice_indices(length, start, stop, step):
start, stop, step = adjust_slice(length, start, stop, step)
i = start
while (i > stop) if step < 0 else (i < stop):
yield i
i += step
def mySlice(L, start=None, stop=None, step=None):
return [L[i] for i in slice_indices(len(L), start, stop, step)]
``` | This is what I came up with (python)
```
def mySlice(L, start=None, stop=None, step=None):
answer = []
if not start:
start = 0
if start < 0:
start += len(L)
if not stop:
stop = len(L)
if stop < 0:
stop += len(L)
if not step:
step = 1
if stop == start or (stop<=start and step>0) or (stop>=start and step<0):
return []
i = start
while i != stop:
try:
answer.append(L[i])
i += step
except:
break
return answer
```
Seems to work - let me know what you think
Hope it helps |
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | This is based on @ecatmur's Python code ported again to PHP.
```
<?php
function adjust_endpoint($length, $endpoint, $step) {
if ($endpoint < 0) {
$endpoint += $length;
if ($endpoint < 0) {
$endpoint = $step < 0 ? -1 : 0;
}
}
elseif ($endpoint >= $length) {
$endpoint = $step < 0 ? $length - 1 : $length;
}
return $endpoint;
}
function mySlice($L, $start = null, $stop = null, $step = null) {
$sliced = array();
$length = count($L);
// adjust_slice()
if ($step === null) {
$step = 1;
}
elseif ($step == 0) {
throw new Exception('step cannot be 0');
}
if ($start === null) {
$start = $step < 0 ? $length - 1 : 0;
}
else {
$start = adjust_endpoint($length, $start, $step);
}
if ($stop === null) {
$stop = $step < 0 ? -1 : $length;
}
else {
$stop = adjust_endpoint($length, $stop, $step);
}
// slice_indices()
$i = $start;
$result = array();
while ($step < 0 ? ($i > $stop) : ($i < $stop)) {
$sliced []= $L[$i];
$i += $step;
}
return $sliced;
}
``` | I can't say there's no bug in the code, but it passed your test program :)
```
def mySlice(L, start=None, stop=None, step=None):
ret = []
le = len(L)
if step is None: step = 1
if step > 0: #this situation might be easier
if start is None:
start = 0
else:
if start < 0: start += le
if start < 0: start = 0
if start > le: start = le
if stop is None:
stop = le
else:
if stop < 0: stop += le
if stop < 0: stop = 0
if stop > le: stop = le
else:
if start is None:
start = le-1
else:
if start < 0: start += le
if start < 0: start = -1
if start >= le: start = le-1
if stop is None:
stop = -1 #stop is not 0 because we need L[0]
else:
if stop < 0: stop += le
if stop < 0: stop = -1
if stop >= le: stop = le
#(stop-start)*step>0 to make sure 2 things:
#1: step != 0
#2: iteration will end
while start != stop and (stop-start)*step > 0 and start >=0 and start < le:
ret.append( L[start] )
start += step
return ret
``` |
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | This is what I came up with (python)
```
def mySlice(L, start=None, stop=None, step=None):
answer = []
if not start:
start = 0
if start < 0:
start += len(L)
if not stop:
stop = len(L)
if stop < 0:
stop += len(L)
if not step:
step = 1
if stop == start or (stop<=start and step>0) or (stop>=start and step<0):
return []
i = start
while i != stop:
try:
answer.append(L[i])
i += step
except:
break
return answer
```
Seems to work - let me know what you think
Hope it helps | I've written a PHP port based on the C code, optimized for step sizes -1 and 1:
```
function get_indices($length, $step, &$start, &$end, &$size)
{
if (is_null($start)) {
$start = $step < 0 ? $length - 1 : 0;
} else {
if ($start < 0) {
$start += $length;
if ($start < 0) {
$start = $step < 0 ? -1 : 0;
}
} elseif ($start >= $length) {
$start = $step < 0 ? $length - 1 : $length;
}
}
if (is_null($end)) {
$end = $step < 0 ? -1 : $length;
} else {
if ($end < 0) {
$end += $length;
if ($end < 0) {
$end = $step < 0 ? - 1 : 0;
}
} elseif ($end >= $length) {
$end = $step < 0 ? $length - 1 : $length;
}
}
if (($step < 0 && $end >= $start) || ($step > 0 && $start >= $end)) {
$size = 0;
} elseif ($step < 0) {
$size = ($end - $start + 1) / $step + 1;
} else {
$size = ($end - $start - 1) / $step + 1;
}
}
function mySlice($L, $start = NULL, $end = NULL, $step = 1)
{
if (!$step) {
return false; // could throw exception too
}
$length = count($L);
get_indices($length, $step, $start, $end, $size);
// optimize default step
if ($step == 1) {
// apply native array_slice()
return array_slice($L, $start, $size);
} elseif ($step == -1) {
// negative step needs an array reversal first
// with range translation
return array_slice(array_reverse($L), $length - $start - 1, $size);
} else {
// standard fallback
$r = array();
for ($i = $start; $step < 0 ? $i > $end : $i < $end; $i += $step) {
$r[] = $L[$i];
}
return $r;
}
}
``` |
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | Here's a straight port of the C code:
```
def adjust_endpoint(length, endpoint, step):
if endpoint < 0:
endpoint += length
if endpoint < 0:
endpoint = -1 if step < 0 else 0
elif endpoint >= length:
endpoint = length - 1 if step < 0 else length
return endpoint
def adjust_slice(length, start, stop, step):
if step is None:
step = 1
elif step == 0:
raise ValueError("step cannot be 0")
if start is None:
start = length - 1 if step < 0 else 0
else:
start = adjust_endpoint(length, start, step)
if stop is None:
stop = -1 if step < 0 else length
else:
stop = adjust_endpoint(length, stop, step)
return start, stop, step
def slice_indices(length, start, stop, step):
start, stop, step = adjust_slice(length, start, stop, step)
i = start
while (i > stop) if step < 0 else (i < stop):
yield i
i += step
def mySlice(L, start=None, stop=None, step=None):
return [L[i] for i in slice_indices(len(L), start, stop, step)]
``` | This is based on @ecatmur's Python code ported again to PHP.
```
<?php
function adjust_endpoint($length, $endpoint, $step) {
if ($endpoint < 0) {
$endpoint += $length;
if ($endpoint < 0) {
$endpoint = $step < 0 ? -1 : 0;
}
}
elseif ($endpoint >= $length) {
$endpoint = $step < 0 ? $length - 1 : $length;
}
return $endpoint;
}
function mySlice($L, $start = null, $stop = null, $step = null) {
$sliced = array();
$length = count($L);
// adjust_slice()
if ($step === null) {
$step = 1;
}
elseif ($step == 0) {
throw new Exception('step cannot be 0');
}
if ($start === null) {
$start = $step < 0 ? $length - 1 : 0;
}
else {
$start = adjust_endpoint($length, $start, $step);
}
if ($stop === null) {
$stop = $step < 0 ? -1 : $length;
}
else {
$stop = adjust_endpoint($length, $stop, $step);
}
// slice_indices()
$i = $start;
$result = array();
while ($step < 0 ? ($i > $stop) : ($i < $stop)) {
$sliced []= $L[$i];
$i += $step;
}
return $sliced;
}
``` |
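For comparison, CPython already exposes the endpoint-adjustment logic that both ports implement by hand, via the stdlib `slice.indices()` method. A minimal sketch (pure Python, no assumptions beyond the standard library):

```python
# slice.indices(length) returns the fully clamped (start, stop, step)
# triple, mirroring what the adjust_slice()/adjust_endpoint() code above
# computes manually.
L = list(range(10))
start, stop, step = slice(-3, None, -2).indices(len(L))
picked = [L[i] for i in range(start, stop, step)]
assert picked == L[-3::-2]  # [7, 5, 3, 1]
```

If a hand-rolled port and `slice.indices()` ever disagree, the port is the one that is wrong.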
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | This is a solution I came up with in C# .NET, maybe not the prettiest, but it works.
```
private object[] Slice(object[] list, int start = 0, int stop = 0, int step = 0)
{
List<object> result = new List<object>();
if (step == 0) step = 1;
if (start < 0)
{
for (int i = list.Length + start; i < list.Length - (list.Length + start); i++)
{
result.Add(list[i]);
}
}
if (start >= 0 && stop == 0) stop = list.Length - (start >= 0 ? start : 0);
else if (start >= 0 && stop < 0) stop = list.Length + stop;
int loopStart = (start < 0 ? 0 : start);
int loopEnd = (start > 0 ? start + stop : stop);
if (step > 0)
{
for (int i = loopStart; i < loopEnd; i += step)
result.Add(list[i]);
}
else if (step < 0)
{
for (int i = loopEnd - 1; i >= loopStart; i += step)
result.Add(list[i]);
}
return result.ToArray();
}
``` | I can't say there's no bug in the code, but it has passed your test program :)
```
def mySlice(L, start=None, stop=None, step=None):
ret = []
le = len(L)
if step is None: step = 1
if step > 0: #this situation might be easier
if start is None:
start = 0
else:
if start < 0: start += le
if start < 0: start = 0
if start > le: start = le
if stop is None:
stop = le
else:
if stop < 0: stop += le
if stop < 0: stop = 0
if stop > le: stop = le
else:
if start is None:
start = le-1
else:
if start < 0: start += le
if start < 0: start = -1
if start >= le: start = le-1
if stop is None:
stop = -1 #stop is not 0 because we need L[0]
else:
if stop < 0: stop += le
if stop < 0: stop = -1
if stop >= le: stop = le
#(stop-start)*step>0 to make sure 2 things:
#1: step != 0
#2: iteration will end
while start != stop and (stop-start)*step > 0 and start >=0 and start < le:
ret.append( L[start] )
start += step
return ret
``` |
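A compact reference implementation built on the stdlib `slice.indices()` can serve as an oracle for cross-checking hand-written ports like the one above (`refSlice` is an illustrative name, not taken from any of the answers):

```python
def refSlice(L, start=None, stop=None, step=None):
    # Delegate all clamping and defaulting to CPython's own slice machinery.
    s, e, st = slice(start, stop, step).indices(len(L))
    return [L[i] for i in range(s, e, st)]

L = list(range(10))
assert refSlice(L, -100, 100, 2) == L[-100:100:2]   # out-of-range endpoints clamp
assert refSlice(L, None, None, -3) == L[::-3]       # negative step, default endpoints
```

Like real slicing, this raises `ValueError` for `step == 0`, since `slice.indices()` enforces that rule itself.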
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | This is what I came up with (python)
```
def mySlice(L, start=None, stop=None, step=None):
answer = []
if not start:
start = 0
if start < 0:
start += len(L)
if not stop:
stop = len(L)
if stop < 0:
stop += len(L)
if not step:
step = 1
if stop == start or (stop<=start and step>0) or (stop>=start and step<0):
return []
i = start
while i != stop:
try:
answer.append(L[i])
i += step
except:
break
return answer
```
Seems to work - let me know what you think
Hope it helps | I can't say there's no bug in the code, but it has passed your test program :)
```
def mySlice(L, start=None, stop=None, step=None):
ret = []
le = len(L)
if step is None: step = 1
if step > 0: #this situation might be easier
if start is None:
start = 0
else:
if start < 0: start += le
if start < 0: start = 0
if start > le: start = le
if stop is None:
stop = le
else:
if stop < 0: stop += le
if stop < 0: stop = 0
if stop > le: stop = le
else:
if start is None:
start = le-1
else:
if start < 0: start += le
if start < 0: start = -1
if start >= le: start = le-1
if stop is None:
stop = -1 #stop is not 0 because we need L[0]
else:
if stop < 0: stop += le
if stop < 0: stop = -1
if stop >= le: stop = le
#(stop-start)*step>0 to make sure 2 things:
#1: step != 0
#2: iteration will end
while start != stop and (stop-start)*step > 0 and start >=0 and start < le:
ret.append( L[start] )
start += step
return ret
``` |
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | This is what I came up with (python)
```
def mySlice(L, start=None, stop=None, step=None):
answer = []
if not start:
start = 0
if start < 0:
start += len(L)
if not stop:
stop = len(L)
if stop < 0:
stop += len(L)
if not step:
step = 1
if stop == start or (stop<=start and step>0) or (stop>=start and step<0):
return []
i = start
while i != stop:
try:
answer.append(L[i])
i += step
except:
break
return answer
```
Seems to work - let me know what you think
Hope it helps | This is based on @ecatmur's Python code ported again to PHP.
```
<?php
function adjust_endpoint($length, $endpoint, $step) {
if ($endpoint < 0) {
$endpoint += $length;
if ($endpoint < 0) {
$endpoint = $step < 0 ? -1 : 0;
}
}
elseif ($endpoint >= $length) {
$endpoint = $step < 0 ? $length - 1 : $length;
}
return $endpoint;
}
function mySlice($L, $start = null, $stop = null, $step = null) {
$sliced = array();
$length = count($L);
// adjust_slice()
if ($step === null) {
$step = 1;
}
elseif ($step == 0) {
throw new Exception('step cannot be 0');
}
if ($start === null) {
$start = $step < 0 ? $length - 1 : 0;
}
else {
$start = adjust_endpoint($length, $start, $step);
}
if ($stop === null) {
$stop = $step < 0 ? -1 : $length;
}
else {
$stop = adjust_endpoint($length, $stop, $step);
}
// slice_indices()
$i = $start;
$result = array();
while ($step < 0 ? ($i > $stop) : ($i < $stop)) {
$sliced []= $L[$i];
$i += $step;
}
return $sliced;
}
``` |
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | Here's a straight port of the C code:
```
def adjust_endpoint(length, endpoint, step):
if endpoint < 0:
endpoint += length
if endpoint < 0:
endpoint = -1 if step < 0 else 0
elif endpoint >= length:
endpoint = length - 1 if step < 0 else length
return endpoint
def adjust_slice(length, start, stop, step):
if step is None:
step = 1
elif step == 0:
raise ValueError("step cannot be 0")
if start is None:
start = length - 1 if step < 0 else 0
else:
start = adjust_endpoint(length, start, step)
if stop is None:
stop = -1 if step < 0 else length
else:
stop = adjust_endpoint(length, stop, step)
return start, stop, step
def slice_indices(length, start, stop, step):
start, stop, step = adjust_slice(length, start, stop, step)
i = start
while (i > stop) if step < 0 else (i < stop):
yield i
i += step
def mySlice(L, start=None, stop=None, step=None):
return [L[i] for i in slice_indices(len(L), start, stop, step)]
``` | I can't say there's no bug in the code, but it has passed your test program :)
```
def mySlice(L, start=None, stop=None, step=None):
ret = []
le = len(L)
if step is None: step = 1
if step > 0: #this situation might be easier
if start is None:
start = 0
else:
if start < 0: start += le
if start < 0: start = 0
if start > le: start = le
if stop is None:
stop = le
else:
if stop < 0: stop += le
if stop < 0: stop = 0
if stop > le: stop = le
else:
if start is None:
start = le-1
else:
if start < 0: start += le
if start < 0: start = -1
if start >= le: start = le-1
if stop is None:
stop = -1 #stop is not 0 because we need L[0]
else:
if stop < 0: stop += le
if stop < 0: stop = -1
if stop >= le: stop = le
#(stop-start)*step>0 to make sure 2 things:
#1: step != 0
#2: iteration will end
while start != stop and (stop-start)*step > 0 and start >=0 and start < le:
ret.append( L[start] )
start += step
return ret
``` |
12,173,856 | I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list.
I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying:
```
/* this is harder to get right than you might think */
```
Also, if something like this is already done, pointers will be greatly appreciated.
This is my test bench (make sure your code passes before posting):
```
#place your code below
code = """
def mySlice(L, start=None, stop=None, step=None):
or
<?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ...
or
function mySlice(L, start, stop, step) ...
"""
import itertools
L = [0,1,2,3,4,5,6,7,8,9]
if code.strip().startswith('<?php'):
mode = 'php'
if code.strip().startswith('def'):
mode = 'python'
if code.strip().startswith('function'):
mode = 'js'
if mode == 'php':
var, none = '$L', 'NULL'
print code, '\n'
print '$L=array(%s);' % ','.join(str(x) for x in L)
print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}"
if mode == 'python':
var, none = 'L', 'None'
print code, '\n'
print 'L=%r' % L
print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a"
if mode == 'js':
var, none = 'L', 'undefined'
print code, '\n'
print 'L=%r' % L
print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}"
print
n = len(L) + 3
start = range(-n, n) + [None, 100, -100]
stop = range(-n, n) + [None, 100, -100]
step = range(-n, n) + [100, -100]
for q in itertools.product(start, stop, step):
if not q[2]: q = q[:-1]
actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q))
slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q)
expect = eval(slice_)
if mode == 'php':
expect = 'array(%s)' % ','.join(str(x) for x in expect)
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'python':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
if mode == 'js':
print "_c(%r,%s,%s);" % (slice_, actual, expect)
```
how to use it:
* save into a file (`test.py`)
* place your python, php or javascript code between `"""`s
* run `python test.py | python` or `python test.py | php` or `python test.py | node` | 2012/08/29 | [
"https://Stackoverflow.com/questions/12173856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | Here's a straight port of the C code:
```
def adjust_endpoint(length, endpoint, step):
if endpoint < 0:
endpoint += length
if endpoint < 0:
endpoint = -1 if step < 0 else 0
elif endpoint >= length:
endpoint = length - 1 if step < 0 else length
return endpoint
def adjust_slice(length, start, stop, step):
if step is None:
step = 1
elif step == 0:
raise ValueError("step cannot be 0")
if start is None:
start = length - 1 if step < 0 else 0
else:
start = adjust_endpoint(length, start, step)
if stop is None:
stop = -1 if step < 0 else length
else:
stop = adjust_endpoint(length, stop, step)
return start, stop, step
def slice_indices(length, start, stop, step):
start, stop, step = adjust_slice(length, start, stop, step)
i = start
while (i > stop) if step < 0 else (i < stop):
yield i
i += step
def mySlice(L, start=None, stop=None, step=None):
return [L[i] for i in slice_indices(len(L), start, stop, step)]
``` | This is a solution I came up with in C# .NET, maybe not the prettiest, but it works.
```
private object[] Slice(object[] list, int start = 0, int stop = 0, int step = 0)
{
List<object> result = new List<object>();
if (step == 0) step = 1;
if (start < 0)
{
for (int i = list.Length + start; i < list.Length - (list.Length + start); i++)
{
result.Add(list[i]);
}
}
if (start >= 0 && stop == 0) stop = list.Length - (start >= 0 ? start : 0);
else if (start >= 0 && stop < 0) stop = list.Length + stop;
int loopStart = (start < 0 ? 0 : start);
int loopEnd = (start > 0 ? start + stop : stop);
if (step > 0)
{
for (int i = loopStart; i < loopEnd; i += step)
result.Add(list[i]);
}
else if (step < 0)
{
for (int i = loopEnd - 1; i >= loopStart; i += step)
result.Add(list[i]);
}
return result.ToArray();
}
``` |
1,376,016 | I was playing around with Python's subprocess module, trying a few examples but I can't seem to get heredoc statements to work.
Here is the trivial example I was playing with:
```
import subprocess
a = "A String of Text"
p = subprocess.Popen(["cat", "<<DATA\n" + a + "\nDATA"])
```
I get the following error when I run the code above:
```
cat: <<DATA\nA String of Text\nDATA: No such file or directory
```
Am I doing it wrong? Is this even possible? If so how would I go about doing it?
---
Update
Just wanted to say that this should never be performed in a real python program because there are better ways of doing this. | 2009/09/03 | [
"https://Stackoverflow.com/questions/1376016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/124861/"
] | The shell "heredoc" support is a shell feature. `subprocess.Popen` does not run your command through the shell by default, so this syntax certainly won't work.
However, since you're using pipes anyway, there isn't any need to use the heredoc support of the shell. Just write your string `a` to the stdin pipe of the process you just started. This is exactly what the shell would do with the heredoc anyway.
You can do this with [`Popen.communicate()`](http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate):
```
p.communicate(a)
```
The return value of the `communicate()` function contains the output of the process (in two streams, see the docs). | You're passing shell syntax as arguments to the `cat` program. You can try to do it like that:
```
p = subprocess.Popen(["sh", "-c", "cat <<DATA\n" + a + "\nDATA"])
```
But the concept itself is wrong. You should use Python features instead of calling shell scripts inside your python scripts.
And in this particular case you should note that the shell's heredoc syntax interpolates variables, so you'll need to escape all the text inside `a` and make sure there's no `DATA` line in it.
---
For Python equivalent, I think the closest idea to this (assuming you don't want just `print(a)` ;-)) is passing the value of the variable to stdin of a spawned process:
```
p = subprocess.Popen(["program", ...], stdin=subprocess.PIPE)
p.communicate(a)
``` |
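Putting that advice together, a self-contained sketch that replaces the heredoc with a stdin pipe (Python 3 on a POSIX system with `cat` available; note that `communicate()` takes and returns bytes by default):

```python
import subprocess

a = "A String of Text"
# Feed the would-be heredoc body straight to cat's stdin,
# which is exactly what the shell does with <<DATA anyway.
p = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)
out, _ = p.communicate(a.encode())
assert out.decode() == a
```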
1,376,016 | I was playing around with Python's subprocess module, trying a few examples but I can't seem to get heredoc statements to work.
Here is the trivial example I was playing with:
```
import subprocess
a = "A String of Text"
p = subprocess.Popen(["cat", "<<DATA\n" + a + "\nDATA"])
```
I get the following error when I run the code above:
```
cat: <<DATA\nA String of Text\nDATA: No such file or directory
```
Am I doing it wrong? Is this even possible? If so how would I go about doing it?
---
Update
Just wanted to say that this should never be performed in a real python program because there are better ways of doing this. | 2009/09/03 | [
"https://Stackoverflow.com/questions/1376016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/124861/"
] | The shell "heredoc" support is a shell feature. `subprocess.Popen` does not run your command through the shell by default, so this syntax certainly won't work.
However, since you're using pipes anyway, there isn't any need to use the heredoc support of the shell. Just write your string `a` to the stdin pipe of the process you just started. This is exactly what the shell would do with the heredoc anyway.
You can do this with [`Popen.communicate()`](http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate):
```
p.communicate(a)
```
The return value of the `communicate()` function contains the output of the process (in two streams, see the docs). | As others have pointed out, you need to run it in a shell. Popen makes this easy with a shell=True argument. I get the following output:
```
>>> import subprocess
>>> a = "A String of Text"
>>> p = subprocess.Popen("cat <<DATA\n" + a + "\nDATA", shell=True)
>>> A String of Text
>>> p.wait()
0
``` |
1,376,016 | I was playing around with Python's subprocess module, trying a few examples but I can't seem to get heredoc statements to work.
Here is the trivial example I was playing with:
```
import subprocess
a = "A String of Text"
p = subprocess.Popen(["cat", "<<DATA\n" + a + "\nDATA"])
```
I get the following error when I run the code above:
```
cat: <<DATA\nA String of Text\nDATA: No such file or directory
```
Am I doing it wrong? Is this even possible? If so how would I go about doing it?
---
Update
Just wanted to say that this should never be performed in a real python program because there are better ways of doing this. | 2009/09/03 | [
"https://Stackoverflow.com/questions/1376016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/124861/"
] | The shell "heredoc" support is a shell feature. `subprocess.Popen` does not run your command through the shell by default, so this syntax certainly won't work.
However, since you're using pipes anyway, there isn't any need to use the heredoc support of the shell. Just write your string `a` to the stdin pipe of the process you just started. This is exactly what the shell would do with the heredoc anyway.
You can do this with [`Popen.communicate()`](http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate):
```
p.communicate(a)
```
The return value of the `communicate()` function contains the output of the process (in two streams, see the docs). | As of Python 3.5 you can use [subprocess.run](https://docs.python.org/3.5/library/subprocess.html#subprocess.run) as in:
```
subprocess.run(['cat'], input=b"A String of Text")
``` |
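To also capture what `cat` echoes back, `subprocess.run` can collect stdout as well. A sketch (using `stdout=subprocess.PIPE`, which works on 3.5+; on Python 3.7+ the `capture_output=True` shorthand does the same; assumes a POSIX `cat`):

```python
import subprocess

res = subprocess.run(["cat"], input=b"A String of Text",
                     stdout=subprocess.PIPE)
assert res.stdout == b"A String of Text"
assert res.returncode == 0
```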
1,376,016 | I was playing around with Python's subprocess module, trying a few examples but I can't seem to get heredoc statements to work.
Here is the trivial example I was playing with:
```
import subprocess
a = "A String of Text"
p = subprocess.Popen(["cat", "<<DATA\n" + a + "\nDATA"])
```
I get the following error when I run the code above:
```
cat: <<DATA\nA String of Text\nDATA: No such file or directory
```
Am I doing it wrong? Is this even possible? If so how would I go about doing it?
---
Update
Just wanted to say that this should never be performed in a real python program because there are better ways of doing this. | 2009/09/03 | [
"https://Stackoverflow.com/questions/1376016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/124861/"
] | You're passing shell syntax as arguments to the `cat` program. You can try to do it like that:
```
p = subprocess.Popen(["sh", "-c", "cat <<DATA\n" + a + "\nDATA"])
```
But the concept itself is wrong. You should use Python features instead of calling shell scripts inside your python scripts.
And in this particular case you should note that the shell's heredoc syntax interpolates variables, so you'll need to escape all the text inside `a` and make sure there's no `DATA` line in it.
---
For Python equivalent, I think the closest idea to this (assuming you don't want just `print(a)` ;-)) is passing the value of the variable to stdin of a spawned process:
```
p = subprocess.Popen(["program", ...], stdin=subprocess.PIPE)
p.communicate(a)
``` | As of Python 3.5 you can use [subprocess.run](https://docs.python.org/3.5/library/subprocess.html#subprocess.run) as in:
```
subprocess.run(['cat'], input=b"A String of Text")
``` |
1,376,016 | I was playing around with Python's subprocess module, trying a few examples but I can't seem to get heredoc statements to work.
Here is the trivial example I was playing with:
```
import subprocess
a = "A String of Text"
p = subprocess.Popen(["cat", "<<DATA\n" + a + "\nDATA"])
```
I get the following error when I run the code above:
```
cat: <<DATA\nA String of Text\nDATA: No such file or directory
```
Am I doing it wrong? Is this even possible? If so how would I go about doing it?
---
Update
Just wanted to say that this should never be performed in a real python program because there are better ways of doing this. | 2009/09/03 | [
"https://Stackoverflow.com/questions/1376016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/124861/"
] | As others have pointed out, you need to run it in a shell. Popen makes this easy with a shell=True argument. I get the following output:
```
>>> import subprocess
>>> a = "A String of Text"
>>> p = subprocess.Popen("cat <<DATA\n" + a + "\nDATA", shell=True)
>>> A String of Text
>>> p.wait()
0
``` | As of Python 3.5 you can use [subprocess.run](https://docs.python.org/3.5/library/subprocess.html#subprocess.run) as in:
```
subprocess.run(['cat'], input=b"A String of Text")
``` |
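If you do go through the shell, quoting the heredoc delimiter (`<<'DATA'` instead of `<<DATA`) avoids the variable-interpolation problem mentioned elsewhere in this thread; a sketch, assuming a POSIX `sh` is available:

```python
import subprocess

a = "A String of $HOME Text"  # $HOME would be expanded by an unquoted heredoc
result = subprocess.run(
    ["sh", "-c", "cat <<'DATA'\n" + a + "\nDATA"],  # 'DATA' quoted: no expansion
    capture_output=True,
    text=True,
)
print(result.stdout)
```

This still breaks if `a` contains a line that is exactly `DATA`, which is why feeding the value to stdin directly remains the safer route.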
30,438,227 | I am building an application in python that uses a wrapper for a library that performs hardware communication
I would like to create some unit tests, and I am pretty new to unit testing, so I would like to mock the communications, but I really don't know how to do it
quick example:
this is the application code using the comm lib
```
def changeValue(id, val):
    current_value = comm.getval(id)
    if (current_value != val):
        comm.send(id, val)
```
I want to test this without performing communications, i.e. replacing the comm.getval return value with some mocked value, and directing comm.send to a mocked comm class.
Can anyone give a hint on that?
---
The thing is that comm is a object inside a class
let's say the class is like this:
```
class myClass:
    comm = Comm()
    ....
    def __init__(self):
        self.comm = self.comm.start()
    def changeValue(self, id, val):
        ....
    ....
``` | 2015/05/25 | [
"https://Stackoverflow.com/questions/30438227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180699/"
] | You can use the [`mock`](https://docs.python.org/3/library/unittest.mock.html#module-unittest.mock) framework for this kind of job. First of all, you use
`comm = Comm()` in `MyClass`, and that means you have something like `from comm_module import Comm` in `MyClass`'s module. In these cases you need to patch the `Comm` reference in `MyClass`'s module to make your patch active.
So an example of how you can test your code without making any connection could be:
```
@patch("my_class.Comm", autospec=True)
def test_base(self, mock_comm_factory):
    mock_comm = mock_comm_factory.return_value
    MyClass()
    mock_comm.start.assert_called_with()

@patch("my_class.Comm", autospec=True)
def test_changeValue(self, mock_comm_factory):
    mock_comm = mock_comm_factory.return_value
    mock_comm.getval.return_value = 13
    MyClass().changeValue(33, 23)
    mock_comm.getval.assert_called_with(33)
    mock_comm.send.assert_called_with(33, 23)
    mock_comm.reset_mock()
    mock_comm.getval.return_value = 23
    MyClass().changeValue(33, 23)
    mock_comm.getval.assert_called_with(33)
    self.assertFalse(mock_comm.send.called)
```
Now I can start to explain all the details of my answer, like why to use [`autospec=True`](https://docs.python.org/3/library/unittest.mock.html#autospeccing) or [how to apply a patch to all methods](https://docs.python.org/3/library/unittest.mock.html#test-prefix), but that would mean rewriting a lot of the `mock` documentation and SO answers. So I hope that is enough as a starting point. | The trick is not to use global objects like `comm`. If you can, make it so that `comm` gets injected into your class or method by the caller. Then what you do is pass a mocked `comm` when testing and the real one in production.
So either you make a `comm` reference a field in your class (and inject it via a constructor or setter method) like so
```
class myClass:
    ....
    def __init__(self, myComm):
        self.comm = myComm.start()
    def changeValue(self, id, val):
        current_value = self.comm.getval(id)
        if (current_value != val):
            self.comm.send(id, val)
    ....
```
or you make it a parameter in the method where it is used, like so
```
def changeValue(id, val, myComm):
    current_value = myComm.getval(id)
    if (current_value != val):
        myComm.send(id, val)
```
Using global *anything* makes mocking a huge pain, try to use [Dependency Injection](https://stackoverflow.com/questions/130794/what-is-dependency-injection) whenever you need to mock something.
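As a concrete illustration of that injection style, here is a minimal sketch with a hand-rolled fake (the `FakeComm` name and its recorded `sent` list are illustrative, not part of any real library):

```python
class FakeComm:
    """Stands in for the hardware link; records what would have been sent."""
    def __init__(self, values):
        self.values = values   # id -> current value
        self.sent = []         # (id, val) pairs passed to send()

    def getval(self, id):
        return self.values[id]

    def send(self, id, val):
        self.sent.append((id, val))

def change_value(id, val, my_comm):
    if my_comm.getval(id) != val:
        my_comm.send(id, val)

fake = FakeComm({7: 1})
change_value(7, 2, fake)  # value differs -> send is called
change_value(7, 1, fake)  # value already matches -> no send
print(fake.sent)  # [(7, 2)]
```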
This is another good post about DI. It is in java, but it should be the same in python <http://googletesting.blogspot.ca/2008/07/how-to-think-about-new-operator-with.html> |
247,301 | Besides the syntactic sugar and expressive power, what are the differences in runtime efficiency? I mean, can plpgsql be faster than, let's say, plpythonu or pljava? Or are they all approximately equal?
We are using stored procedures for the task of detecting nearly-duplicate records of people in a moderately sized database (around 10M records) | 2008/10/29 | [
"https://Stackoverflow.com/questions/247301",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18300/"
] | plpgsql provides greater type safety, I believe; you have to perform explicit casts if you want to perform operations using two different columns of similar type, like varchar and text or int4 and int8. This is important because if you need to have your stored proc use indexes, postgres requires that the types match exactly between join conditions (edit: for equality checks too I think).
There may be a facility for this in the other languages though, I haven't used them. In any case, I hope this gives you a better starting point for your investigation. | Without doing actual testing, I would expect plpgsql to be somewhat more efficient than other languages, because it's small. Having said that, remember that SQL functions are likely to be even faster than plpgsql, if a function is simple enough that you can write it in just SQL. |
247,301 | Besides the syntactic sugar and expressive power, what are the differences in runtime efficiency? I mean, can plpgsql be faster than, let's say, plpythonu or pljava? Or are they all approximately equal?
We are using stored procedures for the task of detecting nearly-duplicate records of people in a moderately sized database (around 10M records) | 2008/10/29 | [
"https://Stackoverflow.com/questions/247301",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18300/"
] | plpgsql provides greater type safety, I believe; you have to perform explicit casts if you want to perform operations using two different columns of similar type, like varchar and text or int4 and int8. This is important because if you need to have your stored proc use indexes, postgres requires that the types match exactly between join conditions (edit: for equality checks too I think).
There may be a facility for this in the other languages though, I haven't used them. In any case, I hope this gives you a better starting point for your investigation. | plpgsql is very well integrated with SQL - the source code should be very clean and readable. For SQL languages like PLJava or PLPython, SQL statements have to be isolated - SQL isn't part of the language. So you have to write a little bit more code. If your procedure has a lot of SQL statements, then the plpgsql procedure should be cleaner, shorter and a little bit faster. When your procedure has no SQL statements, then procedures from external languages can be faster - but external languages (interpreters) need some time for initialisation - so for simple tasks, procedures in SQL or plpgsql should be faster.
External languages are used when you need some functionality like access to net, access to filesystem - <http://www.postgres.cz/index.php/PL/Perlu_-_Untrusted_Perl_%28en%29>
What I know - people usually use a combination of PL languages - (SQL,plpgsql, plperl) or (SQL, plpgsql, plpython). |
247,301 | Besides the syntactic sugar and expressive power, what are the differences in runtime efficiency? I mean, can plpgsql be faster than, let's say, plpythonu or pljava? Or are they all approximately equal?
We are using stored procedures for the task of detecting nearly-duplicate records of people in a moderately sized database (around 10M records) | 2008/10/29 | [
"https://Stackoverflow.com/questions/247301",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18300/"
] | plpgsql is very well integrated with SQL - the source code should be very clean and readable. For SQL languages like PLJava or PLPython, SQL statements have to be isolated - SQL isn't part of the language. So you have to write a little bit more code. If your procedure has a lot of SQL statements, then the plpgsql procedure should be cleaner, shorter and a little bit faster. When your procedure has no SQL statements, then procedures from external languages can be faster - but external languages (interpreters) need some time for initialisation - so for simple tasks, procedures in SQL or plpgsql should be faster.
External languages are used when you need some functionality like access to net, access to filesystem - <http://www.postgres.cz/index.php/PL/Perlu_-_Untrusted_Perl_%28en%29>
What I know - people usually use a combination of PL languages - (SQL,plpgsql, plperl) or (SQL, plpgsql, plpython). | Without doing actual testing, I would expect plpgsql to be somewhat more efficient than other languages, because it's small. Having said that, remember that SQL functions are likely to be even faster than plpgsql, if a function is simple enough that you can write it in just SQL. |
14,053,552 | I am writing a webapp and I would like to start charging my users. What are the recommended billing platforms for a python/Django webapp?
I would like something that keeps track of my users' purchase history, can elegantly handle subscription purchases, a la carte items, coupon codes, and refunds, makes it straightforward to generate invoices/receipts, and can easily integrate with most payment processors. Extra points if it comes with a fancy admin interface.
I found this [django-billing project](https://github.com/gabrielgrant/django-billing), are there any others? Also, do you rely on your payment processor to handle these tasks or do you do all of them yourself?
*Note: I am not asking what payment processors to use, but rather what middleware/libraries one should run on their webapp itself.* | 2012/12/27 | [
"https://Stackoverflow.com/questions/14053552",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/234270/"
] | The **[koalixcrm](https://github.com/scaphilo/koalixcrm)** is perhaps something you could start with.
It offers some of your required functionality. Still, it is in a pre-alpha stage, but it already provides PDF export for Invoices and Quotes; there is already one included plugin for subscriptions.
Also try the **[demo](https://github.com/scaphilo/koalixcrm/wiki)**.
As I am the developer of koalixcrm I'm very interested in working with you - perhaps we can merge our projects. | It's not really clear why the Django community hasn't come up with a complete billing system, or at least a generic one to start working on.
There are many packages that can be used for getting an idea of how to implement such a platform:
<https://www.djangopackages.com/grids/g/payment-processing/> |
67,996,181 | In Python, to call a parent class's function in a child class we use the `super()` method, but why do we use `super()` when we can just call the parent class's function directly? Suppose I have a `class Employee:` and another class which inherits from the Employee class, `class Programmer(Employee):`. To call any function of the Employee class in the Programmer class I can just use `Employee.functionName()` and that does the job.
Here is some code:
```
class Person:
    country = "India"
    def takeBreath(self):
        print("I am breathing...")
class Employee(Person):
    company = "Honda"
    def getSalary(self):
        print(f"Salary is {self.salary}")
    def takeBreath(self):
        print("I am an Employee so I am luckily breathing...")
class Programmer(Employee):
    company = "Fiverr"
    def getSalary(self):
        print(f"No salary to programmer's.")
    def takeBreath(self):
        Employee().takeBreath()
        print("I am a Programmer so i am breathing++..")
p = Person()
p.takeBreath()
e = Employee()
e.takeBreath()
pr = Programmer()
pr.takeBreath()
```
As you can see, I wanted to call the Employee class's `takeBreath()` method in the `Programmer` class, so I just wrote `Employee().takeBreath()`, which also does the job. So can anyone explain to me why we need the `super()` method in Python?
### :) | 2021/06/16 | [
"https://Stackoverflow.com/questions/67996181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15895348/"
] | With `super()` you don't need to define `takeBreath()` in each class inherited from the `Person()` class. | `super()` is a far more general method. Suppose you decide to change your superclass. Maybe you name it `Tom` instead of `Employee`. Now you have to go about and change every mention of your `Employee` call.
You can think of `super()` as a "proxy" to get the superclass regardless of what it is. It enables you to write more flexible code.
Though, what you are doing is different. You are creating a new instance of Employee each time, and then calling the method on it. If you change your `takeBreath` method to not take a `self` parameter, you will be able to do something like `Employee.takeBreath()`, or better, `super().takeBreath()`. |
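A minimal sketch of the classes from the question rewritten around `super()` (method renamed to snake_case and returning the lines instead of printing them, purely so each level's contribution is easy to inspect):

```python
class Person:
    def take_breath(self):
        return ["I am breathing..."]

class Employee(Person):
    def take_breath(self):
        # super() finds Person automatically; no hard-coded class name
        return super().take_breath() + ["I am an Employee so I am luckily breathing..."]

class Programmer(Employee):
    def take_breath(self):
        return super().take_breath() + ["I am a Programmer so i am breathing++.."]

print(Programmer().take_breath())
```

Renaming `Employee` to anything else now requires no change inside `Programmer`.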
67,996,181 | In Python, to call a parent class's function in a child class we use the `super()` method, but why do we use `super()` when we can just call the parent class's function directly? Suppose I have a `class Employee:` and another class which inherits from the Employee class, `class Programmer(Employee):`. To call any function of the Employee class in the Programmer class I can just use `Employee.functionName()` and that does the job.
Here is some code:
```
class Person:
    country = "India"
    def takeBreath(self):
        print("I am breathing...")
class Employee(Person):
    company = "Honda"
    def getSalary(self):
        print(f"Salary is {self.salary}")
    def takeBreath(self):
        print("I am an Employee so I am luckily breathing...")
class Programmer(Employee):
    company = "Fiverr"
    def getSalary(self):
        print(f"No salary to programmer's.")
    def takeBreath(self):
        Employee().takeBreath()
        print("I am a Programmer so i am breathing++..")
p = Person()
p.takeBreath()
e = Employee()
e.takeBreath()
pr = Programmer()
pr.takeBreath()
```
As you can see, I wanted to call the Employee class's `takeBreath()` method in the `Programmer` class, so I just wrote `Employee().takeBreath()`, which also does the job. So can anyone explain to me why we need the `super()` method in Python?
### :) | 2021/06/16 | [
"https://Stackoverflow.com/questions/67996181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15895348/"
] | You already got a few answers as to why your code doesn't work (you're creating a new `Employee` rather than calling `Employee`'s method with your own `self`) and why delegation via `super` is convenient (don't have to update all the code if you update the parent class).
An other reason is that Python supports *multiple inheritance*, and explicit delegation simply can't work with a *diamond inheritance* pattern.
A diamond pattern is something like that:
```
A
/ \
/ \
B C
\ /
\ /
D
```
```py
class A:
    def foo(self):
        ...
class B(A):
    def foo(self):
        ...
class C(A):
    def foo(self):
        ...
class D(C, B):
    def foo(self):
        ...
```
"D" extends both "B" and "C", and also overrides "foo" to add its own stuff. "D" can't just call "B::foo" and "C::foo": if it does that, then *both* will in turn call "A::foo", which will be invoked twice, to potential ill effects if it has dangerous side-effects.
If you use `super()`, then Python manages the chain using the *method resolution order* (specifically [C3](https://en.wikipedia.org/wiki/C3_linearization)): when classes are created, it generates a *linear* sequence of all parent classes, and when calling `super()` it simply walks that sequence from where it is to find the "parent" method, even between completely unrelated classes.
Here the MRO would be D -> C -> B -> A, so when `D` calls `super().foo()`, Python will invoke `C::foo`, whose super will call `B::foo`, despite C and B having no knowledge of one another. | `super()` is a far more general method. Suppose you decide to change your superclass. Maybe you name it `Tom` instead of `Employee`. Now you have to go about and change every mention of your `Employee` call.
You can think of `super()` as a "proxy" to get the superclass regardless of what it is. It enables you to write more flexible code.
Though, what you are doing is different. You are creating a new instance of Employee each time, and then calling the method on it. If you change your `takeBreath` method to not take a `self` parameter, you will be able to do something like `Employee.takeBreath()`, or better, `super().takeBreath()`. |
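The diamond behavior described above can be sketched like this (each `foo` reports its class, so the call order is visible):

```python
class A:
    def foo(self):
        return ["A"]

class B(A):
    def foo(self):
        return ["B"] + super().foo()

class C(A):
    def foo(self):
        return ["C"] + super().foo()

class D(C, B):
    def foo(self):
        return ["D"] + super().foo()

print(D().foo())                        # each class runs once, A last
print([k.__name__ for k in D.__mro__])  # the linearized order super() walks
```

Note that C's `super().foo()` reaches `B::foo`, not `A::foo`, because the MRO is walked linearly.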
59,475,157 | I'm a beginner in Python, and I'm not able to understand what the problem is.
```
the runtime process for the instance running on port 43421 has unexpectedly quit
ERROR 2019-12-24 17:29:10,258 base.py:209] Internal Server Error: /input/
Traceback (most recent call last):
File "/var/www/html/sym_math/google_appengine/lib/django-1.3/django/core/handlers/base.py", line 178, in get_response
response = middleware_method(request, response)
File "/var/www/html/sym_math/google_appengine/lib/django-1.3/django/middleware/common.py", line 94, in process_response
if response.status_code == 404:
AttributeError: 'tuple' object has no attribute 'status_code'
``` | 2019/12/25 | [
"https://Stackoverflow.com/questions/59475157",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12277769/"
] | Since the column in the first table is an identity field, you should use [`scope_identity()`](https://learn.microsoft.com/en-us/sql/t-sql/functions/scope-identity-transact-sql?view=sql-server-ver15) immediately after the first INSERT statement to get the result. Then use that result in the subsequent INSERT statements.
```
Create Procedure spCustomerDetails
@FirstName nvarchar (30),
@LastName nvarchar(30),
@Phone Char(30),
@Email nvarchar(30)
As
Begin
Begin Try
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
Begin Transaction
DECLARE @NewBusEntityID int;
INSERT INTO Person.Person(PersonType, NameStyle,Title, FirstName, MiddleName, LastName, Suffix, EmailPromotion, AdditionalContactInfo)
VALUES('SC', 0, 'NULL', '@FirstName', '@MiddleName', '@LastName', 'NULL', '0', 'NULL');
SELECT @NewBusEntityID = scope_identity();
INSERT INTO Person.PersonPhone(BusinessEntityID, PhoneNumber, PhoneNumberTypeID)
VALUES(@NewBusEntityID, '@Phone', 2);
INSERT INTO Person.EmailAddress(BusinessEntityID,EmailAddressID,EmailAddress)
VALUES(@NewBusEntityID, '1', '@Email');
COMMIT TRANSACTION
End Try
Begin Catch
Rollback Transaction
Print 'Roll back transaction'
End Catch
End
```
If it were not an identity field, you could instead use a [`SEQUENCE`](https://learn.microsoft.com/en-us/sql/t-sql/statements/create-sequence-transact-sql?view=sql-server-ver15). Then you could select the [`NEXT VALUE FOR`](https://learn.microsoft.com/en-us/sql/t-sql/functions/next-value-for-transact-sql?view=sql-server-ver15) the sequence at the beginning of the procedure and use that value for all three INSERT statements. | You can use MAX:
```
DECLARE @id int = (select max(BusinessEntityId) From Person.BusinessEntity)
``` |
59,475,157 | I'm a beginner in Python, and I'm not able to understand what the problem is.
```
the runtime process for the instance running on port 43421 has unexpectedly quit
ERROR 2019-12-24 17:29:10,258 base.py:209] Internal Server Error: /input/
Traceback (most recent call last):
File "/var/www/html/sym_math/google_appengine/lib/django-1.3/django/core/handlers/base.py", line 178, in get_response
response = middleware_method(request, response)
File "/var/www/html/sym_math/google_appengine/lib/django-1.3/django/middleware/common.py", line 94, in process_response
if response.status_code == 404:
AttributeError: 'tuple' object has no attribute 'status_code'
``` | 2019/12/25 | [
"https://Stackoverflow.com/questions/59475157",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12277769/"
] | I encourage you to use the [`OUTPUT`](https://learn.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql?view=sql-server-ver15) clause. This just works, regardless of triggers on tables, other transactions, sessions, and so on.
```
CREATE PROCEDURE spCustomerDetails (
@FirstName NVARCHAR(30),
@LastName NVARCHAR(30),
@Phone CHAR(30),
@Email NVARCHAR(30)
) AS
BEGIN
DECLARE @ids TABLE (BusinessEntityId INT);
BEGIN TRY
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO Person.Person (BusinessEntityID, PersonType, NameStyle, Title, FirstName, MiddleName, LastName, Suffix, EmailPromotion, AdditionalContactInfo)
OUTPUT inserted.BusinessEntityId INTO @ids
VALUES (20778, 'SC', 0, NULL, @FirstName, @MiddleName, @LastName, NULL, '0', NULL);
INSERT INTO Person.PersonPhone (BusinessEntityID, PhoneNumber, PhoneNumberTypeID)
SELECT i.BusinessEntityID, @Phone, 2
FROM @ids i;
INSERT INTO Person.EmailAddress (BusinessEntityID, EmailAddressID, EmailAddress)
SELECT i.BusinessEntityID, 1, @Email
FROM @ids i;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
PRINT 'Roll back transaction';
END CATCH;
END;
```
Note that this also drops the single quotes around the parameters and `NULL`. I assume you want parameter values and not constant strings for those. | You can use MAX:
```
DECLARE @id int = (select max(BusinessEntityId) From Person.BusinessEntity)
``` |
59,475,157 | I'm a beginner in Python, and I'm not able to understand what the problem is.
```
the runtime process for the instance running on port 43421 has unexpectedly quit
ERROR 2019-12-24 17:29:10,258 base.py:209] Internal Server Error: /input/
Traceback (most recent call last):
File "/var/www/html/sym_math/google_appengine/lib/django-1.3/django/core/handlers/base.py", line 178, in get_response
response = middleware_method(request, response)
File "/var/www/html/sym_math/google_appengine/lib/django-1.3/django/middleware/common.py", line 94, in process_response
if response.status_code == 404:
AttributeError: 'tuple' object has no attribute 'status_code'
``` | 2019/12/25 | [
"https://Stackoverflow.com/questions/59475157",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12277769/"
] | Since the column in the first table is an identity field, you should use [`scope_identity()`](https://learn.microsoft.com/en-us/sql/t-sql/functions/scope-identity-transact-sql?view=sql-server-ver15) immediately after the first INSERT statement to get the result. Then use that result in the subsequent INSERT statements.
```
Create Procedure spCustomerDetails
@FirstName nvarchar (30),
@LastName nvarchar(30),
@Phone Char(30),
@Email nvarchar(30)
As
Begin
Begin Try
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
Begin Transaction
DECLARE @NewBusEntityID int;
INSERT INTO Person.Person(PersonType, NameStyle,Title, FirstName, MiddleName, LastName, Suffix, EmailPromotion, AdditionalContactInfo)
VALUES('SC', 0, 'NULL', '@FirstName', '@MiddleName', '@LastName', 'NULL', '0', 'NULL');
SELECT @NewBusEntityID = scope_identity();
INSERT INTO Person.PersonPhone(BusinessEntityID, PhoneNumber, PhoneNumberTypeID)
VALUES(@NewBusEntityID, '@Phone', 2);
INSERT INTO Person.EmailAddress(BusinessEntityID,EmailAddressID,EmailAddress)
VALUES(@NewBusEntityID, '1', '@Email');
COMMIT TRANSACTION
End Try
Begin Catch
Rollback Transaction
Print 'Roll back transaction'
End Catch
End
```
If it were not an identity field, you could instead use a [`SEQUENCE`](https://learn.microsoft.com/en-us/sql/t-sql/statements/create-sequence-transact-sql?view=sql-server-ver15). Then you could select the [`NEXT VALUE FOR`](https://learn.microsoft.com/en-us/sql/t-sql/functions/next-value-for-transact-sql?view=sql-server-ver15) the sequence at the beginning of the procedure and use that value for all three INSERT statements. | I encourage you to use the [`OUTPUT`](https://learn.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql?view=sql-server-ver15) clause. This just works, regardless of triggers on tables, other transactions, sessions, and so on.
```
CREATE PROCEDURE spCustomerDetails (
@FirstName NVARCHAR(30),
@LastName NVARCHAR(30),
@Phone CHAR(30),
@Email NVARCHAR(30)
) AS
BEGIN
DECLARE @ids TABLE (BusinessEntityId INT);
BEGIN TRY
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO Person.Person (BusinessEntityID, PersonType, NameStyle, Title, FirstName, MiddleName, LastName, Suffix, EmailPromotion, AdditionalContactInfo)
OUTPUT inserted.BusinessEntityId INTO @ids
VALUES (20778, 'SC', 0, NULL, @FirstName, @MiddleName, @LastName, NULL, '0', NULL);
INSERT INTO Person.PersonPhone (BusinessEntityID, PhoneNumber, PhoneNumberTypeID)
SELECT i.BusinessEntityID, @Phone, 2
FROM @ids i;
INSERT INTO Person.EmailAddress (BusinessEntityID, EmailAddressID, EmailAddress)
SELECT i.BusinessEntityID, 1, @Email
FROM @ids i;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
PRINT 'Roll back transaction';
END CATCH;
END;
```
Note that this also drops the single quotes around the parameters and `NULL`. I assume you want parameter values and not constant strings for those. |
23,382,499 | I'm running a python script that makes modifications in a specific database.
I want to run a second script once there is a modification in my database (local server).
Is there any way to do that?
Any help would be very appreciated.
Thanks! | 2014/04/30 | [
"https://Stackoverflow.com/questions/23382499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2343621/"
] | Thanks for your answers, I found a solution here:
<http://crazytechthoughts.blogspot.fr/2011/12/call-external-program-from-mysql.html>
A Trigger must be defined to call an external function once the DB Table is modified:
```
DELIMITER $
CREATE TRIGGER Test_Trigger
AFTER INSERT ON SFCRoutingTable
FOR EACH ROW
BEGIN
DECLARE cmd CHAR(255);
DECLARE result int(10);
SET cmd = CONCAT('python /home/triggers.py');
SET result = sys_exec(cmd);
END;
$
DELIMITER ;
```
Here, to call my python script, I use 'sys\_exec', which is a UDF (User Defined Function). You can download the library from here: <https://github.com/mysqludf/lib_mysqludf_sys> | You can use 'Stored Procedures' in your database; a lot of RDBMS engines support one or multiple programming languages to do so. AFAIK PostgreSQL supports signals to call an external process too. Google something like 'Stored Procedures in Python for PostgreSQL' or 'postgresql trigger call external program'
23,382,499 | I'm running a python script that makes modifications in a specific database.
I want to run a second script once there is a modification in my database (local server).
Is there any way to do that?
Any help would be very appreciated.
Thanks! | 2014/04/30 | [
"https://Stackoverflow.com/questions/23382499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2343621/"
] | You can use 'Stored Procedures' in your database; a lot of RDBMS engines support one or multiple programming languages to do so. AFAIK PostgreSQL supports signals to call an external process too. Google something like 'Stored Procedures in Python for PostgreSQL' or 'postgresql trigger call external program' | And if what you need is to keep the python script running and listen to changes in a certain table.
1. You can create a listener table ex. 'trigger\_table' in your database with only one value.
2. Create a trigger that will change the value in the 'trigger\_table' table every time a change occurs in some table.
3. And last, create a python script that will check this table every n seconds to see if the value has been changed (depending on how fast you need updates). Taking into consideration that it's only one value and you have a good internet connection (if the database is online), everything should be running quite fast. And execute a function when the value has been changed.
Create MySQL trigger: <https://www.mysqltutorial.org/create-the-first-trigger-in-mysql.aspx>
This is the SQL Code (Btw I am working with MS SQL but you should get the idea about how it should be set-up):
```sql
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER dbo.new
ON dbo.rc
AFTER INSERT,DELETE,UPDATE
AS
--Declare the variable and set the value from the change_table
DECLARE @PurchaseName AS CHAR(1)
SELECT @PurchaseName = _check
FROM dev.dbo.change_table
WHERE _check_1 IS NOT NULL
IF @PurchaseName = 'Y'
BEGIN
-- If the condition is TRUE then execute the following statement
UPDATE dev.dbo.change_table SET _check = 'N' WHERE _check_1 IS NOT NULL
END
ELSE
BEGIN
-- If the condition is False then execute the following statement
UPDATE dev.dbo.change_table SET _check = 'Y' WHERE _check_1 IS NOT NULL
END
GO
```
The change\_table has only two columns and one row- see the picture below.
[change\_table picture](https://i.stack.imgur.com/lkmks.jpg)
This is the python code:
```py
import pyodbc
import pandas as pd
# Connect to SQL Server
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=<YOUR SERVER>;DATABASE=<YOUR DATABASE>;UID=<YOUR USER ID>;PWD=<YOUR PASSWORD>')
previous_value = ""
while True:
    current_value = str(pd.read_sql_query('SELECT _check FROM change_table', cnxn)['_check'].tolist()[0])
    if current_value != previous_value:
        previous_value = current_value
        # Write your code here
```
The SQL code above will change the value in the change\_table on every INSERT, UPDATE or DELETE in the 'rc' table, while the python code will check the value 8 times every second to see if a change has happened.
Btw that 8 times per second is on a 100MB/s optic internet and the server is on the other side of the Earth, sooo... it should do the job. |
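The same polling pattern can be sketched self-contained with `sqlite3` standing in for the real server (table and column names mirror the hypothetical `change_table` above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE change_table (_check TEXT)")
conn.execute("INSERT INTO change_table VALUES ('N')")

def poll_once(previous):
    """Return (new_state, changed?) for one polling pass."""
    current = conn.execute("SELECT _check FROM change_table").fetchone()[0]
    return current, current != previous

state = None
state, changed = poll_once(state)   # first read counts as a change
print(changed)  # True

state, changed = poll_once(state)   # nothing happened since
print(changed)  # False

conn.execute("UPDATE change_table SET _check = 'Y'")  # the "trigger" fired
state, changed = poll_once(state)
print(changed)  # True
```

In the real loop you would call this every n seconds (e.g. with `time.sleep`) and run your reaction code whenever `changed` is true.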
23,382,499 | I'm running a python script that makes modifications in a specific database.
I want to run a second script once there is a modification in my database (local server).
Is there any way to do that?
Any help would be very appreciated.
Thanks! | 2014/04/30 | [
"https://Stackoverflow.com/questions/23382499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2343621/"
] | Thanks for your answers, I found a solution here:
<http://crazytechthoughts.blogspot.fr/2011/12/call-external-program-from-mysql.html>
A Trigger must be defined to call an external function once the DB Table is modified:
```
DELIMITER $
CREATE TRIGGER Test_Trigger
AFTER INSERT ON SFCRoutingTable
FOR EACH ROW
BEGIN
DECLARE cmd CHAR(255);
DECLARE result int(10);
SET cmd = CONCAT('python /home/triggers.py');
SET result = sys_exec(cmd);
END;
$
DELIMITER ;
```
Here, to call my python script, I use 'sys\_exec', which is a UDF (User Defined Function). You can download the library from here: <https://github.com/mysqludf/lib_mysqludf_sys> | And if what you need is to keep the python script running and listen to changes in a certain table.
1. You can create a listener table ex. 'trigger\_table' in your database with only one value.
2. Create a trigger that will change the value in the 'trigger\_table' table every time a change occurs in some table.
3. And last, create a python script that will check this table every n seconds to see if the value has been changed (depending on how fast you need updates). Taking into consideration that it's only one value and you have a good internet connection (if the database is online), everything should be running quite fast. And execute a function when the value has been changed.
Create MySQL trigger: <https://www.mysqltutorial.org/create-the-first-trigger-in-mysql.aspx>
This is the SQL Code (Btw I am working with MS SQL but you should get the idea about how it should be set-up):
```sql
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER dbo.new
ON dbo.rc
AFTER INSERT,DELETE,UPDATE
AS
--Declare the variable and set the value from the change_table
DECLARE @PurchaseName AS CHAR(1)
SELECT @PurchaseName = _check
FROM dev.dbo.change_table
WHERE _check_1 IS NOT NULL
IF @PurchaseName = 'Y'
BEGIN
-- If the condition is TRUE then execute the following statement
UPDATE dev.dbo.change_table SET _check = 'N' WHERE _check_1 IS NOT NULL
END
ELSE
BEGIN
-- If the condition is False then execute the following statement
UPDATE dev.dbo.change_table SET _check = 'Y' WHERE _check_1 IS NOT NULL
END
GO
```
The change\_table has only two columns and one row- see the picture below.
[change\_table picture](https://i.stack.imgur.com/lkmks.jpg)
This is the python code:
```py
import pyodbc
import pandas as pd
# Connect to SQL Server
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=<YOUR SERVER>;DATABASE=<YOUR DATABASE>;UID=<YOUR USER ID>;PWD=<YOUR PASSWORD>')
previous_value = ""
while True:
    current_value = str(pd.read_sql_query('SELECT _check FROM change_table', cnxn)['_check'].tolist()[0])
    if current_value != previous_value:
        previous_value = current_value  # was 'prev_value', which never updated the comparison variable
        # Write your code here
```
The SQL code above will change the value in the change\_table on every INSERT, UPDATE or DELETE in the 'rc' table, while the python code will check the value 8 times every second to see if a change has happened.
Btw that 8 times per second is on a 100MB/s optic internet and the server is on the other side of the Earth, sooo... it should do the job. |
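As a side note, the polling loop above can be factored into a small reusable helper that is independent of any database driver. The names here (`watch_for_changes`, `read_value`, `on_change`) are purely illustrative, not part of any library; the demo swaps the SQL query for a fake value source so the sketch runs standalone:

```python
import time

def watch_for_changes(read_value, on_change, interval=0.125, max_checks=None):
    """Poll read_value() and invoke on_change(new_value) whenever the value changes."""
    sentinel = object()          # never equal to a real value, so the first read is a baseline
    previous = sentinel
    checks = 0
    while max_checks is None or checks < max_checks:
        current = read_value()
        if previous is not sentinel and current != previous:
            on_change(current)
        previous = current
        checks += 1
        time.sleep(interval)

# Demo with a fake value source instead of a real SQL query
values = iter(['N', 'N', 'Y', 'Y', 'N'])
seen = []
watch_for_changes(lambda: next(values), seen.append, interval=0.0, max_checks=5)
print(seen)  # ['Y', 'N']
```

In a real setup, `read_value` would wrap the `pd.read_sql_query` call and `interval` would be tuned to how fast you need updates.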
37,355,375 | There is a dict (say `d`). `dict.get(key, None)` returns `None` if `key` doesn't exist in `d`.
**How do I get the first value (i.e., `d[key]` is not `None`) from a list of keys (some of them might not exist in `d`)?**
This post, [Pythonic way to avoid “if x: return x” statements](https://stackoverflow.com/questions/36117583/pythonic-way-to-avoid-if-x-return-x-statements), provides a concrete way.
```
for d in list_dicts:
for key in keys:
if key in d:
print(d[key])
break
```
I use the **`or` operator** to achieve it in one line, as demonstrated in:
```
# a list of dicts
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
# loop over the list of dicts, extract the tuple value whose key is like level*
for d in list_dicts:
t = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
col = d['col']
do_something(t, col)
```
It works. In this way, I just simply list all options (`level0` ~ `level3`). Is there a better way for a lot of keys (say, from `level0` to `level100`), like list comprehensions? | 2016/05/20 | [
"https://Stackoverflow.com/questions/37355375",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067748/"
] | There's no convenient builtin, but you could implement it easily enough:
```
def getfirst(d, keys):
for key in keys:
if key in d:
return d[key]
return None
``` | I would use `next` with a comprehension:
```
# build list of keys
levels = [ 'level' + str(i) for i in range(3) ]
for d in list_dicts:
level_key = next(k for k in levels if d.get(k))
level = d[level_key]
``` |
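A self-contained version of the comprehension approach above; the `None` default passed to `next` is an addition here, to avoid `StopIteration` when a dict has no matching level key:

```python
list_dicts = [{'level0': (1, 2), 'col': '#ff310021'},
              {'level1': (3, 4), 'col': '#ff310011'},
              {'level2': (5, 6), 'col': '#ff312221'}]
levels = ['level' + str(i) for i in range(3)]

results = []
for d in list_dicts:
    # Passing a default to next() returns None instead of raising StopIteration
    level_key = next((k for k in levels if d.get(k)), None)
    results.append(d[level_key] if level_key is not None else None)

print(results)  # [(1, 2), (3, 4), (5, 6)]
```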
37,355,375 | There is a dict (say `d`). `dict.get(key, None)` returns `None` if `key` doesn't exist in `d`.
**How do I get the first value (i.e., `d[key]` is not `None`) from a list of keys (some of them might not exist in `d`)?**
This post, [Pythonic way to avoid “if x: return x” statements](https://stackoverflow.com/questions/36117583/pythonic-way-to-avoid-if-x-return-x-statements), provides a concrete way.
```
for d in list_dicts:
for key in keys:
if key in d:
print(d[key])
break
```
I use the **`or` operator** to achieve it in one line, as demonstrated in:
```
# a list of dicts
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
# loop over the list of dicts, extract the tuple value whose key is like level*
for d in list_dicts:
t = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
col = d['col']
do_something(t, col)
```
It works. In this way, I just simply list all options (`level0` ~ `level3`). Is there a better way for a lot of keys (say, from `level0` to `level100`), like list comprehensions? | 2016/05/20 | [
"https://Stackoverflow.com/questions/37355375",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067748/"
] | This line:
```
x, y = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
```
Is basically mapping a `list` of `['level0', 'level1', 'level2']` to `d.get` (`None` is already the default value; there's no need to explicitly state it in this case). Next, you want to choose the one that doesn't map to `None`, which is basically a filter. You can use the `map()` and `filter()` built-in functions (which are lazy generator-like objects in Python 3) and call `next()` to get the first match:
```
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
>>> l = 'level0', 'level1', 'level2'
>>> for d in list_dicts:
... print(next(filter(None, map(d.get, l))))
...
(1, 2)
(3, 4)
(5, 6)
``` | There's no convenient builtin, but you could implement it easily enough:
```
def getfirst(d, keys):
for key in keys:
if key in d:
return d[key]
return None
``` |
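For completeness, a quick usage sketch of `getfirst` against the question's data (the function is repeated here so the snippet runs standalone; the sample dict is illustrative):

```python
def getfirst(d, keys):
    for key in keys:
        if key in d:
            return d[key]
    return None

levels = ['level{}'.format(i) for i in range(100)]
d = {'level1': (3, 4), 'col': '#ff310011'}
print(getfirst(d, levels))                # (3, 4)
print(getfirst({'col': '#fff'}, levels))  # None
```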
37,355,375 | There is a dict (say `d`). `dict.get(key, None)` returns `None` if `key` doesn't exist in `d`.
**How do I get the first value (i.e., `d[key]` is not `None`) from a list of keys (some of them might not exist in `d`)?**
This post, [Pythonic way to avoid “if x: return x” statements](https://stackoverflow.com/questions/36117583/pythonic-way-to-avoid-if-x-return-x-statements), provides a concrete way.
```
for d in list_dicts:
for key in keys:
if key in d:
print(d[key])
break
```
I use the **`or` operator** to achieve it in one line, as demonstrated in:
```
# a list of dicts
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
# loop over the list of dicts, extract the tuple value whose key is like level*
for d in list_dicts:
t = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
col = d['col']
do_something(t, col)
```
It works. In this way, I just simply list all options (`level0` ~ `level3`). Is there a better way for a lot of keys (say, from `level0` to `level100`), like list comprehensions? | 2016/05/20 | [
"https://Stackoverflow.com/questions/37355375",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067748/"
] | There's no convenient builtin, but you could implement it easily enough:
```
def getfirst(d, keys):
for key in keys:
if key in d:
return d[key]
return None
``` | Should work on all Pythons:
```
# a list of dicts
list_dicts = [{'level0': (1, 2), 'col': '#ff310021'},
{'level1': (3, 4), 'col': '#ff310011'},
{'level2': (5, 6), 'col': '#ff312221'}]
# Prioritized (ordered) list of keys [level0, level99]
KEYS = ['level{}'.format(i) for i in range(100)]
# loop over the list of dicts dicts, extract the tuple value whose key is
# like level*
for d in list_dicts:
try:
k = next(k for k in KEYS if k in d)
t = d[k]
col = d['col']
do_something(t, col)
except StopIteration:
pass
``` |
37,355,375 | There is a dict (say `d`). `dict.get(key, None)` returns `None` if `key` doesn't exist in `d`.
**How do I get the first value (i.e., `d[key]` is not `None`) from a list of keys (some of them might not exist in `d`)?**
This post, [Pythonic way to avoid “if x: return x” statements](https://stackoverflow.com/questions/36117583/pythonic-way-to-avoid-if-x-return-x-statements), provides a concrete way.
```
for d in list_dicts:
for key in keys:
if key in d:
print(d[key])
break
```
I use the **`or` operator** to achieve it in one line, as demonstrated in:
```
# a list of dicts
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
# loop over the list of dicts, extract the tuple value whose key is like level*
for d in list_dicts:
t = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
col = d['col']
do_something(t, col)
```
It works. In this way, I just simply list all options (`level0` ~ `level3`). Is there a better way for a lot of keys (say, from `level0` to `level100`), like list comprehensions? | 2016/05/20 | [
"https://Stackoverflow.com/questions/37355375",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067748/"
] | There's no convenient builtin, but you could implement it easily enough:
```
def getfirst(d, keys):
for key in keys:
if key in d:
return d[key]
return None
``` | Just as a novelty item, here is a version that first computes a getter using functional composition.
```
if 'reduce' not in globals():
from functools import reduce
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
levels = list(map('level{}'.format, range(3)))
getter = reduce(lambda f, key: lambda dct: dct.get(key, f(dct)), reversed(levels), lambda _: None)
print(list(map(getter, list_dicts)))
# [(1, 2), (3, 4), (5, 6)]
``` |
37,355,375 | There is a dict (say `d`). `dict.get(key, None)` returns `None` if `key` doesn't exist in `d`.
**How do I get the first value (i.e., `d[key]` is not `None`) from a list of keys (some of them might not exist in `d`)?**
This post, [Pythonic way to avoid “if x: return x” statements](https://stackoverflow.com/questions/36117583/pythonic-way-to-avoid-if-x-return-x-statements), provides a concrete way.
```
for d in list_dicts:
for key in keys:
if key in d:
print(d[key])
break
```
I use the **`or` operator** to achieve it in one line, as demonstrated in:
```
# a list of dicts
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
# loop over the list of dicts, extract the tuple value whose key is like level*
for d in list_dicts:
t = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
col = d['col']
do_something(t, col)
```
It works. In this way, I just simply list all options (`level0` ~ `level3`). Is there a better way for a lot of keys (say, from `level0` to `level100`), like list comprehensions? | 2016/05/20 | [
"https://Stackoverflow.com/questions/37355375",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067748/"
] | This line:
```
x, y = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
```
Is basically mapping a `list` of `['level0', 'level1', 'level2']` to `d.get` (`None` is already the default value; there's no need to explicitly state it in this case). Next, you want to choose the one that doesn't map to `None`, which is basically a filter. You can use the `map()` and `filter()` built-in functions (which are lazy generator-like objects in Python 3) and call `next()` to get the first match:
```
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
>>> l = 'level0', 'level1', 'level2'
>>> for d in list_dicts:
... print(next(filter(None, map(d.get, l))))
...
(1, 2)
(3, 4)
(5, 6)
``` | I would use `next` with a comprehension:
```
# build list of keys
levels = [ 'level' + str(i) for i in range(3) ]
for d in list_dicts:
level_key = next(k for k in levels if d.get(k))
level = d[level_key]
``` |
37,355,375 | There is a dict (say `d`). `dict.get(key, None)` returns `None` if `key` doesn't exist in `d`.
**How do I get the first value (i.e., `d[key]` is not `None`) from a list of keys (some of them might not exist in `d`)?**
This post, [Pythonic way to avoid “if x: return x” statements](https://stackoverflow.com/questions/36117583/pythonic-way-to-avoid-if-x-return-x-statements), provides a concrete way.
```
for d in list_dicts:
for key in keys:
if key in d:
print(d[key])
break
```
I use the **`or` operator** to achieve it in one line, as demonstrated in:
```
# a list of dicts
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
# loop over the list of dicts, extract the tuple value whose key is like level*
for d in list_dicts:
t = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
col = d['col']
do_something(t, col)
```
It works. In this way, I just simply list all options (`level0` ~ `level3`). Is there a better way for a lot of keys (say, from `level0` to `level100`), like list comprehensions? | 2016/05/20 | [
"https://Stackoverflow.com/questions/37355375",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067748/"
] | This line:
```
x, y = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
```
Is basically mapping a `list` of `['level0', 'level1', 'level2']` to `d.get` (`None` is already the default value; there's no need to explicitly state it in this case). Next, you want to choose the one that doesn't map to `None`, which is basically a filter. You can use the `map()` and `filter()` built-in functions (which are lazy generator-like objects in Python 3) and call `next()` to get the first match:
```
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
>>> l = 'level0', 'level1', 'level2'
>>> for d in list_dicts:
... print(next(filter(None, map(d.get, l))))
...
(1, 2)
(3, 4)
(5, 6)
``` | Should work on all Pythons:
```
# a list of dicts
list_dicts = [{'level0': (1, 2), 'col': '#ff310021'},
{'level1': (3, 4), 'col': '#ff310011'},
{'level2': (5, 6), 'col': '#ff312221'}]
# Prioritized (ordered) list of keys [level0, level99]
KEYS = ['level{}'.format(i) for i in range(100)]
# loop over the list of dicts dicts, extract the tuple value whose key is
# like level*
for d in list_dicts:
try:
k = next(k for k in KEYS if k in d)
t = d[k]
col = d['col']
do_something(t, col)
except StopIteration:
pass
``` |
37,355,375 | There is a dict (say `d`). `dict.get(key, None)` returns `None` if `key` doesn't exist in `d`.
**How do I get the first value (i.e., `d[key]` is not `None`) from a list of keys (some of them might not exist in `d`)?**
This post, [Pythonic way to avoid “if x: return x” statements](https://stackoverflow.com/questions/36117583/pythonic-way-to-avoid-if-x-return-x-statements), provides a concrete way.
```
for d in list_dicts:
for key in keys:
if key in d:
print(d[key])
break
```
I use the **`or` operator** to achieve it in one line, as demonstrated in:
```
# a list of dicts
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
# loop over the list of dicts, extract the tuple value whose key is like level*
for d in list_dicts:
t = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
col = d['col']
do_something(t, col)
```
It works. In this way, I just simply list all options (`level0` ~ `level3`). Is there a better way for a lot of keys (say, from `level0` to `level100`), like list comprehensions? | 2016/05/20 | [
"https://Stackoverflow.com/questions/37355375",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067748/"
] | This line:
```
x, y = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
```
Is basically mapping a `list` of `['level0', 'level1', 'level2']` to `d.get` (`None` is already the default value; there's no need to explicitly state it in this case). Next, you want to choose the one that doesn't map to `None`, which is basically a filter. You can use the `map()` and `filter()` built-in functions (which are lazy generator-like objects in Python 3) and call `next()` to get the first match:
```
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
>>> l = 'level0', 'level1', 'level2'
>>> for d in list_dicts:
... print(next(filter(None, map(d.get, l))))
...
(1, 2)
(3, 4)
(5, 6)
``` | Just as a novelty item, here is a version that first computes a getter using functional composition.
```
if 'reduce' not in globals():
from functools import reduce
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
{'level1' : (3, 4), 'col': '#ff310011'},
{'level2' : (5, 6), 'col': '#ff312221'}]
levels = list(map('level{}'.format, range(3)))
getter = reduce(lambda f, key: lambda dct: dct.get(key, f(dct)), reversed(levels), lambda _: None)
print(list(map(getter, list_dicts)))
# [(1, 2), (3, 4), (5, 6)]
``` |
828,139 | I'm trying to get the values from a pointer to a float array, but it returns as c\_void\_p in python
The C code
```
double v;
const void *data;
pa_stream_peek(s, &data, &length);
v = ((const float*) data)[length / sizeof(float) -1];
```
Python so far
```
import ctypes
null_ptr = ctypes.c_void_p()
pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length))
```
The issue being the null\_ptr has an int value (memory address?) but there is no way to read the array?! | 2009/05/06 | [
"https://Stackoverflow.com/questions/828139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/102018/"
] | My ctypes is rusty, but I believe you want POINTER(c\_float) instead of c\_void\_p.
So try this:
```
null_ptr = ctypes.POINTER(ctypes.c_float)()
pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length))
null_ptr[0]
null_ptr[5] # etc
``` | You'll also probably want to be passing the null\_ptr using byref, e.g.
```
pa_stream_peek(stream, ctypes.byref(null_ptr), ctypes.c_ulong(length))
``` |
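To see why `byref` matters for out-parameters, here is a minimal, hedged illustration using libc's `sscanf` instead of PulseAudio (it assumes a POSIX system, where `ctypes.CDLL(None)` returns a handle to the already-loaded C library):

```python
import ctypes

libc = ctypes.CDLL(None)       # POSIX: handle to the C library already loaded in-process
val = ctypes.c_float()
# sscanf writes through the float* we pass; byref builds that pointer for us
libc.sscanf(b"3.5", b"%f", ctypes.byref(val))
print(val.value)  # 3.5
```

Passing `val` without `byref` would hand sscanf the float's value rather than its address, which is exactly the mistake the answer above is pointing out.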
828,139 | I'm trying to get the values from a pointer to a float array, but it returns as c\_void\_p in python
The C code
```
double v;
const void *data;
pa_stream_peek(s, &data, &length);
v = ((const float*) data)[length / sizeof(float) -1];
```
Python so far
```
import ctypes
null_ptr = ctypes.c_void_p()
pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length))
```
The issue being the null\_ptr has an int value (memory address?) but there is no way to read the array?! | 2009/05/06 | [
"https://Stackoverflow.com/questions/828139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/102018/"
] | My ctypes is rusty, but I believe you want POINTER(c\_float) instead of c\_void\_p.
So try this:
```
null_ptr = ctypes.POINTER(ctypes.c_float)()
pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length))
null_ptr[0]
null_ptr[5] # etc
``` | To use ctypes in a way that mimics your C code, I would suggest (and I'm out-of-practice and this is untested):
```
vdata = ctypes.c_void_p()
length = ctypes.c_ulong(0)
pa_stream_peek(stream, ctypes.byref(vdata), ctypes.byref(length))
fdata = ctypes.cast(vdata, ctypes.POINTER(ctypes.c_float))
``` |
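The cast can be exercised without PulseAudio by simulating the C side with a ctypes buffer; everything below is illustrative rather than the real `pa_stream_peek` flow:

```python
import ctypes

# Simulate the C side: a float buffer exposed as a void* plus a byte length
buf = (ctypes.c_float * 4)(1.0, 2.0, 3.0, 4.0)
vdata = ctypes.cast(buf, ctypes.c_void_p)
length = ctypes.sizeof(buf)

# Equivalent of the C code: v = ((const float*) data)[length / sizeof(float) - 1]
fdata = ctypes.cast(vdata, ctypes.POINTER(ctypes.c_float))
v = fdata[length // ctypes.sizeof(ctypes.c_float) - 1]
print(v)  # 4.0
```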
828,139 | I'm trying to get the values from a pointer to a float array, but it returns as c\_void\_p in python
The C code
```
double v;
const void *data;
pa_stream_peek(s, &data, &length);
v = ((const float*) data)[length / sizeof(float) -1];
```
Python so far
```
import ctypes
null_ptr = ctypes.c_void_p()
pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length))
```
The issue being the null\_ptr has an int value (memory address?) but there is no way to read the array?! | 2009/05/06 | [
"https://Stackoverflow.com/questions/828139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/102018/"
] | My ctypes is rusty, but I believe you want POINTER(c\_float) instead of c\_void\_p.
So try this:
```
null_ptr = POINTER(c_float)()
pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length))
null_ptr[0]
null_ptr[5] # etc
``` | When you pass pointer arguments without using ctypes.pointer or ctypes.byref, their contents simply get set to the integer value of the memory address (i.e., the pointer bits). These arguments should be passed with `byref` (or `pointer`, but `byref` has less overhead):
```
data = ctypes.POINTER(ctypes.c_float)()   # NULL float*; pa_stream_peek will repoint it
nbytes = ctypes.c_size_t(0)               # ctypes has no c_sizeof; c_size_t matches size_t
pa_stream_peek(s, ctypes.byref(data), ctypes.byref(nbytes))
nfloats = nbytes.value // ctypes.sizeof(ctypes.c_float)
v = data[nfloats - 1]
``` |
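Putting the pieces together, the calling pattern can be rehearsed with a stand-in for `pa_stream_peek`; `fake_stream_peek` and its buffer are invented here purely for illustration of the out-parameter flow:

```python
import ctypes

FloatP = ctypes.POINTER(ctypes.c_float)
_buf = (ctypes.c_float * 3)(0.25, 0.5, 0.75)   # pretend stream data

def fake_stream_peek(data_out, nbytes_out):
    """Stand-in for pa_stream_peek: fills the two out-parameters."""
    data_out[0] = ctypes.cast(_buf, FloatP)
    nbytes_out[0] = ctypes.sizeof(_buf)

data = FloatP()                  # starts as a NULL float*
nbytes = ctypes.c_size_t(0)
fake_stream_peek(ctypes.pointer(data), ctypes.pointer(nbytes))

nfloats = nbytes.value // ctypes.sizeof(ctypes.c_float)
v = data[nfloats - 1]
print(v)  # 0.75
```

With the real library, the two `ctypes.pointer(...)` arguments would be `ctypes.byref(...)` for slightly less overhead, as the answer above notes.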