qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
48,275,466 | I was trying to install [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-install-macos.html) on Mac but was facing some challenges, as the aws command was unable to parse the credentials file. So I decided to re-install the whole stack, but I'm facing some issues here again.
I am trying `pip uninstall awscli` which says
```
Cannot uninstall requirement awscli, not installed
```
So I try `pip3 install awscli --upgrade --user`, which gives me this:
```
You are using pip version 6.0.8, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Requirement already up-to-date: awscli in ./Library/Python/3.5/lib/python/site-packages
Requirement already up-to-date: rsa<=3.5.0,>=3.1.2 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: docutils>=0.10 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: PyYAML<=3.12,>=3.10 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: colorama<=0.3.7,>=0.2.5 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: botocore==1.8.29 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: s3transfer<0.2.0,>=0.1.12 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: pyasn1>=0.1.3 in ./Library/Python/3.5/lib/python/site-packages (from rsa<=3.5.0,>=3.1.2->awscli)
Requirement already up-to-date: python-dateutil<3.0.0,>=2.1 in ./Library/Python/3.5/lib/python/site-packages (from botocore==1.8.29->awscli)
Requirement already up-to-date: jmespath<1.0.0,>=0.7.1 in ./Library/Python/3.5/lib/python/site-packages (from botocore==1.8.29->awscli)
Requirement already up-to-date: six>=1.5 in ./Library/Python/3.5/lib/python/site-packages (from python-dateutil<3.0.0,>=2.1->botocore==1.8.29->awscli)
```
Not sure what to do. | 2018/01/16 | [
"https://Stackoverflow.com/questions/48275466",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1471314/"
] | You run **pip3** `install awscli` but **pip** `uninstall awscli`. Shouldn't it be **pip3** `uninstall awscli`? | I had a similar issue.
I used the following command to fix it:
```
pip3 install --no-cache-dir awscli==1.14.39
``` |
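A quick way to confirm the pip/pip3 mismatch called out above (a minimal sketch, not part of either answer): ask the interpreter itself which executable and user site-packages directory it uses, and compare that with what `pip --version` reports.
```
import site
import sys

print(sys.executable)              # which interpreter is actually running
print(site.getusersitepackages())  # where pip's --user installs land for it
```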
52,977,914 | I'm trying to segment the numbers and/or characters of the following image, then convert each individual number/character to text using OCR:
[](https://i.stack.imgur.com/rWMEa.png)
This is the code (in python) used:
```
new, contours, hierarchy = cv2.findContours(gray, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
digitCnts = []
final = gray.copy()
# loop over the digit area candidates
for c in contours:
    (x, y, w, h) = cv2.boundingRect(c)
    # if the contour is sufficiently large, it must be a digit
    if (w >= 20 and w <= 290) and h >= (gray.shape[0]>>1)-15:
        x1 = x+w
        y1 = y+h
        digitCnts.append([x,x1,y,y1])
        #print(x,x1,y,y1)
        # Drawing the selected contour on the original image
        cv2.rectangle(final,(x,y),(x1,y1),(0, 255, 0), 2)
plt.imshow(final, cmap=cm.gray, vmin=0, vmax=255)
```
I get the following output:
[](https://i.stack.imgur.com/jJOHY.png)
You can see that all are detected correctly except the middle 2, where only the top part has a bounding box around it and not the whole digit. I cannot figure out why only this one is not detected correctly, especially since it is similar to the others. Any idea how to resolve this? | 2018/10/24 | [
"https://Stackoverflow.com/questions/52977914",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1261829/"
] | As far as I know, most OpenCV methods for binary images operate on `white objects on a black background`.
Src:
[](https://i.stack.imgur.com/fc1Ld.png)
Threshold-INV and morph-open:
[](https://i.stack.imgur.com/oIktF.png)
Filter by height and draw on the src:
[](https://i.stack.imgur.com/OXMVp.png)
---
```
#!/usr/bin/python3
# 2018/10/25 08:30
import cv2
import numpy as np
# (1) src
img = cv2.imread( "car.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# (2) threshold-inv and morph-open
th, threshed = cv2.threshold(gray, 100, 255, cv2.THRESH_OTSU|cv2.THRESH_BINARY_INV)
morphed = cv2.morphologyEx(threshed, cv2.MORPH_OPEN, np.ones((2,2)))
# (3) find and filter contours, then draw on src
cnts = cv2.findContours(morphed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
nh, nw = img.shape[:2]
for cnt in cnts:
    x,y,w,h = bbox = cv2.boundingRect(cnt)
    if h < 0.3 * nh:
        continue
    cv2.rectangle(img, (x,y), (x+w, y+h), (255, 0, 255), 1, cv2.LINE_AA)
cv2.imwrite("dst.png", img)
cv2.imwrite("morphed.png", morphed)
``` | Your image is a bit noisy; binarizing it should do the trick.
```
cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY, gray)
new, contours, hierarchy = cv2.findContours(gray, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# cv2.drawContours(gray, contours, -1, 127, 5)
digitCnts = []
final = gray.copy()
# loop over the digit area candidates
for c in contours:
    (x, y, w, h) = cv2.boundingRect(c)
    # if the contour is sufficiently large, it must be a digit
    if (w >= 20 and w <= 290) and h >= (gray.shape[0]>>1)-15:
        x1 = x+w
        y1 = y+h
        digitCnts.append([x,x1,y,y1])
        #print(x,x1,y,y1)
        # Drawing the selected contour on the original image
        cv2.rectangle(final,(x,y),(x1,y1),(0, 255, 0), 2)
```
[](https://i.stack.imgur.com/BzfDJ.png) |
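A side note on the `[-2]` indexing in the first answer: `cv2.findContours` returns `(image, contours, hierarchy)` in OpenCV 3.x but `(contours, hierarchy)` in 2.4 and 4.x, so taking the second-to-last element works in all three. A small version-agnostic sketch (my own, assuming cv2 and numpy are installed):
```
import cv2
import numpy as np

def find_contours(binary_img):
    # contours are always the second-to-last element of the returned tuple
    result = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                              cv2.CHAIN_APPROX_SIMPLE)
    return result[-2]

# tiny usage check: one filled rectangle -> one contour
img = np.zeros((100, 100), np.uint8)
cv2.rectangle(img, (20, 20), (80, 80), 255, -1)
print(len(find_contours(img)))  # 1
```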
9,101,800 | So I've been experimenting with numpy and matplotlib and have stumbled across some bug when running python from the emacs inferior shell.
When I send the py file to the shell interpreter, I can run commands after the code has executed. The command prompt ">>>" appears fine. However, after I invoke a matplotlib show command on a plot, the shell just hangs and the command prompt stops showing.
```
>>> plt.plot(x,u_k[1,:]);
[<matplotlib.lines.Line2D object at 0x0000000004A9A358>]
>>> plt.show();
```
I am running the traditional CPython implementation under Emacs 23.3 with Fabian Gallina's python.el v. 0.23.1 on Win7.
A similar question has been raised here under the IPython platform: [running matplotlib or enthought.mayavi.mlab from a py-shell inside emacs on windows](https://stackoverflow.com/questions/4701607/running-matplotlib-or-enthought-mayavi-mlab-from-a-py-shell-inside-emacs-on-wind)
**UPDATE: I have duplicated the problem on a fresh installation of Win 7 x64 with the typical Python 2.7.2 binaries available from the python website, and with numpy 1.6.1 and matplotlib 1.1.0 on Emacs 23.3 and 23.4 for Windows.**
There must be a bug somewhere in the emacs shell. | 2012/02/01 | [
"https://Stackoverflow.com/questions/9101800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752726/"
] | I think there are two ways to do it.
1. Use ipython. Then you can use the `-pylab` option.
I don't use Fabian Gallina's python.el, but I guess you will need something like this:
```
(setq python-shell-interpreter-args "-pylab")
```
Please read the documentation of python.el.
2. You can manually activate interactive mode by calling [ion](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.ion)
```
>>> from matplotlib import pyplot as plt
>>> plt.ion()
>>> plt.plot([1,2,3])
[<matplotlib.lines.Line2D object at 0x20711d0>]
>>>
``` | I think that this might have something to do with the behavior of the show function:
>
> [matplotlib.pyplot.show(\*args, \*\*kw)](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.show)
>
>
> When running in ipython with its pylab mode, display all figures and
> return to the ipython prompt.
>
>
> In non-interactive mode, display all figures and block until the
> figures have been closed; in interactive mode it has no effect unless
> figures were created prior to a change from non-interactive to
> interactive mode (not recommended). In that case it displays the
> figures but does not block.
>
>
> A single experimental keyword argument, block, may be set to True or
> False to override the blocking behavior described above.
>
>
>
I think you're running into the blocking behavior mentioned above, which would result in the shell hanging. Perhaps try running the function as `plt.show(block=False)` and see if it produces the output you expect. If this is still giving you trouble, let me know and I will try to reproduce your setup locally. |
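To make the `block=False` suggestion concrete, a minimal sketch (assuming only that matplotlib is installed):
```
import matplotlib.pyplot as plt

plt.ion()              # interactive mode, so show() no longer blocks
plt.plot([1, 2, 3])
plt.show(block=False)  # explicit non-blocking call, as suggested above
plt.pause(0.1)         # give the GUI event loop a chance to draw the figure
```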
9,101,800 | So I've been experimenting with numpy and matplotlib and have stumbled across some bug when running python from the emacs inferior shell.
When I send the py file to the shell interpreter, I can run commands after the code has executed. The command prompt ">>>" appears fine. However, after I invoke a matplotlib show command on a plot, the shell just hangs and the command prompt stops showing.
```
>>> plt.plot(x,u_k[1,:]);
[<matplotlib.lines.Line2D object at 0x0000000004A9A358>]
>>> plt.show();
```
I am running the traditional CPython implementation under Emacs 23.3 with Fabian Gallina's python.el v. 0.23.1 on Win7.
A similar question has been raised here under the IPython platform: [running matplotlib or enthought.mayavi.mlab from a py-shell inside emacs on windows](https://stackoverflow.com/questions/4701607/running-matplotlib-or-enthought-mayavi-mlab-from-a-py-shell-inside-emacs-on-wind)
**UPDATE: I have duplicated the problem on a fresh installation of Win 7 x64 with the typical Python 2.7.2 binaries available from the python website, and with numpy 1.6.1 and matplotlib 1.1.0 on Emacs 23.3 and 23.4 for Windows.**
There must be a bug somewhere in the emacs shell. | 2012/02/01 | [
"https://Stackoverflow.com/questions/9101800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752726/"
] | You can use a different backend:
```
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
```
Other GUI backends:
* `TkAgg`
* `WX`
* `QTAgg`
* `QT4Agg`
If you are using Elpy, run your code using `C-u C-c C-c` | I think that this might have something to do with the behavior of the show function:
>
> [matplotlib.pyplot.show(\*args, \*\*kw)](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.show)
>
>
> When running in ipython with its pylab mode, display all figures and
> return to the ipython prompt.
>
>
> In non-interactive mode, display all figures and block until the
> figures have been closed; in interactive mode it has no effect unless
> figures were created prior to a change from non-interactive to
> interactive mode (not recommended). In that case it displays the
> figures but does not block.
>
>
> A single experimental keyword argument, block, may be set to True or
> False to override the blocking behavior described above.
>
>
>
I think you're running into the blocking behavior mentioned above, which would result in the shell hanging. Perhaps try running the function as `plt.show(block=False)` and see if it produces the output you expect. If this is still giving you trouble, let me know and I will try to reproduce your setup locally. |
9,101,800 | So I've been experimenting with numpy and matplotlib and have stumbled across some bug when running python from the emacs inferior shell.
When I send the py file to the shell interpreter, I can run commands after the code has executed. The command prompt ">>>" appears fine. However, after I invoke a matplotlib show command on a plot, the shell just hangs and the command prompt stops showing.
```
>>> plt.plot(x,u_k[1,:]);
[<matplotlib.lines.Line2D object at 0x0000000004A9A358>]
>>> plt.show();
```
I am running the traditional CPython implementation under Emacs 23.3 with Fabian Gallina's python.el v. 0.23.1 on Win7.
A similar question has been raised here under the IPython platform: [running matplotlib or enthought.mayavi.mlab from a py-shell inside emacs on windows](https://stackoverflow.com/questions/4701607/running-matplotlib-or-enthought-mayavi-mlab-from-a-py-shell-inside-emacs-on-wind)
**UPDATE: I have duplicated the problem on a fresh installation of Win 7 x64 with the typical Python 2.7.2 binaries available from the python website, and with numpy 1.6.1 and matplotlib 1.1.0 on Emacs 23.3 and 23.4 for Windows.**
There must be a bug somewhere in the emacs shell. | 2012/02/01 | [
"https://Stackoverflow.com/questions/9101800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752726/"
] | I think there are two ways to do it.
1. Use ipython. Then you can use the `-pylab` option.
I don't use Fabian Gallina's python.el, but I guess you will need something like this:
```
(setq python-shell-interpreter-args "-pylab")
```
Please read the documentation of python.el.
2. You can manually activate interactive mode by calling [ion](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.ion)
```
>>> from matplotlib import pyplot as plt
>>> plt.ion()
>>> plt.plot([1,2,3])
[<matplotlib.lines.Line2D object at 0x20711d0>]
>>>
``` | I think I have found an even simpler way to hang the inferior shell, but only when pdb is invoked. Start pdb by supplying 'python' as the program to run.
Try this code:
```
print "> {<console>(1)<module>() }"
``` |
9,101,800 | So I've been experimenting with numpy and matplotlib and have stumbled across some bug when running python from the emacs inferior shell.
When I send the py file to the shell interpreter, I can run commands after the code has executed. The command prompt ">>>" appears fine. However, after I invoke a matplotlib show command on a plot, the shell just hangs and the command prompt stops showing.
```
>>> plt.plot(x,u_k[1,:]);
[<matplotlib.lines.Line2D object at 0x0000000004A9A358>]
>>> plt.show();
```
I am running the traditional CPython implementation under Emacs 23.3 with Fabian Gallina's python.el v. 0.23.1 on Win7.
A similar question has been raised here under the IPython platform: [running matplotlib or enthought.mayavi.mlab from a py-shell inside emacs on windows](https://stackoverflow.com/questions/4701607/running-matplotlib-or-enthought-mayavi-mlab-from-a-py-shell-inside-emacs-on-wind)
**UPDATE: I have duplicated the problem on a fresh installation of Win 7 x64 with the typical Python 2.7.2 binaries available from the python website, and with numpy 1.6.1 and matplotlib 1.1.0 on Emacs 23.3 and 23.4 for Windows.**
There must be a bug somewhere in the emacs shell. | 2012/02/01 | [
"https://Stackoverflow.com/questions/9101800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752726/"
] | I think there are two ways to do it.
1. Use ipython. Then you can use the `-pylab` option.
I don't use Fabian Gallina's python.el, but I guess you will need something like this:
```
(setq python-shell-interpreter-args "-pylab")
```
Please read the documentation of python.el.
2. You can manually activate interactive mode by calling [ion](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.ion)
```
>>> from matplotlib import pyplot as plt
>>> plt.ion()
>>> plt.plot([1,2,3])
[<matplotlib.lines.Line2D object at 0x20711d0>]
>>>
``` | Well, after a tremendous amount of time and posting the bug on the matplotlib project page and the python-mode page, I found out that supplying the arguments `console --matplotlib` in ipython.bat will do the trick with matplotlib 1.3.1 and ipython 1.2.0.
This is what I have in my ipython.bat:
@python.exe -i D:\devel\Python27\Scripts\ipython-script.py console --matplotlib %\* |
9,101,800 | So I've been experimenting with numpy and matplotlib and have stumbled across some bug when running python from the emacs inferior shell.
When I send the py file to the shell interpreter, I can run commands after the code has executed. The command prompt ">>>" appears fine. However, after I invoke a matplotlib show command on a plot, the shell just hangs and the command prompt stops showing.
```
>>> plt.plot(x,u_k[1,:]);
[<matplotlib.lines.Line2D object at 0x0000000004A9A358>]
>>> plt.show();
```
I am running the traditional CPython implementation under Emacs 23.3 with Fabian Gallina's python.el v. 0.23.1 on Win7.
A similar question has been raised here under the IPython platform: [running matplotlib or enthought.mayavi.mlab from a py-shell inside emacs on windows](https://stackoverflow.com/questions/4701607/running-matplotlib-or-enthought-mayavi-mlab-from-a-py-shell-inside-emacs-on-wind)
**UPDATE: I have duplicated the problem on a fresh installation of Win 7 x64 with the typical Python 2.7.2 binaries available from the python website, and with numpy 1.6.1 and matplotlib 1.1.0 on Emacs 23.3 and 23.4 for Windows.**
There must be a bug somewhere in the emacs shell. | 2012/02/01 | [
"https://Stackoverflow.com/questions/9101800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752726/"
] | I think there are two ways to do it.
1. Use ipython. Then you can use the `-pylab` option.
I don't use Fabian Gallina's python.el, but I guess you will need something like this:
```
(setq python-shell-interpreter-args "-pylab")
```
Please read the documentation of python.el.
2. You can manually activate interactive mode by calling [ion](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.ion)
```
>>> from matplotlib import pyplot as plt
>>> plt.ion()
>>> plt.plot([1,2,3])
[<matplotlib.lines.Line2D object at 0x20711d0>]
>>>
``` | You can use a different backend:
```
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
```
Other GUI backends:
* `TkAgg`
* `WX`
* `QTAgg`
* `QT4Agg`
If you are using Elpy, run your code using `C-u C-c C-c` |
9,101,800 | So I've been experimenting with numpy and matplotlib and have stumbled across some bug when running python from the emacs inferior shell.
When I send the py file to the shell interpreter, I can run commands after the code has executed. The command prompt ">>>" appears fine. However, after I invoke a matplotlib show command on a plot, the shell just hangs and the command prompt stops showing.
```
>>> plt.plot(x,u_k[1,:]);
[<matplotlib.lines.Line2D object at 0x0000000004A9A358>]
>>> plt.show();
```
I am running the traditional CPython implementation under Emacs 23.3 with Fabian Gallina's python.el v. 0.23.1 on Win7.
A similar question has been raised here under the IPython platform: [running matplotlib or enthought.mayavi.mlab from a py-shell inside emacs on windows](https://stackoverflow.com/questions/4701607/running-matplotlib-or-enthought-mayavi-mlab-from-a-py-shell-inside-emacs-on-wind)
**UPDATE: I have duplicated the problem on a fresh installation of Win 7 x64 with the typical Python 2.7.2 binaries available from the python website, and with numpy 1.6.1 and matplotlib 1.1.0 on Emacs 23.3 and 23.4 for Windows.**
There must be a bug somewhere in the emacs shell. | 2012/02/01 | [
"https://Stackoverflow.com/questions/9101800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752726/"
] | You can use a different backend:
```
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
```
Other GUI backends:
* `TkAgg`
* `WX`
* `QTAgg`
* `QT4Agg`
If you are using Elpy, run your code using `C-u C-c C-c` | I think I have found an even simpler way to hang the inferior shell, but only when pdb is invoked. Start pdb by supplying 'python' as the program to run.
Try this code:
```
print "> {<console>(1)<module>() }"
``` |
9,101,800 | So I've been experimenting with numpy and matplotlib and have stumbled across some bug when running python from the emacs inferior shell.
When I send the py file to the shell interpreter, I can run commands after the code has executed. The command prompt ">>>" appears fine. However, after I invoke a matplotlib show command on a plot, the shell just hangs and the command prompt stops showing.
```
>>> plt.plot(x,u_k[1,:]);
[<matplotlib.lines.Line2D object at 0x0000000004A9A358>]
>>> plt.show();
```
I am running the traditional CPython implementation under Emacs 23.3 with Fabian Gallina's python.el v. 0.23.1 on Win7.
A similar question has been raised here under the IPython platform: [running matplotlib or enthought.mayavi.mlab from a py-shell inside emacs on windows](https://stackoverflow.com/questions/4701607/running-matplotlib-or-enthought-mayavi-mlab-from-a-py-shell-inside-emacs-on-wind)
**UPDATE: I have duplicated the problem on a fresh installation of Win 7 x64 with the typical Python 2.7.2 binaries available from the python website, and with numpy 1.6.1 and matplotlib 1.1.0 on Emacs 23.3 and 23.4 for Windows.**
There must be a bug somewhere in the emacs shell. | 2012/02/01 | [
"https://Stackoverflow.com/questions/9101800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752726/"
] | You can use a different backend:
```
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
```
Other GUI backends:
* `TkAgg`
* `WX`
* `QTAgg`
* `QT4Agg`
If you are using Elpy, run your code using `C-u C-c C-c` | Well, after a tremendous amount of time and posting the bug on the matplotlib project page and the python-mode page, I found out that supplying the arguments `console --matplotlib` in ipython.bat will do the trick with matplotlib 1.3.1 and ipython 1.2.0.
This is what I have in my ipython.bat:
@python.exe -i D:\devel\Python27\Scripts\ipython-script.py console --matplotlib %\* |
58,498,100 | I have a complicated nested numpy array which contains lists. I am trying to convert the elements to float32. However, it gives me the following error:
```
ValueError Traceback (most recent call last)
<ipython-input-225-22d2824961c2> in <module>
----> 1 x_train_single.astype(np.float32)
ValueError: setting an array element with a sequence.
```
Here is the code and sample input:
```
x_train_single.astype(np.float32)
array([[ list([[[0, 0, 0, 0, 0, 0]], [-1.0], [0]]),
list([[[0, 0, 0, 0, 0, 0], [173, 8, 172, 0, 0, 0]], [-1.0], [0]])
]])
``` | 2019/10/22 | [
"https://Stackoverflow.com/questions/58498100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1584253/"
] | As your array contains lists of different sizes and nesting depths, I doubt that there is a simple or fast solution.
Here is a "get-the-job-done-no-matter-what" approach. It comes in two flavors: one creates arrays for the leaves, the other creates lists.
```
>>> a
array([[list([[[0, 0, 0, 0, 0, 0]], [-1.0], [0]]),
list([[[0, 0, 0, 0, 0, 0], [173, 8, 172, 0, 0, 0]], [-1.0], [0]])]],
dtype=object)
>>> def mkarr(a):
...     try:
...         return np.array(a,np.float32)
...     except:
...         return [*map(mkarr,a)]
...
>>> def mklst(a):
...     try:
...         return [*map(mklst,a)]
...     except:
...         return np.float32(a)
...
>>> np.frompyfunc(mkarr,1,1)(a)
array([[list([array([[0., 0., 0., 0., 0., 0.]], dtype=float32), array([-1.], dtype=float32), array([0.], dtype=float32)]),
list([array([[ 0., 0., 0., 0., 0., 0.],
[173., 8., 172., 0., 0., 0.]], dtype=float32), array([-1.], dtype=float32), array([0.], dtype=float32)])]],
dtype=object)
>>> np.frompyfunc(mklst,1,1)(a)
array([[list([[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]], [-1.0], [0.0]]),
list([[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [173.0, 8.0, 172.0, 0.0, 0.0, 0.0]], [-1.0], [0.0]])]],
dtype=object)
] | If the number of columns is fixed, then:
```
np.array([l.astype(float) for l in x_train_single.squeeze()])
```
But it will remove the redundant dimensions and convert everything into a numpy array.
Before: (1, 1, 1, 11, 6)
After: (11,6) |
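For context, the `ValueError` in the question is reproducible with any object array whose elements are nested Python lists; `astype` cannot turn a list into a single float. A tiny reproduction (my own, not from the answers):
```
import numpy as np

ragged = np.empty(2, dtype=object)
ragged[0] = [[0, 0, 0]]             # shapes differ between elements
ragged[1] = [[0, 0, 0], [1, 2, 3]]

try:
    ragged.astype(np.float32)
except (TypeError, ValueError) as e:
    print(e)  # e.g. "setting an array element with a sequence."
```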
58,498,100 | I have a complicated nested numpy array which contains lists. I am trying to convert the elements to float32. However, it gives me the following error:
```
ValueError Traceback (most recent call last)
<ipython-input-225-22d2824961c2> in <module>
----> 1 x_train_single.astype(np.float32)
ValueError: setting an array element with a sequence.
```
Here is the code and sample input:
```
x_train_single.astype(np.float32)
array([[ list([[[0, 0, 0, 0, 0, 0]], [-1.0], [0]]),
list([[[0, 0, 0, 0, 0, 0], [173, 8, 172, 0, 0, 0]], [-1.0], [0]])
]])
``` | 2019/10/22 | [
"https://Stackoverflow.com/questions/58498100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1584253/"
] | As your array contains lists of different sizes and nesting depths, I doubt that there is a simple or fast solution.
Here is a "get-the-job-done-no-matter-what" approach. It comes in two flavors: one creates arrays for the leaves, the other creates lists.
```
>>> a
array([[list([[[0, 0, 0, 0, 0, 0]], [-1.0], [0]]),
list([[[0, 0, 0, 0, 0, 0], [173, 8, 172, 0, 0, 0]], [-1.0], [0]])]],
dtype=object)
>>> def mkarr(a):
...     try:
...         return np.array(a,np.float32)
...     except:
...         return [*map(mkarr,a)]
...
>>> def mklst(a):
...     try:
...         return [*map(mklst,a)]
...     except:
...         return np.float32(a)
...
>>> np.frompyfunc(mkarr,1,1)(a)
array([[list([array([[0., 0., 0., 0., 0., 0.]], dtype=float32), array([-1.], dtype=float32), array([0.], dtype=float32)]),
list([array([[ 0., 0., 0., 0., 0., 0.],
[173., 8., 172., 0., 0., 0.]], dtype=float32), array([-1.], dtype=float32), array([0.], dtype=float32)])]],
dtype=object)
>>> np.frompyfunc(mklst,1,1)(a)
array([[list([[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]], [-1.0], [0.0]]),
list([[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [173.0, 8.0, 172.0, 0.0, 0.0, 0.0]], [-1.0], [0.0]])]],
dtype=object)
``` | Try this:
```
np.array(x_train_single.tolist())
```
Looks like you have a (1,1) shaped array, where the single element is a list, and the sublists are consistent in size.
I expect you will get an array with shape (1, 1, 1, 11, 6) and int dtype.
or:
```
np.array(x_train_single[0,0])
```
Again this extracts the list from the array, and then makes an array from that.
My answer so far was based on the display:
```
array([[list([[[173, 8, 172, 0, 0, 0], [512, 58, 57, 0, 0, 0],
...: [513, 514, 0, 0, 0, 0], [515, 189, 516, 0, 0, 0], [309, 266, 0, 0, 0,
...: 0],
...: [32, 310, 0, 0, 0, 0], [271, 58, 517, 0, 0, 0], [164, 40, 0, 0, 0, 0],
...: [38, 32, 60, 0, 0, 0], [38, 83, 60, 0, 0, 0], [149, 311, 0, 0, 0, 0]]
...: ])]])
```
The new display is more complicated
```
array([[ list([[[0, 0, 0, 0, 0, 0]], [-1.0], [0]]),
...: list([[[0, 0, 0, 0, 0, 0], [173, 8, 172, 0, 0, 0]], [-1.0], [0]])]])
```
because the inner lists differ in size. It can't be made into a numeric dtype array.
It can be turned into a (1,2,3) shape array, but still object dtype with 1d list elements. |
18,662,264 | According to the documentation, urllib.unquote\_plus should replace plus signs with spaces.
But when I tried the code below in IDLE for Python 2.7, it did not.
```
>>> s = 'http://stackoverflow.com/questions/?q1=xx%2Bxx%2Bxx'
>>> urllib.unquote_plus(s)
'http://stackoverflow.com/questions/?q1=xx+xx+xx'
```
I also tried doing something like `urllib.unquote_plus(s).decode('utf-8')`.
Is there a proper way to decode the URL component? | 2013/09/06 | [
"https://Stackoverflow.com/questions/18662264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/251024/"
] | `%2B` is the escape code for a *literal* `+`; it is being unescaped entirely correctly.
Don't confuse this with the *URL escaped* `+`, which is the escape character for spaces:
```
>>> s = 'http://stackoverflow.com/questions/?q1=xx+xx+xx'
>>> urllib.parse.unquote_plus(s)
'http://stackoverflow.com/questions/?q1=xx xx xx'
```
`unquote_plus()` only decodes encoded spaces to literal spaces (`'+'` -> `' '`), not encoded `+` symbols (`'%2B'` -> `'+'`).
If you have input to decode that uses `%2B` instead of `+` where you expected spaces, then those input values were perhaps *doubly* quoted, you'd need to unquote them twice. You'd see `%` escapes encoded too:
```
>>> urllib.parse.quote_plus('Hello world!')
'Hello+world%21'
>>> urllib.parse.quote_plus(urllib.parse.quote_plus('Hello world!'))
'Hello%2Bworld%2521'
```
where `%25` is the quoted `%` character. | Those aren't spaces, those are actual pluses. A space is %20, which in that part of the URL is indeed equivalent to +, but %2B means a literal plus. |
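A short demonstration of the distinction both answers draw, using the Python 2 `urllib` from the question (Python 3 moved these functions to `urllib.parse`):
```
import urllib

print(urllib.unquote_plus('xx%2Bxx'))  # xx+xx  (%2B is a literal plus)
print(urllib.unquote_plus('xx+xx'))    # xx xx  (+ decodes to a space)
print(urllib.unquote('xx%20xx'))       # xx xx  (%20 is also a space)
```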
34,495,839 | I saw the following coding GIF, which depicts a user typing commands (e.g. `import`) while a pop-up message describes the usage of that command.
How can I set up something similar?[](https://i.stack.imgur.com/7OUwv.gif) | 2015/12/28 | [
"https://Stackoverflow.com/questions/34495839",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2636317/"
] | According to the github issues in the repo of that gif, the video was taken using [bpython](http://bpython-interpreter.org)
Source: <https://github.com/tqdm/tqdm/issues/67> | Code editors like [`vim`](http://www.vim.org/) (with [`jedi`](https://github.com/davidhalter/jedi-vim) or [`python-mode`](https://github.com/klen/python-mode.git)) or [`emacs`](https://www.gnu.org/software/emacs/) and integrated development environments like [`pycharm`](https://www.jetbrains.com/pycharm/) can offer the same functionality. |
51,060,433 | I coded a jQuery handler with Flask where, on click, it should perform an SQL search and export the dataframe as Excel. The script is:
```
<script type=text/javascript>
$(function () {
$('a#export_to_excel').bind('click', function () {
$.getJSON($SCRIPT_ROOT + ' /api/sanctionsSearch/download', {
nm: $('input[name="nm"]').val(),
searchtype: $('select[name="searchtype"]').val()
}, function (data) {
$("#download_results").text(data.result);
});
return false;
});
});
```
However, there was no response in the browser. My Python code is below:
```
from io import BytesIO,StringIO
from flask import render_template, request, url_for, jsonify, redirect, request, Flask, send_file

def index():
    #get the dataframe ready and define as 'data', parameters obtained from form input in html
    name = request.args.get('nm','', type = str)
    type = request.args.get('searchtype','Entity',type = str)
    #function get_entity() to get the dataframe
    #I have checked and the dataframe is functioning properly
    data = get_entity(name,type)
    #check if the dataframe is empty
    if data.empty == True:
        print("its not working bruh...")
        word = "No results to export! Please try again!"
        return jsonify(result = word)
    #store the csv to BytesIO
    proxy = StringIO()
    data.to_csv(proxy)
    mem = BytesIO()
    mem.write(proxy.getvalue().encode('utf-8'))
    mem.seek(0)
    proxy.close()
    print("download starting....")
    #send file
    send_file(mem, as_attachment=True,attachment_filename='Exportresults.csv', mimetype='text/csv')
    word = "Download starting!"
    return jsonify(result = word)
```
Can someone tell me what's wrong with my code? The "download starting..." message was properly printed to the HTML, but the download did not start at all. | 2018/06/27 | [
"https://Stackoverflow.com/questions/51060433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9979747/"
] | The solution is not ideal, but what I did was add a window.open(url) command in the jQuery, which will call another function; this function will send\_file to the user. | You should use a return statement:
```
return send_file()
``` |
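Putting the second answer into context: in the question's view, `send_file(...)` builds a response object that is then thrown away, and `jsonify` is returned instead. A minimal sketch of the fix (the route path comes from the question's JavaScript; `attachment_filename` matches the question's Flask version and was later renamed `download_name`):
```
from io import BytesIO
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/api/sanctionsSearch/download")
def download():
    mem = BytesIO(b"col1,col2\n1,2\n")
    # return the response built by send_file instead of discarding it
    return send_file(mem, as_attachment=True,
                     attachment_filename="Exportresults.csv",
                     mimetype="text/csv")
```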
59,959,629 | I've been stuck on this for the last week and I'm fairly lost as to what to do for next steps.
I have a Django application that uses a MySQL database. I've deployed it using AWS Elastic Beanstalk using the following tutorial : <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html>
It successfully deployed. However, I keep getting 500 errors when trying to access the site. I've also updated the host value as well.
Here's the error\_log, but I'm not able to deduce much from it.
```
[Tue Jan 28 08:05:34.444677 2020] [suexec:notice] [pid 3125] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Jan 28 08:05:34.460731 2020] [http2:warn] [pid 3125] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Tue Jan 28 08:05:34.460743 2020] [http2:warn] [pid 3125] AH02951: mod_ssl does not seem to be enabled
[Tue Jan 28 08:05:34.461206 2020] [lbmethod_heartbeat:notice] [pid 3125] AH02282: No slotmem from mod_heartmonitor
[Tue Jan 28 08:05:34.461249 2020] [:warn] [pid 3125] mod_wsgi: Compiled for Python/3.6.2.
[Tue Jan 28 08:05:34.461253 2020] [:warn] [pid 3125] mod_wsgi: Runtime using Python/3.6.8.
[Tue Jan 28 08:05:34.463081 2020] [mpm_prefork:notice] [pid 3125] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations
[Tue Jan 28 08:05:34.463096 2020] [core:notice] [pid 3125] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Tue Jan 28 08:06:21.350696 2020] [mpm_prefork:notice] [pid 3125] AH00169: caught SIGTERM, shutting down
[Tue Jan 28 08:06:22.419261 2020] [suexec:notice] [pid 4501] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Jan 28 08:06:22.435310 2020] [so:warn] [pid 4501] AH01574: module wsgi_module is already loaded, skipping
[Tue Jan 28 08:06:22.437572 2020] [http2:warn] [pid 4501] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Tue Jan 28 08:06:22.437582 2020] [http2:warn] [pid 4501] AH02951: mod_ssl does not seem to be enabled
[Tue Jan 28 08:06:22.438217 2020] [lbmethod_heartbeat:notice] [pid 4501] AH02282: No slotmem from mod_heartmonitor
[Tue Jan 28 08:06:22.438283 2020] [:warn] [pid 4501] mod_wsgi: Compiled for Python/3.6.2.
[Tue Jan 28 08:06:22.438292 2020] [:warn] [pid 4501] mod_wsgi: Runtime using Python/3.6.8.
[Tue Jan 28 08:06:22.440572 2020] [mpm_prefork:notice] [pid 4501] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations
[Tue Jan 28 08:06:22.440593 2020] [core:notice] [pid 4501] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Tue Jan 28 08:08:03.028260 2020] [mpm_prefork:notice] [pid 4501] AH00169: caught SIGTERM, shutting down
Exception ignored in: <bound method BaseEventLoop.__del__ of <_UnixSelectorEventLoop running=False closed=False debug=False>>
Traceback (most recent call last):
File "/usr/lib64/python3.6/asyncio/base_events.py", line 526, in __del__
NameError: name 'ResourceWarning' is not defined
[Tue Jan 28 08:08:04.152017 2020] [suexec:notice] [pid 4833] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Jan 28 08:08:04.168082 2020] [so:warn] [pid 4833] AH01574: module wsgi_module is already loaded, skipping
[Tue Jan 28 08:08:04.170245 2020] [http2:warn] [pid 4833] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Tue Jan 28 08:08:04.170256 2020] [http2:warn] [pid 4833] AH02951: mod_ssl does not seem to be enabled
[Tue Jan 28 08:08:04.170793 2020] [lbmethod_heartbeat:notice] [pid 4833] AH02282: No slotmem from mod_heartmonitor
[Tue Jan 28 08:08:04.170852 2020] [:warn] [pid 4833] mod_wsgi: Compiled for Python/3.6.2.
[Tue Jan 28 08:08:04.170856 2020] [:warn] [pid 4833] mod_wsgi: Runtime using Python/3.6.8.
[Tue Jan 28 08:08:04.173067 2020] [mpm_prefork:notice] [pid 4833] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations
[Tue Jan 28 08:08:04.173089 2020] [core:notice] [pid 4833] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Tue Jan 28 08:25:28.783035 2020] [mpm_prefork:notice] [pid 4833] AH00169: caught SIGTERM, shutting down
[Tue Jan 28 08:25:32.859422 2020] [suexec:notice] [pid 5573] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Jan 28 08:25:32.875584 2020] [so:warn] [pid 5573] AH01574: module wsgi_module is already loaded, skipping
[Tue Jan 28 08:25:32.877541 2020] [http2:warn] [pid 5573] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Tue Jan 28 08:25:32.877552 2020] [http2:warn] [pid 5573] AH02951: mod_ssl does not seem to be enabled
[Tue Jan 28 08:25:32.878103 2020] [lbmethod_heartbeat:notice] [pid 5573] AH02282: No slotmem from mod_heartmonitor
[Tue Jan 28 08:25:32.878167 2020] [:warn] [pid 5573] mod_wsgi: Compiled for Python/3.6.2.
[Tue Jan 28 08:25:32.878174 2020] [:warn] [pid 5573] mod_wsgi: Runtime using Python/3.6.8.
[Tue Jan 28 08:25:32.880448 2020] [mpm_prefork:notice] [pid 5573] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations
[Tue Jan 28 08:25:32.880477 2020] [core:notice] [pid 5573] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Wed Jan 29 01:11:07.166917 2020] [mpm_prefork:notice] [pid 5573] AH00169: caught SIGTERM, shutting down
Exception ignored in: <bound method BaseEventLoop.__del__ of <_UnixSelectorEventLoop running=False closed=False debug=False>>
Traceback (most recent call last):
File "/usr/lib64/python3.6/asyncio/base_events.py", line 526, in __del__
NameError: name 'ResourceWarning' is not defined
[Wed Jan 29 01:11:08.333254 2020] [suexec:notice] [pid 28706] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Jan 29 01:11:08.349662 2020] [so:warn] [pid 28706] AH01574: module wsgi_module is already loaded, skipping
[Wed Jan 29 01:11:08.351804 2020] [http2:warn] [pid 28706] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Wed Jan 29 01:11:08.351813 2020] [http2:warn] [pid 28706] AH02951: mod_ssl does not seem to be enabled
[Wed Jan 29 01:11:08.352386 2020] [lbmethod_heartbeat:notice] [pid 28706] AH02282: No slotmem from mod_heartmonitor
[Wed Jan 29 01:11:08.352447 2020] [:warn] [pid 28706] mod_wsgi: Compiled for Python/3.6.2.
[Wed Jan 29 01:11:08.352451 2020] [:warn] [pid 28706] mod_wsgi: Runtime using Python/3.6.8.
[Wed Jan 29 01:11:08.354766 2020] [mpm_prefork:notice] [pid 28706] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations
[Wed Jan 29 01:11:08.354783 2020] [core:notice] [pid 28706] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
```
If anyone could provide some insight, help, or further steps, it would be greatly appreciated. I can provide more logs or anything else that would help. Thank you. | 2020/01/29 | [
"https://Stackoverflow.com/questions/59959629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3310212/"
] | This should be sufficient to hide all but one sheet.
```
function hideAllSheetsExceptThisOne(sheetName) {
  sheetName = sheetName || 'Student Report'; // default for testing
  var ss = SpreadsheetApp.getActive();
  var sheets = ss.getSheets();
  for (var i = 0; i < sheets.length; i++) {
    if (sheets[i].getName() != sheetName) {
      sheets[i].hideSheet();
    }
  }
  SpreadsheetApp.flush();
}
``` | I had to do something similar earlier this year, and this code proved to be very helpful. <https://gist.github.com/ixhd/3660885> |
67,111,664 | I created a little app with Python as the backend and React as the frontend. I receive some data from the frontend, and I want to eliminate the first 20 words of the text I receive if a condition is satisfied.
```
@app.route("/translate", methods=["GET", "POST"])
def translate():
prompt = request.json["prompt"]
max_tokens=50
prompt = re.sub(r"^(?:.+?\b\s+?\b){20}", "", prompt)
response = translation_response(prompt)
return {'text': response}
```
How can I translate **eliminate the first 20 words** of the variable prompt into Python code?
Thanks a lot in advance.... | 2021/04/15 | [
"https://Stackoverflow.com/questions/67111664",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14880010/"
] | ```
import pandas as pd
```
Use `to_datetime()` method and convert your date column from string to datetime:
```
df['Date']=pd.to_datetime(df['Date'])
```
Finally use `apply()` method:
```
df['comm0']=df['Date'].apply(lambda x:1 if x==pd.to_datetime('2021-01-07') else 0)
```
Or as suggested by @anky:
Simply use:
```
df['comm0']=pd.to_datetime(df['Date']).eq('2021-01-07').astype(int)
```
Or
If you are familiar with `numpy` then you can also use after converting your Date columns to datetime:
```
import numpy as np
df['comm0']=np.where(df['Date']=='2021-01-07',1,0)
``` | It's a problem with types.
df['Date'] contains strings, not datetime objects, so when you compare each element with '2021-01-07' (another string) they differ because of the time information (00:00:00).
As a solution, you can convert the elements to datetime, as follows:
```
def int_21(x):
    if x == pd.to_datetime('2021-01-07'):
        return '1'
    else:
        return '0'

df['Date'] = pd.to_datetime(df['Date'])
df['comm0'] = df['Date'].apply(int_21)
```
Or you can still use string objects, but then the comparison value must have the same format as the dates:
```
def int_21(x):
    if x == '2021-01-07 00:00:00':
        return '1'
    else:
        return '0'
``` |
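Back to the original question of that row (dropping the first 20 words of `prompt`): besides the regex already shown in the question, a split/join version is often easier to read. A minimal sketch (my own suggestion, not from the answers above):
```
def drop_first_words(text, n=20):
    # split on whitespace, discard the first n tokens, re-join the rest
    words = text.split()
    return " ".join(words[n:])

print(drop_first_words("one two three four five", n=2))  # three four five
```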
39,815,551 | I am trying to make a program in Python that will accept a user's input and check if it is a Kaprekar number.
I'm still a beginner and have been having a lot of issues, but my main issue now, which I can't seem to solve, is how to add up all the possibilities in a list with only two variables. I'm probably not explaining it very well, so here is an example:
I have a list that contains the numbers
`['2', '0', '2', '5']`.
How would I make Python do `2 + 025`, `20 + 25` and `202 + 5`?
It would be inside an if/else statement, and as soon as a sum equals the user-inputted number, it would stop.
([Here](http://pastebin.com/Kg9bQq47) is what the entire code looks like if it helps- where it currently says `if 1 == 0:`, it should be adding them up.) | 2016/10/02 | [
"https://Stackoverflow.com/questions/39815551",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Say you start with
```
a = ['2', '0', '2', '5']
```
Then you can run
```
>>> [(a[: i], a[i: ]) for i in range(1, len(a))]
[(['2'], ['0', '2', '5']), (['2', '0'], ['2', '5']), (['2', '0', '2'], ['5'])]
```
to obtain all the possible contiguous splits.
If you want to process it further, you can change it to numbers via
```
>>> [(int(''.join(a[: i])), int(''.join(a[i: ]))) for i in range(1, len(a))]
[(2, 25), (20, 25), (202, 5)]
```
or add them up
```
>>> [int(''.join(a[: i])) + int(''.join(a[i: ])) for i in range(1, len(a))]
[27, 45, 207]
``` | Not a direct answer to your question, but you can write an expression to determine whether a number, N, is a Kaprekar number more concisely.
```
>>> N=45
>>> digits=str(N**2)
>>> Kaprekar=any([N==int(digits[:_])+int(digits[_:]) for _ in range(1,len(digits))])
>>> Kaprekar
True
``` |
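The same expression packaged as a reusable function (my own wrapping; the extra `> 0` check excludes degenerate splits such as 10² = 100 → 10 + 0, which the one-liner above would accept):
```
def is_kaprekar(n):
    digits = str(n * n)
    return any(n == int(digits[:i]) + int(digits[i:]) and int(digits[i:]) > 0
               for i in range(1, len(digits)))

print([n for n in range(1, 100) if is_kaprekar(n)])  # [9, 45, 55, 99]
```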
8,827,304 | I'm using Plone v4.1.2, and I'd like to know if there is a way to include more than one author in the byline of a page. I have two authors listed in ownership, but only one author is listed in the byline.
I'd like the byline to look something like this:
by First Author and Second Author — last modified Jan 11, 2012 01:53 PM — History
UPDATE - Thanks everyone for your replies. I managed to bungle my way through this (I've never used TAL before). I edited plone.belowcontenttitle.documentbyline as suggested by Giaccamo, and managed to learn a bit about TAL along the way. Here is the code that does what I needed (this replaces the existing tal:creator construct):
```
<span>
by
<span class="documentCreators"
tal:condition="context/Creators"
tal:repeat="creator context/Creators"
i18n:translate="text_creators">
<span tal:define="cond1 repeat/creator/start; cond2 repeat/creator/end"
tal:condition="python: not cond1 and not cond2" >, </span>
<span tal:define="cond1 repeat/creator/start; cond2 repeat/creator/end"
tal:condition="python: not cond1 and cond2" > and </span>
<tal:i18n i18n:translate="label_by_author">
<a href="#"
tal:attributes="href string:${context/@@plone_portal_state/navigation_root_url}/author/${creator}"
tal:content="creator"
tal:omit-tag="python:view.author() is None"
i18n:name="author">Roland Barthes</a>
</tal:i18n>
</span>
</span>
```
This puts the userid in the byline instead of the full name. I tried to get the full name, but after some time without success, I decided I could live with the userid. | 2012/01/11 | [
"https://Stackoverflow.com/questions/8827304",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1144225/"
] | In order to display more than one author, you'll need a little bit of coding:
That piece of the page is called a `viewlet`.
That specific viewlet is called `plone.belowcontenttitle.documentbyline`.
You can use [z3c.jbot](http://pypi.python.org/pypi/z3c.jbot) to override the viewlet template. Take a look at [this howto](https://weblion.psu.edu/trac/weblion/wiki/z3c.jbot) for usage. Another option is to customize the template through-the-web following [this tutorial](http://plone.org/documentation/manual/theme-reference/elements/visibleelements/plone.belowcontenttitle.documentbyline). | you could use the contributors- instead of the owners-field. they are listed by default in the docByLine. hth, i |
65,433,038 | So I'm trying to run the Django development server in a container, but I can't access it through my browser. I have 2 containers using the same Docker network: one with Postgres and the other with Django. I managed to ping both containers, successfully connect the two of them together, and run `./manage.py runserver` OK, but I can't `curl` it or open it in a browser.
Here is my Django docker file
```
FROM alpine:latest
COPY ./requirements.txt .
ADD ./parking/ /parking
RUN apk add --no-cache --virtual .build-deps python3-dev gcc py3-pip postgresql-dev py3-virtualenv musl-dev libc-dev linux-headers
RUN virtualenv /.env
RUN /.env/bin/pip install -r /requirements.txt
WORKDIR /parking
EXPOSE 8000 5432
```
I pulled the Postgres container from Docker Hub.
I ran Django with:
`docker run --name=django --network=app -p 127.4.3.1:6969:8000 -it dev/django:1.0`
I ran Postgres with:
`docker run --name=some-postgres --network=app -p 127.2.2.2:6969:5432 -e POSTGRES_PASSWORD=123 -e POSTGRES_DB=parking postgres`
Any help would be great. Thank you | 2020/12/24 | [
"https://Stackoverflow.com/questions/65433038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11386561/"
] | Think of it this way:
Your React application is the U-Haul truck that delivers **everything** from the Web Server (Back-End) to the Browser (Front-End)

Now you say you want everything wrapped in a (native) Web Component:
`<move-house></move-house>`
It is doable, but you as the developer have to develop all the **dependencies** yourself.
It starts by **fully** understanding what React is and does, so you can wrap **all** its behaviour.
**Unlike** other Frameworks (Angular, Vue, Svelte) React has no "*Make-it-a-Web-Component*" option,
**because** React, with its virtual DOM, is a totally different (and rather outdated) beast that doesn't comply with modern Web Standards. (today [Dec2020] React only scores **71%** on the Custom Elements Everywhere test)
See: <https://custom-elements-everywhere.com/libraries/react/results/results.html>
for what you as the developer have to fix, because React does not do that for you
Some say React compared to modern JavaScript technologies, Frameworks, Web Component libraries like Lit and Stencil or Svelte, is more like:
 | It is possible in react using direflow. <https://direflow.io/> |
19,037,928 | I am using python + beautifulsoup to parse html. My problem is that I have a variable amount of text items. In this case, for example, I want to extract 'Text 1', 'Text 2', ... 'Text 4'. In other webpages, there may be only 'Text 1' or possibly two, etc. So it changes. If the 'Text x's were contained in a tag, it would make my life easier. But they are not. I can access them using next and previous (or maybe nextSibling and previousSibling), but off the top of my head I don't know how to get all of them. The idea would be to (assuming the max. number I would ever encounter would be four) write the 'Text 1' to a file, then proceed all the way to 'Text 4'. That is in this case. In the case where there were only 'Text 1', I would write 'Text 1' to the file, and then just have blanks for 2-4. Any suggestions on what I should do?
```
<div id="DIVID" style="display: block; margin-left: 1em;">
<b>Header 1</b>
<br/>
Text 1
<br/>
Text 2
<br/>
Text 3
<br/>
Text 4
<br/>
<b>Header 2</b>
</div>
```
While I'm at it, I have a not-so-related question. Say I have a website that has a variable number of links that all link to html exactly like what I have above. This is not what this application is, but think craigslist - there are a number of links on a central page. I need to be able to access each of these pages in order to do my parsing. What would be a good approach to do this?
Thanks!
extra:
The next webpage might look like this:
```
<div id="DIVID2" style="display: block; margin-left: 1em;">
<b>Header 1</b>
<br/>
Different Text 1
<br/>
Different Text 2
<br/>
<b>Header 2</b>
</div>
```
Note the differences:
1. DIVID is now DIVID2. I can figure out the ending on DIVID based on other parsing on pages. This is not a problem.
2. I only have two items of text instead of four.
3. The text now is different.
Note the key similarity:
1. Header 1 and Header 2 are the same. These don't change. | 2013/09/26 | [
"https://Stackoverflow.com/questions/19037928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049545/"
] | You might try something like this:
```
>>> test ="""<b>Header 1</b>
<br/>
Text 1
<br/>
Text 2
<br/>
Text 3
<br/>
Text 4
<br/>
<b>Header 2</b>"""
>>> soup = BeautifulSoup(test)
>>> test = soup.find('b')
>>> desired_text = [x.strip() for x in str(test.parent).split('<br />')]
['<b>Header 1</b>', 'Text 1', 'Text 2', 'Text 3', 'Text 4', '<b>Header 2</b>']
```
Now you just need to separate by your 'Header' blocks, which I think is doable and I believe may get you started in the right direction.
As for your other question, you need to assemble a list of links and then iterate through them opening each one individually and processing how you will. This is a much broader question, though, so you should attempt some stuff and come back to refine what you have and ask a new question once you need some help on a specific issue.
---
Explanation on last line of code:
```
[x.strip() for x in str(test.parent).split('<br />')]
```
This takes my "test" node that I assigned above and grabs the parent. By turning into a string, I can "split" on the `<br>` tags, which makes those tags disappear and separates all the text we want separated. This creates a list where each list-item has the text we want and some '\n's.
Finally, what is probably most confusing is the list comprehension syntax, which looks like this:
```
some_list = [item for item in some_iterable]
```
This simply produces a list of "item"s all taken from "some\_iterable". In my list comprehension, I'm running through the list, taking each item in the list, and simply stripping off a newline (the `x.strip()` part). There are many ways to do this, by the way. | Here is a different solution. nextSibling can get parts of the structured document that follow a named tag.
```
from BeautifulSoup import BeautifulSoup
text="""
<b>Header 1</b>
<br/>
Text 1
<br/>
Text 2
<br/>
Text 3
<br/>
Text 4
<br/>
<b>Header 2</b>
"""
soup = BeautifulSoup(text)
for br in soup.findAll('br'):
    following = br.nextSibling
    print following.strip()
``` |
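For completeness, the modern `bs4` package (the successor to the `BeautifulSoup` 3 import used above) can flatten the div in one step with `stripped_strings`; a short sketch under that assumption:
```
from bs4 import BeautifulSoup

html = """<div id="DIVID"><b>Header 1</b><br/>Text 1<br/>Text 2<br/>
Text 3<br/>Text 4<br/><b>Header 2</b></div>"""

soup = BeautifulSoup(html, "html.parser")
# yields every text node with surrounding whitespace stripped, in order
print(list(soup.find("div").stripped_strings))
# ['Header 1', 'Text 1', 'Text 2', 'Text 3', 'Text 4', 'Header 2']
```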
19,037,928 | I am using python + beautifulsoup to parse html. My problem is that I have a variable amount of text items. In this case, for example, I want to extract 'Text 1', 'Text 2', ... 'Text 4'. In other webpages, there may be only 'Text 1' or possibly two, etc. So it changes. If the 'Text x's were contained in a tag, it would make my life easier. But they are not. I can access them using next and previous (or maybe nextSibling and previousSibling), but off the top of my head I don't know how to get all of them. The idea would be to (assuming the max. number I would ever encounter would be four) write the 'Text 1' to a file, then proceed all the way to 'Text 4'. That is in this case. In the case where there were only 'Text 1', I would write 'Text 1' to the file, and then just have blanks for 2-4. Any suggestions on what I should do?
```
<div id="DIVID" style="display: block; margin-left: 1em;">
<b>Header 1</b>
<br/>
Text 1
<br/>
Text 2
<br/>
Text 3
<br/>
Text 4
<br/>
<b>Header 2</b>
</div>
```
While I'm at it, I have a not-so-related question. Say I have a website that has a variable number of links that all link to html exactly like what I have above. This is not what this application is, but think craigslist - there are a number of links on a central page. I need to be able to access each of these pages in order to do my parsing. What would be a good approach to do this?
Thanks!
extra:
The next webpage might look like this:
```
<div id="DIVID2" style="display: block; margin-left: 1em;">
<b>Header 1</b>
<br/>
Different Text 1
<br/>
Different Text 2
<br/>
<b>Header 2</b>
</div>
```
Note the differences:
1. DIVID is now DIVID2. I can figure out the ending on DIVID based on other parsing on pages. This is not a problem.
2. I only have two items of text instead of four.
3. The text now is different.
Note the key similarity:
1. Header 1 and Header 2 are the same. These don't change. | 2013/09/26 | [
"https://Stackoverflow.com/questions/19037928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049545/"
] | You can just combine everything using `get_text`:
```
test ="""<div id='DIVID'>
<b>Header 1</b>
<br/>
Text 1
<br/>
Text 2
<br/>
Text 3
<br/>
Text 4
<br/>
<b>Header 2</b>
</div>"""
def divid(tag):
    return tag.name=='div' and tag.has_attr('id') and tag['id'].startswith('DIVID')
soup = BeautifulSoup(test)
print soup.find(divid).get_text()
```
which will give you
```
Header 1
Text 1
Text 2
Text 3
Text 4
Header 2
``` | Here is a different solution. nextSibling can get parts of the structured document that follow a named tag.
```
from BeautifulSoup import BeautifulSoup
text="""
<b>Header 1</b>
<br/>
Text 1
<br/>
Text 2
<br/>
Text 3
<br/>
Text 4
<br/>
<b>Header 2</b>
"""
soup = BeautifulSoup(text)
for br in soup.findAll('br'):
    following = br.nextSibling
    print following.strip()
``` |
10,899,197 | ```
#include <ext/hash_map>
using namespace std;
class hash_t : public __gnu_cxx::hash_map<const char*, list<time_t> > { };
hash_t hash;
...
```
I'm having some problems using this hash\_map. The const char\* I'm using as a key is always a 12-digit number with the format 58412xxxxxxx. I know there are 483,809 different numbers, so that should be the hash\_map size after inserting everything, but I'm only getting 193 entries.
```
hash_t::iterator it = hash.find(origen.c_str());
if (it != hash.end()) { //Found
x++;
(*it).second.push_front(fecha);
}
else { //Not found
y++;
list<time_t> lista(1, fecha);
hash.insert(make_pair(origen.c_str(), lista));
}
```
The same procedure works perfectly using Python dictionaries (I'm getting the correct number of entries) but is not even close using C++. Is it possible that since every key begins with 58412 (actually almost every key, but not all of them, which is the reason I don't want to chop those 5 chars), I'm getting a lot of collisions? | 2012/06/05 | [
"https://Stackoverflow.com/questions/10899197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1430913/"
] | `const char*` is not good for a key, since you now have pointer comparison instead of string comparison (also, you probably have dangling pointers, the return value of `c_str()` is not usable long-term).
Use `hash_map<std::string, list<time_t> >` instead. | If your key is `char*`, you are comparing no the strings, but pointers, which makes your hashmap work differently than what you expect. Consider using `const std::string` for the keys, so they are compared using lexicographical ordering |
39,599,596 | I'm writing a simple calculator program that will let a user add a list of integers together, as a kind of introduction to Python syntax. I want the program to allow the user to add as many numbers together as they want. My error is:
```
Traceback (most recent call last):
File "Calculator.py", line 17, in <module>
addition = sum(inputs)
TypeError: unsupported operand type(s) for +: 'int' and 'str'
```
My code is:
```
#declare variables
inputs = []
done = False
#while loop for inputting numbers
while done == False:
    value = raw_input()
    #escape loop if user enters done
    if value == "Done":
        print inputs
        done = True
    else:
        inputs.append(value)
        addition = sum(inputs)
        print addition
``` | 2016/09/20 | [
"https://Stackoverflow.com/questions/39599596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6854420/"
] | [`raw_input`](https://docs.python.org/2/library/functions.html#raw_input) returns strings, not numbers. [`sum`](https://docs.python.org/2/library/functions.html#sum) operates only on numbers.
You can convert each item to an int as you add it to the list: `inputs.append(int(value))`. If you use `float` rather than `int` then non-integer numbers will work too. In either case, this will produce an error if the user enters something that is neither `Done` nor an integer. You can use `try`/`except` to deal with that, but that's probably out of the scope of this question. | When using `raw_input()` you're storing a string in `value`. Convert it to an int before appending it to your list, e.g.
```
inputs.append( int( value ) )
``` |
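Picking up the `try`/`except` idea mentioned above, a minimal sketch (Python 2, to match the question) that converts as it stores and rejects input that is neither an integer nor "Done":
```
inputs = []
while True:
    value = raw_input()
    if value == "Done":
        break
    try:
        inputs.append(int(value))  # convert before storing
    except ValueError:
        print "Please enter an integer or 'Done'"

print inputs
print sum(inputs)
```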
63,640,435 | SSO is not enabled for bot on Teams channel.
I am developing a bot on the Bot Framework and Azure Bot Service, using Python 3.7. I need user authentication in the Microsoft system to use the Graph API, etc.
Previously successfully used the [example](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/python) 18.bot-authentication and 24.bot-authentication-msgraph.
And this [guide](https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-authentication?view=azure-bot-service-4.0&tabs=aadv2%2Cpython)
I got the error “SSO is not enabled for bot”. I created new certificates and a new server with a bot built from the source code of example 18.bot-authentication, created a new channel in Azure, and tried to log in from Teams, but I have the same problem. In the Bot Framework Emulator and in the web-chat test, both authentication flows work; Teams wants SSO.
Any tips? Thank you | 2020/08/28 | [
"https://Stackoverflow.com/questions/63640435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13382091/"
] | Please check the following articles:
<https://learn.microsoft.com/en-us/power-virtual-agents/advanced-end-user-authentication>
<https://learn.microsoft.com/en-us/power-virtual-agents/configuration-end-user-authentication>
<https://learn.microsoft.com/en-us/power-virtual-agents/publication-add-bot-to-microsoft-teams>
The second article explains step by step how you can set up a PVA bot for use in Microsoft Teams.
Please be aware of this part:
"Currently, if your bot supports end-user authentication, the user will not be able to explicitly sign out. This will fail the Microsoft Teams AppSource certification if you are publishing your bot in the Seller Dashboard. This does not apply to personal or tenant usage of the bot. Learn more at Publish your Microsoft Teams app and AppSource Validation Policy." | Please refer to the Teams-Auth [sample](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/python/46.teams-auth) and the [documentation](https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/authentication/add-authentication?tabs=dotnet%2Cdotnet-sample) which helps you get started with authenticating a bot in Microsoft Teams as Teams behaves slightly differently than other channels. Presently, you can enable [Single Sign-On(SSO)](https://learn.microsoft.com/en-us/microsoftteams/platform/tabs/how-to/authentication/auth-aad-sso) in a custom tab. Microsoft Teams is currently working on the feature to enable SSO for bots. |
24,136,733 | ```
process_name = "CCC.exe"
for proc in psutil.process_iter():
    if proc.name == process_name:
        print ("have")
    else:
        print ("Dont have")
```
I know for a fact that CCC.exe is running. I tried this code with both Python 2.7 and 3.4.
I have imported psutil as well. However, the process is there, yet it prints "Dont have". | 2014/06/10 | [
"https://Stackoverflow.com/questions/24136733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016977/"
] | Here is the modified version that worked for me on Windows 7 with python v2.7
You were doing it the wrong way here: `if proc.name == process_name:` in your code. Try `print proc.name` and you'll see why your code didn't work as you expected.
Code:
```
import psutil
process_name = "System"
for proc in psutil.process_iter():
    process = psutil.Process(proc.pid)  # Get the process info using PID
    pname = process.name()  # Here is the process name
    # print pname
    if pname == process_name:
        print ("have")
    else:
        print ("Dont have")
```
[Here](https://pypi.python.org/pypi?%3aaction=display&name=psutil#downloads) are some examples of how to use psutil. I just read them and figured out this solution; maybe there is a better solution. I hope it was helpful. | I solved it by using WMI instead of psutil.
<https://pypi.python.org/pypi/WMI/>
Install it on Windows, then:
```
import wmi
c = wmi.WMI()
for process in c.Win32_Process():
    if "a" in process.Name:
        print (process.ProcessId, process.Name)
```
24,136,733 | ```
process_name = "CCC.exe"
for proc in psutil.process_iter():
    if proc.name == process_name:
        print ("have")
    else:
        print ("Dont have")
```
I know for a fact that CCC.exe is running. I tried this code with both Python 2.7 and 3.4.
I have imported psutil as well. However, the process is there, yet it prints "Dont have". | 2014/06/10 | [
"https://Stackoverflow.com/questions/24136733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016977/"
] | `name` is a method of `proc`:
```
process_name = "CCC.exe"
for proc in psutil.process_iter():
    if proc.name() == process_name:
        print ("have")
    else:
        print ("Dont have")
``` | Here is the modified version that worked for me on Windows 7 with python v2.7
You were doing it the wrong way here: `if proc.name == process_name:` in your code. Try `print proc.name` and you'll see why your code didn't work as you expected.
Code:
```
import psutil
process_name = "System"
for proc in psutil.process_iter():
    process = psutil.Process(proc.pid)  # Get the process info using PID
    pname = process.name()  # Here is the process name
    # print pname
    if pname == process_name:
        print ("have")
    else:
        print ("Dont have")
```
[Here](https://pypi.python.org/pypi?%3aaction=display&name=psutil#downloads) are some examples of how to use psutil. I just read them and figured out this solution; maybe there is a better solution. I hope it was helpful. |
24,136,733 | ```
process_name = "CCC.exe"
for proc in psutil.process_iter():
    if proc.name == process_name:
        print ("have")
    else:
        print ("Dont have")
```
I know for a fact that CCC.exe is running. I tried this code with both Python 2.7 and 3.4.
I have imported psutil as well. However, the process is there, yet it prints "Dont have". | 2014/06/10 | [
"https://Stackoverflow.com/questions/24136733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016977/"
] | `name` is a method of `proc`:
```
process_name = "CCC.exe"
for proc in psutil.process_iter():
    if proc.name() == process_name:
        print ("have")
    else:
        print ("Dont have")
``` | I solved it by using WMI instead of psutil.
<https://pypi.python.org/pypi/WMI/>
Install it on Windows, then:
```
import wmi
c = wmi.WMI()
for process in c.Win32_Process():
    if "a" in process.Name:
        print (process.ProcessId, process.Name)
```
57,640,451 | I'm trying to iterate over each row in a Pandas dataframe named 'cd'.
If a specific cell in a row, e.g. [row, empl\_accept], contains a substring, I then update the value of another cell, e.g. [row, empl\_accept\_a], in the same dataframe.
```py
for row in range(0, len(cd.index), 1):
    if 'Master' in cd.at[row, empl_accept]:
        cd.at[row, empl_accept_a] = '1'
    else:
        cd.at[row, empl_accept_a] = '0'
```
The code above is not working, and the Jupyter notebook displays this error:
```py
TypeError Traceback (most recent call last)
<ipython-input-70-21b1f73e320c> in <module>
1 for row in range(0,len(cd.index),1):
----> 2 if 'Master' in cd.at[row,empl_accept]:
3 cd.at[row,empl_accept_a] = '1'
4 else:
5 cd.at[row,empl_accept_a] = '0'
TypeError: argument of type 'float' is not iterable
```
I'm not really sure what the problem is, as the for loop contains no float variable. | 2019/08/24 | [
"https://Stackoverflow.com/questions/57640451",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10609069/"
] | Please do *not* use loops for this. You can do this in bulk with:
```
cd['empl_accept_a'] = cd['empl_accept'].str.contains('Master').astype(int).astype(str)
```
This will store `'0'` and `'1'` in the column. That being said, I am not convinced that storing these as strings is a good idea. You can just store them as `bool`s with:
```
cd['empl_accept_a'] = cd['empl_accept'].str.contains('Master')
```
For example:
```
>>> cd
empl_accept empl_accept_a
0 Master True
1 Slave False
2 Slave False
3 Master Windu True
``` | You need to check what value your dataframe holds at [row, empl\_accept]; most likely some cell contains a numeric value such as NaN (which is a float). Just print the value and you'll see the problem, if any.
```
print (cd.at[row,empl_accept])
``` |
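If the printed value turns out to be `NaN` — which pandas stores as a float, and which produces exactly this `TypeError` — a hedged sketch of a guard (assuming the column is literally named `'empl_accept'`):
```
import pandas as pd

for row in cd.index:
    cell = cd.at[row, 'empl_accept']
    # NaN cells are floats, so test for missing values before using `in`
    if pd.notna(cell) and 'Master' in str(cell):
        cd.at[row, 'empl_accept_a'] = '1'
    else:
        cd.at[row, 'empl_accept_a'] = '0'
```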
52,338,706 | I have already split the data into test and training sets in different folders. Now I need to load the patient data. Each patient has 8 images.
```py
def load_dataset(root_dir, split):
    """
    load the data set numpy arrays saved by the preprocessing script
    :param root_dir: path to input data
    :param split: defines whether to load the training or test set
    :return: data: dictionary containing one dictionary ({'data', 'seg', 'pid'}) per patient
    """
    in_dir = os.path.join(root_dir, split)
    data_paths = [os.path.join(in_dir, f) for f in os.listdir(in_dir)]
    data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths]
    pids = [ii.split('/')[-1].split('.')[0] for ii in data_paths]
    data = OrderedDict()
    for ix, pid in enumerate(pids):
        data[pid] = {'data': data_and_seg_arr[ix][..., 0], 'seg': data_and_seg_arr[ix][..., 1], 'pid': pid}
    return data
```
But, the error said:
```
File "/home/zhe/Research/Seg/heart_seg/data_loader.py", line 61, in load_dataset
data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths]
File "/home/zhe/Research/Seg/heart_seg/data_loader.py", line 61, in <listcomp>
data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths]
File "/home/zhe/anaconda3/envs/tf_env/lib/python3.6/site-packages/numpy/lib/npyio.py", line 372, in load
fid = open(file, "rb")
IsADirectoryError: [Errno 21] Is a directory: './data/preprocessed_data/train/Patient009969'
```
It is already a file name, not a directory. Thanks! | 2018/09/14 | [
"https://Stackoverflow.com/questions/52338706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9403249/"
] | It seems that `./data/preprocessed_data/train/Patient009969` is a directory, not a file.
`os.listdir()` returns both files and directories.
Maybe try using `os.walk()` instead. It treats files and directories separately, and can recurse inside the subdirectories to find more files in an iterative way:
```
data_paths = [os.path.join(pth, f)
              for pth, dirs, files in os.walk(in_dir) for f in files]
``` | Do you have both files and directories inside your path? `os.listdir` will list both files and directories, so when you try to open a directory with `np.load` it will give that error. You can filter only files to avoid the error:
```
data_paths = [os.path.join(in_dir, f) for f in os.listdir(in_dir)]
data_paths = [i for i in data_paths if os.path.isfile(i)]
```
Or all together in a single line:
```
data_paths = [i for i in (os.path.join(in_dir, f) for f in os.listdir(in_dir)) if os.path.isfile(i)]
``` |
52,338,706 | I have already split the data into test and training sets in different folders. Now I need to load the patient data. Each patient has 8 images.
```py
def load_dataset(root_dir, split):
    """
    load the data set numpy arrays saved by the preprocessing script
    :param root_dir: path to input data
    :param split: defines whether to load the training or test set
    :return: data: dictionary containing one dictionary ({'data', 'seg', 'pid'}) per patient
    """
    in_dir = os.path.join(root_dir, split)
    data_paths = [os.path.join(in_dir, f) for f in os.listdir(in_dir)]
    data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths]
    pids = [ii.split('/')[-1].split('.')[0] for ii in data_paths]
    data = OrderedDict()
    for ix, pid in enumerate(pids):
        data[pid] = {'data': data_and_seg_arr[ix][..., 0], 'seg': data_and_seg_arr[ix][..., 1], 'pid': pid}
    return data
```
But, the error said:
```
File "/home/zhe/Research/Seg/heart_seg/data_loader.py", line 61, in load_dataset
data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths]
File "/home/zhe/Research/Seg/heart_seg/data_loader.py", line 61, in <listcomp>
data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths]
File "/home/zhe/anaconda3/envs/tf_env/lib/python3.6/site-packages/numpy/lib/npyio.py", line 372, in load
fid = open(file, "rb")
IsADirectoryError: [Errno 21] Is a directory: './data/preprocessed_data/train/Patient009969'
```
It is already a file name, not a directory. Thanks! | 2018/09/14 | [
"https://Stackoverflow.com/questions/52338706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9403249/"
] | Do you have both files and directories inside your path? `os.listdir` will list both files and directories, so when you try to open a directory with `np.load` it will give that error. You can filter only files to avoid the error:
```
data_paths = [os.path.join(in_dir, f) for f in os.listdir(in_dir)]
data_paths = [i for i in data_paths if os.path.isfile(i)]
```
Or all together in a single line:
```
data_paths = [i for i in (os.path.join(in_dir, f) for f in os.listdir(in_dir)) if os.path.isfile(i)]
``` | I had the same problem, but I resolved it by changing my path from `Data/Train_Data/myDataset/(my images)` to `Data/Train_Data/(my images)`, where the Python script is in the same directory as Data.
Hope this helps.
52,338,706 | I have already split the data into test and training sets in different folders. Now I need to load the patient data. Each patient has 8 images.
```py
def load_dataset(root_dir, split):
    """
    load the data set numpy arrays saved by the preprocessing script
    :param root_dir: path to input data
    :param split: defines whether to load the training or test set
    :return: data: dictionary containing one dictionary ({'data', 'seg', 'pid'}) per patient
    """
    in_dir = os.path.join(root_dir, split)
    data_paths = [os.path.join(in_dir, f) for f in os.listdir(in_dir)]
    data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths]
    pids = [ii.split('/')[-1].split('.')[0] for ii in data_paths]
    data = OrderedDict()
    for ix, pid in enumerate(pids):
        data[pid] = {'data': data_and_seg_arr[ix][..., 0], 'seg': data_and_seg_arr[ix][..., 1], 'pid': pid}
    return data
```
But, the error said:
```
File "/home/zhe/Research/Seg/heart_seg/data_loader.py", line 61, in load_dataset
data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths]
File "/home/zhe/Research/Seg/heart_seg/data_loader.py", line 61, in <listcomp>
data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths]
File "/home/zhe/anaconda3/envs/tf_env/lib/python3.6/site-packages/numpy/lib/npyio.py", line 372, in load
fid = open(file, "rb")
IsADirectoryError: [Errno 21] Is a directory: './data/preprocessed_data/train/Patient009969'
```
It is already a file name, not a directory. Thanks! | 2018/09/14 | [
"https://Stackoverflow.com/questions/52338706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9403249/"
] | It seems that `./data/preprocessed_data/train/Patient009969` is a directory, not a file.
`os.listdir()` returns both files and directories.
Maybe try using `os.walk()` instead. It treats files and directories separately, and can recurse inside the subdirectories to find more files in an iterative way:
```
data_paths = [os.path.join(pth, f)
              for pth, dirs, files in os.walk(in_dir) for f in files]
``` | I had the same problem, but I resolved it by changing my path from `Data/Train_Data/myDataset/(my images)` to `Data/Train_Data/(my images)`, where the Python script is in the same directory as Data.
Hope this helps.
57,690,881 | Interested in the scala spark implementation of this
[split-column-of-list-into-multiple-columns-in-the-same-pyspark-dataframe](https://stackoverflow.com/questions/49650907/split-column-of-list-into-multiple-columns-in-the-same-pyspark-dataframe)
Given this Dataframe:
```
| X | Y|
+--------------------+-------------+
| rent|[1,2,3......]|
| is_rent_changed|[4,5,6......]|
| phone|[7,8,9......]|
```
I want a new Dataframe with exploded values, mapped to my provided col names:
```
colNames = ['cat','dog','mouse'....]
| Column|cat |dog |mouse |.......|
+--------------------+---|---|--------|-------|
| rent|1 |2 |3 |.......|
| is_rent_changed|4 |5 |6 |.......|
| phone|7 |8 |9 |.......|
```
Tried:
```
val out = df.select(col("X"),explode($"Y"))
```
But it's the wrong format, and I don't know how to map it to my colNames list:
```
X | Y |
---------------|---|
rent |1 |
rent |2 |
rent |3 |
. |. |
. |. |
is_rent_changed|4 |
is_rent_changed|5 |
```
In the link above, the python solution was to use a list comprehension:
```
univar_df10.select([univar_df10.Column] + [univar_df10.Quantile[i] for i in range(length)])
```
But it doesn't show how to use a provided column name list, given that the column names there are just the indexes of the columns. | 2019/08/28 | [
"https://Stackoverflow.com/questions/57690881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2800939/"
] | I modified the loss functions and used them as *metrics* in the compile function.
```
def recon_loss(inputs, outputs):
    reconstruction_loss = original_dim * binary_crossentropy(inputs, outputs)
    return K.mean(reconstruction_loss)

def latent_loss(inputs, outputs):
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return K.mean(kl_loss)

def total_loss(inputs, outputs):
    reconstruction_loss = original_dim * binary_crossentropy(inputs, outputs)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return K.mean(reconstruction_loss + kl_loss)

vae.compile(optimizer='adam', loss=total_loss, metrics=[recon_loss, latent_loss])
```
Now, the model returns reconstruction, latent and total losses for both training and validation data sets. | Please check the type of each loss in your losses dictionary.
```
print (type(losses['recon_loss']))
``` |
56,034,031 | I am a new user of Python and Neo4j. I just want to run a Python file in PyCharm and connect to Neo4j, but the import of py2neo never works. I tried to use virtualenv, but it still does not work. I have tried putting my .py file inside the venv folder and outside of it, and neither works.
I really did install py2neo, and the version is the latest. How can I solve this problem?
My code:
```
from py2neo import Graph, Node, Relationship
graph = Graph("http://localhost:7474")
jack = Node("Person", name="Jack")
nicole = Node("Person",name="Nicole")
tina = Node("Person", name="Tina")
graph.create(Relationship(nicole, "KNOWS",jack))
graph.create(Relationship(nicole, "KNOWS",tina))
graph.create(Relationship(tina, "KNOWS",jack))
graph.create(Relationship(jack, "KNOWS",tina))
Error:
Traceback (most recent call last):
File "/Users/huangjingzhan/PycharmProjects/untitled2/venv/neo4j.py", line 1, in <module>
from py2neo import Graph, Node, Relationship
ModuleNotFoundError: No module named 'py2neo'
``` | 2019/05/08 | [
"https://Stackoverflow.com/questions/56034031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11467790/"
] | Check which Python version is configured to run the project, and make sure the module is installed for that version.
Here is how:
[Pycharm](https://www.jetbrains.com/help/idea/configuring-local-python-interpreters.html) | You need to install py2neo in the virtual environment, if you haven't installed it yet,
and check your Python version on your machine and in the project.
```
pip install py2neo
``` |
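To confirm which interpreter actually runs your script (and therefore which environment pip must install py2neo into), a quick hedged check you can drop into the file:
```
import sys
print(sys.executable)  # path of the interpreter PyCharm is using

# Then install into exactly that interpreter from a terminal:
#   /path/to/that/python -m pip install py2neo
```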
56,034,031 | I am a new user of Python and Neo4j. I just want to run a Python file in PyCharm and connect to Neo4j, but the import of py2neo never works. I tried to use virtualenv, but it still does not work. I have tried putting my .py file inside the venv folder and outside of it, and neither works.
I really did install py2neo, and the version is the latest. How can I solve this problem?
My code:
```
from py2neo import Graph, Node, Relationship
graph = Graph("http://localhost:7474")
jack = Node("Person", name="Jack")
nicole = Node("Person",name="Nicole")
tina = Node("Person", name="Tina")
graph.create(Relationship(nicole, "KNOWS",jack))
graph.create(Relationship(nicole, "KNOWS",tina))
graph.create(Relationship(tina, "KNOWS",jack))
graph.create(Relationship(jack, "KNOWS",tina))
Error:
Traceback (most recent call last):
File "/Users/huangjingzhan/PycharmProjects/untitled2/venv/neo4j.py", line 1, in <module>
from py2neo import Graph, Node, Relationship
ModuleNotFoundError: No module named 'py2neo'
``` | 2019/05/08 | [
"https://Stackoverflow.com/questions/56034031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11467790/"
] | Check which Python version is configured to run the project, and make sure the module is installed for that version.
Here is how:
[Pycharm](https://www.jetbrains.com/help/idea/configuring-local-python-interpreters.html) | Go to PyCharm Preferences → Plugins, look for Graph Database Support, and install the plugin; then it should work.
13,584,524 | In the old world I had a pretty ideal development setup going to work together with a webdesigner. Keep in mind we mostly do small/fast projects, so this is how it worked:
* I have a staging site on a server (Webfaction or other)
* Designer accesses that site and edits templates and assets to his satisfaction
* I SSH in regularly to check everything in to source control, update files from upstream, resolve conflicts
It works brilliantly because the designer does not need to learn git, python, package tools, syncdb, migrations etc. And there's only the one so we don't have any conflicts on staging either.
Now the problem is in the new world under Heroku, this is not possible. Or is it? In any way, I would like your advice on a development setup that caters to those who are not technical. | 2012/11/27 | [
"https://Stackoverflow.com/questions/13584524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/102315/"
] | Reformat a string to display it as a MAC address:
```
var macadres = "0018103AB839";
var regex = "(.{2})(.{2})(.{2})(.{2})(.{2})(.{2})";
var replace = "$1:$2:$3:$4:$5:$6";
var newformat = Regex.Replace(macadres, regex, replace);
// newformat = "00:18:10:3A:B8:39"
```
If you want to validate the input string use this regex (thanks to J0HN):
```
var regex = String.Concat(Enumerable.Repeat("([a-fA-F0-9]{2})", 6));
``` | Suppose that we have the MAC address stored in a long. This is how to get it as a formatted string:
```
ulong lMacAddr = 0x0018103AB839L;
string strMacAddr = String.Format("{0:X2}:{1:X2}:{2:X2}:{3:X2}:{4:X2}:{5:X2}",
(lMacAddr >> (8 * 5)) & 0xff,
(lMacAddr >> (8 * 4)) & 0xff,
(lMacAddr >> (8 * 3)) & 0xff,
(lMacAddr >> (8 * 2)) & 0xff,
(lMacAddr >> (8 * 1)) & 0xff,
(lMacAddr >> (8 * 0)) & 0xff);
``` |
13,584,524 | In the old world I had a pretty ideal development setup going to work together with a webdesigner. Keep in mind we mostly do small/fast projects, so this is how it worked:
* I have a staging site on a server (Webfaction or other)
* Designer accesses that site and edits templates and assets to his satisfaction
* I SSH in regularly to check everything in to source control, update files from upstream, resolve conflicts
It works brilliantly because the designer does not need to learn git, python, package tools, syncdb, migrations etc. And there's only the one so we don't have any conflicts on staging either.
Now the problem is in the new world under Heroku, this is not possible. Or is it? In any way, I would like your advice on a development setup that caters to those who are not technical. | 2012/11/27 | [
"https://Stackoverflow.com/questions/13584524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/102315/"
] | Reformat a string to display it as a MAC address:
```
var macadres = "0018103AB839";
var regex = "(.{2})(.{2})(.{2})(.{2})(.{2})(.{2})";
var replace = "$1:$2:$3:$4:$5:$6";
var newformat = Regex.Replace(macadres, regex, replace);
// newformat = "00:18:10:3A:B8:39"
```
If you want to validate the input string use this regex (thanks to J0HN):
```
var regex = String.Concat(Enumerable.Repeat("([a-fA-F0-9]{2})", 6));
``` | ```
string input = "0018103AB839";
var output = string.Join(":", Enumerable.Range(0, 6)
.Select(i => input.Substring(i * 2, 2)));
``` |
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | You can replace the condition with itself:
```
s.format(table_name='users', condition='{condition}')
```
which gives us:
```
SELECT * FROM users WHERE {condition}
```
You can use this string later to fill in the condition. | I have been using this function for some time now, which casts the `Dict` of inputted keyword arguments as a `SafeDict` object that subclasses `Dict`.
```
def safeformat(str, **kwargs):
    class SafeDict(dict):
        def __missing__(self, key):
            return '{' + key + '}'
    replacements = SafeDict(**kwargs)
    return str.format_map(replacements)
```
I didn't make this up, but I think it's a good solution. The one downside is that you can't call `mystring.safeformat(**kwargs)` - of course, you have to call `safeformat(mystring,**kwargs)`.
---
If you're really interested in being able to call `mystr.safeformat(**kwargs)` (which I am interested in doing!), consider using this:
```
class safestr(str):
    def safeformat(self, **kwargs):
        class SafeDict(dict):
            def __missing__(self, key):
                return '{' + key + '}'
        replacements = SafeDict(**kwargs)
        return safestr(self.format_map(replacements))
```
You can then create a `safestr` object as `a = safestr(mystr)` (for some `str` called `mystr`), and you can in fact call
`mystr.safeformat(**kwargs)`.
e.g.
```
mysafestr = safestr('Hey, {friendname}. I am {myname}.')
print(mysafestr.safeformat(friendname='Bill'))
```
prints
`Hey, Bill. I am {myname}.`
This is cool in some ways - you can pass around a partially-formatted `safestr`, and could call `safeformat` in different contexts. I especially like to call `mystr.format(**locals())` to format with the appropriate namespace variables; the `safeformat` method is especially useful in this case, because I don't always carefully look through my namespace.
The main issue with this is that inherited methods from `str` return a `str` object, not a `safestr`. So `mysafestr.lower().safeformat(**kwargs)` fails. Of course you could cast as a `safestr` when using `safeformat`:
`safestr(mysafestr.lower()).safeformat(**kwargs)`,
but that's less than ideal looking. I wish Python just gave the `str` class a `safeformat` method of some kind. |
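One caveat worth noting: `str.format_map` only exists on Python 3 (3.2+), while the question targets Python 2.7. A hedged 2.7-compatible equivalent built on `string.Formatter`, which routes key lookups through the same `SafeDict`:
```
import string

def safeformat(s, **kwargs):
    class SafeDict(dict):
        def __missing__(self, key):
            return '{' + key + '}'
    # Formatter.vformat looks keys up in the mapping, triggering __missing__
    return string.Formatter().vformat(s, (), SafeDict(**kwargs))

print(safeformat("SELECT * FROM {table_name} WHERE {condition}",
                 table_name="users"))
# SELECT * FROM users WHERE {condition}
```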
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | You can replace the condition with itself:
```
s.format(table_name='users', condition='{condition}')
```
which gives us:
```
SELECT * FROM users WHERE {condition}
```
You can use this string later to fill in the condition. | This builds on @Karoly Horvath's answer to add support for index keys and attribute access on named keys:
```
import re
def my_format(template, *args, **kwargs):
    next_index = len(args)
    while True:
        try:
            return template.format(*args, **kwargs)
        except KeyError as e:
            key = e.args[0]
            finder = '\{' + key + '.*?\}'
            template = re.sub(finder, '{\g<0>}', template)
        except IndexError as e:
            args = args + ('{' + str(next_index) + '}',)
            next_index += 1
```
So to test it out:
```
class MyObj:
    bar = 'baz'
    def __repr__(self):
        return '<MyObj instance>'
my_obj = MyObj()
template = '{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}'
print my_format(template)
print my_format(template, '1st', '2nd', missing='Not Missing')
print my_format(template, foo=my_obj)
```
Output:
```
{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}
1st, 2nd, {foo}, {foo.bar}, 1st, {10}, Not Missing
{0}, {1}, <MyObj instance>, baz, {0}, {10}, {missing}
``` |
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | You can't use the format function because it will raise a KeyError.
`string.Template` supports safe substitution:
```
from string import Template
s = Template('SELECT * FROM $table_name WHERE $condition')
s.safe_substitute(table_name='users')
'SELECT * FROM users WHERE $condition'
```
If you use plain variable names (no format specifiers, no indexing, etc..) this will also work (thanks @Simeon Visser for the idea):
```
def myformat(s, *args, **kwargs):
    while True:
        try:
            return s.format(*args, **kwargs)
        except KeyError as e:
            e = e.args[0]
            kwargs[e] = "{%s}" % e
s = "SELECT * FROM {table_name} WHERE {condition}"
myformat(s, table_name="users")
'SELECT * FROM users WHERE {condition}'
``` | I have been using this function for some time now, which casts the `Dict` of inputted keyword arguments as a `SafeDict` object that subclasses `Dict`.
```
def safeformat(str, **kwargs):
    class SafeDict(dict):
        def __missing__(self, key):
            return '{' + key + '}'
    replacements = SafeDict(**kwargs)
    return str.format_map(replacements)
```
I didn't make this up, but I think it's a good solution. The one downside is that you can't call `mystring.safeformat(**kwargs)` - of course, you have to call `safeformat(mystring,**kwargs)`.
---
If you're really interested in being able to call `mystr.safeformat(**kwargs)` (which I am interested in doing!), consider using this:
```
class safestr(str):
    def safeformat(self, **kwargs):
        class SafeDict(dict):
            def __missing__(self, key):
                return '{' + key + '}'
        replacements = SafeDict(**kwargs)
        return safestr(self.format_map(replacements))
```
You can then create a `safestr` object as `a = safestr(mystr)` (for some `str` called `mystr`), and you can in fact call
`mystr.safeformat(**kwargs)`.
e.g.
```
mysafestr = safestr('Hey, {friendname}. I am {myname}.')
print(mysafestr.safeformat(friendname='Bill'))
```
prints
`Hey, Bill. I am {myname}.`
This is cool in some ways - you can pass around a partially-formatted `safestr`, and could call `safeformat` in different contexts. I especially like to call `mystr.format(**locals())` to format with the appropriate namespace variables; the `safeformat` method is especially useful in this case, because I don't always carefully look through my namespace.
The main issue with this is that inherited methods from `str` return a `str` object, not a `safestr`. So `mysafestr.lower().safeformat(**kwargs)` fails. Of course you could cast as a `safestr` when using `safeformat`:
`safestr(mysafestr.lower()).safeformat(**kwargs)`,
but that's less than ideal looking. I wish Python just gave the `str` class a `safeformat` method of some kind. |
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | You can't use the format function because it will raise a KeyError.
`string.Template` supports safe substitution:
```
from string import Template
s = Template('SELECT * FROM $table_name WHERE $condition')
s.safe_substitute(table_name='users')
'SELECT * FROM users WHERE $condition'
```
If you use plain variable names (no format specifiers, no indexing, etc..) this will also work (thanks @Simeon Visser for the idea):
```
def myformat(s, *args, **kwargs):
    while True:
        try:
            return s.format(*args, **kwargs)
        except KeyError as e:
            e = e.args[0]
            kwargs[e] = "{%s}" % e
s = "SELECT * FROM {table_name} WHERE {condition}"
myformat(s, table_name="users")
'SELECT * FROM users WHERE {condition}'
``` | This builds on @Karoly Horvath's answer to add support for index keys and attribute access on named keys:
```
import re
def my_format(template, *args, **kwargs):
    next_index = len(args)
    while True:
        try:
            return template.format(*args, **kwargs)
        except KeyError as e:
            key = e.args[0]
            finder = '\{' + key + '.*?\}'
            template = re.sub(finder, '{\g<0>}', template)
        except IndexError as e:
            args = args + ('{' + str(next_index) + '}',)
            next_index += 1
```
So to test it out:
```
class MyObj:
    bar = 'baz'
    def __repr__(self):
        return '<MyObj instance>'
my_obj = MyObj()
template = '{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}'
print my_format(template)
print my_format(template, '1st', '2nd', missing='Not Missing')
print my_format(template, foo=my_obj)
```
Output:
```
{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}
1st, 2nd, {foo}, {foo.bar}, 1st, {10}, Not Missing
{0}, {1}, <MyObj instance>, baz, {0}, {10}, {missing}
``` |
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | You can't use the format function because it will raise a KeyError.
`string.Template` supports safe substitution:
```
from string import Template
s = Template('SELECT * FROM $table_name WHERE $condition')
s.safe_substitute(table_name='users')
'SELECT * FROM users WHERE $condition'
```
If you use plain variable names (no format specifiers, no indexing, etc..) this will also work (thanks @Simeon Visser for the idea):
```
def myformat(s, *args, **kwargs):
    while True:
        try:
            return s.format(*args, **kwargs)
        except KeyError as e:
            e = e.args[0]
            kwargs[e] = "{%s}" % e
s = "SELECT * FROM {table_name} WHERE {condition}"
myformat(s, table_name="users")
'SELECT * FROM users WHERE {condition}'
``` | You can replace the condition with itself:
```
s.format(table_name='users', condition='{condition}')
```
which gives us:
```
SELECT * FROM users WHERE {condition}
```
You can use this string later to fill in the condition. |
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | This builds on @Karoly Horvath's answer to add support for index keys and attribute access on named keys:
```
import re
def my_format(template, *args, **kwargs):
    next_index = len(args)
    while True:
        try:
            return template.format(*args, **kwargs)
        except KeyError as e:
            key = e.args[0]
            finder = '\{' + key + '.*?\}'
            template = re.sub(finder, '{\g<0>}', template)
        except IndexError as e:
            args = args + ('{' + str(next_index) + '}',)
            next_index += 1
```
So to test it out:
```
class MyObj:
    bar = 'baz'
    def __repr__(self):
        return '<MyObj instance>'
my_obj = MyObj()
template = '{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}'
print my_format(template)
print my_format(template, '1st', '2nd', missing='Not Missing')
print my_format(template, foo=my_obj)
```
Output:
```
{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}
1st, 2nd, {foo}, {foo.bar}, 1st, {10}, Not Missing
{0}, {1}, <MyObj instance>, baz, {0}, {10}, {missing}
``` | An alternative to `string.Template.safe_substitute` could be subclassing `string.Formatter` like so:
```
import string

class LazyFormatter(string.Formatter):
    def get_value(self, key, args, kwargs):
        '''Overrides string.Formatter.get_value'''
        if isinstance(key, (int, long)):
            return args[key]
        else:
            return kwargs.get(key, '{{{0}}}'.format(key))

lazyfmt = LazyFormatter()
print lazyfmt.format("{field}: {value}", **{'field': 'foo'})
```
Output:
```
foo: {value}
``` |
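A hedged porting note: on Python 3 the `long` type no longer exists, so the `isinstance` check in `get_value` shrinks to a plain `int` test:
```
def get_value(self, key, args, kwargs):
    if isinstance(key, int):  # Python 3: positional indices are plain ints
        return args[key]
    return kwargs.get(key, '{{{0}}}'.format(key))
```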
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | You can replace the condition with itself:
```
s.format(table_name='users', condition='{condition}')
```
which gives us:
```
SELECT * FROM users WHERE {condition}
```
You can use this string later to fill in the condition. | This is a slight change to @ShawnFumo's answer which has a small bug. We need to add a word boundary check (the \b in the regular expression) to ensure that we are matching only the failing key and another key that starts with the same string. This prevents a missing {foo} key from also treating {food} and {foolish} as if they were missing.
```
import re
def my_format(template, *args, **kwargs):
    next_index = len(args)
    while True:
        try:
            return template.format(*args, **kwargs)
        except KeyError as e:
            key = e.args[0]
            finder = r'\{' + key + r'\b.*?\}'
            template = re.sub(finder, r'{\g<0>}', template)
        except IndexError as e:
            args = args + ('{' + str(next_index) + '}',)
            next_index += 1
```
So to test it out:
```
class MyObj:
    bar = 'baz'
    def __repr__(self):
        return '<MyObj instance>'
my_obj = MyObj()
template = '{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}'
print my_format(template)
print my_format(template, '1st', '2nd', missing='Not Missing')
print my_format(template, foo=my_obj)
print
template2 = '{foo} and {food}'
print my_format(template2)
print my_format(template2, food='burger')
print my_format(template2, foo=my_obj, food='burger')
```
Output:
```
{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}
1st, 2nd, {foo}, {foo.bar}, 1st, {10}, Not Missing
{0}, {1}, <MyObj instance>, baz, {0}, {10}, {missing}
{foo} and {food}
{foo} and burger
<MyObj instance> and burger
``` |
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | You can't use the format function because it will raise a KeyError.
`string.Template` supports safe substitution:
```
from string import Template
s = Template('SELECT * FROM $table_name WHERE $condition')
s.safe_substitute(table_name='users')
'SELECT * FROM users WHERE $condition'
```
If you use plain variable names (no format specifiers, no indexing, etc..) this will also work (thanks @Simeon Visser for the idea):
```
def myformat(s, *args, **kwargs):
    while True:
        try:
            return s.format(*args, **kwargs)
        except KeyError as e:
            e = e.args[0]
            kwargs[e] = "{%s}" % e
s = "SELECT * FROM {table_name} WHERE {condition}"
myformat(s, table_name="users")
'SELECT * FROM users WHERE {condition}'
``` | An alternative to `string.Template.safe_substitute` could be subclassing `string.Formatter` like so:
```
import string

class LazyFormatter(string.Formatter):
    def get_value(self, key, args, kwargs):
        '''Overrides string.Formatter.get_value'''
        if isinstance(key, (int, long)):
            return args[key]
        else:
            return kwargs.get(key, '{{{0}}}'.format(key))

lazyfmt = LazyFormatter()
print lazyfmt.format("{field}: {value}", **{'field': 'foo'})
```
Output:
```
foo: {value}
``` |
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | I have been using this function for some time now, which casts the `Dict` of inputted keyword arguments as a `SafeDict` object that subclasses `Dict`.
```
def safeformat(str, **kwargs):
    class SafeDict(dict):
        def __missing__(self, key):
            return '{' + key + '}'
    replacements = SafeDict(**kwargs)
    return str.format_map(replacements)
```
I didn't make this up, but I think it's a good solution. The one downside is that you can't call `mystring.safeformat(**kwargs)` - of course, you have to call `safeformat(mystring,**kwargs)`.
---
If you're really interested in being able to call `mystr.safeformat(**kwargs)` (which I am interested in doing!), consider using this:
```
class safestr(str):
    def safeformat(self, **kwargs):
        class SafeDict(dict):
            def __missing__(self, key):
                return '{' + key + '}'
        replacements = SafeDict(**kwargs)
        return safestr(self.format_map(replacements))
```
You can then create a `safestr` object as `a = safestr(mystr)` (for some `str` called `mystr`), and you can in fact call
`mystr.safeformat(**kwargs)`.
e.g.
```
mysafestr = safestr('Hey, {friendname}. I am {myname}.')
print(mysafestr.safeformat(friendname='Bill'))
```
prints
`Hey, Bill. I am {myname}.`
This is cool in some ways - you can pass around a partially-formatted `safestr`, and could call `safeformat` in different contexts. I especially like to call `mystr.format(**locals())` to format with the appropriate namespace variables; the `safeformat` method is especially useful in this case, because I don't always carefully look through my namespace.
The main issue with this is that inherited methods from `str` return a `str` object, not a `safestr`. So `mysafestr.lower().safeformat(**kwargs)` fails. Of course you could cast as a `safestr` when using `safeformat`:
`safestr(mysafestr.lower()).safeformat(**kwargs)`,
but that's less than ideal looking. I wish Python just gave the `str` class a `safeformat` method of some kind. | This is a slight change to @ShawnFumo's answer which has a small bug. We need to add a word boundary check (the \b in the regular expression) to ensure that we are matching only the failing key and another key that starts with the same string. This prevents a missing {foo} key from also treating {food} and {foolish} as if they were missing.
```
import re
def my_format(template, *args, **kwargs):
    next_index = len(args)
    while True:
        try:
            return template.format(*args, **kwargs)
        except KeyError as e:
            key = e.args[0]
            finder = r'\{' + key + r'\b.*?\}'
            template = re.sub(finder, r'{\g<0>}', template)
        except IndexError as e:
            args = args + ('{' + str(next_index) + '}',)
            next_index += 1
```
So to test it out:
```
class MyObj:
    bar = 'baz'
    def __repr__(self):
        return '<MyObj instance>'
my_obj = MyObj()
template = '{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}'
print my_format(template)
print my_format(template, '1st', '2nd', missing='Not Missing')
print my_format(template, foo=my_obj)
print
template2 = '{foo} and {food}'
print my_format(template2)
print my_format(template2, food='burger')
print my_format(template2, foo=my_obj, food='burger')
```
Output:
```
{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}
1st, 2nd, {foo}, {foo.bar}, 1st, {10}, Not Missing
{0}, {1}, <MyObj instance>, baz, {0}, {10}, {missing}
{foo} and {food}
{foo} and burger
<MyObj instance> and burger
``` |
9,955,715 | I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:
```
s = "SELECT * FROM {table_name} WHERE {condition}"
```
I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:
```
s1 = s.format(table_name = "users")
```
So, I can build the whole string later, like:
```
final = s1.format(condition= "user.id = {id}".format(id=2))
```
The result should be, of course:
```
"SELECT * FROM users WHERE user.id = 2"
```
I've found this previous answer, and it is exactly what I need, but I'd like to use the `format` string function.
[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)
Thank you for your help! | 2012/03/31 | [
"https://Stackoverflow.com/questions/9955715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198212/"
] | This builds on @Karoly Horvath's answer to add support for index keys and attribute access on named keys:
```
import re
def my_format(template, *args, **kwargs):
    next_index = len(args)
    while True:
        try:
            return template.format(*args, **kwargs)
        except KeyError as e:
            key = e.args[0]
            finder = '\{' + key + '.*?\}'
            template = re.sub(finder, '{\g<0>}', template)
        except IndexError as e:
            args = args + ('{' + str(next_index) + '}',)
            next_index += 1
```
So to test it out:
```
class MyObj:
    bar = 'baz'
    def __repr__(self):
        return '<MyObj instance>'
my_obj = MyObj()
template = '{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}'
print my_format(template)
print my_format(template, '1st', '2nd', missing='Not Missing')
print my_format(template, foo=my_obj)
```
Output:
```
{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}
1st, 2nd, {foo}, {foo.bar}, 1st, {10}, Not Missing
{0}, {1}, <MyObj instance>, baz, {0}, {10}, {missing}
``` | This is a slight change to @ShawnFumo's answer which has a small bug. We need to add a word boundary check (the \b in the regular expression) to ensure that we are matching only the failing key and another key that starts with the same string. This prevents a missing {foo} key from also treating {food} and {foolish} as if they were missing.
```
import re
def my_format(template, *args, **kwargs):
    next_index = len(args)
    while True:
        try:
            return template.format(*args, **kwargs)
        except KeyError as e:
            key = e.args[0]
            finder = r'\{' + key + r'\b.*?\}'
            template = re.sub(finder, r'{\g<0>}', template)
        except IndexError as e:
            args = args + ('{' + str(next_index) + '}',)
            next_index += 1
```
So to test it out:
```
class MyObj:
    bar = 'baz'
    def __repr__(self):
        return '<MyObj instance>'
my_obj = MyObj()
template = '{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}'
print my_format(template)
print my_format(template, '1st', '2nd', missing='Not Missing')
print my_format(template, foo=my_obj)
print
template2 = '{foo} and {food}'
print my_format(template2)
print my_format(template2, food='burger')
print my_format(template2, foo=my_obj, food='burger')
```
Output:
```
{0}, {1}, {foo}, {foo.bar}, {0}, {10}, {missing}
1st, 2nd, {foo}, {foo.bar}, 1st, {10}, Not Missing
{0}, {1}, <MyObj instance>, baz, {0}, {10}, {missing}
{foo} and {food}
{foo} and burger
<MyObj instance> and burger
``` |
39,689,012 | I have written a program (Python 2.7) that goes to a website, [Cricket score](http://www.cricbuzz.com/live-cricket-scorecard/16822/ind-vs-nz-1st-test-new-zealand-tour-of-india-2016), and then extracts some data from it to display just the score. It also repeats periodically and keeps running, because the scores keep changing.
I have also written a program for taking a message as input from the user and sending that message as an SMS to my number.
I want to combine these two, so that the scores printed on my screen serve as the message input for sending live scores to me.
codes are
**sms.py**
```
import urllib2
import cookielib
from getpass import getpass
import sys
import os
from stat import *
import sched, time
import requests
from bs4 import BeautifulSoup
s = sched.scheduler(time.time, time.sleep)
from urllib2 import Request
#from livematch import function
#this sends the desired input message to my number
number = raw_input('enter number you want to message: ')
message = raw_input('enter text: ' )
#this declares my credentials
if __name__ == "__main__":
    username = "9876543210"
    passwd = "abcdefghij"
    message = "+".join(message.split(' '))
    # logging into the sms site
    url = 'http://site24.way2sms.com/Login1.action?'
    data = 'username=' + username + '&password=' + passwd + '&Submit=Sign+in'
    # For cookies
    cj = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    # Adding header details
    opener.addheaders = [('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120')]
    try:
        usock = opener.open(url, data)
    except IOError:
        print "error"
        # return()
    jession_id = str(cj).split('~')[1].split(' ')[0]
    send_sms_url = 'http://site24.way2sms.com/smstoss.action?'
    send_sms_data = 'ssaction=ss&Token=' + jession_id + '&mobile=' + number + '&message=' + message + '&msgLen=136'
    opener.addheaders = [('Referer', 'http://site25.way2sms.com/sendSMS?Token=' + jession_id)]
    try:
        sms_sent_page = opener.open(send_sms_url, send_sms_data)
    except IOError:
        print "error"
        # return()
    print "success"
    # return ()
```
**livematch.py**
```
import sched, time
import requests
from bs4 import BeautifulSoup
s = sched.scheduler(time.time, time.sleep)
from urllib2 import Request
url=raw_input('enter the desired score card url here : ')
req=Request(url)
def do_something(sc):
    # global x
    r = requests.get(url)
    soup = BeautifulSoup(r.content)
    for i in soup.find_all("div", {"id": "innings_1"}):
        x = i.text.find('Batsman')
        in_1 = i.text
        print(in_1[0:x])
    for i in soup.find_all("div", {"id": "innings_2"}):
        x = i.text.find('Batsman')
        in_1 = i.text
        print(in_1[0:x])
    for i in soup.find_all("div", {"id": "innings_3"}):
        x = i.text.find('Batsman')
        in_1 = i.text
        print(in_1[0:x])
    for i in soup.find_all("div", {"id": "innings_4"}):
        x = i.text.find('Batsman')
        in_1 = i.text
        print(in_1[0:x])
    # do your stuff
    # do what ever
    s.enter(5, 1, do_something, (sc,))
s.enter(5, 1, do_something, (s,))
s.run()
```
Note that instead of using 9876543210 as the username and abcdefghij as the password, use the credentials of an actual account.
Sign up at way2sms.com for those credentials. | 2016/09/25 | [
"https://Stackoverflow.com/questions/39689012",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6878406/"
] | I am sorry, I added a few too many double quotes in the above code. Instead it should be this way:
```
asm (".section .drectve\n\t.ascii \" -export:DllInitialize=api.DllInitialize @2\"");
```
If you need to use it many times, consider putting it in a macro, e.g.
```
#ifdef _MSC_VER
#define FORWARDED_EXPORT_WITH_ORDINAL(exp_name, ordinal, target_name) __pragma (comment (linker, "/export:" #exp_name "=" #target_name ",@" #ordinal))
#endif
#ifdef __GNUC__
#define FORWARDED_EXPORT_WITH_ORDINAL(exp_name, ordinal, target_name) asm (".section .drectve\n\t.ascii \" -export:" #exp_name "= " #target_name " @" #ordinal "\"");
#endif
FORWARDED_EXPORT_WITH_ORDINAL(DllInitialize, 2, api.DllInitialize)
FORWARDED_EXPORT_WITH_ORDINAL(my_create_file_a, 100, kernel32.CreateFileA)
```
You get the idea. | Here is how you can do it:
```
#ifdef _MSC_VER
#pragma comment (linker, "/export:DllInitialize=api.DllInitialize,@2")
#endif
#ifdef __GNUC__
asm (".section .drectve\n\t.ascii \" -export:\\\"DllInitialize=api.DllInitialize\\\" @2\"");
#endif
```
Note that "drectve" is not a typo; that's how it must be written, however odd it may seem. By the way, this strange abbreviation is Microsoft's idea, not GCC's.
71,940,988 | I have trained a model based on YOLOv5 on a custom dataset which has two classes (for example, human and car).
I am using `detect.py` with the following command:
```
> python detect.py --weights best.pt --source video.mp4
```
I want only the car class to be detected, without detecting humans. How can this be done? | 2022/04/20 | [
"https://Stackoverflow.com/questions/71940988",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16637958/"
] | You can specify which classes you want to detect; the **--classes** argument is used for this.
**Example**
```
python detect.py --weights "your weights.pt" --source "video/image/stream" --classes 0,1,2
```
In the above command, 0,1,2 are the class IDs, so when you run it, only the mentioned classes will be detected. | I think you can use the argument --classes of detect.py. Just use the indexes of the classes.
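For example — a hedged sketch that assumes the car class was assigned index 1 in your training data.yaml (check your own label order for the actual index):
```
python detect.py --weights best.pt --source video.mp4 --classes 1
```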
23,784,951 | I have a string that looks like this:
`POLYGON ((148210.445767647 172418.761192525, 148183.930888667 172366.054787545, 148183.866770629 172365.316772032, 148184.328078148 172364.737139913, 148220.543522168 172344.042601933, 148221.383518338 172343.971823159), (148221.97916844 172344.568316375, 148244.61381946 172406.651932395, 148244.578100039 172407.422441673, 148244.004662562 172407.938319453, 148211.669446582 172419.255646473, 148210.631989339 172419.018894911, 148210.445767647 172418.761192525))`
I can easily strip `POLYGON` out of the string to focus on the numbers but I'm kinda wondering what would be the easiest/best way to parse this string into a list of dict.
The first parenthesis (right after POLYGON) indicates that multiple elements can be provided (separated by a comma `,`).
So each pair of numbers is to supposed to be `x` and `y`.
I'd like to parse this string to end up with the following data structure (using `python 2.7`):
```
list [ //list of polygons
list [ //polygon n°1
dict { //polygon n°1's first point
'x': 148210.445767647, //first number
'y': 172418.761192525 //second number
},
dict { //polygon n°1's second point
'x': 148183.930888667,
'y': 148183.930888667
},
... // rest of polygon n°1's points
], //end of polygon n°1
list [ // polygon n°2
dict { // polygon n°2's first point
'x': 148221.9791684,
'y': 172344.568316375
},
... // rest of polygon n°2's points
] // end of polygon n°2
] // end of list of polygons
```
Polygons' number of points is virtually infinite.
Each point's numbers are separated by a blank.
Do you guys know a way to do this in a loop or any recursive way ?
PS: I'm kind of a python beginner (only a few months under my belt) so don't hesitate to explain in details. Thank you! | 2014/05/21 | [
"https://Stackoverflow.com/questions/23784951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1300454/"
] | The data structure you have defining your Polygon object looks very similar to a Python tuple declaration. One option, albeit a bit hacky, would be to use Python's [AST parser](https://docs.python.org/2/library/ast.html#ast.literal_eval).
You would have to strip off the POLYGON part and this solution may not work for other declarations that are more complex.
```
import ast
your_str = "POLYGON (...)"
# may be better to use a regex to split off the class part
# if you have different types
# the coordinate pairs are space separated, which is not a valid Python
# literal, so rewrite "x y" pairs as "(x,y)" tuples before evaluating
tuple_str = '(' + your_str.replace("POLYGON ", "").replace(", ", "),(").replace(" ", ",") + ')'
data = ast.literal_eval(tuple_str)
# now each group of pairs can be turned into the requested list of dicts
polygons = [[{'x': x, 'y': y} for x, y in group] for group in data]
``` | Let's say you have a string that looks like this:
my\_str = 'POLYGON ((148210.445767647 172418.761192525, 148183.930888667 172366.054787545, 148183.866770629 172365.316772032, 148184.328078148 172364.737139913, 148220.543522168 172344.042601933, 148221.383518338 172343.971823159), (148221.97916844 172344.568316375, 148244.61381946 172406.651932395, 148244.578100039 172407.422441673, 148244.004662562 172407.938319453, 148211.669446582 172419.255646473, 148210.631989339 172419.018894911, 148210.445767647 172418.761192525))'
```
my_str = my_str.replace('POLYGON ', '')
coords_groups = my_str.split('), (')
polygons = []
for coords in coords_groups:
    # str.replace returns a new string, so the result must be assigned back
    coords = coords.replace('(', '').replace(')', '')
    coords_list = coords.split(', ')
    coords_list2 = []
    for item in coords_list:
        item_split = item.split(' ')
        coords_list2.append({'x': item_split[0], 'y': item_split[1]})
    polygons.append(coords_list2)
```
I think this should help a little.
All you need now is a way to get the info between the parentheses; this should help: [Regular expression to return text between parenthesis](https://stackoverflow.com/questions/4894069/python-regex-help-return-text-between-parenthesis)
**UPDATE**: updated the code above thanks to another answer by <https://stackoverflow.com/users/2635860/mccakici>, but this works only if the string has the structure you showed in your question |
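One more detail worth noting: `item_split` holds strings, while the desired structure in the question contains numbers, so a hedged refinement of the append line would be:
```
coords_list2.append({'x': float(item_split[0]), 'y': float(item_split[1])})
```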
23,784,951 | I have a string that looks like this:
`POLYGON ((148210.445767647 172418.761192525, 148183.930888667 172366.054787545, 148183.866770629 172365.316772032, 148184.328078148 172364.737139913, 148220.543522168 172344.042601933, 148221.383518338 172343.971823159), (148221.97916844 172344.568316375, 148244.61381946 172406.651932395, 148244.578100039 172407.422441673, 148244.004662562 172407.938319453, 148211.669446582 172419.255646473, 148210.631989339 172419.018894911, 148210.445767647 172418.761192525))`
I can easily strip `POLYGON` out of the string to focus on the numbers but I'm kinda wondering what would be the easiest/best way to parse this string into a list of dict.
The first parenthesis (right after POLYGON) indicates that multiple elements can be provided (separated by a comma `,`).
So each pair of numbers is to supposed to be `x` and `y`.
I'd like to parse this string to end up with the following data structure (using `python 2.7`):
```
list [ //list of polygons
list [ //polygon n°1
dict { //polygon n°1's first point
'x': 148210.445767647, //first number
'y': 172418.761192525 //second number
},
dict { //polygon n°1's second point
'x': 148183.930888667,
'y': 148183.930888667
},
... // rest of polygon n°1's points
], //end of polygon n°1
list [ // polygon n°2
dict { // polygon n°2's first point
'x': 148221.9791684,
'y': 172344.568316375
},
... // rest of polygon n°2's points
] // end of polygon n°2
] // end of list of polygons
```
Polygons' number of points is virtually infinite.
Each point's numbers are separated by a blank.
Do you guys know a way to do this in a loop or any recursive way ?
PS: I'm kind of a python beginner (only a few months under my belt) so don't hesitate to explain in details. Thank you! | 2014/05/21 | [
"https://Stackoverflow.com/questions/23784951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1300454/"
] | Can you try this?
```
import ast
POLYGON = '((148210.445767647 172418.761192525, 148183.930888667 172366.054787545, 148183.866770629 172365.316772032, 148184.328078148 172364.737139913, 148220.543522168 172344.042601933, 148221.383518338 172343.971823159), (148221.97916844 172344.568316375, 148244.61381946 172406.651932395, 148244.578100039 172407.422441673, 148244.004662562 172407.938319453, 148211.669446582 172419.255646473, 148210.631989339 172419.018894911, 148210.445767647 172418.761192525))'
new_polygon = '(' + POLYGON.replace(', ', '),(').replace(' ', ',') + ')'
data = ast.literal_eval(new_polygon)
result_list = list()
for items in data:
sub_list = list()
for item in items:
sub_list.append({
'x': item[0],
'y': item[1]
})
result_list.append(sub_list)
print result_list
``` | Let's say you have a string that looks like this:
my\_str = 'POLYGON ((148210.445767647 172418.761192525, 148183.930888667 172366.054787545, 148183.866770629 172365.316772032, 148184.328078148 172364.737139913, 148220.543522168 172344.042601933, 148221.383518338 172343.971823159), (148221.97916844 172344.568316375, 148244.61381946 172406.651932395, 148244.578100039 172407.422441673, 148244.004662562 172407.938319453, 148211.669446582 172419.255646473, 148210.631989339 172419.018894911, 148210.445767647 172418.761192525))'
```
my_str = my_str.replace('POLYGON ', '')
coords_groups = my_str.split('), (')
polygons = []
for coords in coords_groups:
    # str.replace returns a new string, so the result must be assigned back
    coords = coords.replace('(', '').replace(')', '')
    coords_list = coords.split(', ')
    coords_list2 = []
    for item in coords_list:
        item_split = item.split(' ')
        coords_list2.append({'x': item_split[0], 'y': item_split[1]})
    polygons.append(coords_list2)
```
I think this should help a little.
All you need now is a way to get the info between the parentheses; this should help: [Regular expression to return text between parenthesis](https://stackoverflow.com/questions/4894069/python-regex-help-return-text-between-parenthesis)
**UPDATE**: updated the code above thanks to another answer by <https://stackoverflow.com/users/2635860/mccakici>, but this works only if the string has the structure you showed in your question |
54,485,654 | Simplified example of my code, please ignore syntax errors:
```
import numpy as np
import pandas as pd
import pymysql.cursors
from datetime import date, datetime
connection = pymysql.connect(host=,
user=,
password=,
db=,
cursorclass=pymysql.cursors.DictCursor)
df1 = pd.read_sql()
df2 = pd.read_sql(
df3 = pd.read_sql()
np.where(a=1, b, c)
df1.append([df2, d3])
path = r'C:\Users\\'
df.to_csv(path+'a.csv')
```
In a Jupyter notebook it outputs the csv file like it is supposed to. However, if I download the .py and run it with python, it will only output a csv the first time I run it after restarting my computer. Other times it just runs and nothing happens. Why this is happening is blowing my mind. | 2019/02/01 | [
"https://Stackoverflow.com/questions/54485654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9637684/"
] | Have you seen the join command? This, in combination with sort, may be what you are looking for. <https://shapeshed.com/unix-join/>
for example:
```
$ cat a
aaaa bbbb
cccc dddd
$ cat b
aaaa eeee
ffff gggg
$ join a b
aaaa bbbb eeee
```
If the values in the first column are not sorted, then you have to sort them first; otherwise join will not work.
`join <(sort a) <(sort b)`
Kind regards
Oliver | There are different kinds of comparison and different tools to do it:
* diff
* cmp
* comm
* ...
All commands have options to vary the comparison.
For each command, you can specify filters. E.g.
```
# remove comments before comparison
diff <( grep -v ^# file1) <( grep -v ^# file2)
```
Without concrete examples, it is impossible to be more exact. |
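For instance, once both inputs are sorted, `comm` can show which lines are unique to each file (a hedged example; adapt it to your actual files):
```
comm -3 <(sort file1) <(sort file2)   # -3 suppresses the lines common to both files
```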
54,485,654 | Simplified example of my code, please ignore syntax errors:
```
import numpy as np
import pandas as pd
import pymysql.cursors
from datetime import date, datetime
connection = pymysql.connect(host=,
user=,
password=,
db=,
cursorclass=pymysql.cursors.DictCursor)
df1 = pd.read_sql()
df2 = pd.read_sql(
df3 = pd.read_sql()
np.where(a=1, b, c)
df1.append([df2, d3])
path = r'C:\Users\\'
df.to_csv(path+'a.csv')
```
In a Jupyter notebook it outputs the csv file like it is supposed to. However, if I download the .py and run it with python, it will only output a csv the first time I run it after restarting my computer. Other times it just runs and nothing happens. Why this is happening is blowing my mind. | 2019/02/01 | [
"https://Stackoverflow.com/questions/54485654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9637684/"
] | There are different kinds of comparison and different tools to do it:
* diff
* cmp
* comm
* ...
All commands have options to vary the comparison.
For each command, you can specify filters. E.g.
```
# remove comments before comparison
diff <( grep -v ^# file1) <( grep -v ^# file2)
```
Without concrete examples, it is impossible to be more exact. | You can use `awk`, like this:
```
awk 'NR==FNR{a[NR]=$1;b[NR]=$2;next}
a[FNR]==$1{printf "%s and %s match\n", b[FNR], $2}' file1 file2
```
Output:
```
bbbb and eeee match
```
Explanation (the same code broken into multiple lines):
```
# As long as we are reading file1, the overall record
# number NR is the same as the record number in the
# current input file FNR
NR==FNR{
# Store column 1 and 2 in arrays called a and b
# indexed by the record number
a[NR]=$1
b[NR]=$2
next # Do not process more actions for file1
}
# The following code gets only executed when we read
# file2 because of the above _next_ statement
# Check if column 1 in file1 is the same as in file2
# for this line
a[FNR]==$1{
printf "%s and %s match\n", b[FNR], $2
}
``` |
54,485,654 | Simplified example of my code, please ignore syntax errors:
```
import numpy as np
import pandas as pd
import pymysql.cursors
from datetime import date, datetime
connection = pymysql.connect(host=,
user=,
password=,
db=,
cursorclass=pymysql.cursors.DictCursor)
df1 = pd.read_sql()
df2 = pd.read_sql(
df3 = pd.read_sql()
np.where(a=1, b, c)
df1.append([df2, d3])
path = r'C:\Users\\'
df.to_csv(path+'a.csv')
```
In a Jupyter notebook it outputs the csv file like it is supposed to. However, if I download the .py and run it with python, it will only output a csv the first time I run it after restarting my computer. Other times it just runs and nothing happens. Why this is happening is blowing my mind. | 2019/02/01 | [
"https://Stackoverflow.com/questions/54485654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9637684/"
] | Have you seen the join command? This, in combination with sort, may be what you are looking for. <https://shapeshed.com/unix-join/>
for example:
```
$ cat a
aaaa bbbb
cccc dddd
$ cat b
aaaa eeee
ffff gggg
$ join a b
aaaa bbbb eeee
```
If the values in the first column are not sorted, then you have to sort them first; otherwise join will not work.
`join <(sort a) <(sort b)`
Kind regards
Oliver | Assuming your tab separated file maintains the correct file structure, this should work:
```
diff <(awk '{print $2}' f1) <(awk '{print $2}' f2)
# File names: f1, f2
# Column: 2nd column.
```
The output when there is something different,
```
2c2
< dx
---
> ldx
```
No output when the column is the same.
I tried @Wiimm's answer and it didn't work for me. |
54,485,654 | Simplified example of my code, please ignore syntax errors:
```
import numpy as np
import pandas as pd
import pymysql.cursors
from datetime import date, datetime
connection = pymysql.connect(host=,
user=,
password=,
db=,
cursorclass=pymysql.cursors.DictCursor)
df1 = pd.read_sql()
df2 = pd.read_sql(
df3 = pd.read_sql()
np.where(a=1, b, c)
df1.append([df2, d3])
path = r'C:\Users\\'
df.to_csv(path+'a.csv')
```
In a Jupyter notebook it outputs the csv file like it is supposed to. However, if I download the .py and run it with python, it will only output a csv the first time I run it after restarting my computer. Other times it just runs and nothing happens. Why this is happening is blowing my mind. | 2019/02/01 | [
"https://Stackoverflow.com/questions/54485654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9637684/"
] | Assuming your tab separated file maintains the correct file structure, this should work:
```
diff <(awk '{print $2}' f1) <(awk '{print $2}' f2)
# File names: f1, f2
# Column: 2nd column.
```
The output when there is something different,
```
2c2
< dx
---
> ldx
```
No output when the column is the same.
I tried @Wiimm's answer and it didn't work for me. | You can use `awk`, like this:
```
awk 'NR==FNR{a[NR]=$1;b[NR]=$2;next}
a[FNR]==$1{printf "%s and %s match\n", b[FNR], $2}' file1 file2
```
Output:
```
bbbb and eeee match
```
Explanation (the same code broken into multiple lines):
```
# As long as we are reading file1, the overall record
# number NR is the same as the record number in the
# current input file FNR
NR==FNR{
# Store column 1 and 2 in arrays called a and b
# indexed by the record number
a[NR]=$1
b[NR]=$2
next # Do not process more actions for file1
}
# The following code gets only executed when we read
# file2 because of the above _next_ statement
# Check if column 1 in file1 is the same as in file2
# for this line
a[FNR]==$1{
printf "%s and %s match\n", b[FNR], $2
}
``` |
54,485,654 | Simplified example of my code, please ignore syntax errors:
```
import numpy as np
import pandas as pd
import pymysql.cursors
from datetime import date, datetime
connection = pymysql.connect(host=,
user=,
password=,
db=,
cursorclass=pymysql.cursors.DictCursor)
df1 = pd.read_sql()
df2 = pd.read_sql(
df3 = pd.read_sql()
np.where(a=1, b, c)
df1.append([df2, d3])
path = r'C:\Users\\'
df.to_csv(path+'a.csv')
```
In a Jupyter notebook it outputs the csv file like it is supposed to. However, if I download the .py and run it with python, it will only output a csv the first time I run it after restarting my computer. Other times it just runs and nothing happens. Why this is happening is blowing my mind. | 2019/02/01 | [
"https://Stackoverflow.com/questions/54485654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9637684/"
] | Have you seen the join command? This, in combination with sort, may be what you are looking for. <https://shapeshed.com/unix-join/>
for example:
```
$ cat a
aaaa bbbb
cccc dddd
$ cat b
aaaa eeee
ffff gggg
$ join a b
aaaa bbbb eeee
```
If the values in the first column are not sorted, then you have to sort them first; otherwise join will not work.
`join <(sort a) <(sort b)`
Kind regards
Oliver | You can use `awk`, like this:
```
awk 'NR==FNR{a[NR]=$1;b[NR]=$2;next}
a[FNR]==$1{printf "%s and %s match\n", b[FNR], $2}' file1 file2
```
Output:
```
bbbb and eeee match
```
Explanation (the same code broken into multiple lines):
```
# As long as we are reading file1, the overall record
# number NR is the same as the record number in the
# current input file FNR
NR==FNR{
# Store column 1 and 2 in arrays called a and b
# indexed by the record number
a[NR]=$1
b[NR]=$2
next # Do not process more actions for file1
}
# The following code gets only executed when we read
# file2 because of the above _next_ statement
# Check if column 1 in file1 is the same as in file2
# for this line
a[FNR]==$1{
printf "%s and %s match\n", b[FNR], $2
}
``` |
26,506,830 | I have the expectation that `reduce(add,lst)` and `sum(lst)` should give me the same result, but
```
In [18]: class p():
def __init__(self, x, y):
self.x=x ; self.y=y
def __repr__(self):
return "(%r,%r)"%(self.x,self.y)
def __add__(self, P):
return p(self.x+P.x, self.y+P.y)
....:
In [19]: pts=[p(1,0), p(2,1), p(-3,4)]
In [20]: from operator import add
In [21]: print reduce(add,pts)
(0,5)
In [22]: print sum(pts)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-98a81789d257> in <module>()
----> 1 print sum(pts)
TypeError: unsupported operand type(s) for +: 'int' and 'instance'
In [23]:
```
Of course I'm not understanding something, possibly obvious; could someone enlighten me? | 2014/10/22 | [
"https://Stackoverflow.com/questions/26506830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2749397/"
] | `reduce()` starts with `pts[0]` as the initial value (unless you give it an explicit starting value), but `sum()` defaults with `0`. From the [`sum()` function documentation](https://docs.python.org/2/library/functions.html#sum):
>
> `sum(iterable[, start])`
>
> Sums *start* and the items of an iterable from left to right and returns the total. *start* defaults to `0`.
>
>
>
Give `sum()` a better *start* value; use the first object just like `reduce()` does:
```
sum(pts[1:], pts[0])
```
or a suitable empty value:
```
sum(pts, p(0, 0))
``` | If you define `__radd__` to special-case 0 being added to an instance of your object, then you can just use `sum()`.
```
... rest of Point class defn ...
def __radd__(self, other):
    if other == 0:
        return self
    raise TypeError("invalid type to add to point")

print sum(pts)
```
will give `(0,5)`. |
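For reference, a self-contained sketch of that idea, reusing the `p` class from the question (Python 2 syntax, to match the rest of the thread):
```
class p(object):
    def __init__(self, x, y):
        self.x = x; self.y = y
    def __repr__(self):
        return "(%r,%r)" % (self.x, self.y)
    def __add__(self, other):
        return p(self.x + other.x, self.y + other.y)
    def __radd__(self, other):
        # sum() starts from the integer 0; 0 + p(...) falls back to __radd__
        if other == 0:
            return self
        raise TypeError("invalid type to add to point")

pts = [p(1, 0), p(2, 1), p(-3, 4)]
print sum(pts)  # -> (0,5)
```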
63,940,481 | I know that my view is correct; however, when I put `{% url 'sub_detail' subc.id %}` in index.html it suddenly gives a no reverse match error. Once I remove it, index works fine. I tried changing the id, but it did not change anything as it still gives the same error.
Thanks in advance.
views.py:
```
from django.shortcuts import render, redirect, reverse, get_object_or_404
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from .models import Slides, MainContent, SubContent
from .forms import TitleForm, SubContentForm, SlidesForm
def index(request):
slides = Slides.objects.all()
maincontent = MainContent.objects.all()
subcontent = SubContent.objects.all()
context = {
'slides': slides,
'maincontent': maincontent,
'subcontent': subcontent,
}
return render(request, 'home/index.html', context)
def sub_detail(request, subc_id):
subcontent = get_object_or_404(SubContent, pk=subc_id)
context = {
'subcontent': subcontent,
}
return render(request, 'home/sub_detail.html', context)
```
urls.py:
```
path('', views.index, name='home'),
path('<int:subc_id>/', views.sub_detail, name='sub_detail'),
path('manage/', views.manage, name='manage'),
path('slides/', views.slides, name='slides'),
path('title/', views.add_title, name='add_title'),
path('sub/', views.add_sub_content, name='add_sub_content'),
]
```
models.py:
```
class SubContent(models.Model):
class Meta:
verbose_name_plural = 'Sub Content'
title = models.CharField(max_length=28, null=False, blank=False)
image = models.ImageField()
description = models.TextField()
def __str__(self):
return self.title
```
index.html:
```
<a href="{% url 'sub_detail' subc.id %}">
<div class="col-md-6 section-index-img">
<img src="{{ sub.image.url }}" class="rounded img-fluid" alt=""/>
</div>
</a>
```
error code:
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/
Django Version: 3.1
Python Version: 3.8.3
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'allauth',
'allauth.account',
'allauth.socialaccount',
'home',
'crispy_forms',
'products']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Template error:
In template /workspace/hunt-interiors/templates/base.html, error at line 0
Reverse for 'sub_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['(?P<subc_id>[0-9]+)/$']
1 : <!doctype html>
2 : {% load static %}
3 :
4 :
5 : <html lang="en">
6 : <head>
7 :
8 : {% block meta %}
9 : <meta http-equiv="X-UA-Compatible" content="ie=edge">
10 : <meta charset="utf-8">
Traceback (most recent call last):
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/handlers/base.py", line 179, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/workspace/hunt-interiors/home/views.py", line 18, in index
return render(request, 'home/index.html', context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/shortcuts.py", line 19, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/loader.py", line 62, in render_to_string
return template.render(context, request)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 170, in render
return self._render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 162, in _render
return self.nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 162, in _render
return self.nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/defaulttags.py", line 211, in render
nodelist.append(node.render_annotated(context))
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/defaulttags.py", line 312, in render
return nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/defaulttags.py", line 312, in render
return nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/defaulttags.py", line 446, in render
url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/urls/base.py", line 87, in reverse
return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/urls/resolvers.py", line 685, in _reverse_with_prefix
raise NoReverseMatch(msg)
Exception Type: NoReverseMatch at /
Exception Value: Reverse for 'sub_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['(?P<subc_id>[0-9]+)/$']
``` | 2020/09/17 | [
"https://Stackoverflow.com/questions/63940481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14294805/"
] | Rather than many `if` statements, I just reproduced the `match` statement
with a repetition `$( ... )*` for all the available branches.
It seems to behave just like the fully written-out `match` expression.
```rust
macro_rules! run_questions {
( $chosen_question: expr, $( $question_num: expr, $question_mod: expr ), * ) => {
match $chosen_question {
$($question_num => run_question($question_mod),)*
_ => {
println!("Question doesn't exist.");
}
}
};
}
``` | The error message explained:
```
macro_rules! run_questions {
($chosen_question: expr, $($question_num: expr, $question_mod: expr),*) => {{
```
In the above pattern you have a repetition with the `*` operator that involves the variables `$question_num` and `$question_mod`.
```
if $chosen_question == $question_num {
run_question($question_mod::solve);
}
```
In the corresponding code, you can't use `$question_num` and `$question_mod` directly: since they are repeated they potentially have more than one value and which one should the compiler use here? Instead, you need to tell the compiler to repeat the block of code that uses these variables. This is done by surrounding the repeated code block with `$()` and adding the `*` operator:
```
$(if $chosen_question == $question_num {
run_question($question_mod::solve);
})*
```
Although as pointed out by @prog-fh's answer, better to use a `match` in the macro, same as in the straight code:
```
match $chosen_question {
$($question_num => run_question ($question_mod::solve),)*
_ => println!("Question doesn't exist.")
};
``` |
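For completeness, a hypothetical invocation of such a macro (the module names `question1`/`question2` are made up for illustration, each assumed to expose a `solve` function):
```rust
run_questions!(chosen_question, 1, question1, 2, question2);
```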
70,699,537 | Given two arrays:
```
import numpy as np
array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
```
and WITHOUT the use of any loop or if-else statement, I am trying to create a third array that will take the value equal to the sum of the
(corresponding) elements from array1 and array2 if the element from
array1 is bigger than the element from array2. If they are equal, the new element should have a value equal to their product. If the element from array2 is bigger, then the new element should be the difference between the element from array2 and array1
I have tried to implement this using python list and if-else statement with loop, but would like to know how to implement with numpy methods.
My implementation:
```
array1 = [7, 2, 4, 1, 20]
array2 = [2, 4, 4, 3, 10]
array3 = []
for i, j in enumerate(array1):
if j>array2[i]:
sum = j + array2[i]
array3.append(sum)
elif j==array2[i]:
product = j * array2[i]
array3.append(product)
else:
sub = array2[i] - j
array3.append(sub)
print("output: ",array3)
```
output: [9, 2, 16, 2, 30] | 2022/01/13 | [
"https://Stackoverflow.com/questions/70699537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12256590/"
] | You can use three mask arrays, like so:
```
>>> array3 = np.zeros(array1.shape, dtype=array1.dtype)
>>> a1_gt = array1 > array2 # for when element at array 1 is greater
>>> a2_gt = array1 < array2 # for when element at array 2 is greater
>>> a1_eq_a2 = array1 == array2 # for when elements at array 1 and array 2 are equal
>>> array3[a1_gt] = array1[a1_gt] + array2[a1_gt]
>>> array3[a2_gt] = array2[a2_gt] - array1[a2_gt]
>>> array3[a1_eq_a2] = array2[a1_eq_a2] * array1[a1_eq_a2]
>>> array3
array([ 9,  2, 16,  2, 30])
``` | I renamed your arrays to `a` and `b`
```
print((a>b)*(a+b)+(a==b)*(a*b)+(a<b)*(b-a))
```
Direct comparison between arrays gives you boolean results that you can interpret as `0` or `1`. That means a simple multiplication can turn an element "on" or "off", so we can just piece everything together. |
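To make the trick concrete, a quick sketch with the question's arrays (renamed to `a` and `b` as above):
```
import numpy as np

a = np.array([7, 2, 4, 1, 20])
b = np.array([2, 4, 4, 3, 10])

print(a > b)   # [ True False False False  True] -- acts like [1 0 0 0 1]
print(a == b)  # [False False  True False False]
print((a > b)*(a + b) + (a == b)*(a*b) + (a < b)*(b - a))  # [ 9  2 16  2 30]
```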
70,699,537 | Given two arrays:
```
import numpy as np
array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
```
and WITHOUT the use of any loop or if-else statement, I am trying to create a third array that will take the value equal to the sum of the
(corresponding) elements from array1 and array2 if the element from
array1 is bigger than the element from array2. If they are equal, the new element should have a value equal to their product. If the element from array2 is bigger, then the new element should be the difference between the element from array2 and array1
I have tried to implement this using python list and if-else statement with loop, but would like to know how to implement with numpy methods.
My implementation:
```
array1 = [7, 2, 4, 1, 20]
array2 = [2, 4, 4, 3, 10]
array3 = []
for i, j in enumerate(array1):
if j>array2[i]:
sum = j + array2[i]
array3.append(sum)
elif j==array2[i]:
product = j * array2[i]
array3.append(product)
else:
sub = array2[i] - j
array3.append(sub)
print("output: ",array3)
```
output: [9, 2, 16, 2, 30] | 2022/01/13 | [
"https://Stackoverflow.com/questions/70699537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12256590/"
] | You can use `np.select` here:
```
>>> import numpy as np
>>> array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
>>> array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
```
Here is the `help` for `np.select`:
```
select(condlist, choicelist, default=0)
Return an array drawn from elements in choicelist, depending on conditions.
Parameters
----------
condlist : list of bool ndarrays
The list of conditions which determine from which array in `choicelist`
the output elements are taken. When multiple conditions are satisfied,
the first one encountered in `condlist` is used.
choicelist : list of ndarrays
The list of arrays from which the output elements are taken. It has
to be of the same length as `condlist`.
default : scalar, optional
The element inserted in `output` when all conditions evaluate to False.
Returns
-------
output : ndarray
The output at position m is the m-th element of the array in
`choicelist` where the m-th element of the corresponding array in
`condlist` is True.
```
So, applied to your problem:
```
>>> np.select(
... [array1 > array2, array1 == array2, array1 < array2],
... [array1 + array2, array1*array2, array2 - array1]
... )
array([ 9, 2, 16, 2, 30])
>>>
``` | You can use three mask arrays, like so:
```
>>> array3 = np.zeros(array1.shape, dtype=array1.dtype)
>>> a1_gt = array1 > array2 # for when element at array 1 is greater
>>> a2_gt = array1 < array2 # for when element at array 2 is greater
>>> a1_eq_a2 = array1 == array2 # for when elements at array 1 and array 2 are equal
>>> array3[a1_gt] = array1[a1_gt] + array2[a1_gt]
>>> array3[a2_gt] = array2[a2_gt] - array1[a2_gt]
>>> array3[a1_eq_a2] = array2[a1_eq_a2] * array1[a1_eq_a2]
>>> array3
array([ 9,  2, 16,  2, 30])
``` |
70,699,537 | Given two arrays:
```
import numpy as np
array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
```
and WITHOUT the use of any loop or if-else statement, I am trying to create a third array that will take the value equal to the sum of the
(corresponding) elements from array1 and array2 if the element from
array1 is bigger than the element from array2. If they are equal, the new element should have a value equal to their product. If the element from array2 is bigger, then the new element should be the difference between the element from array2 and array1
I have tried to implement this using python list and if-else statement with loop, but would like to know how to implement with numpy methods.
My implementation:
```
array1 = [7, 2, 4, 1, 20]
array2 = [2, 4, 4, 3, 10]
array3 = []
for i, j in enumerate(array1):
if j>array2[i]:
sum = j + array2[i]
array3.append(sum)
elif j==array2[i]:
product = j * array2[i]
array3.append(product)
else:
sub = array2[i] - j
array3.append(sub)
print("output: ",array3)
```
output: [9, 2, 16, 2, 30] | 2022/01/13 | [
"https://Stackoverflow.com/questions/70699537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12256590/"
] | You can use `np.select` here:
```
>>> import numpy as np
>>> array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
>>> array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
```
Here is the `help` for `np.select`:
```
select(condlist, choicelist, default=0)
Return an array drawn from elements in choicelist, depending on conditions.
Parameters
----------
condlist : list of bool ndarrays
The list of conditions which determine from which array in `choicelist`
the output elements are taken. When multiple conditions are satisfied,
the first one encountered in `condlist` is used.
choicelist : list of ndarrays
The list of arrays from which the output elements are taken. It has
to be of the same length as `condlist`.
default : scalar, optional
The element inserted in `output` when all conditions evaluate to False.
Returns
-------
output : ndarray
The output at position m is the m-th element of the array in
`choicelist` where the m-th element of the corresponding array in
`condlist` is True.
```
So, applied to your problem:
```
>>> np.select(
... [array1 > array2, array1 == array2, array1 < array2],
... [array1 + array2, array1*array2, array2 - array1]
... )
array([ 9, 2, 16, 2, 30])
>>>
``` | I renamed your arrays to `a` and `b`
```
print((a>b)*(a+b)+(a==b)*(a*b)+(a<b)*(b-a))
```
Direct comparison between arrays gives you boolean results that you can interpret as `0` or `1`. That means a simple multiplication can turn an element "on" or "off", so we can just piece everything together. |
70,699,537 | Given two arrays:
```
import numpy as np
array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
```
and WITHOUT the use of any loop or if-else statement, I am trying to create a third array that will take the value equal to the sum of the
(corresponding) elements from array1 and array2 if the element from
array1 is bigger than the element from array2. If they are equal, the new element should have a value equal to their product. If the element from array2 is bigger, then the new element should be the difference between the element from array2 and array1
I have tried to implement this using python list and if-else statement with loop, but would like to know how to implement with numpy methods.
My implementation:
```
array1 = [7, 2, 4, 1, 20]
array2 = [2, 4, 4, 3, 10]
array3 = []
for i, j in enumerate(array1):
if j>array2[i]:
sum = j + array2[i]
array3.append(sum)
elif j==array2[i]:
product = j * array2[i]
array3.append(product)
else:
sub = array2[i] - j
array3.append(sub)
print("output: ",array3)
```
output: [9, 2, 16, 2, 30] | 2022/01/13 | [
"https://Stackoverflow.com/questions/70699537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12256590/"
] | Using `numpy.select` with a default value:
```
array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
np.select([array1>array2, array1<array2],
[array1+array2, array2-array1],
default=array1*array2)
```
output: `array([ 9, 2, 16, 2, 30])` | I renamed your arrays to `a` and `b`
```
print((a>b)*(a+b)+(a==b)*(a*b)+(a<b)*(b-a))
```
Direct comparison between arrays gives you boolean results that you can interpret as `0` or `1`. That means a simple multiplication can turn an element "on" or "off", so we can just piece everything together. |
70,699,537 | Given two arrays:
```
import numpy as np
array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
```
and WITHOUT the use of any loop or if-else statement, I am trying to create a third array that will take the value equal to the sum of the
(corresponding) elements from array1 and array2 if the element from
array1 is bigger than the element from array2. If they are equal, the new element should have a value equal to their product. If the element from array2 is bigger, then the new element should be the difference between the element from array2 and array1
I have tried to implement this using python list and if-else statement with loop, but would like to know how to implement with numpy methods.
My implementation:
```
array1 = [7, 2, 4, 1, 20]
array2 = [2, 4, 4, 3, 10]
array3 = []
for i, j in enumerate(array1):
if j>array2[i]:
sum = j + array2[i]
array3.append(sum)
elif j==array2[i]:
product = j * array2[i]
array3.append(product)
else:
sub = array2[i] - j
array3.append(sub)
print("output: ",array3)
```
output: [9, 2, 16, 2, 30] | 2022/01/13 | [
"https://Stackoverflow.com/questions/70699537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12256590/"
] | You can use `np.select` here:
```
>>> import numpy as np
>>> array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
>>> array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
```
Here is the `help` for `np.select`:
```
select(condlist, choicelist, default=0)
Return an array drawn from elements in choicelist, depending on conditions.
Parameters
----------
condlist : list of bool ndarrays
The list of conditions which determine from which array in `choicelist`
the output elements are taken. When multiple conditions are satisfied,
the first one encountered in `condlist` is used.
choicelist : list of ndarrays
The list of arrays from which the output elements are taken. It has
to be of the same length as `condlist`.
default : scalar, optional
The element inserted in `output` when all conditions evaluate to False.
Returns
-------
output : ndarray
The output at position m is the m-th element of the array in
`choicelist` where the m-th element of the corresponding array in
`condlist` is True.
```
So, applied to your problem:
```
>>> np.select(
... [array1 > array2, array1 == array2, array1 < array2],
... [array1 + array2, array1*array2, array2 - array1]
... )
array([ 9, 2, 16, 2, 30])
>>>
``` | Using `numpy.select` with a default value:
```
array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
np.select([array1>array2, array1<array2],
[array1+array2, array2-array1],
default=array1*array2)
```
output: `array([ 9, 2, 16, 2, 30])` |
39,372,494 | ```
#!/usr/bin/python
# -*- coding: utf-8 -*-
def to_weird_case(string):
lines = string.split()
new_word = ''
new_line = ''
for word in lines:
for item in word:
if word.index(item) %2 ==0:
item = item.upper()
new_word += item
else:
new_word += item
new_line = new_word +' '
return new_line
print to_weird_case('what do you mean')
```
I want to get `WhAt Do YoU MeAn`, but instead I got `WhAtDoYoUMeAn`. I already added the line `new_line = new_word +' '`. Where is my problem? | 2016/09/07 | [
"https://Stackoverflow.com/questions/39372494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6511336/"
] | First, you overwrite `new_line` with every iteration. Second, `new_word` is getting longer because you never "clear" it. Third, you add space to the end of the entire `new_line` and not after every new word (because of *Second*).
*See comments*
```
def to_weird_case(string):
    lines = string.split()
    new_line = ''
    for word in lines:
        new_word = ''  # start each new word from an empty string
        # enumerate avoids word.index(item), which returns only the first
        # occurrence and would misbehave for words with repeated letters
        for idx, item in enumerate(word):
            if idx % 2 == 0:
                item = item.upper()
            new_word += item
        new_line = new_line + new_word + " "  # add the new word to the existing line
    return new_line
``` | It is correct that your code did not reset the value of `new_word` and you overwrote the `new_line` within the loop, but I'd like to share a next to one-liner solution with a regex:
```
import re
def to_weird_case(string):
return re.sub(r'(\S)(\S?)', lambda m: "{0}{1}".format(m.group(1).upper(), m.group(2)), string);
print to_weird_case('what do you mean')
```
See [Python demo](https://ideone.com/QIzRyE)
The `(\S)(\S?)` regex captures a non-whitespace into Group 1 and one or zero non-whitespaces into Group 2, and then, inside the `re.sub`, the Group 1 value is replaced with the uppercased counterpart.
Look at how [`(\S)(\S?)`](https://regex101.com/r/yV9rJ7/1) matches your `what do you mean`:
* `wh` is matched, with `w` in Group 1 and `h` in Group 2 ([](https://i.stack.imgur.com/VhLfo.png)). The match is passed as `m` to the lambda expression; Group 1 is modified, and Group 2 is passed through as is.
* The next match contains `at`, and the same thing happens with the groups.
* Next, the space is not matched, since `\S` matches any character but a whitespace.
* `do` is matched, and the same thing happens as described above.
* space + `yo` are matched and handled as described above.
* `u` + *empty space* are matched next because the second `\S` has a `?` quantifier that matches one or zero occurrences of the pattern it modifies. Thus, the first char is uppercased and the second empty string is used as is.
* Then the rest is processed in a similar way. |
26,650,057 | I am working on a simple python script for retrieving information from a mysql database.
Here are my two examples which are almost IDENTICAL and the first successfully compiles while the second returns:
```
File "dbconnection.py", line 17
print ip
^
SyntaxError: invalid syntax
```
I have tried deleting the try/except code from the second example, with no results though.
There is a relevant post [on syntax error on Python 3](https://stackoverflow.com/questions/826948/syntax-error-on-print-with-python-3) which makes my second example successfully run but it doesn't answer the question "why these almost identical examples have different behavior".
First example:
```
import datetime
import mysql.connector
cnx = mysql.connector.connect(user='user', database='test', password='test')
cursor = cnx.cursor()
query = ("SELECT ip FROM guralps")
cursor.execute(query)
for (ip) in cursor:
print ip
cursor.close()
cnx.close()
```
Second which does not compile:
```
from __future__ import print_function
from datetime import date, datetime, timedelta
import mysql.connector
from mysql.connector import errorcode
import time
try:
cnx = mysql.connector.connect(user='user', database='test', password='test')
cursor = cnx.cursor()
query = ("SELECT ip FROM guralps")
cursor.execute(query)
for (ip) in cursor:
print ip
cursor.close()
cnx.close()
except mysql.connector.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("Something is wrong with your user name or password")
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print("Database does not exists")
else:
print(err)
else:
cnx.close()
``` | 2014/10/30 | [
"https://Stackoverflow.com/questions/26650057",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1210196/"
] | You used:
```
from __future__ import print_function
```
at the top of your module. This *disables* the `print` statement for that module so you can use the [`print()` **function**](https://docs.python.org/2/library/functions.html#print) instead:
```
print(ip)
```
From the function documentation:
>
> **Note**: This function is not normally available as a built-in since the name `print` is recognized as the [`print`](https://docs.python.org/2/reference/simple_stmts.html#print) statement. To disable the statement and use the `print()` function, use this future statement at the top of your module:
>
>
>
> ```
> from __future__ import print_function
>
> ```
>
> | `from __future__ import print_function, division` requires Python 2.6 or later. **print_function** will allow you to use print as a function, so you can't use it as **print ip**.
```
>>> from __future__ import print_function
>>> print('# of entries', len(dictionary), file=sys.stderr)
``` |
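To see the effect interactively, a short sketch of a Python 2 session:
```
>>> print "hi"        # works: print is still a statement here
hi
>>> from __future__ import print_function
>>> print("hi")       # after the import, print must be called as a function
hi
```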
58,048,079 | Upon attempting to compile python 3.7 I hit `Could not import runpy module`:
```
jeremyr@b88:$ wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tar.xz
....
jeremyr@b88:~/Python-3.7.3$ ./configure --enable-optimizations
jeremyr@b88:~/Python-3.7.3$ make clean
jeremyr@b88:~/Python-3.7.3$ make -j32
....
gcc -pthread -Xlinker -export-dynamic -o Programs/_testembed Programs/_testembed.o libpython3.7m.a -lcrypt -lpthread -ldl -lutil -lm
./python -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
echo "generate-posix-vars failed" ; \
rm -f ./pybuilddir.txt ; \
exit 1 ; \
fi
Could not import runpy module
Traceback (most recent call last):
File "/home/jeremyr/Python-3.7.3/Lib/runpy.py", line 15, in <module>
import importlib.util
File "/home/jeremyr/Python-3.7.3/Lib/importlib/util.py", line 14, in <module>
from contextlib import contextmanager
File "/home/jeremyr/Python-3.7.3/Lib/contextlib.py", line 4, in <module>
import _collections_abc
SystemError: <built-in function compile> returned NULL without setting an error
generate-posix-vars failed
Makefile:603: recipe for target 'pybuilddir.txt' failed
make[1]: *** [pybuilddir.txt] Error 1
make[1]: Leaving directory '/home/jeremyr/Python-3.7.3'
Makefile:531: recipe for target 'profile-opt' failed
make: *** [profile-opt] Error 2
jeremyr@88:~/Python-3.7.3$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.11 (jessie)
Release: 8.11
Codename: jessie
jeremyr@88:~/Python-3.7.3$ gcc --version
gcc (Debian 4.9.2-10+deb8u2) 4.9.2
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
jeremyr@88:~/Python-3.7.3$ sudo apt upgrade gcc
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... gcc is already the newest version.
jeremyr@b88:~/Python-3.7.3$ echo $PYTHONPATH
```
Any advice on how to overcome this and install python3.7 appreciated.
Edit - the solution listed below seems to work for various other python versions, so I changed title to python 3.x from 3.7 | 2019/09/22 | [
"https://Stackoverflow.com/questions/58048079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3817456/"
] | It seems the enable-optimizations option was the problem; running
```
jeremyr@b88:~/Python-3.7.3$ ./configure
jeremyr@b88:~/Python-3.7.3$ make clean
```
takes care of it in my case. | In case others come across this question: I encountered the same problem on CentOS 7. I also had `--enable-optimizations` but didn't want to remove that flag. Updating my build dependencies and then re-running solved the problem. To do that, I ran:
```
sudo yum groupinstall "Development Tools" -y
```
In case the yum group is not available, you can also install the packages individually using:
```
sudo yum install bison byacc cscope ctags cvs diffstat doxygen flex gcc gcc-c++ gcc-gfortran gettext git indent intltool libtool patch patchutils rcs redhat-rpm-config rpm-build subversion swig systemtap
``` |
58,048,079 | Upon attempting to compile python 3.7 I hit `Could not import runpy module`:
```
jeremyr@b88:$ wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tar.xz
....
jeremyr@b88:~/Python-3.7.3$ ./configure --enable-optimizations
jeremyr@b88:~/Python-3.7.3$ make clean
jeremyr@b88:~/Python-3.7.3$ make -j32
....
gcc -pthread -Xlinker -export-dynamic -o Programs/_testembed Programs/_testembed.o libpython3.7m.a -lcrypt -lpthread -ldl -lutil -lm
./python -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
echo "generate-posix-vars failed" ; \
rm -f ./pybuilddir.txt ; \
exit 1 ; \
fi
Could not import runpy module
Traceback (most recent call last):
File "/home/jeremyr/Python-3.7.3/Lib/runpy.py", line 15, in <module>
import importlib.util
File "/home/jeremyr/Python-3.7.3/Lib/importlib/util.py", line 14, in <module>
from contextlib import contextmanager
File "/home/jeremyr/Python-3.7.3/Lib/contextlib.py", line 4, in <module>
import _collections_abc
SystemError: <built-in function compile> returned NULL without setting an error
generate-posix-vars failed
Makefile:603: recipe for target 'pybuilddir.txt' failed
make[1]: *** [pybuilddir.txt] Error 1
make[1]: Leaving directory '/home/jeremyr/Python-3.7.3'
Makefile:531: recipe for target 'profile-opt' failed
make: *** [profile-opt] Error 2
jeremyr@88:~/Python-3.7.3$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.11 (jessie)
Release: 8.11
Codename: jessie
jeremyr@88:~/Python-3.7.3$ gcc --version
gcc (Debian 4.9.2-10+deb8u2) 4.9.2
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
jeremyr@88:~/Python-3.7.3$ sudo apt upgrade gcc
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... gcc is already the newest version.
jeremyr@b88:~/Python-3.7.3$ echo $PYTHONPATH
```
Any advice on how to overcome this and install python3.7 appreciated.
Edit - the solution listed below seems to work for various other python versions, so I changed title to python 3.x from 3.7 | 2019/09/22 | [
"https://Stackoverflow.com/questions/58048079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3817456/"
] | It seems the enable-optimizations option was the problem; running
```
jeremyr@b88:~/Python-3.7.3$ ./configure
jeremyr@b88:~/Python-3.7.3$ make clean
```
takes care of it in my case. | For anyone for whom MicGer's answer didn't work and who would like to retain --enable-optimizations, check your gcc version. The error was solved for me on gcc 8.3.0. |
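One way to act on that, sketched under the assumption that a newer GCC (for example a `gcc-8` package) is installed alongside the default compiler — the package name and version are illustrative:
```
gcc --version                                 # check what the build would use
CC=gcc-8 ./configure --enable-optimizations   # point configure at the newer compiler
make clean && make -j"$(nproc)"
```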
58,048,079 | Upon attempting to compile python 3.7 I hit `Could not import runpy module`:
```
jeremyr@b88:$ wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tar.xz
....
jeremyr@b88:~/Python-3.7.3$ ./configure --enable-optimizations
jeremyr@b88:~/Python-3.7.3$ make clean
jeremyr@b88:~/Python-3.7.3$ make -j32
....
gcc -pthread -Xlinker -export-dynamic -o Programs/_testembed Programs/_testembed.o libpython3.7m.a -lcrypt -lpthread -ldl -lutil -lm
./python -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
echo "generate-posix-vars failed" ; \
rm -f ./pybuilddir.txt ; \
exit 1 ; \
fi
Could not import runpy module
Traceback (most recent call last):
File "/home/jeremyr/Python-3.7.3/Lib/runpy.py", line 15, in <module>
import importlib.util
File "/home/jeremyr/Python-3.7.3/Lib/importlib/util.py", line 14, in <module>
from contextlib import contextmanager
File "/home/jeremyr/Python-3.7.3/Lib/contextlib.py", line 4, in <module>
import _collections_abc
SystemError: <built-in function compile> returned NULL without setting an error
generate-posix-vars failed
Makefile:603: recipe for target 'pybuilddir.txt' failed
make[1]: *** [pybuilddir.txt] Error 1
make[1]: Leaving directory '/home/jeremyr/Python-3.7.3'
Makefile:531: recipe for target 'profile-opt' failed
make: *** [profile-opt] Error 2
jeremyr@88:~/Python-3.7.3$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.11 (jessie)
Release: 8.11
Codename: jessie
jeremyr@88:~/Python-3.7.3$ gcc --version
gcc (Debian 4.9.2-10+deb8u2) 4.9.2
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
jeremyr@88:~/Python-3.7.3$ sudo apt upgrade gcc
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... gcc is already the newest version.
jeremyr@b88:~/Python-3.7.3$ echo $PYTHONPATH
```
Any advice on how to overcome this and install python3.7 appreciated.
Edit - the solution listed below seems to work for various other python versions, so I changed title to python 3.x from 3.7 | 2019/09/22 | [
"https://Stackoverflow.com/questions/58048079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3817456/"
] | In case others come across this question: I encountered the same problem on CentOS 7. I also had `--enable-optimizations` but didn't want to remove that flag. Updating my build dependencies and then re-running solved the problem. To do that, I ran:
```
sudo yum groupinstall "Development Tools" -y
```
In case the yum group is not available, you can also install the packages individually using:
```
sudo yum install bison byacc cscope ctags cvs diffstat doxygen flex gcc gcc-c++ gcc-gfortran gettext git indent intltool libtool patch patchutils rcs redhat-rpm-config rpm-build subversion swig systemtap
``` | For anyone for whom MicGer's answer didn't work and who would like to retain --enable-optimizations, check your gcc version. The error was solved for me on gcc 8.3.0. |
50,685,300 | I want to upload a flask server to bluemix. The structure of my project is something like this
* Classes
+ functions.py
* Watson
+ bot.py
* requirements.txt
* runtime.txt
* Procfile
* manifest.yml
my bot.py has this dependency:
```
from classes import functions
```
I have tried to include it in the manifest using things like this:
./classes or ./classes/functions
but I have had no luck, it keeps saying either that module is not found or things like pip.exceptions.InstallationError: Invalid requirement: './classes/functions'
I don't know how to add the dependency.
manifest.yml
```
---
applications:
- name: chatbotstest
random-route: true
memory: 256M
```
Procfile (the file that I use to run the app)
```
web: python watson/bot.py
```
when I print my sys.path I get this:
```
['..', '/home/vcap/app/watson', '/home/vcap/deps/0/python/lib/python36.zip', '/home/vcap/deps/0/py
e/vcap/deps/0/python/lib/python3.6/lib-dynload', '/home/vcap/deps/0/python/lib/python3.6/site-packages', '/home/vcap/deps/0/python/lib/python3.6/site-
-py3.6.egg', '/home/vcap/deps/0/python/lib/python3.6/site-packages/pip-9.0.1-py3.6.egg']
```
I have tried to add the folder parent to my script using
Thanks a lot for your help!!! | 2018/06/04 | [
"https://Stackoverflow.com/questions/50685300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4590839/"
] | You don't need to include it into the manifest file. Your entire app directory and its subdirectories are uploaded as part of the `push` command. Thereafter, it is possible to reference the file as shown.
This imports a file in the current directory:
```
import myfile
```
This should work for your `functions.py`:
```
from classes import functions
``` | Thanks a lot, this finally worked for me; the answer you pointed me to gave me the solution. Thanks a lot again!
```
import os, sys, inspect

# resolve the directory containing this file, then put its parent on sys.path
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
``` |
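For what it's worth, a shorter equivalent sketch that relies on `__file__` (available whenever the module is loaded from a file on disk):
```
import os, sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
```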
14,594,402 | I have 3 files a.py, b.py, c.py
I am trying to dynamically import a class called "C" defined in c.py from within a.py
and have the evaluated name available in b.py
python a.py is currently catching the NameError. I'm trying to avoid this and create an
instance in b.py which calls C.do\_int(10)
a.py
```
import b
#older
#services = __import__('services')
#interface = eval('services.MyRestInterface')
# python2.7
import importlib
module = importlib.import_module('c')
interface = eval('module.C')
# will work
i = interface()
print i.do_int(10)
# interface isn't defined in b.py after call to eval
try:
print b.call_eval('interface')
except NameError:
print "b.call_eval('interface'): interface is not defined in b.py"
```
---
b.py
```
def call_eval(name):
interface = eval(name)
i = interface()
return i.do_int(10)
```
---
c.py
```
class C(object):
my_int = 32
def do_int(self, number):
self.my_int += number
return self.my_int
```
How can I achieve this? | 2013/01/30 | [
"https://Stackoverflow.com/questions/14594402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/566741/"
] | One way would be to use indexOf() to see if /admin is at pos 0.
```
var msg = "/admin this is a message";
var n = msg.indexOf("/admin");
```
If n = 0, then you know /admin was at the start of the message.
If the string does not exist in the message, n would equal -1. | You could use [`Array.slice(beg, end)`](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Array/slice):
```javascript
var message = '/admin this is a message';
if (message.slice(0, 6) === '/admin') {
var adminMessage = message.slice(6).trim();
// Now do something with the "adminMessage".
}
``` |
14,594,402 | I have 3 files a.py, b.py, c.py
I am trying to dynamically import a class called "C" defined in c.py from within a.py
and have the evaluated name available in b.py
python a.py is currently catching the NameError. I'm trying to avoid this and create an
instance in b.py which calls C.do\_int(10)
a.py
```
import b
#older
#services = __import__('services')
#interface = eval('services.MyRestInterface')
# python2.7
import importlib
module = importlib.import_module('c')
interface = eval('module.C')
# will work
i = interface()
print i.do_int(10)
# interface isn't defined in b.py after call to eval
try:
print b.call_eval('interface')
except NameError:
print "b.call_eval('interface'): interface is not defined in b.py"
```
---
b.py
```
def call_eval(name):
interface = eval(name)
i = interface()
return i.do_int(10)
```
---
c.py
```
class C(object):
my_int = 32
def do_int(self, number):
self.my_int += number
return self.my_int
```
How can I achieve this? | 2013/01/30 | [
"https://Stackoverflow.com/questions/14594402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/566741/"
] | One way would be to use indexOf() to see if /admin is at pos 0.
```
var msg = "/admin this is a message";
var n = msg.indexOf("/admin");
```
If n = 0, then you know /admin was at the start of the message.
If the string does not exist in the message, n would equal -1. | To achieve this, you could look for a "special command character" `/` and if found, get the text until next whitespace/end of line, check this against your list of commands and if there is a match, do some special action
```
var msg = "/admin this is a message", command, i;
if (msg.charAt(0) === '/') { // special
i = msg.indexOf(' ', 1);
if (i === -1) i = msg.length; // end of line if no space
command = msg.slice(1, i); // command (this case "admin")
if (command === 'admin') {
msg = msg.slice(i+1); // rest of message
// .. etc
} /* else if (command === foo) {
} */ else {
// warn about unknown command
}
} else {
// treat as normal message
}
``` |
14,594,402 | I have 3 files a.py, b.py, c.py
I am trying to dynamically import a class called "C" defined in c.py from within a.py
and have the evaluated name available in b.py
python a.py is currently catching the NameError. I'm trying to avoid this and create an
instance in b.py which calls C.do\_int(10)
a.py
```
import b
#older
#services = __import__('services')
#interface = eval('services.MyRestInterface')
# python2.7
import importlib
module = importlib.import_module('c')
interface = eval('module.C')
# will work
i = interface()
print i.do_int(10)
# interface isn't defined in b.py after call to eval
try:
print b.call_eval('interface')
except NameError:
print "b.call_eval('interface'): interface is not defined in b.py"
```
---
b.py
```
def call_eval(name):
interface = eval(name)
i = interface()
return i.do_int(10)
```
---
c.py
```
class C(object):
my_int = 32
def do_int(self, number):
self.my_int += number
return self.my_int
```
How can I achieve this? | 2013/01/30 | [
"https://Stackoverflow.com/questions/14594402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/566741/"
] | One way would be to use indexOf() to see if /admin is at pos 0.
```
var msg = "/admin this is a message";
var n = msg.indexOf("/admin");
```
If n = 0, then you know /admin was at the start of the message.
If the string does not exist in the message, n would equal -1. | Or,
```
string.match(/^\/admin/)
```
According to <http://jsperf.com/matching-initial-substring>, this is up to two times faster than either `indexOf` or `slice` in the case that there is no match, but slower when there is a match. So if you expect to mainly have non-matches, this is faster, it would appear. |
14,594,402 | I have 3 files a.py, b.py, c.py
I am trying to dynamically import a class called "C" defined in c.py from within a.py
and have the evaluated name available in b.py
python a.py is currently catching the NameError. I'm trying to avoid this and create an
instance in b.py which calls C.do\_int(10)
a.py
```
import b
#older
#services = __import__('services')
#interface = eval('services.MyRestInterface')
# python2.7
import importlib
module = importlib.import_module('c')
interface = eval('module.C')
# will work
i = interface()
print i.do_int(10)
# interface isn't defined in b.py after call to eval
try:
print b.call_eval('interface')
except NameError:
print "b.call_eval('interface'): interface is not defined in b.py"
```
---
b.py
```
def call_eval(name):
interface = eval(name)
i = interface()
return i.do_int(10)
```
---
c.py
```
class C(object):
my_int = 32
def do_int(self, number):
self.my_int += number
return self.my_int
```
How can I achieve this? | 2013/01/30 | [
"https://Stackoverflow.com/questions/14594402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/566741/"
] | Or,
```
string.match(/^\/admin/)
```
According to <http://jsperf.com/matching-initial-substring>, this is up to two times faster than either `indexOf` or `slice` in the case that there is no match, but slower when there is a match. So if you expect to mainly have non-matches, this is faster, it would appear. | You could use [`Array.slice(beg, end)`](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Array/slice):
```javascript
var message = '/admin this is a message';
if (message.slice(0, 6) === '/admin') {
var adminMessage = message.slice(6).trim();
// Now do something with the "adminMessage".
}
``` |
14,594,402 | I have 3 files a.py, b.py, c.py
I am trying to dynamically import a class called "C" defined in c.py from within a.py
and have the evaluated name available in b.py
python a.py is currently catching the NameError. I'm trying to avoid this and create an
instance in b.py which calls C.do\_int(10)
a.py
```
import b
#older
#services = __import__('services')
#interface = eval('services.MyRestInterface')
# python2.7
import importlib
module = importlib.import_module('c')
interface = eval('module.C')
# will work
i = interface()
print i.do_int(10)
# interface isn't defined in b.py after call to eval
try:
print b.call_eval('interface')
except NameError:
print "b.call_eval('interface'): interface is not defined in b.py"
```
---
b.py
```
def call_eval(name):
interface = eval(name)
i = interface()
return i.do_int(10)
```
---
c.py
```
class C(object):
my_int = 32
def do_int(self, number):
self.my_int += number
return self.my_int
```
How can I achieve this? | 2013/01/30 | [
"https://Stackoverflow.com/questions/14594402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/566741/"
] | Or,
```
string.match(/^\/admin/)
```
According to <http://jsperf.com/matching-initial-substring>, this is up to two times faster than either `indexOf` or `slice` in the case that there is no match, but slower when there is a match. So if you expect to mainly have non-matches, this is faster, it would appear. | To achieve this, you could look for a "special command character" `/` and if found, get the text until next whitespace/end of line, check this against your list of commands and if there is a match, do some special action
```
var msg = "/admin this is a message", command, i;
if (msg.charAt(0) === '/') { // special
i = msg.indexOf(' ', 1);
if (i === -1) i = msg.length; // end of line if no space
command = msg.slice(1, i); // command (this case "admin")
if (command === 'admin') {
msg = msg.slice(i+1); // rest of message
// .. etc
} /* else if (command === foo) {
} */ else {
// warn about unknown command
}
} else {
// treat as normal message
}
``` |
25,395,915 | I'm after a threadsafe queue that can be pickled or serialized to disk. Are there any data structures in Python that do this? The standard Python Queue cannot be pickled. | 2014/08/20 | [
"https://Stackoverflow.com/questions/25395915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3716723/"
] | This can be done using the [`copy_reg`](https://docs.python.org/2/library/copy_reg.html) module, but it's not the most elegant thing in the world:
```
import copy_reg
import threading
import pickle
from Queue import Queue as _Queue
# Make Queue a new-style class, so it can be used with copy_reg
class Queue(_Queue, object):
pass
def pickle_queue(q):
# Shallow copy of __dict__ (the underlying deque isn't actually copied, so this is fast)
q_dct = q.__dict__.copy()
# Remove all non-picklable synchronization primitives
del q_dct['mutex']
del q_dct['not_empty']
del q_dct['not_full']
del q_dct['all_tasks_done']
return Queue, (), q_dct
def unpickle_queue(state):
# Recreate our queue.
q = state[0]()
q.mutex = threading.Lock()
q.not_empty = threading.Condition(q.mutex)
q.not_full = threading.Condition(q.mutex)
q.all_tasks_done = threading.Condition(q.mutex)
q.__dict__ = state[2]
return q
copy_reg.pickle(Queue, pickle_queue, unpickle_queue)
q = Queue()
q.put("hey")
d = pickle.dumps(q)
new_q = pickle.loads(d)
print new_q.get()
# Outputs 'hey'
```
`copy_reg` allows you to register helper functions for pickling and unpickling arbitrary objects. So, we register a new-style version of the `Queue` class, and use the helper functions to remove all the unpickleable `Lock`/`Condition` instance variables prior to pickling, and add them back after unpickling. | There are modules like `dill` and `cloudpickle` that already know how to serialize a `Queue`.
They already have done the `copy_reg` for you.
```
>>> from Queue import Queue
>>> q = Queue()
>>> q.put('hey')
>>> import dill as pickle
>>> d = pickle.dumps(q)
>>> _q = pickle.loads(d)
>>> print _q.get()
hey
>>>
```
It's that easy! Just `import dill as pickle` and problem solved.
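One caveat worth noting: on Python 3 the module is `queue` rather than `Queue`; a minimal sketch of the same round-trip there (assuming your installed `dill` version handles it the same way):

```python
import queue
import dill as pickle

q = queue.Queue()
q.put('hey')
restored = pickle.loads(pickle.dumps(q))
print(restored.get())  # -> hey
```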
Get `dill` here: <https://github.com/uqfoundation> |
45,447,325 | I am using service workers to create an offline page for my website.
At the moment I am saving `offline.html` into cache so that the browser can show this file if there is no internet connection.
In the `fetch` event of my service worker I attempt to load `index.html`, and if this fails (no internet connection) I load `offline.html` from cache.
However, whenever I check offline mode in developer tools and refresh the page `index.html` still shows...
The request isn't failing, and it looks like `index.html` is being cached even though I didn't specify it to be.
Here is my HTML for `index.html`:
```
<!DOCTYPE html>
<html>
<head>
<title>Service Workers - Test</title>
</head>
<body>
<h1> Online page! </h1>
<h3> You are connected to the internet. </h3>
</body>
<script>
if ('serviceWorker' in navigator)
{
navigator.serviceWorker.register('service-worker.js');
}
</script>
</html>
```
Here is my HTML for `offline.html`:
```
<!DOCTYPE html>
<html>
<head>
<title>You are Offline - Service Workers - Test</title>
</head>
<body>
<h1> Welcome to the Offline Page!</h1>
<h2> You are not connected to the internet but you can still do certain things offline. </h2>
</body>
</html>
```
Here is my javascript for `service-worker.js`:
```
const PRECACHE = "version1"
const CACHED = ["offline.html"];
// Caches "offline.html" incase there is no internet
self.addEventListener('install', event => {
console.log("[Service Worker] Installed");
caches.delete(PRECACHE)
event.waitUntil (
caches.open(PRECACHE)
.then(cache => cache.addAll(CACHED))
.then( _ => self.skipWaiting())
);
});
// Clears any caches that do not match this version
self.addEventListener("activate", event => {
event.waitUntil (
caches.keys()
.then(keys => {
return Promise.all (
keys.filter(key => {
return !key.startsWith(PRECACHE);
})
.map(key => {
return caches.delete(key);
})
);
})
.then(() => {
console.log('[Service Worker] Cleared Old Cache');
})
);
});
this.addEventListener('fetch', function(event) {
if (event.request.method !== 'GET') return;
console.log("[Service Worker] Handling Request ");
// If the request to `index.html` works it shows it, but if it fails it shows the cached version of `offline.html`
// This isn't working because `fetch` doesn't fail when there is no internet for some reason...
event.respondWith (
fetch(event.request)
.then(response => {
console.log("[Service Worker] Served from NETWORK");
return response;
}, () => {
console.log("[Service Worker] Served from CACHE");
return catches.match(event.request.url + OFFLINE_URL);
})
);
});
```
I am running a server using python's simple http server like so:
```
python -m SimpleHTTPServer
```
Does anyone know why the offline page isn't working and how I can fix this?
Thanks for the help,
David
**EDIT:**
These images are showing that `index.html` (localhost) is still loading without internet which means it must be cached.
[](https://i.stack.imgur.com/kqUbY.png)
[](https://i.stack.imgur.com/lOLci.png)
**Edit 2:**
I've tried to add `no-cache` to the fetch of `index.html` and it still is fetching `index.html` when I have offline checked.
```
fetch(event.request, {cache: "no-cache"}) ...
``` | 2017/08/01 | [
"https://Stackoverflow.com/questions/45447325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1541397/"
] | I think we have all forgotten how the network request works from a browser's point of view.
The issue here is, `index.html` is served from the disk cache when the service worker intercepts requests.
**browser** ===> **Service Worker** ===>**fetch event**
>
> inside the fetch event, we have ,
>
>
> * Check If there is network connectivity
> + If there is, fetch from network and respond
> + Else, fetch from cache and respond
>
>
>
Now, how does
>
> If there is network connectivity, fetch from network work?
>
>
>
**Service Worker OnFetch** ===> **Check in Disk Cache** ===>**Nothing? Fetch Online**
The page being fetched here, is `index.html`
and the `cache-control` headers for `index.html` ,
**Do Not** Specify a `no-cache`
Hence the whole issue of the offline page not showing up.
### Solution
* Set a `cache-control` header with limiting values for `index.html` - on the server side (a sketch for the question's SimpleHTTPServer setup follows this list)
* Or, add headers in the fetch request to the effect
+ `pragma:no-cache`
+ `cache-control:no-cache`
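Since the question serves the site with `python -m SimpleHTTPServer`, here is a minimal sketch of that server-side option (Python 2 module names to match that command; the port and header values are assumptions):

```python
import SimpleHTTPServer
import SocketServer

class NoCacheHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def end_headers(self):
        # ask the browser not to reuse a stale disk-cached copy of index.html
        self.send_header('Cache-Control', 'no-cache, no-store, must-revalidate')
        SimpleHTTPServer.SimpleHTTPRequestHandler.end_headers(self)

SocketServer.TCPServer(('', 8000), NoCacheHandler).serve_forever()
```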
### How Do I add these headers to fetch?
Apparently, fetch and the browser have their own reservations about the request body when it comes to a GET
Also, weirdness and utter chaos happens If you reuse the `event.request` object, for a fetch request, and add custom headers.
The chaos is a list of `Uncaught Exceptions` due to the `fetch` event's `request.mode` attribute , which bars you from adding custom headers to a fetch when under a no-cors or a navigate mode.
Our goal is to :
**Identify** that the browser is truly **offline** and then serve a page that says so
Here's How:
>
> Check If you can fetch a dummy html page say `test-connectivity.html` under your origin, with a custom `cache: no-cache` header. If you can, proceed, else throw the offline page
>
>
>
```
self.addEventListener( 'fetch', ( event ) => {
let headers = new Headers();
headers.append( 'cache-control', 'no-cache' );
headers.append( 'pragma', 'no-cache' );
var req = new Request( 'test-connectivity.html', {
method: 'GET',
mode: 'same-origin',
headers: headers,
redirect: 'manual' // let browser handle redirects
} );
event.respondWith( fetch( req, {
cache: 'no-store'
} )
.then( function ( response ) {
return fetch( event.request )
} )
.catch( function ( err ) {
return new Response( '<div><h2>Uh oh that did not work</h2></div>', {
headers: {
'Content-type': 'text/html'
}
} )
} ) )
} );
```
The `{cache:'no-store'}` object as the second parameter to `fetch` is an unfortunate **NO-OP**. It just doesn't work.
Just keep it for the sake of a future scenario. It is **really** optional as of today.
If that worked, then you do not need to build a whole new `Request` object for `fetch`
cheers!
>
> The code piece that creates a new request is generously borrowed from
> @pirxpilot 's answer [here](https://stackoverflow.com/questions/35420980/how-to-alter-the-headers-of-a-request)
>
>
>
### The offline worker for this specific question on pastebin
<https://pastebin.com/sNCutAw7> | David, you have two errors in one line.
Your line
```
return catches.match(event.request.url + OFFLINE_URL);
```
should be
```
return caches.match('offline.html');
```
It's `catches`, you haven't defined `OFFLINE_URL`, and you don't need the event request url. |
45,447,325 | I am using service workers to create an offline page for my website.
At the moment I am saving `offline.html` into cache so that the browser can show this file if there is no internet connection.
In the `fetch` event of my service worker I attempt to load `index.html`, and if this fails (no internet connection) I load `offline.html` from cache.
However, whenever I check offline mode in developer tools and refresh the page `index.html` still shows...
The request isn't failing, and it looks like `index.html` is being cached even though I didn't specify it to be.
Here is my HTML for `index.html`:
```
<!DOCTYPE html>
<html>
<head>
<title>Service Workers - Test</title>
</head>
<body>
<h1> Online page! </h1>
<h3> You are connected to the internet. </h3>
</body>
<script>
if ('serviceWorker' in navigator)
{
navigator.serviceWorker.register('service-worker.js');
}
</script>
</html>
```
Here is my HTML for `offline.html`:
```
<!DOCTYPE html>
<html>
<head>
<title>You are Offline - Service Workers - Test</title>
</head>
<body>
<h1> Welcome to the Offline Page!</h1>
<h2> You are not connected to the internet but you can still do certain things offline. </h2>
</body>
</html>
```
Here is my javascript for `service-worker.js`:
```
const PRECACHE = "version1"
const CACHED = ["offline.html"];
// Caches "offline.html" incase there is no internet
self.addEventListener('install', event => {
console.log("[Service Worker] Installed");
caches.delete(PRECACHE)
event.waitUntil (
caches.open(PRECACHE)
.then(cache => cache.addAll(CACHED))
.then( _ => self.skipWaiting())
);
});
// Clears any caches that do not match this version
self.addEventListener("activate", event => {
event.waitUntil (
caches.keys()
.then(keys => {
return Promise.all (
keys.filter(key => {
return !key.startsWith(PRECACHE);
})
.map(key => {
return caches.delete(key);
})
);
})
.then(() => {
console.log('[Service Worker] Cleared Old Cache');
})
);
});
this.addEventListener('fetch', function(event) {
if (event.request.method !== 'GET') return;
console.log("[Service Worker] Handling Request ");
// If the request to `index.html` works it shows it, but if it fails it shows the cached version of `offline.html`
// This isn't working because `fetch` doesn't fail when there is no internet for some reason...
event.respondWith (
fetch(event.request)
.then(response => {
console.log("[Service Worker] Served from NETWORK");
return response;
}, () => {
console.log("[Service Worker] Served from CACHE");
return catches.match(event.request.url + OFFLINE_URL);
})
);
});
```
I am running a server using python's simple http server like so:
```
python -m SimpleHTTPServer
```
Does anyone know why the offline page isn't working and how I can fix this?
Thanks for the help,
David
**EDIT:**
These images are showing that `index.html` (localhost) is still loading without internet which means it must be cached.
[](https://i.stack.imgur.com/kqUbY.png)
[](https://i.stack.imgur.com/lOLci.png)
**Edit 2:**
I've tried to add `no-cache` to the fetch of `index.html` and it still is fetching `index.html` when I have offline checked.
```
fetch(event.request, {cache: "no-cache"}) ...
``` | 2017/08/01 | [
"https://Stackoverflow.com/questions/45447325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1541397/"
] | I think we have all forgotten how the network request works from a browser's point of view.
The issue here is, `index.html` is served from the disk cache when the service worker intercepts requests.
**browser** ===> **Service Worker** ===>**fetch event**
>
> inside the fetch event, we have ,
>
>
> * Check If there is network connectivity
> + If there is, fetch from network and respond
> + Else, fetch from cache and respond
>
>
>
Now, how does
>
> If there is network connectivity, fetch from network work?
>
>
>
**Service Worker OnFetch** ===> **Check in Disk Cache** ===>**Nothing? Fetch Online**
The page being fetched here, is `index.html`
and the `cache-control` headers for `index.html` ,
**Do Not** Specify a `no-cache`
Hence the whole issue of the offline page not showing up.
### Solution
* Set a `cache-control` header with limiting values for `index.html` - On the server side
* Or, add headers in the fetch request to the effect
+ `pragma:no-cache`
+ `cache-control:no-cache`
### How Do I add these headers to fetch?
Apparently, fetch and the browser have their own reservations about the request body when it comes to a GET
Also, weirdness and utter chaos happens If you reuse the `event.request` object, for a fetch request, and add custom headers.
The chaos is a list of `Uncaught Exceptions` due to the `fetch` event's `request.mode` attribute , which bars you from adding custom headers to a fetch when under a no-cors or a navigate mode.
Our goal is to :
**Identify** that the browser is truly **offline** and then serve a page that says so
Here's How:
>
> Check If you can fetch a dummy html page say `test-connectivity.html` under your origin, with a custom `cache: no-cache` header. If you can, proceed, else throw the offline page
>
>
>
```
self.addEventListener( 'fetch', ( event ) => {
let headers = new Headers();
headers.append( 'cache-control', 'no-cache' );
headers.append( 'pragma', 'no-cache' );
var req = new Request( 'test-connectivity.html', {
method: 'GET',
mode: 'same-origin',
headers: headers,
redirect: 'manual' // let browser handle redirects
} );
event.respondWith( fetch( req, {
cache: 'no-store'
} )
.then( function ( response ) {
return fetch( event.request )
} )
.catch( function ( err ) {
return new Response( '<div><h2>Uh oh that did not work</h2></div>', {
headers: {
'Content-type': 'text/html'
}
} )
} ) )
} );
```
The `{cache:'no-store'}` object as the second parameter to `fetch` is an unfortunate **NO-OP**. It just doesn't work.
Just keep it for the sake of a future scenario. It is **really** optional as of today.
If that worked, then you do not need to build a whole new `Request` object for `fetch`
cheers!
>
> The code piece that creates a new request is generously borrowed from
> @pirxpilot 's answer [here](https://stackoverflow.com/questions/35420980/how-to-alter-the-headers-of-a-request)
>
>
>
### The offline worker for this specific question on pastebin
<https://pastebin.com/sNCutAw7> | I tried your code and I got the same result as you in the dev tools network tab. The network tab says it loaded the index.html from service-worker, but actually the service-worker returns the cached Offline Page as expected!
[](https://i.stack.imgur.com/m0OLt.png) |
73,558,009 | I am attempting to run celery on its own container from my Flask app. Right now I am just setting up a simple email app. The container CMD is
>
> "["celery", "worker", "--loglevel=info"]"
>
>
>
The message gets sent to the redis broker and celery picks it up, but celery gives me the error.
>
> "Received unregistered task of type
> 'flask\_project.views.send\_async\_email'. The message has been ignored
> and discarded."
>
>
>
I am setting the include in the celery config on my flask app. I have restarted and rebuilt my containers and still the same issue.
```
from flask import Blueprint, current_app
from flask_mail import Mail
from os import getenv
from celery import Celery
from .support_func import decorator_require_api
views = Blueprint('views', __name__)
celery = Celery(views.name,
broker='redis://redis:6379/0',
include=["views.tasks"])
@celery.task
def send_async_email(email_data):
mail = Mail()
mail.send(email_data)
@views.route('/')
def home():
with current_app.app_context():
email_data = {'sender': getenv('MAIL_USERNAME'), 'recipients': ['mrjoli021@gmail.com'],
'message': "This is a test email"}
send_async_email.delay(email_data)
return "Message sent!"
```
Compose:
```
---
version: "3.9"
services:
flask:
build:
context: ./Docker/flask
container_name: flask
volumes:
- ./app/:/app
restart: unless-stopped
stdin_open: true
#entrypoint: /bin/bash
networks:
- api
nginx:
image: nginx:latest
container_name: nginx
depends_on:
- flask
#entrypoint: /bin/bash
volumes:
- ./nginx_config:/etc/nginx/conf.d
- ./app/:/app
ports:
- "5000:443"
networks:
- api
celery:
build:
context: ./Docker/celery
container_name: celery
depends_on:
- redis
restart: unless-stopped
stdin_open: true
networks:
- api
redis:
image: redis:latest
container_name: redis
depends_on:
- flask
#entrypoint: /bin/bash
networks:
- api
networks:
api:
driver: bridge
-----------------
DockerFile:
FROM python:3.9.7-slim-buster
WORKDIR /app
RUN apt-get update && apt-get install -y \
build-essential # python-dev libssl-dev openssl
COPY ./ .
RUN pip3 install -r requirements.txt
ENV CELERY_BROKER_URL=redis://redis:6379/0
CMD ["celery", "worker", "--loglevel=info"]
``` | 2022/08/31 | [
"https://Stackoverflow.com/questions/73558009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2236794/"
] | You need to pass the celery app to the worker with `--app` or `-A` flag (see my answer/example [here](https://stackoverflow.com/a/45908901/1011253)).
I would recommend to refactor a bit and extract this snippet:
```
celery = Celery(views.name,
broker='redis://redis:6379/0',
include=["views.tasks"])
```
to an external file, such as `celery_app.py`, and then import it in your Flask app and use it for the worker:
```
["celery", "--app", "your_module.celery_app:celery", "worker", "--loglevel=info"]
```
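For illustration, a minimal sketch of the extracted module, built from the snippet in the question (the file name `celery_app.py` and the `flask_project` package name are assumptions):

```python
# celery_app.py (assumed name/location)
from celery import Celery

celery = Celery(
    'flask_project',
    broker='redis://redis:6379/0',
    include=['flask_project.views'],  # module that defines the @celery.task functions
)
```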
You should see the registered tasks within the worker's startup logs (when you see the big C (Celery) logo). | I finally figured it out. I used <https://blog.miguelgrinberg.com/post/celery-and-the-flask-application-factory-pattern>
as a reference. Now I can register new blueprints without touching the celery config. It is a work in progress, but now the containers are all up and running.
```
.
├── Docker
│ ├── celery
│ │ ├── Dockerfile
│ │ └── requirements.txt
│ └── flask
│ ├── Dockerfile
│ └── requirements.txt
├── app
│ ├── flask_project
│ │ ├── __init__.py
│ │ ├── celery_app.py
│ │ └── views.py
├── docker-compose.yml
Compose:
--------------------------------------------------------------------------------
---
version: "3.9"
services:
flask:
build:
context: ./Docker/flask
container_name: flask
volumes:
- ./app/:/app
restart: unless-stopped
stdin_open: true
networks:
- api
nginx:
image: nginx:latest
container_name: nginx
depends_on:
- flask
#entrypoint: /bin/bash
volumes:
- ./nginx_config:/etc/nginx/conf.d
- ./app/:/app
ports:
- "5000:443"
networks:
- api
celery:
build:
context: ./Docker/celery
container_name: celery
depends_on:
- redis
volumes:
- ./app/:/app
restart: unless-stopped
stdin_open: true
networks:
- api
redis:
image: redis:latest
container_name: redis
depends_on:
- flask
#entrypoint: /bin/bash
networks:
- api
networks:
api:
driver: bridge
celery_app.py:
--------------------------------------------------------------------------------
from . import celery, create_app
app = create_app()
app.app_context().push()
__init__.py:
--------------------------------------------------------------------------------
from os import getenv
from flask import Flask
from celery import Celery

celery = Celery(__name__, broker=getenv('CELERY_BROKER_URL'))
def create_app():
app = Flask(__name__)
# Celery stuff
celery.conf.update(app.config)
# Register Blueprints
from .views import views
app.register_blueprint(views, url_prefix='/')
return app
views.py:
--------------------------------------------------------------------------------
from flask import Blueprint, current_app
from flask_mail import Message, Mail
from os import getenv
from . import celery
views = Blueprint('views', __name__)
@celery.task
def send_async_email(email_data):
msg = Message(email_data['subject'],
sender=email_data['sender'],
recipients=email_data['recipients'],
)
msg.body = email_data['message']
mail = Mail()
mail.send(msg)
@views.route('/')
def home():
with current_app.app_context():
email_data = {'sender': getenv('MAIL_USERNAME'),
'recipients': ['some_email@gmail.com'],
'subject': 'testing123',
'message': "testing123"
}
msg = Message(email_data['subject'],
sender=email_data['sender'],
recipients=email_data['recipients'],
)
msg.body = email_data['message']
send_async_email.delay(email_data)
return "Message sent!"
``` |
69,776,068 | I created a list of files in a directory using os.listdir(), and I'm trying to move percentages of the files (which are images) to different folders. So, I'm trying to move 70%, 15%, and 15% of the files to three different target folders.
Here is a slice of the file list:
```
print(cnv_list[0:5])
['CNV-9890872-5.jpeg', 'CNV-9911627-97.jpeg', 'CNV-9935363-11.jpeg', 'CNV-9911627-15.jpeg', 'CNV-9935363-118.jpeg']
```
So, I'm trying to send 70% of these files to one folder, 15% of them to another folder, and 15% to a third folder.
I saw this code below in another answer here which addresses how to move files, but not my specific question around percentages of those files:
[Moving all files from one directory to another using Python](https://stackoverflow.com/questions/41826868/moving-all-files-from-one-directory-to-another-using-python)
```
import shutil
import os
source_dir = '/path/to/source_folder'
target_dir = '/path/to/dest_folder'
file_names = os.listdir(source_dir)
for file_name in file_names:
shutil.move(os.path.join(source_dir, file_name), target_dir)
``` | 2021/10/30 | [
"https://Stackoverflow.com/questions/69776068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7248794/"
] | If you can partition a list 70/30, and partition a list 50/50, then you can get 70/15/15 just by partitioning twice (once 70/30, once 50/50).
```
def partition_pct(lst, point):
idx = int(len(lst) * point)
return lst[:idx], lst[idx:]
l = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
l_70, l_30 = partition_pct(l, 0.7)
l_15_1, l_15_2 = partition_pct(l_30, 0.5)
```
Assign `l` from `os.listdir()`, and you get filenames instead of numbers. Thus, given your preexisting `cnv_list` of filenames:
```
cnv_list_70, cnv_list_30 = partition_pct(cnv_list, .7)
cnv_list_15_1, cnv_list_15_2 = partition_pct(cnv_list_30, .5)
import shutil

# assumes the script runs inside the source directory; otherwise
# join each filename with the source path before moving
for (file_list, dirname) in ((cnv_list_70, 'dst_70'),
                             (cnv_list_15_1, 'dst_15_1'),
                             (cnv_list_15_2, 'dst_15_2')):
    for f in file_list:
        shutil.move(f, dirname)
```
...will move 70% of your files to the directory `dst_70`, 15% to `dst_15_1`, and another 15% to `dst_15_2`. | I don't know if there's a better way, but here's what I have:
```
def split(lst, weights):
sizes = []
fractions = []
for i in weights:
sizes.append(round(i * len(lst)))
fractions.append((i * len(lst)) % 1)
if sum(sizes) < len(lst):
i = max(range(len(fractions)), key=fractions.__getitem__)
sizes[i] += 1
elif sum(sizes) > len(lst):
i = min(range(len(fractions)), key=fractions.__getitem__)
sizes[i] -= 1
it = iter(lst)
return [[next(it) for _ in range(size)] for size in sizes]
```
It takes two lists as arguments: the list to split and a list of weights. It handles any configuration of weights or list length, e.g.:
```
print(split(range(19), [.1,.5,.4]))
```
Outputs:
```
[[0, 1], [2, 3, 4, 5, 6, 7, 8, 9, 10], [11, 12, 13, 14, 15, 16, 17, 18]]
```
Note that weights are floats and sum up to 1 |
54,758,444 | We have 32 vCPUs and 28 GB of RAM with the `LocalExecutor`, but Airflow is still utilizing all the resources, and this over-utilization ultimately breaks the system.
Below is the output for ps -aux ordered by memory usage.
```
PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1336 3.5 0.9 1600620 271644 ? Ss Feb18 23:41 /usr/bin/python /usr/local/bin/airflow webs
9434 32.3 0.9 1835796 267844 ? Sl 03:09 0:31 [ready] gunicorn: worker [airflow-webserver
10043 9.1 0.9 1835796 267844 ? Sl 03:05 0:33 [ready] gunicorn: worker [airflow-webserver
25397 17.4 0.9 1835796 267844 ? Sl 03:08 0:30 [ready] gunicorn: worker [airflow-webserver
30680 13.0 0.9 1835796 267844 ? Sl 03:06 0:36 [ready] gunicorn: worker [airflow-webserver
28434 60.5 0.5 1720548 152380 ? Rl 03:10 0:12 gunicorn: worker [airflow-webserver]
20202 2.2 0.3 1671280 111316 ? Sl 03:07 0:04 /usr/bin/python /usr/local/bin/airflow run
14353 1.9 0.3 1671484 111208 ? Sl 03:07 0:04 /usr/bin/python /usr/local/bin/airflow run
14497 1.8 0.3 1671480 111192 ? Sl 03:07 0:03 /usr/bin/python /usr/local/bin/airflow run
25170 2.0 0.3 1671024 110964 ? Sl 03:08 0:03 /usr/bin/python /usr/local/bin/airflow run
21887 1.8 0.3 1670692 110672 ? Sl 03:07 0:03 /usr/bin/python /usr/local/bin/airflow run
5211 4.7 0.3 1670488 110456 ? Sl 03:09 0:05 /usr/bin/python /usr/local/bin/airflow run
8819 4.9 0.3 1670140 110264 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
6034 3.9 0.3 1670324 110080 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
8817 4.6 0.3 1670136 110044 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
8829 4.0 0.3 1670076 110012 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
14349 1.6 0.3 1670360 109988 ? Sl 03:07 0:03 /usr/bin/python /usr/local/bin/airflow run
8815 3.5 0.3 1670140 109984 ? Sl 03:09 0:03 /usr/bin/python /usr/local/bin/airflow run
8917 4.2 0.3 1669980 109980 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
```
From the `RSS` field we can see that the RAM being utilized for web-server is more than 10 GB and per task an average of 1 GB is being used.
The tasks are just for monitoring an endpoint of a REST API.
Below is the Airflow Configuration file
```
[core]
# The home folder for airflow, default is ~/airflow
airflow_home = /airflow
# The folder where your airflow pipelines live, most likely a
# subfolder in a code repository
# This path must be absolute
dags_folder = /airflow/dags
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /airflow/logs/
# Airflow can store logs remotely in AWS S3 or Google Cloud Storage. Users
# must supply an Airflow connection id that provides access to the storage
# location.
remote_logging = True
remote_log_conn_id = datalake_gcp_connection
encrypt_s3_logs = False
# Logging level
logging_level = INFO
# Logging class
# Specify the class that will specify the logging configuration
# This class has to be on the python classpath
logging_config_class = log_config.LOGGING_CONFIG
# Log format
log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s
# The executor class that airflow should use. Choices include
# SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor
executor = LocalExecutor
# The SqlAlchemy connection string to the metadata database.
# SqlAlchemy supports many different database engine, more information
# their website
sql_alchemy_conn = mysql://user:pass@127.0.0.1/airflow_db
# The SqlAlchemy pool size is the maximum number of database connections
# in the pool.
sql_alchemy_pool_size = 400
# The SqlAlchemy pool recycle is the number of seconds a connection
# can be idle in the pool before it is invalidated. This config does
# not apply to sqlite.
sql_alchemy_pool_recycle = 3000
# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 64
# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 32
# Are DAGs paused by default at creation
dags_are_paused_at_creation = True
# When not using pools, tasks are run in the "default pool",
# whose size is guided by this config element
non_pooled_task_slot_count = 400
# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 16
# Whether to load the examples that ship with Airflow. It's good to
# get started, but you probably want to set this to False in a production
# environment
load_examples = False
# Where your Airflow plugins are stored
plugins_folder = /airflow/plugins
# Secret key to save connection passwords in the db
fernet_key = <FERNET KEY>
# Whether to disable pickling dags
donot_pickle = False
# How long before timing out a python file import while filling the DagBag
dagbag_import_timeout = 120
# The class to use for running task instances in a subprocess
task_runner = BashTaskRunner
# If set, tasks without a `run_as_user` argument will be run with this user
# Can be used to de-elevate a sudo user running Airflow when executing tasks
default_impersonation =
# What security module to use (for example kerberos):
security =
# Turn unit test mode on (overwrites many configuration options with test
# values at runtime)
unit_test_mode = False
# Name of handler to read task instance logs.
# Default to use file task handler.
task_log_reader = gcs.task
# Whether to enable pickling for xcom (note that this is insecure and allows for
# RCE exploits). This will be deprecated in Airflow 2.0 (be forced to False).
enable_xcom_pickling = True
# When a task is killed forcefully, this is the amount of time in seconds that
# it has to cleanup after it is sent a SIGTERM, before it is SIGKILLED
killed_task_cleanup_time = 60
[cli]
# In what way should the cli access the API. The LocalClient will use the
# database directly, while the json_client will use the api running on the
# webserver
api_client = airflow.api.client.json_client
endpoint_url = http://0.0.0.0:8080
[api]
# How to authenticate users of the API
auth_backend = airflow.api.auth.backend.default
[operators]
# The default owner assigned to each new operator, unless
# provided explicitly or passed via `default_args`
default_owner = Airflow
default_cpus = 1
default_ram = 125
default_disk = 125
default_gpus = 0
[webserver]
# The base url of your website as airflow cannot guess what domain or
# cname you are using. This is used in automated emails that
# airflow sends to point links to the right web server
base_url = http://localhost:8080
authenticate = False
auth_backend = airflow.contrib.auth.backends.password_auth
# The ip specified when starting the web server
web_server_host = 0.0.0.0
# The port on which to run the web server
web_server_port = 8080
# Paths to the SSL certificate and key for the web server. When both are
# provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
web_server_ssl_key =
# Number of seconds the gunicorn webserver waits before timing out on a worker
web_server_worker_timeout = 120
# Number of workers to refresh at a time. When set to 0, worker refresh is
# disabled. When nonzero, airflow periodically refreshes webserver workers by
# bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
# Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
# Secret key used to run your flask app
secret_key = temporary_key
# Number of workers to run the Gunicorn web server
workers = 4
# The worker class gunicorn should use. Choices include
# sync (default), eventlet, gevent
worker_class = sync
# Log files for the gunicorn webserver. '-' means log to stderr.
access_logfile = -
error_logfile = -
# Expose the configuration file in the web server
expose_config = False
# Set to true to turn on authentication:
# http://pythonhosted.org/airflow/security.html#web-authentication
#authenticate = False
# Filter the list of dags by owner name (requires authentication to be enabled)
filter_by_owner = False
# Filtering mode. Choices include user (default) and ldapgroup.
# Ldap group filtering requires using the ldap backend
#
# Note that the ldap server needs the "memberOf" overlay to be set up
# in order to user the ldapgroup mode.
owner_mode = user
# Default DAG view. Valid values are:
# tree, graph, duration, gantt, landing_times
dag_default_view = graph
# Default DAG orientation. Valid values are:
# LR (Left->Right), TB (Top->Bottom), RL (Right->Left), BT (Bottom->Top)
dag_orientation = LR
# Puts the webserver in demonstration mode; blurs the names of Operators for
# privacy.
demo_mode = False
# The amount of time (in secs) webserver will wait for initial handshake
# while fetching logs from other worker machine
log_fetch_timeout_sec = 5
# By default, the webserver shows paused DAGs. Flip this to hide paused
# DAGs by default
hide_paused_dags_by_default = True
# Consistent page size across all listing views in the UI
page_size = 40
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = smtp.gmail.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
#smtp_user = airflow
#smtp_password = airflow
smtp_port = 25
smtp_mail_from = airflow@example.com
[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above
# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor
# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
celeryd_concurrency = 16
# When you start an airflow worker, airflow starts a tiny web server
# subprocess to serve the workers local log files to the airflow main
# web server, who then builds pages and sends them to users. This defines
# the port on which the logs are served. It needs to be unused, and open
# visible from the main web server to connect into the workers.
worker_log_server_port = 8793
# The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
# a sqlalchemy database. Refer to the Celery documentation for more
# information.
broker_url = sqla+mysql://user:pass@127.0.0.1/airflow_db
# Another key Celery setting
celery_result_backend = db+mysql://user:pass@127.0.0.1/airflow_db
# Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
# it `airflow flower`. This defines the IP that Celery Flower runs on
flower_host = 0.0.0.0
# This defines the port that Celery Flower runs on
flower_port = 5555
# Default queue that tasks get assigned to and that worker listen on.
default_queue = default
# Import path for celery configuration options
celery_config_options = airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
[dask]
# This section only applies if you are using the DaskExecutor in
# [core] section above
# The IP address and port of the Dask cluster's scheduler.
cluster_address = 127.0.0.1:8786
[scheduler]
# Task instances listen for external kill signal (when you clear tasks
# from the CLI or the UI), this defines the frequency at which they should
# listen (in seconds).
job_heartbeat_sec = 20
# The scheduler constantly tries to trigger new tasks (look at the
# scheduler section in the docs for more information). This defines
# how often the scheduler should run (in seconds).
scheduler_heartbeat_sec = 60
# after how much time should the scheduler terminate in seconds
# -1 indicates to run continuously (see also num_runs)
run_duration = -1
# after how much time a new DAGs should be picked up from the filesystem
min_file_process_interval = 5
dag_dir_list_interval = 300
# How often should stats be printed to the logs
print_stats_interval = 30
child_process_log_directory = /airflow/logs/scheduler
# Local task jobs periodically heartbeat to the DB. If the job has
# not heartbeat in this many seconds, the scheduler will mark the
# associated task instance as failed and will re-schedule the task.
scheduler_zombie_task_threshold = 300
# Turn off scheduler catchup by setting this to False.
# Default behavior is unchanged and
# Command Line Backfills still work, but the scheduler
# will not do scheduler catchup if this is False,
# however it can be set on a per DAG basis in the
# DAG definition (catchup)
catchup_by_default = False
# This changes the batch size of queries in the scheduling main loop.
# This depends on query length limits and how long you are willing to hold locks.
# 0 for no limit
max_tis_per_query = 256
# Statsd (https://github.com/etsy/statsd) integration settings
statsd_on = False
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
# The scheduler can run multiple threads in parallel to schedule dags.
# This defines how many threads will run.
max_threads = 12
authenticate = False
[ldap]
# set this to ldaps://<your.ldap.server>:<port>
uri =
user_filter = objectClass=*
user_name_attr = uid
group_member_attr = memberOf
superuser_filter =
data_profiler_filter =
bind_user = cn=Manager,dc=example,dc=com
bind_password = insecure
basedn = dc=example,dc=com
cacert = /etc/ca/ldap_ca.crt
search_scope = LEVEL
[mesos]
# Mesos master address which MesosExecutor will connect to.
master = localhost:5050
# The framework name which Airflow scheduler will register itself as on mesos
framework_name = Airflow
# Number of cpu cores required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_cpu = 1
# Memory in MB required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_memory = 256
# Enable framework checkpointing for mesos
# See http://mesos.apache.org/documentation/latest/slave-recovery/
checkpoint = False
# Failover timeout in milliseconds.
# When checkpointing is enabled and this option is set, Mesos waits
# until the configured timeout for
# the MesosExecutor framework to re-register after a failover. Mesos
# shuts down running tasks if the
# MesosExecutor framework fails to re-register within this timeframe.
# failover_timeout = 604800
# Enable framework authentication for mesos
# See http://mesos.apache.org/documentation/latest/configuration/
authenticate = False
# Mesos credentials, if authentication is enabled
# default_principal = admin
# default_secret = admin
[kerberos]
ccache = /tmp/airflow_krb5_ccache
# gets augmented with fqdn
principal = airflow
reinit_frequency = 3600
kinit_path = kinit
keytab = airflow.keytab
[github_enterprise]
api_rev = v3
[admin]
# UI to hide sensitive variable fields when set to True
hide_sensitive_variable_fields = True
```
What are we doing wrong here? | 2019/02/19 | [
"https://Stackoverflow.com/questions/54758444",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6823560/"
] | [The size shown in `RSS` field is in `KB`](http://man7.org/linux/man-pages/man1/ps.1.html). The first process is using about 265 MB, not something over 10 GB.
The `MEM` field shows the memory usage in *percentage*, not GB. 0.9% of 28 GB is 252 MB. You can see stats about memory with the `free` command.
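A quick sanity check of those numbers (a minimal sketch; the RSS value comes from the first row of the `ps` output above):

```python
rss_kb = 271644                  # RSS column of the webserver process, in KB
print(rss_kb / 1024.0)           # ~265 MB, nowhere near 10 GB

mem_pct, total_gb = 0.9, 28
print(mem_pct / 100 * total_gb * 1024)  # ~258 MB, consistent with the RSS figure
```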
See <http://man7.org/linux/man-pages/man1/ps.1.html>. In short, it's not Airflow over-utilising resources that's breaking your system. | A recommended method is to set the CPUQuota of Airflow to max 80%. This will ensure that the Airflow process does not eat up all the CPU resources, which sometimes causes the system to hang.
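`CPUQuota=` is a systemd directive, so one way to apply this is a drop-in override for the Airflow service; a minimal sketch (the unit name and file path are assumptions about your setup):

```
# /etc/systemd/system/airflow-webserver.service.d/cpu.conf  (assumed unit/path)
[Service]
CPUQuota=80%
```

Reload systemd (`systemctl daemon-reload`) and restart the service for the quota to take effect.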
You can use a ready-made AMI (namely, LightningFlow) from AWS Marketplace which is pre-configured with the recommended configurations.
Note: LightningFlow also comes pre-integrated with all required libraries, Livy, custom operators, and local Spark cluster.
Link for AWS Marketplace: <https://aws.amazon.com/marketplace/pp/Lightning-Analytics-Inc-LightningFlow-Integrated-o/B084BSD66V> |
56,791,917 | In the shell:
```
$ date
Do 27. Jun 15:13:13 CEST 2019
```
In python:
```
>>> from datetime import datetime
>>> datetime.now()
datetime.datetime(2019, 6, 27, 15, 14, 51, 314560)
>>> a = datetime.now()
>>> a.strftime("%Y%m%d")
'20190627'
```
What is the format specifier needed to get the *exact same output* as `date`, including evaluation of the locale settings? | 2019/06/27 | [
"https://Stackoverflow.com/questions/56791917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10895273/"
] | Looks like you need to use the [locale](https://docs.python.org/2/library/locale.html) module
Playing in the shell:
```
$ date
Thu Jun 27 10:01:03 EDT 2019
$ LC_ALL=fr_FR.UTF-8 date
jeu. juin 27 10:01:12 EDT 2019
```
In python
```
$ LC_ALL=fr_FR.UTF-8 python
Python 2.7.5 (default, Jun 20 2019, 20:27:34)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from datetime import datetime
>>> datetime.now().strftime("%c")
'Thu Jun 27 10:03:13 2019'
```
Hmm, I expected python to respect my environment. Let's force the issue:
```
>>> import locale
>>> import os
>>> locale.setlocale(locale.LC_ALL, os.environ['LC_ALL'])
'fr_FR.UTF-8'
>>> datetime.now().strftime("%c")
'jeu. 27 juin 2019 10:04:48 '
```
Ah.
---
Reading a little further into the locale docs, I see
>
> Initially, when a program is started, the locale is the C locale, no matter what the user’s preferred locale is. The program must explicitly say that it wants the user’s preferred locale settings by calling `setlocale(LC_ALL, '')`.
>
>
> | You can use [`.strftime`](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) to get your own string format.
In your case you want:
```py
from datetime import datetime
now = datetime.now()
print(now.strftime("%a %d. %b %H:%M:%S %Z %Y"))
```
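If you also want the output to honor the user's locale (as the shell's `date` does), the program has to opt in first; a minimal sketch:

```python
import locale
from datetime import datetime

locale.setlocale(locale.LC_ALL, '')  # adopt the user's environment locale
# note: %Z prints an empty string for naive datetimes; use an aware datetime
# if you need the timezone name
print(datetime.now().strftime('%a %d. %b %H:%M:%S %Z %Y'))
```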
**NOTE:** How the day/month names are printed is affected by your machine's current locale. You can set a custom datetime locale and timezone if you need specific ones. |
56,791,917 | In the shell:
```
$ date
Do 27. Jun 15:13:13 CEST 2019
```
In python:
```
>>> from datetime import datetime
>>> datetime.now()
datetime.datetime(2019, 6, 27, 15, 14, 51, 314560)
>>> a = datetime.now()
>>> a.strftime("%Y%m%d")
'20190627'
```
What is the format specifier needed to get the *exact same output* as `date`, including evaluation of the locale settings? | 2019/06/27 | [
"https://Stackoverflow.com/questions/56791917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10895273/"
] | You can use [`.strftime`](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) to get your own string format.
In your case you want:
```py
from datetime import datetime
now = datetime.now()
print(now.strftime("%a %d. %b %H:%M:%S %Z %Y"))
```
**NOTE:** How the day/month names are printed is affected by your machine's current locale. You can set a custom datetime locale and timezone if you need specific ones. | ```
from datetime import datetime
now=datetime.now()
year=now.strftime("%Y")
print("Year:",year)
month=now.strftime("%m")
print("Month:",month)
day=now.strftime("%d")
print("Day:",day)
time=now.strftime("%H:%M:%S")
print("Time:",time)
date_time=now.strftime("%m/%d/%Y, %H:%M:%S")
print("date and time:",date_time)
```
output:
```
Year: 2019
Month: 06
Day: 27
Time: 21:19:33
date and time: 06/27/2019, 21:19:33
``` |
56,791,917 | In the shell:
```
$ date
Do 27. Jun 15:13:13 CEST 2019
```
In python:
```
>>> from datetime import datetime
>>> datetime.now()
datetime.datetime(2019, 6, 27, 15, 14, 51, 314560)
>>> a = datetime.now()
>>> a.strftime("%Y%m%d")
'20190627'
```
What is the format specifier needed to get the *exact same output* as `date`, including evaluation of the locale settings? | 2019/06/27 | [
"https://Stackoverflow.com/questions/56791917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10895273/"
] | Looks like you need to use the [locale](https://docs.python.org/2/library/locale.html) module
Playing in the shell:
```
$ date
Thu Jun 27 10:01:03 EDT 2019
$ LC_ALL=fr_FR.UTF-8 date
jeu. juin 27 10:01:12 EDT 2019
```
In python
```
$ LC_ALL=fr_FR.UTF-8 python
Python 2.7.5 (default, Jun 20 2019, 20:27:34)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from datetime import datetime
>>> datetime.now().strftime("%c")
'Thu Jun 27 10:03:13 2019'
```
Hmm, I expected python to respect my environment. Let's force the issue:
```
>>> import locale
>>> import os
>>> locale.setlocale(locale.LC_ALL, os.environ['LC_ALL'])
'fr_FR.UTF-8'
>>> datetime.now().strftime("%c")
'jeu. 27 juin 2019 10:04:48 '
```
Ah.
---
Reading a little further into the locale docs, I see
>
> Initially, when a program is started, the locale is the C locale, no matter what the user’s preferred locale is. The program must explicitly say that it wants the user’s preferred locale settings by calling `setlocale(LC_ALL, '')`.
>
>
> | ```
from datetime import datetime
now=datetime.now()
year=now.strftime("%Y")
print("Year:",year)
month=now.strftime("%m")
print("Month:",month)
day=now.strftime("%d")
print("Day:",day)
time=now.strftime("%H:%M:%S")
print("Time:",time)
date_time=now.strftime("%m/%d/%Y, %H:%M:%S")
print("date and time:",date_time)
```
output:
```
Year: 2019
Month: 06
Day: 27
Time: 21:19:33
date and time: 06/27/2019, 21:19:33
``` |
34,636,391 | I've been searching for the last few hours and cannot find a library that allows me to add hyperlinks to a word document using python. In my ideal world I'd be able to manipulate a word doc using python to add hyperlinks to footnotes which link to internal documents. Python-docx doesn't seem to have this feature.
It breaks down into 2 questions. 1) Is there a way to add hyperlinks to word docs using python? 2) Is there a way to manipulate footnotes in word docs using python?
Does anyone know how to do this or any part of this? | 2016/01/06 | [
"https://Stackoverflow.com/questions/34636391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3368835/"
] | Hyperlinks can be added using the win32com package:
```
import win32com.client
#connect to Word (start it if it isn't already running)
wordapp = win32com.client.Dispatch("Word.Application")
#add a new document
doc = wordapp.Documents.Add()
#add some text and turn it into a hyperlink
para = doc.Paragraphs.Add()
para.Range.Text = "Adding hyperlinks in Microsoft word using python"
doc.Hyperlinks.Add(Anchor=para.Range, Address="http://stackoverflow.com/questions/34636391/adding-hyperlinks-in-microsoft-word-using-python")
#In theory you should be able to also pass in a TextToDisplay argument to the above call but I haven't been able to get this to work
#The workaround is to insert the link text into the document first and then convert it into a hyperlink
``` | ```
# How to insert hyperlinks into an existing MS Word document using win32com:
import re
import win32com.client

# Use the same call as in the example above to connect to Word:
wordapp = win32com.client.Dispatch("Word.Application")
# Open the input file where you want to insert the hyperlinks:
wordapp.Documents.Open("my_input_file.docx")
# Select the currently active document
doc = wordapp.ActiveDocument
# For my application, I want to replace references to identifiers in another
# document with the general format of "MSS-XXXX", where X is any digit, with
# hyperlinks to local html pages that capture the supporting details...
# First capture the entire document's content as text
docText = doc.Content.text
# Search for all identifiers that match the format criteria in the document:
mss_ids_to_link = re.findall('MSS-[0-9]+', docText)
# Now loop over all the identifier strings that were found, construct the link
# address for each html page to be linked, select the desired text where I want
# to insert the hyperlink, and then apply the link to the correct range of
# characters:
for linkIndex in range(len(mss_ids_to_link)):
current_string_to_link = mss_ids_to_link[linkIndex]
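    # html_file_pathname (the base path of the local html pages) is assumed to be defined elsewhere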
link_address = html_file_pathname + \
current_string_to_link + '.htm'
if wordapp.Selection.Find.Execute(FindText=current_string_to_link, \
Address=link_address) == True:
doc.Hyperlinks.Add(Anchor=wordapp.Selection.Range, \
Address=link_address)
# Save off the result:
doc.SaveAs('my_input_file.docx')
``` |
27,310,426 | I am trying to create an application which can detect your heartbeat using your computer webcam. I have been working on the code for 2 weeks and developed this code; here is how far I have got
How does it work? Illustrated below ...
1. Detecting face using opencv
2. Getting image of forehead
3. Applying filter to convert it into grayscale image [you can skip it]
4. Finding the average intensity of green pixels per frame
5. Saving the averages into an Array
6. Applying FFT (I have used the minim library) and extracting the heart beat from the FFT spectrum (here, I need some help)
Here, I need help extracting the heartbeat from the FFT spectrum. Can anyone help me? [Here](https://github.com/thearn/webcam-pulse-detector) is a similar application developed in python, but I am not able to understand this code, so I am developing the same in Processing. Can anyone help me understand the part of this python code where it extracts the heartbeat?
```
//---------import required library -----------
import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import java.util.*;
import ddf.minim.analysis.*;
import ddf.minim.*;
//----------create objects---------------------------------
Capture video; // camera object
OpenCV opencv; // opencv object
Minim minim;
FFT fft;
//IIRFilter filt;
//--------- Create ArrayList--------------------------------
ArrayList<Float> poop = new ArrayList();
float[] sample;
int bufferSize = 128;
int sampleRate = 512;
int bandWidth = 20;
int centerFreq = 80;
//---------------------------------------------------
void setup() {
size(640, 480); // size of the window
minim = new Minim(this);
fft = new FFT( bufferSize, sampleRate);
video = new Capture(this, 640/2, 480/2); // initializing video object
opencv = new OpenCV(this, 640/2, 480/2); // initializing opencv object
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // loading haar cscade file for face detection
video.start(); // start video
}
void draw() {
background(0);
// image(video, 0, 0 ); // show video in the background
opencv.loadImage(video);
Rectangle[] faces = opencv.detect();
video.loadPixels();
//------------ Finding faces in the video -----------
float gavg = 0;
for (int i = 0; i < faces.length; i++) {
noFill();
stroke(#FFB700); // yellow rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); // creating rectangle around the face (YELLOW)
stroke(#0070FF); //blue rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height-2*faces[i].height/3); // creating a blue rectangle around the forehead
//-------------------- storing forehead white rectangle part into an image -------------------
stroke(0, 255, 255);
rect(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15);
PImage img = video.get(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15); // storing the forehead area into an image
img.loadPixels();
img.filter(GRAY); // converting capture image rgb to gray
img.updatePixels();
int numPixels = img.width*img.height;
for (int px = 0; px < numPixels; px++) { // For each pixel in the video frame...
final color c = img.pixels[px];
final color luminG = c>>010 & 0xFF;
final float luminRangeG = luminG/255.0;
gavg = gavg + luminRangeG;
}
//--------------------------------------------------------
gavg = gavg/numPixels;
if (poop.size()< bufferSize) {
poop.add(gavg);
}
else poop.remove(0);
}
sample = new float[poop.size()];
for (int i=0;i<poop.size();i++) {
Float f = (float) poop.get(i);
sample[i] = f;
}
if (sample.length>=bufferSize) {
//fft.window(FFT.NONE);
fft.forward(sample, 0);
// bpf = new BandPass(centerFreq, bandwidth, sampleRate);
// in.addEffect(bpf);
float bw = fft.getBandWidth(); // returns the width of each frequency band in the spectrum (in Hz).
println(bw); // returns 21.5332031 Hz for spectrum [0] & [512]
for (int i = 0; i < fft.specSize(); i++)
{
// println( " Freq" + max(sample));
stroke(0, 255, 0);
float x = map(i, 0, fft.specSize(), 0, width);
line( x, height, x, height - fft.getBand(i)*100);
// text("FFT FREQ " + fft.getFreq(i), width/2-100, 10*(i+1));
// text("FFT BAND " + fft.getBand(i), width/2+100, 10*(i+1));
}
}
else {
println(sample.length + " " + poop.size());
}
}
void captureEvent(Capture c) {
c.read();
}
``` | 2014/12/05 | [
"https://Stackoverflow.com/questions/27310426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3789164/"
] | The FFT is applied in a window with 128 samples.
```
int bufferSize = 128;
```
During the draw method the samples are stored in an array until the buffer for the FFT is filled. After that the buffer is kept full: to insert a new sample, the oldest one is removed. gavg is the average gray channel color.
```
gavg = gavg/numPixels;
if (poop.size()< bufferSize) {
poop.add(gavg);
}
else poop.remove(0);
```
Copying poop to sample
```
sample = new float[poop.size()];
for (int i=0;i < poop.size();i++) {
Float f = (float) poop.get(i);
sample[i] = f;
}
```
Now it is possible to apply the FFT to the sample array
```
fft.forward(sample, 0);
```
The code only shows the spectrum result; the heartbeat frequency still has to be calculated.
You have to find the band with the maximum amplitude in the fft; the index (position) of that band corresponds to the heartbeat frequency.
```
int peakBand = 0;
for(int i = 0; i < fft.specSize(); i++)
{ // find the band with the largest amplitude; its index is the peak position
if (fft.getBand(i) > fft.getBand(peakBand)) peakBand = i;
}
```
Then get the bandwidth to know the frequency.
```
float bw = fft.getBandWidth();
```
Converting the peak band index to a frequency:
```
float heartBeatFrequency = fft.getBandWidth() * peakBand;
``` | Once you have collected at least 128 samples (the bufferSize), forward the fft with the samples array and then find the peak of the spectrum, which corresponds to the heart rate.
The following papers explain the same:
1. Measuring Heart Rate from Video - *Isabel Bush* - Stanford - [link](https://web.stanford.edu/class/cs231a/prev_projects_2016/finalReport.pdf) (Page 4 paragraphs below Figure 2 explain this.)
2. Real Time Heart Rate Monitoring From Facial RGB Color Video Using Webcam - *H. Rahman, M.U. Ahmed, S. Begum, P. Funk* - [link](http://www.ep.liu.se/ecp/129/002/ecp16129002.pdf) (Page 4)
---
After looking at your question, I thought I would get my hands on this, and I tried making a [repository](https://github.com/pishangujeniya/FaceToHeart) for it.
It still has some issues, if someone can have a look at it.
Thank you [David Clifte](https://stackoverflow.com/users/940802/david-clifte) for [this](https://stackoverflow.com/a/27317153/7703497) answer; it helped a lot.
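For what it's worth, a minimal sketch of the peak-extraction step in Python (assuming numpy, and that `samples` holds the mean green-channel values captured at a known frame rate `fs`; all names here are illustrative):
```
import numpy as np

def heart_rate_bpm(samples, fs):
    samples = np.asarray(samples, dtype=float)
    samples -= samples.mean()                   # drop the DC component
    spectrum = np.abs(np.fft.rfft(samples))     # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    band = (freqs >= 0.75) & (freqs <= 4.0)     # ~45-240 bpm, plausible heart rates
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                          # Hz -> beats per minute
``` |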
53,477,114 | When i run `sudo docker-compose build` i get
```
Building web
Step 1/8 : FROM python:3.7-alpine
ERROR: Service 'web' failed to build: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html><body><h1>403 Forbidden</h1>\nSince Docker is a US company, we must comply with US export control regulations. In an effort to comply with these, we now block all IP addresses that are located in Cuba, Iran, North Korea, Republic of Crimea, Sudan, and Syria. If you are not in one of these cities, countries, or regions and are blocked, please reach out to https://support.docker.com\n</body></html>\n\n"
```
I need to set proxy for `docker-compose` for build
things i have tried:
looking at <https://docs.docker.com/network/proxy/#configure-the-docker-client>
* i have tried setting `~/.docker/config.json`
```
{
"proxies":
{
"default":
{
"httpProxy": "http://127.0.0.1:9278"
}
}
}
```
* tried with `--env` argument
* tried setting proxy variables on the server with no result
* i also have tried this [link](https://stackoverflow.com/a/36084324/4626485)
```
services:
myservice:
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxy
```
but i get this on `version: '3.6'`
```
Unsupported config option for services.web: 'args'
```
these settings seem to be set on docker and not docker-compose
i also don't need to set any proxy on my local device (i don't want to lose portability if possible)
```
docker-compose version 1.23.1, build b02f1306
Docker version 18.06.1-ce, build e68fc7a
``` | 2018/11/26 | [
"https://Stackoverflow.com/questions/53477114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4626485/"
] | You must be in one of the restricted countries blocked by Docker (judging from the [403](https://httpstatuses.com/403) status code). The only way around it is to use proxies for your Docker service.
```
[Service]
...
Environment="HTTP_PROXY=http://proxy.example.com:80/" "HTTPS_PROXY=http://proxy.example.com:80/"
...
```
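(On a systemd-based host this snippet usually goes in a drop-in file such as `/etc/systemd/system/docker.service.d/http-proxy.conf`; the location is an assumption based on common Docker/systemd setups, not something stated in the original answer.)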
after that you should issue:
```
$ systemctl daemon-reload
$ systemctl restart docker
``` | Include proxy details for each service in the docker-compose.yml file; a sample configuration is shown below. Restart docker and then run "docker-compose build" again. You might also run "docker-compose ps" to see if all the services mentioned in the compose file are running successfully.
```
services:
<service_name>:
image:
hostname:
container_name:
ports:
environment:
HTTP_PROXY: 'http://host:port'
HTTPS_PROXY: 'http://host:port'
NO_PROXY: 'localhost, *.test.lan'
``` |
53,477,114 | When i run `sudo docker-compose build` i get
```
Building web
Step 1/8 : FROM python:3.7-alpine
ERROR: Service 'web' failed to build: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html><body><h1>403 Forbidden</h1>\nSince Docker is a US company, we must comply with US export control regulations. In an effort to comply with these, we now block all IP addresses that are located in Cuba, Iran, North Korea, Republic of Crimea, Sudan, and Syria. If you are not in one of these cities, countries, or regions and are blocked, please reach out to https://support.docker.com\n</body></html>\n\n"
```
I need to set proxy for `docker-compose` for build
things i have tried:
looking at <https://docs.docker.com/network/proxy/#configure-the-docker-client>
* i have tried setting `~/.docker/config.json`
```
{
"proxies":
{
"default":
{
"httpProxy": "http://127.0.0.1:9278"
}
}
}
```
* tried with `--env` argument
* tried setting proxy variables on the server with no result
* i also have tried this [link](https://stackoverflow.com/a/36084324/4626485)
```
services:
myservice:
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxy
```
but i get this on `version: '3.6'`
```
Unsupported config option for services.web: 'args'
```
these settings seem to be set on docker and not docker-compose
i also don't need to set any proxy on my local device (i don't want to lose portability if possible)
```
docker-compose version 1.23.1, build b02f1306
Docker version 18.06.1-ce, build e68fc7a
``` | 2018/11/26 | [
"https://Stackoverflow.com/questions/53477114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4626485/"
] | You must be in one of the restricted countries blocked by Docker (judging from the [403](https://httpstatuses.com/403) status code). The only way around it is to use proxies for your Docker service.
```
[Service]
...
Environment="HTTP_PROXY=http://proxy.example.com:80/" "HTTPS_PROXY=http://proxy.example.com:80/"
...
```
after that you should issue:
```
$ systemctl daemon-reload
$ systemctl restart docker
``` | 1: edit `resolv.conf` in linux, adding a nameserver [ip](https://shecan.ir/) at the top of the file:
```
nameserver {type ip}
```
2: use a proxy and create an account on docker hub (<https://hub.docker.com/>)
3: log in to docker
```
sudo docker login
user:
password:
```
4: if you have a problem, try step 3 again |
53,477,114 | When i run `sudo docker-compose build` i get
```
Building web
Step 1/8 : FROM python:3.7-alpine
ERROR: Service 'web' failed to build: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html><body><h1>403 Forbidden</h1>\nSince Docker is a US company, we must comply with US export control regulations. In an effort to comply with these, we now block all IP addresses that are located in Cuba, Iran, North Korea, Republic of Crimea, Sudan, and Syria. If you are not in one of these cities, countries, or regions and are blocked, please reach out to https://support.docker.com\n</body></html>\n\n"
```
I need to set proxy for `docker-compose` for build
things i have tried:
looking at <https://docs.docker.com/network/proxy/#configure-the-docker-client>
* i have tried setting `~/.docker/config.json`
```
{
"proxies":
{
"default":
{
"httpProxy": "http://127.0.0.1:9278"
}
}
}
```
* tried with `--env` argument
* tried setting proxy variables on the server with no result
* i also have tried this [link](https://stackoverflow.com/a/36084324/4626485)
```
services:
myservice:
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxy
```
but i get this on `version: '3.6'`
```
Unsupported config option for services.web: 'args'
```
these settings seem to be set on docker and not docker-compose
i also don't need to set any proxy on my local device (i don't want to lose portability if possible)
```
docker-compose version 1.23.1, build b02f1306
Docker version 18.06.1-ce, build e68fc7a
``` | 2018/11/26 | [
"https://Stackoverflow.com/questions/53477114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4626485/"
] | You must be in one of the restricted countries blocked by Docker (judging from the [403](https://httpstatuses.com/403) status code). The only way around it is to use proxies for your Docker service.
```
[Service]
...
Environment="HTTP_PROXY=http://proxy.example.com:80/" "HTTPS_PROXY=http://proxy.example.com:80/"
...
```
after that you should issue:
```
$ systemctl daemon-reload
$ systemctl restart docker
``` | You need to make an env file in which you put the proxy settings, e.g.:
/usr/local/etc/myproxy.env
```
HTTP_PROXY=http://proxy.mydomain.net:3128
HTTPS_PROXY=http://proxy.mydomain.net:3128
```
Then run docker-compose with something like:
```
docker-compose -f /opt/docker-compose.yml --env-file /usr/local/etc/myproxy.env up
``` |
53,477,114 | When i run `sudo docker-compose build` i get
```
Building web
Step 1/8 : FROM python:3.7-alpine
ERROR: Service 'web' failed to build: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html><body><h1>403 Forbidden</h1>\nSince Docker is a US company, we must comply with US export control regulations. In an effort to comply with these, we now block all IP addresses that are located in Cuba, Iran, North Korea, Republic of Crimea, Sudan, and Syria. If you are not in one of these cities, countries, or regions and are blocked, please reach out to https://support.docker.com\n</body></html>\n\n"
```
I need to set proxy for `docker-compose` for build
things i have tried:
looking at <https://docs.docker.com/network/proxy/#configure-the-docker-client>
* i have tried setting `~/.docker/config.json`
```
{
"proxies":
{
"default":
{
"httpProxy": "http://127.0.0.1:9278"
}
}
}
```
* tried with `--env` argument
* tried setting proxy variables on the server with no result
* i also have tried this [link](https://stackoverflow.com/a/36084324/4626485)
```
services:
myservice:
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxy
```
but i get this on `version: '3.6'`
```
Unsupported config option for services.web: 'args'
```
these settings seem to be set on docker and not docker-compose
i also don't need to set any proxy on my local device (i don't want to lose portability if possible)
```
docker-compose version 1.23.1, build b02f1306
Docker version 18.06.1-ce, build e68fc7a
``` | 2018/11/26 | [
"https://Stackoverflow.com/questions/53477114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4626485/"
] | Include proxy details for each service in the docker-compose.yml file; a sample configuration is shown below. Restart docker and then run "docker-compose build" again. You might also run "docker-compose ps" to see if all the services mentioned in the compose file are running successfully.
```
services:
<service_name>:
image:
hostname:
container_name:
ports:
environment:
HTTP_PROXY: 'http://host:port'
HTTPS_PROXY: 'http://host:port'
NO_PROXY: 'localhost, *.test.lan'
``` | 1: edit `resolv.conf` in linux, adding a nameserver [ip](https://shecan.ir/) at the top of the file:
```
nameserver {type ip}
```
2: use a proxy and create an account on docker hub (<https://hub.docker.com/>)
3: log in to docker
```
sudo docker login
user:
password:
```
4: if you have a problem, try step 3 again |
53,477,114 | When i run `sudo docker-compose build` i get
```
Building web
Step 1/8 : FROM python:3.7-alpine
ERROR: Service 'web' failed to build: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html><body><h1>403 Forbidden</h1>\nSince Docker is a US company, we must comply with US export control regulations. In an effort to comply with these, we now block all IP addresses that are located in Cuba, Iran, North Korea, Republic of Crimea, Sudan, and Syria. If you are not in one of these cities, countries, or regions and are blocked, please reach out to https://support.docker.com\n</body></html>\n\n"
```
I need to set proxy for `docker-compose` for build
things i have tried:
looking at <https://docs.docker.com/network/proxy/#configure-the-docker-client>
* i have tried setting `~/.docker/config.json`
```
{
"proxies":
{
"default":
{
"httpProxy": "http://127.0.0.1:9278"
}
}
}
```
* tried with `--env` argument
* tried setting proxy variables on the server with no result
* i also have tried this [link](https://stackoverflow.com/a/36084324/4626485)
```
services:
myservice:
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxy
```
but i get this on `version: '3.6'`
```
Unsupported config option for services.web: 'args'
```
these settings seem to be set on docker and not docker-compose
i also don't need to set any proxy on my local device (i don't want to lose portability if possible)
```
docker-compose version 1.23.1, build b02f1306
Docker version 18.06.1-ce, build e68fc7a
``` | 2018/11/26 | [
"https://Stackoverflow.com/questions/53477114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4626485/"
] | Include proxy details for each service in the docker-compose.yml file; a sample configuration is shown below. Restart docker and then run "docker-compose build" again. You might also run "docker-compose ps" to see if all the services mentioned in the compose file are running successfully.
```
services:
<service_name>:
image:
hostname:
container_name:
ports:
environment:
HTTP_PROXY: 'http://host:port'
HTTPS_PROXY: 'http://host:port'
NO_PROXY: 'localhost, *.test.lan'
``` | You need to make an env file in which you put the proxy settings, e.g.:
/usr/local/etc/myproxy.env
```
HTTP_PROXY=http://proxy.mydomain.net:3128
HTTPS_PROXY=http://proxy.mydomain.net:3128
```
Then run docker-compose with something like:
```
docker-compose -f /opt/docker-compose.yml --env-file /usr/local/etc/myproxy.env up
``` |
27,058,171 | I am fairly new to python coding. I am getting this error when I run my python script; can anyone tell me what I am doing wrong here?
I am trying to make a maths competition program. It should first ask for both players' names, then continue on to give each player a question until both players have answered 10 questions each. After that, it should show the score each player has got, and tell them who the winner is.
```
## Maths Competition ##
import sys
import time
import random
p1_score = 0
p2_score = 0
main_loop = 'y'
loop = 'y'
if sys.platform == 'darwin':
print('Welcome Mac user')
elif sys.plaform == 'win32' or 'win64':
print('Welcome Windows user')
else:
print('Welcome Linux user')
time.sleep(2)
print('This is a two player maths competition game, Player 1, please enter your name.')
p1_name = input()
print('now player 2 please..')
p2_name = input()
print('Processing...')
time.sleep(2)
print(p1_name+''' will first be given a random maths question,
they then have to answer that question or just press enter if they can't get it.
Then '''+ p2_name +''' will be given a question and they have to do the same thing. Each
time a player gets an answer correct, 10 points are automatically added to their score.
Each player will be given 10 questions in total, in the end, the one with the most right
answers will win. If it is a draw, a penalty round will happen, enjoy
Ps. '**' means 'to the power off'. ''')
time.sleep(5)
while main_loop == 'y':
num_of_tries = 0
while loop == 'y':
num_of_tries = num_of_tries + 1
if num_of_tries >20:
break
ops = ['x','/','+','-','**']
num1 = random.randrange(100)
num2 = random.randrange(35)
sel_op = random.choice(ops)
print(p1_name+', please press enter once you are ready to get your question')
input()
if sel_op == 'x':
ans = num1 * num2
elif sel_op == '/':
ans = num1 / num2
elif sel_op == '+':
ans = num1 + num2
elif sel_op == '-':
ans = num1 - num2
elif sel_op == '**':
ans = num1 ** num2
p1_ans = input('Your question is: %d %s %d' % (num1,sel_op,num2))
if p1_ans == ans:
p1_score = p1_score + 10
num1 = random.randrange(100)
num2 = random.randrange(35)
sel_op = random.choice(ops)
print(p2_name+', please press enter once you are ready to get your question')
input()
if sel_op == 'x':
ans2 = num1 * num2
elif sel_op == '/':
ans2 = num1 / num2
elif sel_op == '+':
ans2 = num1 + num2
elif sel_op == '-':
ans2 = num1 - num2
elif sel_op == '**':
ans2 = num1 ** num2
p2_ans = input('Your question is: %d %s %d' % (num1,sel_op,num2))
if p2_ans == ans2:
p2_score = p2_score + 10
print(p1_name+' got %d' % (p1_score))
print(p2_name+' got %d' % (p2_score))
if p1_score > p2_score:
print(p1_name+' is the WINNER!')
elif p2_score > p1_score:
print(p2_name+' is the WINNER!')
print('Would you like to play another? y/n')
repeat = input()
if any ( [repeat == 'y', repeat == 'Y'] ):
print('Sure thing, wait a couple of seconds for me to set things up again...')
time.sleep(3)
elif any ( [repeat == 'n', repeat == 'N'] ):
break
else:
print('I\'ll take that as a NO')
time.sleep(2)
break
``` | 2014/11/21 | [
"https://Stackoverflow.com/questions/27058171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4277883/"
] | ```
num2 = random.randrange(35)
```
can give you zero and will lead to a division by zero in this line:
```
ans2 = num1 / num2
```
you probably want something like:
```
random.randrange(start = 1, stop = 35 + 1)
```
which will generate numbers between 1 and 35 (both inclusive).
---
A side remark: unless you want the user to enter floating point numbers such as 0.8333333333333334 (which is very likely NOT to be exactly equal to the value calculated in your program) for division (assuming you are using python3), it is better to choose a value for the result and for the divisor and then calculate the dividend from them. | Andre Holzner is correct. Here are some examples of basic usage:
```
>>> random.random()                    # Random float x, 0.0 <= x < 1.0
0.37444887175646646
>>> random.uniform(1, 10)              # Random float x, 1.0 <= x < 10.0
1.1800146073117523
>>> random.randint(1, 10)              # Integer from 1 to 10, endpoints included
7
>>> random.randrange(0, 101, 2)        # Even integer from 0 to 100
26
>>> random.choice('abcdefghij')        # Choose a random element
'c'
>>> items = [1, 2, 3, 4, 5, 6, 7]
>>> random.shuffle(items)
>>> items
[7, 3, 2, 5, 6, 4, 1]
>>> random.sample([1, 2, 3, 4, 5], 3)  # Choose 3 elements
[4, 1, 5]
```
**To learn more about random here is the** [Link](https://docs.python.org/2/library/random.html) |
41,241,005 | I am new to python and pandas. I am trying to implement the below condition but am getting the below error:
```
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
```
Below is my code:
```
df['col2'].fillna('.', inplace=True)
import copy
dict_YM = {}
for yearmonth in [201104, 201105, 201106,201107,201108,201109,201110,201111,201112,
201201,201202,201203,201204, 201205, 201206,201207,201208,201209,201210,201211,201212,
201301,201302,201303,201304, 201305, 201306,201307,201308,201309,201310,201311,201312,
201401,201402,201403,201404, 201405, 201406,201407,201408,201409,201410,201411,201412,
201501,201502,201503,201504, 201505, 201506,201507,201508,201509,201510,201511,201512,
201601,201602,201603,201604,201605]:
key_name = 'text'+str(yearmonth)
c1=df['col1']
c2=df['col2']
c3=df['flag']
if((yearmonth >= c1) & (yearmonth < c2) & (c3==1)):
print "in if ..."
dict_YM [key_name] = copy.deepcopy(df)
dict_YM [key_name].loc[:, 'col4'] = yearmonth
elif((yearmonth >= c1) & (c2==".") & (c3==1)):
print "in else if"
dict_YM [key_name] = copy.deepcopy(df)
dict_YM [key_name].loc[:, 'col4'] = yearmonth
dict_YM
```
Now I understand we need to use c1.all() or c1.any(). But my requirement is: only for the rows where yearmonth >= c1 and yearmonth < c2 and c3 == 1 do I want to do some operation. If I use all() then only a few rows have true records, and if I go by any() then the false records are also included. Please help me solve the above condition as a per-row check, not as a series.
Note: col1 is int and col2,flag are float as they contain nan as well.
Edit: I am not trying to compare whether the value (yearmonth) is greater than the whole df column (col1); I actually want to iterate over the column col1 of df, and where the condition is satisfied, the respective operation should be performed on those rows only.
Also, df has huge records with various columns, but col1 and col2 will have data (as we have yearmonth in the dictionary) and flag will have 1 or nan. Col2 has data like {192106.0, 192107.0, 195103.0, etc.} and col1 has data like {192104, 201204, 201206, etc.}.
Please let me know if you need any other inputs.
EDIT2: df col1, col2, flag sample data values
```
df
col1 col2 flag
192104 NaN 1.0
192104 200301.0 1.0
200301 201204.0 1.0
201204 NaN 0.0
200410 201206.0 1.0
201206 NaN 0.0
192104 198001.0 1.0
198001 NaN 1.0
```
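As an aside, a minimal sketch of the row-wise selection being described (an illustrative assumption built on the sample columns above, not code from the original post):
```
# Boolean masks are evaluated per row, so no all()/any() is needed:
mask = (df['col1'] <= yearmonth) & (df['col2'] > yearmonth) & (df['flag'] == 1)
subset = df.loc[mask].copy()   # only the rows where all three conditions hold
subset['col4'] = yearmonth
```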
Edit 3: I have tried the following, but it is not working. Does anyone have any idea? I am stuck on this issue:
```
dict_YM [key_name] =np.where(
(df[(df['col1']<=yearmonth) &
(df['col2']>yearmonth) & (df['Active']==1)]),
copy.deepcopy(df),
np.where((df[(df['col1']<=yearmonth) &
(df['col2']==".") & (df['Active']==1)]),
copy.deepcopy(df),np.nan))
```
then I can add col4 once dict_YM[key_name] is generated.
I got the below error when I tried the above code:
```
ValueError: operands could not be broadcast together with shapes (2,3) (8,3) ()
``` | 2016/12/20 | [
"https://Stackoverflow.com/questions/41241005",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7320512/"
] | You are printing `result` to `stdout`. The objects in this list are of type `Person`. That is, the `Person.toString()` method is used to get a string representation of each element of `result`.
As mentioned in the comments, either change the `toString` method of Person to just return the value of `age`, or iterate over the result and write the value of `age` to `stdout`. | The method `public static <T> List<T> searchIn( List<T> list , Matcher<T> m )` returns `List<T>`, in your case a list of Person; if you want to get each person's age,
try `result.stream().map(Person::getAge).forEach(System.out::println);` |
39,800,524 | The below function retains the values in its list every time it is run. I recently learned about this issue as a Python ['gotcha'](http://docs.python-guide.org/en/latest/writing/gotchas/) due to using a mutable default argument.
How do I fix it? Creating a global variable outside the function causes the same issue. Passing a list into the function breaks the recursion and only displays the first level of categories.
```
def build_category_list(categories, depth=0, items=[]):
'''Builds category data for parent select field'''
for category in categories:
items.append((category.id, '-' * depth + ' ' + category.name))
if category.children:
build_category_list(category.children, depth + 1)
return items
``` | 2016/09/30 | [
"https://Stackoverflow.com/questions/39800524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/827174/"
] | There is no need to pass the list inside the recursive function, just concatenate the results of the subsequent calls to the current list:
```
def build_category_list(categories, depth=0):
'''Builds category data for parent select field'''
items = []
for category in categories:
items.append((category.id, '-' * depth + ' ' + category.name))
if category.children:
items += build_category_list(category.children, depth + 1)
return items
``` | Passing the list in, or checking for a None value, would solve the issue. But you need to pass the list down the recursion:
```
def build_category_list(categories, depth=0, items=None):
if items is None:
items = []
'''Builds category data for parent select field'''
for category in categories:
items.append((category.id, '-' * depth + ' ' + category.name))
if category.children:
build_category_list(category.children, depth + 1, items)
^^^^^
return items
```
Alternatively, use the return value to construct the answer - my preference; see Antoine's answer... |
48,621,360 | I was browsing the python `asyncio` module documentation this night looking for some ideas for one of my course projects, but I soon found that there might be a missing feature in python's standard `asyncio` module.
If you look through the documentation, you'll find that there's a callback based API and a coroutine based API. The callback API can be used for building both UDP and TCP applications, while it looks like the coroutine API can only be used for building TCP applications, as it utilizes a stream-style API.
This causes quite a problem for me because I was looking for a coroutine-based API for UDP networking, although I did find that `asyncio` supports low-level coroutine based socket methods like [`sock_recv`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.sock_recv) and [`sock_sendall`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.sock_sendall), but the crucial APIs for UDP networking, `recvfrom` and `sendto`, are not there.
What I wish to do is write some code like:
```
async def handle_income_packet(sock):
data, addr = await sock.recvfrom(4096)
# data handling here...
await sock.sendto(addr, response)
```
I know that this could be equivalently implemented using a callback API, but the problem here is that callbacks are not coroutines but regular functions, so inside one you cannot yield control back to the event loop and preserve the function execution state.
Just look at the above code, if we need to do some blocking-IO operations in the data handling part, we won't have a problem in the coroutine version as long as our IO operations are done in coroutines as well:
```
async def handle_income_packet(sock):
data, addr = await sock.recvfrom(4096)
async with aiohttp.ClientSession() as session:
info = await session.get(...)
response = generate_response_from_info(info)
await sock.sendto(addr, response)
```
As long as we use `await`, the event loop will take the control flow from that point to handle other things until that IO is done. But sadly this code is **not** usable at the moment because we do not have coroutine versions of `socket.sendto` and `socket.recvfrom` in `asyncio`.
What we could implement this in is to use the transport-protocol callback API:
```
class EchoServerClientProtocol(asyncio.Protocol):
def connection_made(self, transport):
peername = transport.get_extra_info('peername')
self.transport = transport
def data_received(self, data):
info = requests.get(...)
response = generate_response_from_info(info)
self.transport.write(response)
self.transport.close()
```
we cannot `await` a coroutine there because callbacks are not coroutines, and using a blocking IO call like the above would stall the control flow in the callback and prevent the loop from handling any other events until the IO is done
Another recommended implementation idea is to create a `Future` object in the `data_received` function, add it to the event loop, and store any needed state variable in the Protocol class, then explicitly return control to the loop. While this could work, it creates a lot of complex code that simply isn't needed in the coroutine version.
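For illustration, here is a rough sketch of that callback-to-coroutine bridge, using an `asyncio.Queue` in place of the described Future-plus-state bookkeeping (the class and method names are purely illustrative):
```
import asyncio

class QueuedUDPProtocol(asyncio.DatagramProtocol):
    def __init__(self):
        self.queue = asyncio.Queue()
        self.transport = None

    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, addr):
        # hand the packet over to coroutine land without blocking the loop
        self.queue.put_nowait((data, addr))

    async def handle_packets(self):
        while True:
            data, addr = await self.queue.get()
            # free to await other coroutines here before replying
            self.transport.sendto(data, addr)
```
(The `handle_packets` coroutine would be started with `loop.create_task(...)` once `create_datagram_endpoint()` has returned.)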
Also [here](https://www.pythonsheets.com/notes/python-asyncio.html#simple-asyncio-udp-echo-server) we have an example of using a non-blocking socket and `add_reader` to handle UDP sockets. But the code still looks complex compared to the coroutine version's few lines.
The point I want to make is that coroutines are a really good design that can utilize the power of concurrency in a single thread while also having a really straightforward design pattern that saves both brainpower and unnecessary lines of code, but the crucial part needed to make this work for UDP networking is really lacking in our `asyncio` standard library.
What do you guys think about this?
Also, if there are any other suggestions for 3rd party libraries supporting this kind of API for UDP networking, I would be extremely grateful, for the sake of my course project. I found [Bluelet](https://github.com/sampsyo/bluelet) is quite like such a thing but it does not seem to be actively maintained.
edit:
It seems that this [PR](https://github.com/python/asyncio/pull/321) did implement this feature but was rejected by the `asyncio` developers. The developers claim that all functions could be implemented using `create_datagram_endpoint()`, the transport-protocol API. But just as I have discussed above, the coroutine API has the power of simplicity compared to the callback API in many use cases; it is really unfortunate that we do not have this for UDP. | 2018/02/05 | [
"https://Stackoverflow.com/questions/48621360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1548129/"
] | The reason a stream-based API is not provided is because streams offer *ordering* on top of the callbacks, and UDP communication is inherently unordered, so the two are fundamentally incompatible.
But none of that means you can't invoke coroutines from your callbacks - it's in fact quite easy! Starting from the [`EchoServerProtocol` example](https://docs.python.org/3/library/asyncio-protocol.html#udp-echo-server), you can do this:
```
def datagram_received(self, data, addr):
loop = asyncio.get_event_loop()
loop.create_task(self.handle_income_packet(data, addr))
async def handle_income_packet(self, data, addr):
# echo back the message, but 2 seconds later
await asyncio.sleep(2)
self.transport.sendto(data, addr)
```
Here `datagram_received` starts your `handle_income_packet` coroutine which is free to await any number of coroutines. Since the coroutine runs in the "background", the event loop is not blocked at any point and `datagram_received` returns immediately, just as intended. | You might be interested in [this module providing high-level UDP endpoints for asyncio](https://gist.github.com/vxgmichel/e47bff34b68adb3cf6bd4845c4bed448):
```
async def main():
# Create a local UDP endpoint
local = await open_local_endpoint('localhost', 8888)
# Create a remote UDP endpoint, pointing to the first one
remote = await open_remote_endpoint(*local.address)
# The remote endpoint sends a datagram
remote.send(b'Hey Hey, My My')
# The local endpoint receives the datagram, along with the address
data, address = await local.receive()
# Print: Got 'Hey Hey, My My' from 127.0.0.1 port 50603
print(f"Got {data!r} from {address[0]} port {address[1]}")
``` |
48,621,360 | I was browsing the python `asyncio` module documentation this night looking for some ideas for one of my course projects, but I soon found that there might be a missing feature in python's standard `asyncio` module.
If you look through the documentation, you'll find that there's a callback based API and a coroutine based API. The callback API can be used for building both UDP and TCP applications, while it looks like the coroutine API can only be used for building TCP applications, as it utilizes a stream-style API.
This causes quite a problem for me because I was looking for a coroutine-based API for UDP networking, although I did find that `asyncio` supports low-level coroutine based socket methods like [`sock_recv`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.sock_recv) and [`sock_sendall`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.sock_sendall), but the crucial APIs for UDP networking, `recvfrom` and `sendto`, are not there.
What I wish to do is write some code like:
```
async def handle_income_packet(sock):
data, addr = await sock.recvfrom(4096)
# data handling here...
await sock.sendto(addr, response)
```
I know that this could be equivalently implemented using a callback API, but the problem here is that callbacks are not coroutines but regular functions, so inside one you cannot yield control back to the event loop and preserve the function execution state.
Just look at the above code, if we need to do some blocking-IO operations in the data handling part, we won't have a problem in the coroutine version as long as our IO operations are done in coroutines as well:
```
async def handle_income_packet(sock):
data, addr = await sock.recvfrom(4096)
async with aiohttp.ClientSession() as session:
info = await session.get(...)
response = generate_response_from_info(info)
await sock.sendto(addr, response)
```
As long as we use `await`, the event loop will take the control flow from that point to handle other things until that IO is done. But sadly this code is **not** usable at the moment because we do not have coroutine versions of `socket.sendto` and `socket.recvfrom` in `asyncio`.
What we could implement this in is to use the transport-protocol callback API:
```
class EchoServerClientProtocol(asyncio.Protocol):
def connection_made(self, transport):
peername = transport.get_extra_info('peername')
self.transport = transport
def data_received(self, data):
info = requests.get(...)
response = generate_response_from_info(info)
self.transport.write(response)
self.transport.close()
```
we cannot `await` a coroutine there because callbacks are not coroutines, and using a blocking IO call like the above would stall the control flow in the callback and prevent the loop from handling any other events until the IO is done
Another recommended implementation idea is to create a `Future` object in the `data_received` function, add it to the event loop, and store any needed state variable in the Protocol class, then explicitly return control to the loop. While this could work, it creates a lot of complex code that simply isn't needed in the coroutine version.
Also [here](https://www.pythonsheets.com/notes/python-asyncio.html#simple-asyncio-udp-echo-server) we have an example of using a non-blocking socket and `add_reader` to handle UDP sockets. But the code still looks complex compared to the coroutine version's few lines.
The point I want to make is that coroutines are a really good design that can utilize the power of concurrency in a single thread while also having a really straightforward design pattern that saves both brainpower and unnecessary lines of code, but the crucial part needed to make this work for UDP networking is really lacking in our `asyncio` standard library.
What do you guys think about this?
Also, if there are any other suggestions for 3rd party libraries supporting this kind of API for UDP networking, I would be extremely grateful, for the sake of my course project. I found [Bluelet](https://github.com/sampsyo/bluelet) is quite like such a thing but it does not seem to be actively maintained.
edit:
It seems that this [PR](https://github.com/python/asyncio/pull/321) did implement this feature but was rejected by the `asyncio` developers. The developers claim that all functions could be implemented using `create_datagram_endpoint()`, the transport-protocol API. But just as I have discussed above, the coroutine API has the power of simplicity compared to the callback API in many use cases; it is really unfortunate that we do not have this for UDP. | 2018/02/05 | [
"https://Stackoverflow.com/questions/48621360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1548129/"
] | The reason a stream-based API is not provided is because streams offer *ordering* on top of the callbacks, and UDP communication is inherently unordered, so the two are fundamentally incompatible.
But none of that means you can't invoke coroutines from your callbacks - it's in fact quite easy! Starting from the [`EchoServerProtocol` example](https://docs.python.org/3/library/asyncio-protocol.html#udp-echo-server), you can do this:
```
def datagram_received(self, data, addr):
loop = asyncio.get_event_loop()
loop.create_task(self.handle_income_packet(data, addr))
async def handle_income_packet(self, data, addr):
# echo back the message, but 2 seconds later
await asyncio.sleep(2)
self.transport.sendto(data, addr)
```
Here `datagram_received` starts your `handle_income_packet` coroutine which is free to await any number of coroutines. Since the coroutine runs in the "background", the event loop is not blocked at any point and `datagram_received` returns immediately, just as intended. | [asyncudp](https://github.com/eerimoq/asyncudp) provides easy to use UDP sockets in asyncio.
Here is an example:
```py
import asyncio
import asyncudp
async def main():
sock = await asyncudp.create_socket(remote_addr=('127.0.0.1', 9999))
sock.sendto(b'Hello!')
print(await sock.recvfrom())
sock.close()
asyncio.run(main())
``` |
48,621,360 | I was browsing the python `asyncio` module documentation this night looking for some ideas for one of my course projects, but I soon found that there might be a missing feature in python's standard `asyncio` module.
If you look through the documentation, you'll find that there's a callback based API and a coroutine based API. The callback API can be used for building both UDP and TCP applications, while it looks like the coroutine API can only be used for building TCP applications, as it utilizes a stream-style API.
This causes quite a problem for me because I was looking for a coroutine-based API for UDP networking, although I did find that `asyncio` supports low-level coroutine based socket methods like [`sock_recv`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.sock_recv) and [`sock_sendall`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.sock_sendall), but the crucial APIs for UDP networking, `recvfrom` and `sendto`, are not there.
What I wish to do is write some code like:
```
async def handle_income_packet(sock):
data, addr = await sock.recvfrom(4096)
# data handling here...
await sock.sendto(addr, response)
```
I know that this could be equivalently implemented using a callback API, but the problem here is that callbacks are not coroutines but regular functions, so inside one you cannot yield control back to the event loop and preserve the function execution state.
Just look at the above code, if we need to do some blocking-IO operations in the data handling part, we won't have a problem in the coroutine version as long as our IO operations are done in coroutines as well:
```
async def handle_income_packet(sock):
data, addr = await sock.recvfrom(4096)
async with aiohttp.ClientSession() as session:
info = await session.get(...)
response = generate_response_from_info(info)
await sock.sendto(addr, response)
```
As long as we use `await`, the event loop will take the control flow from that point to handle other things until that IO is done. But sadly this code is **not** usable at the moment because we do not have coroutine versions of `socket.sendto` and `socket.recvfrom` in `asyncio`.
What we could implement this in is to use the transport-protocol callback API:
```
class EchoServerClientProtocol(asyncio.Protocol):
def connection_made(self, transport):
peername = transport.get_extra_info('peername')
self.transport = transport
def data_received(self, data):
info = requests.get(...)
response = generate_response_from_info(info)
self.transport.write(response)
self.transport.close()
```
we cannot `await` a coroutine there because callbacks are not coroutines, and using a blocking IO call like the above would stall the control flow in the callback and prevent the loop from handling any other events until the IO is done
Another recommended implementation idea is to create a `Future` object in the `data_received` function, add it to the event loop, and store any needed state variable in the Protocol class, then explicitly return control to the loop. While this could work, it creates a lot of complex code that simply isn't needed in the coroutine version.
Also [here](https://www.pythonsheets.com/notes/python-asyncio.html#simple-asyncio-udp-echo-server) we have an example of using a non-blocking socket and `add_reader` to handle UDP sockets. But the code still looks complex compared to the coroutine version's few lines.
The point I want to make is that coroutines are a really good design that can utilize the power of concurrency in a single thread while also having a really straightforward design pattern that saves both brainpower and unnecessary lines of code, but the crucial part needed to make this work for UDP networking is really lacking in our `asyncio` standard library.
What do you guys think about this?
Also, if there are any other suggestions for 3rd party libraries supporting this kind of API for UDP networking, I would be extremely grateful, for the sake of my course project. I found [Bluelet](https://github.com/sampsyo/bluelet) is quite like such a thing but it does not seem to be actively maintained.
edit:
It seems that this [PR](https://github.com/python/asyncio/pull/321) did implement this feature but was rejected by the `asyncio` developers. The developers claim that all functions could be implemented using `create_datagram_endpoint()`, the transport-protocol API. But just as I have discussed above, the coroutine API has the power of simplicity compared to the callback API in many use cases; it is really unfortunate that we do not have this for UDP. | 2018/02/05 | [
"https://Stackoverflow.com/questions/48621360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1548129/"
] | You might be interested in [this module providing high-level UDP endpoints for asyncio](https://gist.github.com/vxgmichel/e47bff34b68adb3cf6bd4845c4bed448):
```
async def main():
# Create a local UDP endpoint
local = await open_local_endpoint('localhost', 8888)
# Create a remote UDP endpoint, pointing to the first one
remote = await open_remote_endpoint(*local.address)
# The remote endpoint sends a datagram
remote.send(b'Hey Hey, My My')
# The local endpoint receives the datagram, along with the address
data, address = await local.receive()
# Print: Got 'Hey Hey, My My' from 127.0.0.1 port 50603
print(f"Got {data!r} from {address[0]} port {address[1]}")
``` | [asyncudp](https://github.com/eerimoq/asyncudp) provides easy to use UDP sockets in asyncio.
Here is an example:
```py
import asyncio
import asyncudp
async def main():
sock = await asyncudp.create_socket(remote_addr=('127.0.0.1', 9999))
sock.sendto(b'Hello!')
print(await sock.recvfrom())
sock.close()
asyncio.run(main())
``` |
31,221,586 | I was wondering if anyone could give me a hand with this...
Basically I am trying to modernize the news system of my site but I can't seem to limit the amount of posts showing in the foreach loop that is on the blog part of my site. I need to skip the first instance as it is already promoted at the top of the page. I've tried various google searches but I'm getting results for C++, Perl and Python, which is really irritating. I just need a simple PHP solution. I'll pop my code below and see if anyone can help. Thanks for any help in advance. And please remember to leave your responses as an answer so I can mark them up if they helped ;)
```
<div class="view22 full" style="margin-top:0;">
<h3>Recent News and Announcements</h3>
<?php foreach ($articles as $article) {
?>
<div class="ah7_ clearfix">
<p class="date"><?php echo date('F j', $article['article_timestamp']); ?>, <?php echo date('Y', $article['article_timestamp']); ?></p>
<h3><a href="<?php echo $url.'/newsroom/'.$article['article_id']; ?>"><?php echo $article['article_title']; ?></a></h3>
</div>
<?php
}
?>
</div>
``` | 2015/07/04 | [
"https://Stackoverflow.com/questions/31221586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3925360/"
] | I assume that the $articles array has keys starting with 0. How about modifying the loop like this:
```
foreach ($articles as $key => $article)
```
and checking if $key is 0 at the beginning?
```
if($key == 0)
continue;
```
If the array keys are different: Create a new variable $i, set it to 0 and increase the value by 1 in every foreach loop iteration.
```
$i = 0;
foreach ($articles as $article) {
$i++;
if($i == 1)
continue;
elseif($i > 8)
break;
//the other code goes here
}
```
In case it is based on a SQL query, using "limit" might help to reduce load! | To remove the first instance you can manually unset the item ($articles[0]) after making a copy of it or printing it as a featured news.
To limit the number of posts you can use the MySQL LIMIT clause;
Or you can do something like this
```
foreach($articles as $key => $article){
if($key===0)
continue;
if($key===8)
break;
echo $article;// or_do_whatever_youwant_with($article);
}
``` |
31,221,586 | I was wondering if anyone could give me a hand with this...
Basically I am trying to modernize the news system of my site but I can't seem to limit the amount of posts showing in the foreach loop that is on the blog part of my site. I need to skip the first instance as it is already promoted at the top of the page. I've tried various google searches but I'm getting results for C++, Perl and Python, which is really irritating. I just need a simple PHP solution. I'll pop my code below and see if anyone can help. Thanks for any help in advance. And please remember to leave your responses as an answer so I can mark them up if they helped ;)
```
<div class="view22 full" style="margin-top:0;">
<h3>Recent News and Announcements</h3>
<?php foreach ($articles as $article) {
?>
<div class="ah7_ clearfix">
<p class="date"><?php echo date('F j', $article['article_timestamp']); ?>, <?php echo date('Y', $article['article_timestamp']); ?></p>
<h3><a href="<?php echo $url.'/newsroom/'.$article['article_id']; ?>"><?php echo $article['article_title']; ?></a></h3>
</div>
<?php
}
?>
</div>
``` | 2015/07/04 | [
"https://Stackoverflow.com/questions/31221586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3925360/"
] | I assume that the $articles array has keys starting with 0. How about modifying the loop like this:
```
foreach ($articles as $key => $article)
```
and checking if $key is 0 at the beginning?
```
if($key == 0)
continue;
```
If the array keys are different: Create a new variable $i, set it to 0 and increase the value by 1 in every foreach loop iteration.
```
$i = 0;
foreach ($articles as $article) {
$i++;
if($i == 1)
continue;
elseif($i > 8)
break;
//the other code goes here
}
```
In case it is based on a SQL query, using "limit" might help to reduce load! | There are a few things you can do:
1. If your $articles is an array of arrays with continuous indexes, use a `for` loop instead of `foreach` and do something like
```
for ($i = 1; $i < 8; $i++ ) {
// and access it like
$articles[$i]['some_index'] ...
}
```
2. If it is not then you can use an external counter
Say
```
$counter = -1;
foreach ( $articles as $article) {
$counter++;
if (!$counter) continue;
if ($counter > 7 ) break;
...// your code //
}
```
3. You can change your MySQL query to give you only the desired data, using LIMIT and OFFSET |
56,513,918 | I have created a python script with a single function in it. Is there a way to call the function from the python terminal to test some arguments?
```py
import time
import random
def string_teletyper(string):
'''Prints out each character in a string with time delay'''
for chr in string:
print(chr, end='', flush=True)
time.sleep(random.randint(1,2)/20)
```
If I want to test an argument for the function, I would have to add string_teletyper(argument) inside the script itself and run it, is there a faster way? | 2019/06/09 | [
"https://Stackoverflow.com/questions/56513918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9906064/"
] | You can use `itertools.count()` to make an infinite loop and `itertools.filterfalse` to filter out the values you don't need:
```
from random import randint
from itertools import count, filterfalse
f = filterfalse(lambda i: i % 2 == 0, (randint(1, 99) for _ in count()))  # generator expression keeps the stream lazy and infinite
for i in f:
print(i)
```
Prints:
```
...
61
21
91
77
39
... and so on
```
**Version 2** (without itertools):
```
from random import randint
for val in (i for i in iter(lambda: randint(1, 99), 0) if i % 2 != 0):
print(val)
``` | Do this: (Python 3)
```py
stream = (lambda min_, max_: type("randint_stream", (), {'__next__': (lambda self: 1+2*__import__('random').randint(min_-1,max_//2))}))(1,99)()
```
Get randint with `next(stream)`.
Change min and max by changing the `(1,99)`.
Real 1 line! Can change min & max!
---
The version above isn't a strict generator -- it's another class. Version 2:
```py
stream = (lambda min_, max_: (1+2*__import__('random').randint(min_-1,max_//2) for x in iter(int, 1)))(1,99)
```
Use `next()` to get random odd number.
Change min and max by changing the `(1,99)`. |
56,513,918 | I have created a python script with a single function in it. Is there a way to call the function from the python terminal to test some arguments?
```py
import time
import random
def string_teletyper(string):
'''Prints out each character in a string with time delay'''
for chr in string:
print(chr, end='', flush=True)
time.sleep(random.randint(1,2)/20)
```
If I want to test an argument for the function, I would have to add string_teletyper(argument) inside the script itself and run it, is there a faster way? | 2019/06/09 | [
"https://Stackoverflow.com/questions/56513918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9906064/"
] | ```
import random as rdm
g = (1+2*rdm.randint(0, 49) for r in iter(int, 1))
```
`rdm.randint(0, 49)` gives you a random int between 0 and 49, so `1+2*rdm.randint(0, 49)` gives you a random odd number between 1 and 99.
`iter(int, 1)` is an infinite iterator (which is always 0 and just used to keep the generator going). | Do this: (Python 3)
```py
stream = (lambda min_, max_: type("randint_stream", (), {'__next__': (lambda self: 1+2*__import__('random').randint(min_-1,max_//2))}))(1,99)()
```
Get randint with `next(stream)`.
Change min and max by changing the `(1,99)`.
Real 1 line! Can change min & max!
---
The version above isn't a strict generator -- it's another class. Version 2:
```py
stream = (lambda min_, max_: (1+2*__import__('random').randint(min_-1,max_//2) for x in iter(int, 1)))(1,99)
```
Use `next()` to get random odd number.
Change min and max by changing the `(1,99)`. |
11,254,763 | I am making a script to test some software that is always running and I want to test its recovery from a BSOD. Is there a way to throw a bsod from python without calling an external script or executable like OSR's BANG! | 2012/06/29 | [
"https://Stackoverflow.com/questions/11254763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1470373/"
] | Funny thing. There is a Windows kernel function that does just that.
I'm assuming that this is intended behaviour, as the function has been there for a long time.
The following python code will crash any windows computer from usermode without any additional setup.
```
from ctypes import windll
from ctypes import c_int
from ctypes import c_uint
from ctypes import c_ulong
from ctypes import POINTER
from ctypes import byref
nullptr = POINTER(c_int)()
windll.ntdll.RtlAdjustPrivilege(
c_uint(19),
c_uint(1),
c_uint(0),
byref(c_int())
)
windll.ntdll.NtRaiseHardError(
c_ulong(0xC000007B),
c_ulong(0),
nullptr,
nullptr,
c_uint(6),
byref(c_uint())
)
``` | i hope this helps (:
```
import ctypes
ntdll = ctypes.windll.ntdll
prev_value = ctypes.c_bool()
res = ctypes.c_ulong()
ntdll.RtlAdjustPrivilege(19, True, False, ctypes.byref(prev_value))
if not ntdll.NtRaiseHardError(0xDEADDEAD, 0, 0, 0, 6, ctypes.byref(res)):
print("BSOD Successfull!")
else:
print("BSOD Failed...")
``` |