qid (int64) | question (string) | date (string) | metadata (sequence) | response_j (string) | response_k (string)
---|---|---|---|---|---
41,850,558 | I have a model called "document-detail-sample". When you call it with a GET request, like **GET** `https://url/document-detail-sample/`, you get every "document-detail-sample".
Inside the model is the id. So, if you want every id, you could just "iterate" over the list and ask for the id. Easy.
But... the front-end developers don't want to do it :D they say it's too much work...
So, I gotta return the id list. :D
I was thinking of something like **GET** `https://url/document-detail-sample/id-list`
But I don't know how to return just a list. I read [this post](https://stackoverflow.com/questions/27647871/django-python-how-to-get-a-list-of-ids-from-a-list-of-objects) and I know how to get the id\_list in the backend. But I don't know what I should implement to return just a list at that URL...
The view that I have is pretty simple:
```
class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer
```
and the url is so:
```
router.register(r'document-detail-sample', DocumentDetailSampleViewSet)
```
so:
**1** - is it a good idea to do it with a URL like `.../document-detail-sample/id-list` ?
**2** - if yes, how can I do it?
**3** - if not, what should I do then? | 2017/01/25 | [
"https://Stackoverflow.com/questions/41850558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4050960/"
] | You could use the `@list_route` decorator:
```
from rest_framework.decorators import detail_route, list_route
from rest_framework.response import Response
class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer

    @list_route()
    def id_list(self, request):
        q = self.get_queryset().values('id')
        return Response(list(q))
```
This decorator allows you to provide an additional endpoint with the same name as the method: `/document-detail-sample/id_list/`
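In Django REST Framework 3.8+, `@list_route` was deprecated (and later removed) in favour of `@action`; a minimal sketch of the equivalent, assuming the same viewset as above:

```
from rest_framework.decorators import action
from rest_framework.response import Response

class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer

    @action(detail=False)
    def id_list(self, request):
        # detail=False makes this a list-level route, like @list_route()
        return Response(list(self.get_queryset().values('id')))
```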
[reference to docs about extra actions in a viewset](http://www.django-rest-framework.org/api-guide/viewsets/#marking-extra-actions-for-routing) | Assuming you don't need pagination, just override the `list` method like so
```
from rest_framework.response import Response

class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer

    def list(self, request):
        # values_list returns a QuerySet; convert to a plain list so it can be serialized
        return Response(list(self.get_queryset().values_list("id", flat=True)))
``` |
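Either approach can be exercised quickly with DRF's test client; a minimal sketch (the URL assumes the router registration from the question):

```
from rest_framework.test import APIClient

client = APIClient()
response = client.get('/document-detail-sample/id_list/')
print(response.status_code, response.json())  # e.g. 200 [{'id': 1}, {'id': 2}]
```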
14,585,722 | Suppose you have a Python function, like so:
```
def foo(spam, eggs, ham):
    pass
```
You could call it using the positional arguments only (`foo(1, 2, 3)`), but you could also be explicit and say `foo(spam=1, eggs=2, ham=3)`, or mix the two (`foo(1, 2, ham=3)`).
Is it possible to get the same kind of functionality with argparse? I have a couple of positional arguments with keywords, and I don't want to define all of them when using just one. | 2013/01/29 | [
"https://Stackoverflow.com/questions/14585722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731881/"
] | You can do something like this:
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('foo', nargs='?', default=argparse.SUPPRESS)
parser.add_argument('--foo', dest='foo', default=None)
parser.add_argument('bar', nargs='?', default=argparse.SUPPRESS)
parser.add_argument('--bar', dest='bar', default=None)
parser.add_argument('baz', nargs='?', default=argparse.SUPPRESS)
parser.add_argument('--baz', dest='baz', default=None)
print(parser.parse_args())
```
which works mostly as you describe:
```
temp $ python test.py 1 2 --baz=3
Namespace(bar='2', baz='3', foo='1')
temp $ python test.py --baz=3
Namespace(bar=None, baz='3', foo=None)
temp $ python test.py --foo=2 --baz=3
Namespace(bar=None, baz='3', foo='2')
temp $ python test.py 1 2 3
Namespace(bar='2', baz='3', foo='1')
```
Python would give you an error for the next one in the function-call analogy, but argparse will allow it:
```
temp $ python test.py 1 2 3 --foo=27.5
Namespace(bar='2', baz='3', foo='27.5')
```
You could probably work around that by using [mutually exclusive groupings](http://docs.python.org/2.7/library/argparse.html#mutual-exclusion) | I believe this is what you are looking for [Argparse defaults](http://docs.python.org/dev/library/argparse.html#default) |
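A hedged sketch of that mutually-exclusive-group workaround, combining it with the `SUPPRESS` trick from the code above (my own assumption, not from the original answer):

```
import argparse

parser = argparse.ArgumentParser()
for name in ('foo', 'bar', 'baz'):
    group = parser.add_mutually_exclusive_group()
    # positional form; SUPPRESS keeps it from overwriting the optional's value
    group.add_argument(name, nargs='?', default=argparse.SUPPRESS)
    # keyword form sharing the same destination
    group.add_argument('--' + name, dest=name, default=None)

print(parser.parse_args())
```

With this, `test.py 1 2 3 --foo=27.5` is rejected with "argument --foo: not allowed with argument foo" instead of silently overwriting the positional value.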
14,585,722 | Suppose you have a Python function, like so:
```
def foo(spam, eggs, ham):
    pass
```
You could call it using the positional arguments only (`foo(1, 2, 3)`), but you could also be explicit and say `foo(spam=1, eggs=2, ham=3)`, or mix the two (`foo(1, 2, ham=3)`).
Is it possible to get the same kind of functionality with argparse? I have a couple of positional arguments with keywords, and I don't want to define all of them when using just one. | 2013/01/29 | [
"https://Stackoverflow.com/questions/14585722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731881/"
] | You can also use this module: [docopt](https://github.com/docopt/docopt) | I believe this is what you are looking for [Argparse defaults](http://docs.python.org/dev/library/argparse.html#default) |
14,585,722 | Suppose you have a Python function, like so:
```
def foo(spam, eggs, ham):
    pass
```
You could call it using the positional arguments only (`foo(1, 2, 3)`), but you could also be explicit and say `foo(spam=1, eggs=2, ham=3)`, or mix the two (`foo(1, 2, ham=3)`).
Is it possible to get the same kind of functionality with argparse? I have a couple of positional arguments with keywords, and I don't want to define all of them when using just one. | 2013/01/29 | [
"https://Stackoverflow.com/questions/14585722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731881/"
] | You can do something like this:
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('foo', nargs='?', default=argparse.SUPPRESS)
parser.add_argument('--foo', dest='foo', default=None)
parser.add_argument('bar', nargs='?', default=argparse.SUPPRESS)
parser.add_argument('--bar', dest='bar', default=None)
parser.add_argument('baz', nargs='?', default=argparse.SUPPRESS)
parser.add_argument('--baz', dest='baz', default=None)
print(parser.parse_args())
```
which works mostly as you describe:
```
temp $ python test.py 1 2 --baz=3
Namespace(bar='2', baz='3', foo='1')
temp $ python test.py --baz=3
Namespace(bar=None, baz='3', foo=None)
temp $ python test.py --foo=2 --baz=3
Namespace(bar=None, baz='3', foo='2')
temp $ python test.py 1 2 3
Namespace(bar='2', baz='3', foo='1')
```
Python would give you an error for the next one in the function-call analogy, but argparse will allow it:
```
temp $ python test.py 1 2 3 --foo=27.5
Namespace(bar='2', baz='3', foo='27.5')
```
You could probably work around that by using [mutually exclusive groupings](http://docs.python.org/2.7/library/argparse.html#mutual-exclusion) | You can also use this module: [docopt](https://github.com/docopt/docopt) |
72,950,868 | I would like to add a closing parenthesis to strings that have an open parenthesis but are missing a closing parenthesis.
For instance, I would like to modify "The dog walked (ABC in the park" to be "The dog walked (ABC) in the park".
I found a similar question and solution but it is in Python ([How to add a missing closing parenthesis to a string in Python?](https://stackoverflow.com/questions/67400960/how-to-add-a-missing-closing-parenthesis-to-a-string-in-python)). I have tried to modify the code to be used in R but to no avail. Can someone help me with this please?
I have tried modifying the original Python solution, as R doesn't recognise the "r" prefix and "\" has to be replaced by "\\", but this solution doesn't work properly and does not keep the matched text before the closing bracket I would like to add:
```
text = "The dog walked (ABC in the park"
str_replace_all(text, '\\([A-Z]+(?!\\))\\b', '\\)')
text
```
The python solution that works is as follows:
```
text = "The dog walked (ABC in the park"
text = re.sub(r'(\([A-Z]+(?!\))\b)', r"\1)", text)
print(text)
``` | 2022/07/12 | [
"https://Stackoverflow.com/questions/72950868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19533566/"
] | Try this
```
stringr::str_replace_all(text, '\\([A-Z]+(?!\\))\\b', '\\0\\)')
```
* output
```
"The dog walked (ABC) in the park"
``` | Not a one-liner, but it does the trick and is (hopefully!) intuitive.
```
library(stringr)
add_brackets = function(text) {
  brackets = str_extract(text, "\\([:alpha:]+") # finds the open bracket and any following letters
  brackets_new = paste0(brackets, ")") # adds in the closing brackets
  str_replace(text, paste0("\\", brackets), brackets_new) # replaces the unclosed string with the closed one
}
```
```
> add_brackets(text)
[1] "The dog walked (ABC) in the park"
``` |
72,950,868 | I would like to add a closing parenthesis to strings that have an open parenthesis but are missing a closing parenthesis.
For instance, I would like to modify "The dog walked (ABC in the park" to be "The dog walked (ABC) in the park".
I found a similar question and solution but it is in Python ([How to add a missing closing parenthesis to a string in Python?](https://stackoverflow.com/questions/67400960/how-to-add-a-missing-closing-parenthesis-to-a-string-in-python)). I have tried to modify the code to be used in R but to no avail. Can someone help me with this please?
I have tried modifying the original Python solution, as R doesn't recognise the "r" prefix and "\" has to be replaced by "\\", but this solution doesn't work properly and does not keep the matched text before the closing bracket I would like to add:
```
text = "The dog walked (ABC in the park"
str_replace_all(text, '\\([A-Z]+(?!\\))\\b', '\\)')
text
```
The python solution that works is as follows:
```
text = "The dog walked (ABC in the park"
text = re.sub(r'(\([A-Z]+(?!\))\b)', r"\1)", text)
print(text)
``` | 2022/07/12 | [
"https://Stackoverflow.com/questions/72950868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19533566/"
] | Try this
```
stringr::str_replace_all(text, '\\([A-Z]+(?!\\))\\b', '\\0\\)')
```
* output
```
"The dog walked (ABC) in the park"
``` | You might also use `gsub`, using the word boundary first and then the negative lookahead.
In the replacement, use the first capture group followed by `)`:
```
text = "The dog walked (ABC in the park"
gsub('(\\([A-Z]+)\\b(?!\\))', '\\1\\)', text, perl=T)
```
Output
```
[1] "The dog walked (ABC) in the park"
``` |
72,950,868 | I would like to add a closing parenthesis to strings that have an open parenthesis but are missing a closing parenthesis.
For instance, I would like to modify "The dog walked (ABC in the park" to be "The dog walked (ABC) in the park".
I found a similar question and solution but it is in Python ([How to add a missing closing parenthesis to a string in Python?](https://stackoverflow.com/questions/67400960/how-to-add-a-missing-closing-parenthesis-to-a-string-in-python)). I have tried to modify the code to be used in R but to no avail. Can someone help me with this please?
I have tried modifying the original Python solution, as R doesn't recognise the "r" prefix and "\" has to be replaced by "\\", but this solution doesn't work properly and does not keep the matched text before the closing bracket I would like to add:
```
text = "The dog walked (ABC in the park"
str_replace_all(text, '\\([A-Z]+(?!\\))\\b', '\\)')
text
```
The python solution that works is as follows:
```
text = "The dog walked (ABC in the park"
text = re.sub(r'(\([A-Z]+(?!\))\b)', r"\1)", text)
print(text)
``` | 2022/07/12 | [
"https://Stackoverflow.com/questions/72950868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19533566/"
] | You might also use `gsub`, using the word boundary first and then the negative lookahead.
In the replacement, use the first capture group followed by `)`:
```
text = "The dog walked (ABC in the park"
gsub('(\\([A-Z]+)\\b(?!\\))', '\\1\\)', text, perl=T)
```
Output
```
[1] "The dog walked (ABC) in the park"
``` | Not a one liner, but it does the trick and is (hopefully!) intuitive.
```
library(stringr)
add_brackets = function(text) {
  brackets = str_extract(text, "\\([:alpha:]+") # finds the open bracket and any following letters
  brackets_new = paste0(brackets, ")") # adds in the closing brackets
  str_replace(text, paste0("\\", brackets), brackets_new) # replaces the unclosed string with the closed one
}
```
```
> add_brackets(text)
[1] "The dog walked (ABC) in the park"
``` |
67,609,973 | I chose to use Python 3.8.1 Azure ML in Azure Machine Learning studio, but when I run the command
`!python train.py`, it uses Anaconda Python 3.6.9. When I downloaded Python 3.8 and ran the command `!python38 train.py` in the same dir as before, the response was `python3.8: can't open file`.
Any idea?
Also, Python 3 in Azure is always busy, without anything running from my side.
Thank you. | 2021/05/19 | [
"https://Stackoverflow.com/questions/67609973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14915505/"
] | You should try adding a new Python 3.8 Kernel. Here and instructions how to add a new Kernel: <https://learn.microsoft.com/en-us/azure/machine-learning/how-to-access-terminal#add-new-kernels> | Yeah I understand your pain point, and I agree that calling bash commands in a notebook cell should execute in the same conda environment as the one associated with the selected kernel of the notebook. I think this is bug, I'll flag it to the notebook feature team, but I encourage you to open a priority support ticket if you want to ensure that your problem is addressed! |
58,483,706 | I am new to Python and trying my hand at certain problems. I have a situation where I have 2 dataframes which I want to combine to achieve my desired dataframe.
I have tried .merge and .join, neither of which was able to get my desired outcome.
let us suppose I have the below scenario:
```
lt = list(['a','b','c','d','a','b','a','b'])
df = pd.DataFrame(columns = lt)
data = [[10,11,12,12], [15,14,12,10]]
df1 = pd.DataFrame(data, columns = ['a','b','c','d'])
```
I want df and df1 to be combined and get desired dataframe as df2 as:
```
a b c d a b a b
0 10 11 12 12 10 11 10 11
1 15 14 12 10 15 14 15 14
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58483706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11378087/"
] | If you don't mind the order of the columns changing, this is just a right join. The only caveat is that those are performed on rows rather than columns, so you need to transpose first:
```py
In [44]: df.T.join(df1.T, how='right').T
Out[44]:
a a a b b b c d
0 10 10 10 11 11 11 12 12
1 15 15 15 14 14 14 12 10
``` | Use [`concat()`](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html)
```py
pd.concat([df, df1], axis=0, join='inner', sort=False)
a b c d a b a b
0 10 11 12 12 10 11 10 11
1 15 14 12 10 15 14 15 14
``` |
58,483,706 | I am new to Python and trying my hand at certain problems. I have a situation where I have 2 dataframes which I want to combine to achieve my desired dataframe.
I have tried .merge and .join, neither of which was able to get my desired outcome.
let us suppose I have the below scenario:
```
lt = list(['a','b','c','d','a','b','a','b'])
df = pd.DataFrame(columns = lt)
data = [[10,11,12,12], [15,14,12,10]]
df1 = pd.DataFrame(data, columns = ['a','b','c','d'])
```
I want df and df1 to be combined and get desired dataframe as df2 as:
```
a b c d a b a b
0 10 11 12 12 10 11 10 11
1 15 14 12 10 15 14 15 14
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58483706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11378087/"
] | What you can do is to use the columns of `df` and select the corresponding columns in `df1`, like so:
```py
lt = list(['a','b','c','d','a','b','a','b'])
df = pd.DataFrame(columns = lt)
data = [[10,11,12,12], [15,14,12,10]]
df1 = pd.DataFrame(data, columns = ['a','b','c','d'])
df2 = df1[df.columns]
print(df2)
```
prints:
```
a b c d a b a b
0 10 11 12 12 10 11 10 11
1 15 14 12 10 15 14 15 14
``` | Use [`concat()`](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html)
```py
pd.concat([df, df1], axis=0, join='inner', sort=False)
a b c d a b a b
0 10 11 12 12 10 11 10 11
1 15 14 12 10 15 14 15 14
``` |
14,187,973 | Similar question (related to Python 2: [Python: check if method is static](https://stackoverflow.com/questions/8727059/python-check-if-method-is-static))
Let's consider the following class definition:
```
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'
```
In Python 3 there is no `instancemethod` anymore; everything is a function, so the answer related to Python 2 will not work anymore.
As I said, everything is a function, so we can call `A.f(0)`, but of course we cannot call `A.f()` (argument mismatch). But if we make an instance `a=A()` and call `a.f()`, Python passes `self` to the function `A.f` as the first argument. Calling `a.g()` does not pass it - so there has to be a way to test whether the method was declared as static or not.
So can we check in Python 3 if a method was declared as `static` or not? | 2013/01/06 | [
"https://Stackoverflow.com/questions/14187973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/889902/"
] | ```
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'

print(type(A.__dict__['g']))  # <class 'staticmethod'>
print(type(A.g))              # <class 'function'>
``` | I needed this solution and wrote the following based on the answer from @root
```
def is_method_static(cls, method_name):
    # http://stackoverflow.com/questions/14187973/python3-check-if-method-is-static
    for c in cls.mro():
        if method_name in c.__dict__:
            return isinstance(c.__dict__[method_name], staticmethod)
    raise RuntimeError("Unable to find %s in %s" % (method_name, cls.__name__))
``` |
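A quick usage sketch of this helper against the `A` class from the question:

```
print(is_method_static(A, 'g'))  # True
print(is_method_static(A, 'f'))  # False
```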
14,187,973 | Similar question (related to Python 2: [Python: check if method is static](https://stackoverflow.com/questions/8727059/python-check-if-method-is-static))
Let's consider the following class definition:
```
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'
```
In Python 3 there is no `instancemethod` anymore; everything is a function, so the answer related to Python 2 will not work anymore.
As I said, everything is a function, so we can call `A.f(0)`, but of course we cannot call `A.f()` (argument mismatch). But if we make an instance `a=A()` and call `a.f()`, Python passes `self` to the function `A.f` as the first argument. Calling `a.g()` does not pass it - so there has to be a way to test whether the method was declared as static or not.
So can we check in Python 3 if a method was declared as `static` or not? | 2013/01/06 | [
"https://Stackoverflow.com/questions/14187973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/889902/"
] | ```
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'

print(type(A.__dict__['g']))  # <class 'staticmethod'>
print(type(A.g))              # <class 'function'>
``` | For Python 3.2 or newer, use [`inspect.getattr_static()`](https://docs.python.org/3/library/inspect.html#inspect.getattr_static) to retrieve the attribute without invoking the descriptor protocol:
>
> Retrieve attributes without triggering dynamic lookup via the descriptor protocol, `__getattr__()` or `__getattribute__()`.
>
>
>
Use `isinstance(..., staticmethod)` on the result:
```
>>> from inspect import getattr_static
>>> isinstance(getattr_static(A, 'g'), staticmethod)
True
```
The function can handle both instances and classes, and will scan the full class hierarchy for you:
```
>>> class B(A): pass
...
>>> isinstance(getattr_static(B, 'g'), staticmethod) # inherited
True
>>> isinstance(getattr_static(B(), 'g'), staticmethod) # instance, inherited
True
``` |
14,187,973 | Similar question (related to Python 2: [Python: check if method is static](https://stackoverflow.com/questions/8727059/python-check-if-method-is-static))
Let's consider the following class definition:
```
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'
```
In Python 3 there is no `instancemethod` anymore; everything is a function, so the answer related to Python 2 will not work anymore.
As I said, everything is a function, so we can call `A.f(0)`, but of course we cannot call `A.f()` (argument mismatch). But if we make an instance `a=A()` and call `a.f()`, Python passes `self` to the function `A.f` as the first argument. Calling `a.g()` does not pass it - so there has to be a way to test whether the method was declared as static or not.
So can we check in Python 3 if a method was declared as `static` or not? | 2013/01/06 | [
"https://Stackoverflow.com/questions/14187973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/889902/"
] | For Python 3.2 or newer, use [`inspect.getattr_static()`](https://docs.python.org/3/library/inspect.html#inspect.getattr_static) to retrieve the attribute without invoking the descriptor protocol:
>
> Retrieve attributes without triggering dynamic lookup via the descriptor protocol, `__getattr__()` or `__getattribute__()`.
>
>
>
Use `isinstance(..., staticmethod)` on the result:
```
>>> from inspect import getattr_static
>>> isinstance(getattr_static(A, 'g'), staticmethod)
True
```
The function can handle both instances and classes, and will scan the full class hierarchy for you:
```
>>> class B(A): pass
...
>>> isinstance(getattr_static(B, 'g'), staticmethod) # inherited
True
>>> isinstance(getattr_static(B(), 'g'), staticmethod) # instance, inherited
True
``` | I needed this solution and wrote the following based on the answer from @root
```
def is_method_static(cls, method_name):
    # http://stackoverflow.com/questions/14187973/python3-check-if-method-is-static
    for c in cls.mro():
        if method_name in c.__dict__:
            return isinstance(c.__dict__[method_name], staticmethod)
    raise RuntimeError("Unable to find %s in %s" % (method_name, cls.__name__))
``` |
46,132,431 | I have written code to generate numbers from 0500000000 to 0500000100:
```
def generator(nums):
    count = 0
    while count < 100:
        gg = print('05', count, sep='')
        count += 1

g = generator(10)
```
As I use Linux, I thought I might be able to use this command: `python pythonfilename.py >> file.txt`
Yet I get an error.
So, before `g = generator(10)` I added:
```
with open('file.txt', 'w') as f:
    f.write(gg)
    f.close()
```
but I got an error:
>
> TypeError: write() argument must be str, not None
>
>
>
Any solution? | 2017/09/09 | [
"https://Stackoverflow.com/questions/46132431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5548783/"
] | Here I've assumed we're laying out two general images, rather than plots. If your images are actually plots you've created, then you can lay them out as a single image for display using `gridExtra::grid.arrange` for grid graphics or `par(mfrow=c(1,2))` for base graphics and thereby avoid the complications of laying out two separate images.
I'm not sure if there's a "natural" way to left justify the left-hand image and right-justify the right-hand image. As a hack, you could add a blank "spacer" image to separate the two "real" images and set the widths of each image to match paper-width minus 2\*margin-width.
Here's an example where the paper is assumed to be 8.5" wide and the right and left margins are each 1":
````
---
output: pdf_document
geometry: margin=1in
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE)
library(ggplot2)
library(knitr)
# Create a blank image to use for spacing
spacer = ggplot() + theme_void() + ggsave("spacer.png")
```
```{r, out.width=c('2.75in','1in','2.75in')}
include_graphics(c("Rplot59.png","spacer.png", "Rplot60.png"))
```
````
And here's what the document looks like:
[![enter image description here](https://i.stack.imgur.com/jiqHx.png)](https://i.stack.imgur.com/jiqHx.png) | Put them in the same code chunk and do not use align. Let them use html.
This has worked for me.
````
```{r echo=FALSE, fig.height=3.0, fig.width=3.0}
#type your code here
ggplot(anscombe, aes(x=x1 , y=y1)) + geom_point() + geom_smooth(method="lm") +
ggtitle("Results for x1 and y1 ")
ggplot(anscombe, aes(x=x2 , y=y2)) + geom_point() +geom_smooth(method="lm") +
ggtitle("Results for x2 and y2 ")
ggplot(anscombe, aes(x=x3 , y=y3)) + geom_point() +geom_smooth(method="lm") +
ggtitle("Results for x3 and y3 ")
ggplot(anscombe, aes(x=x4 , y=y4)) + geom_point() +geom_smooth(method="lm") +
ggtitle("Results for x4 and y4 ")
```
```` |
54,007,542 | input is like:
```
text="""Hi Team from the following Server :
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>"""
```
In the output I want these 2 lines only; I want to remove the table tags along with their data in Python:
Hi Team from the following Server :
Please archive the following Project Areas : | 2019/01/02 | [
"https://Stackoverflow.com/questions/54007542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9901523/"
] | Use `BeautifulSoup` to parse HTML
**Ex:**
```
from bs4 import BeautifulSoup
text="""<p>Hi Team from the following Server :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>"""
soup = BeautifulSoup(text, "html.parser")
for p in soup.find_all("p"):
    print(p.text)
```
**Output:**
```
Hi Team from the following Server :
Please archive the following Project Areas :
``` | You can use `HTMLParser` as demonstrated below:
```
from HTMLParser import HTMLParser
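# Python 2 import; on Python 3 it is: from html.parser import HTMLParser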
s = \
"""
<html>
<p>Hi Team from the following Server :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>
</html>
"""
# create a subclass and override the handler methods
class MyHTMLParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self._last_tag = ''

    def handle_starttag(self, tag, attrs):
        # print "Encountered a start tag:", tag
        self._last_tag = tag

    def handle_endtag(self, tag):
        # print "Encountered an end tag :", tag
        self._last_tag = ''

    def handle_data(self, data):
        # print "Encountered some data :", data
        if self._last_tag == 'p':
            print("<%s> tag data: %s" % (self._last_tag, data))
# instantiate the parser and feed it some HTML
parser = MyHTMLParser()
parser.feed(s)
```
Output:
```
<p> tag data: Hi Team from the following Server :
<p> tag data: Please archive the following Project Areas :
``` |
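Since the goal is to drop the tables together with their contents, another BeautifulSoup option is to remove them in place and keep everything else; a minimal sketch (assuming the same `text` as above):

```
from bs4 import BeautifulSoup

soup = BeautifulSoup(text, "html.parser")
for table in soup.find_all("table"):
    table.decompose()  # removes the tag and everything inside it
print(soup.get_text(separator="\n", strip=True))
```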
54,007,542 | input is like:
```
text="""Hi Team from the following Server :
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>"""
```
In the output I want these 2 lines only; I want to remove the table tags along with their data in Python:
Hi Team from the following Server :
Please archive the following Project Areas : | 2019/01/02 | [
"https://Stackoverflow.com/questions/54007542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9901523/"
] | Use `BeautifulSoup` to parse HTML
**Ex:**
```
from bs4 import BeautifulSoup
text="""<p>Hi Team from the following Server :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>"""
soup = BeautifulSoup(text, "html.parser")
for p in soup.find_all("p"):
    print(p.text)
```
**Output:**
```
Hi Team from the following Server :
Please archive the following Project Areas :
``` | If you do not want to use external library, you can use `re` module to remove tables:
```
output = re.sub('<table.+?</table>','',text,flags=re.DOTALL)
```
printing output give:
```
Hi Team from the following Server :
<p> </p>
<p>Please archive the following Project Areas :</p>
```
(and 2 empty lines which are not visible there).
Regarding the pattern, notice that `+` is followed by `?`, meaning non-greedy matching is used - otherwise it would purge everything between the beginning of the first table and the end of the last table. `re.DOTALL` is required because our substrings contain newlines (`\n`) |
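A small demonstration of why the non-greedy `+?` matters here (a sketch with a simplified input):

```
import re

s = "<table>A</table> keep <table>B</table>"
print(re.sub('<table>.+</table>', '', s))   # greedy: '' (the ' keep ' in the middle is purged too)
print(re.sub('<table>.+?</table>', '', s))  # non-greedy: ' keep '
```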
54,007,542 | input is like:
```
text="""Hi Team from the following Server :
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>"""
```
In the output I want these 2 lines only; I want to remove the table tags along with their data in Python:
Hi Team from the following Server :
Please archive the following Project Areas : | 2019/01/02 | [
"https://Stackoverflow.com/questions/54007542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9901523/"
] | If you do not want to use external library, you can use `re` module to remove tables:
```
output = re.sub('<table.+?</table>','',text,flags=re.DOTALL)
```
printing output give:
```
Hi Team from the following Server :
<p> </p>
<p>Please archive the following Project Areas :</p>
```
(and 2 empty lines which are not visible there).
Regarding the pattern, notice that `+` is followed by `?`, meaning non-greedy matching is used - otherwise it would purge everything between the beginning of the first table and the end of the last table. `re.DOTALL` is required because our substrings contain newlines (`\n`) | You can use `HTMLParser` as demonstrated below:
```
from HTMLParser import HTMLParser
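# Python 2 import; on Python 3 it is: from html.parser import HTMLParser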
s = \
"""
<html>
<p>Hi Team from the following Server :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>
</html>
"""
# create a subclass and override the handler methods
class MyHTMLParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self._last_tag = ''

    def handle_starttag(self, tag, attrs):
        # print "Encountered a start tag:", tag
        self._last_tag = tag

    def handle_endtag(self, tag):
        # print "Encountered an end tag :", tag
        self._last_tag = ''

    def handle_data(self, data):
        # print "Encountered some data :", data
        if self._last_tag == 'p':
            print("<%s> tag data: %s" % (self._last_tag, data))
# instantiate the parser and feed it some HTML
parser = MyHTMLParser()
parser.feed(s)
```
Output:
```
<p> tag data: Hi Team from the following Server :
<p> tag data: Please archive the following Project Areas :
``` |
38,776,104 | I would like to redirect the standard error and standard output of a Python script to the same output file. From the terminal I could use
```
$ python myfile.py &> out.txt
```
to do the same task that I want, but I need to do it from the Python script itself.
I looked into the questions [Redirect subprocess stderr to stdout](https://stackoverflow.com/questions/11495783/redirect-subprocess-stderr-to-stdout), [How to redirect stderr in Python?](https://stackoverflow.com/questions/1956142/how-to-redirect-stderr-in-python), and Example 10.10 from [here](http://www.diveintopython.net/scripts_and_streams/stdin_stdout_stderr.html), and then I tried the following:
```
import sys
fsock = open('out.txt', 'w')
sys.stdout = sys.stderr = fsock
print "a"
```
which rightly prints the letter "a" in the file out.txt; however, when I try the following:
```
import sys
fsock = open('out.txt', 'w')
sys.stdout = sys.stderr = fsock
print "a # missing end quote, will give error
```
I get the error message "SyntaxError ..." on the terminal, but not in the file out.txt. What do I need to do to send the SyntaxError to the file out.txt? I do not want to write an Exception, because in that case I have to write too many Exceptions in the script. I am using Python 2.7.
Update: As pointed out in the answers and comments below, a SyntaxError will always be output to the screen, so I replaced the line
```
print "a # missing end quote, will give error
```
by
```
print 1/0 # Zero division error
```
The ZeroDivisionError is output to file, as I wanted to have it in my question. | 2016/08/04 | [
"https://Stackoverflow.com/questions/38776104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1461999/"
] | This works
```
import sys

sys.stdout = open('out.log', 'w')
sys.stderr = sys.stdout
``` | A SyntaxError in a Python file like the above is raised before your program even begins to run:
Python files are compiled just like in any other compiled language - if the parser or compiler can't find sense in your Python file, no executable bytecode is generated, therefore the program does not run.
The correct way to have an exception generated on purpose in your code - from simple test cases like yours up to implementing complex flow-control patterns - is to use the Python statement `raise`.
Just leave your print there, and a line like this at the end:
```
raise Exception
```
Then you can see that your trick will work.
Your program could fail at runtime in many other ways without an explicit raise - for example, if you force a division by 0, or simply try to use an unassigned (and therefore "undeclared") variable - but a deliberate SyntaxError will have the effect that the program never runs to start with - not even the first few lines. |
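On Python 3.5+, the standard library also offers context managers for this; a minimal sketch (note that a traceback for an exception that escapes the `with` block would be printed to the restored stderr, so it is caught and printed inside):

```
import contextlib
import traceback

with open('out.txt', 'w') as f, contextlib.redirect_stdout(f), contextlib.redirect_stderr(f):
    print("a")
    try:
        1 / 0
    except ZeroDivisionError:
        traceback.print_exc()  # goes to out.txt via the redirected stderr
```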
57,843,695 | I haven't changed my system configuration, but I'm seeing this error for the first time today.
I've reported it here: <https://github.com/jupyter/notebook/issues/4871>
```
> jupyter notebook
[I 10:44:20.102 NotebookApp] JupyterLab extension loaded from /usr/local/anaconda3/lib/python3.7/site-packages/jupyterlab
[I 10:44:20.102 NotebookApp] JupyterLab application directory is /usr/local/anaconda3/share/jupyter/lab
[I 10:44:20.104 NotebookApp] Serving notebooks from local directory: /Users/pi
[I 10:44:20.104 NotebookApp] The Jupyter Notebook is running at:
[I 10:44:20.104 NotebookApp] http://localhost:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
[I 10:44:20.104 NotebookApp] or http://127.0.0.1:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
[I 10:44:20.104 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 10:44:20.110 NotebookApp]
To access the notebook, open this file in a browser:
file:///Users/pi/Library/Jupyter/runtime/nbserver-65385-open.html
Or copy and paste one of these URLs:
http://localhost:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
or http://127.0.0.1:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
[E 10:44:21.457 NotebookApp] Could not open static file ''
[W 10:44:21.512 NotebookApp] 404 GET /static/components/react/react-dom.production.min.js (::1) 9.02ms referer=http://localhost:8888/tree?token=BLA
[W 10:44:21.548 NotebookApp] 404 GET /static/components/react/react-dom.production.min.js (::1) 0.99ms referer=http://localhost:8888/tree?token=BLA
Set
```
Looks like this issue was fixed in `Jupyter 6.0.1`
So the question becomes: can I force-install `jupyter 6.0.1`?
As the initial question has now provoked a second question, I now ask this new question here: [How to force `conda` to install the latest version of `jupyter`?](https://stackoverflow.com/questions/57843733/how-to-force-conda-to-install-the-latest-version-of-jupyter)
Alternatively I can manually provide the missing file, but I'm not sure *where*. I've asked here: [Where does Jupyter install site-packages on macOS?](https://stackoverflow.com/questions/57843888/where-does-jupyter-install-site-packages-on-macos)
Research:
=========
<https://github.com/jupyter/notebook/pull/4772> *"add missing react-dom js to package data #4772"* on 6 Aug 2019
>
> minrk added this to the 6.0.1 milestone on 18 Jul
>
>
>
Ok, so can I get Jupyter Notebook 6.0.1?
`brew cask install anaconda` downloads `~/Library/Caches/Homebrew/downloads/{LONG HEX}--Anaconda3-2019.07-MacOSX-x86_64` which is July, and `conda --version` reports `conda 4.7.10`. But this is for `Anaconda` which is the Package *Manager*.
```
> conda list | grep jupy
jupyter 1.0.0 py37_7
jupyter_client 5.3.1 py_0
jupyter_console 6.0.0 py37_0
jupyter_core 4.5.0 py_0
jupyterlab 1.0.2 py37hf63ae98_0
jupyterlab_server 1.0.0 py_0
```
So that's a bit confusing. No `jupyter notebook` here.
```
> which jupyter
/usr/local/anaconda3/bin/jupyter
> jupyter --version
jupyter core : 4.5.0
jupyter-notebook : 6.0.0
qtconsole : 4.5.1
ipython : 7.6.1
ipykernel : 5.1.1
jupyter client : 5.3.1
jupyter lab : 1.0.2
nbconvert : 5.5.0
ipywidgets : 7.5.0
nbformat : 4.4.0
traitlets : 4.3.2
```
Ok, so it appears `jupyter-notebook` is in `jupyter` which is maintained by Anaconda.
Can we update this?
<https://jupyter.readthedocs.io/en/latest/projects/upgrade-notebook.html>
```
> conda update jupyter
:
```
Alas, `jupyter --version` still reports `jupyter-notebook 6.0.0` | 2019/09/08 | [
"https://Stackoverflow.com/questions/57843695",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/435129/"
] | I fixed this by upgrading jupyter with both pip and pip3 (just to be safe), using both
>
> `pip install --upgrade jupyter`
>
>
>
and
>
> `pip3 install --upgrade jupyter --no-cache-dir`
>
>
>
I believe you can do this in the terminal as well as in conda's terminal (since conda envs also have pip) | As per [Where does Jupyter install site-packages on macOS?](https://stackoverflow.com/questions/57843888/where-does-jupyter-install-site-packages-on-macos), I locate where on my system `jupyter` is searching for this missing file:
```
> find / -path '*/static/components' 2>/dev/null
/usr/local/anaconda3/pkgs/notebook-6.0.0-py37_0/lib/python3.7/site-packages/notebook/static/components
/usr/local/anaconda3/lib/python3.7/site-packages/notebook/static/components
```
And as per <https://github.com/jupyter/notebook/pull/4772#issuecomment-515794823>, if I download that file and deposit it in the second location, i.e. creating:
```
/usr/local/anaconda3/lib/python3.7/site-packages/notebook/static/components/react/react-dom.production.min.js
```
... now `jupyter notebook` launches without errors.
(*NOTE: Being cautious I have also copied it into the first location. But that doesn't seem to have any effect.*) |
44,175,800 | Simple question: given a string
```
string = "Word1 Word2 Word3 ... WordN"
```
is there a pythonic way to do this?
```
firstWord = string.split(" ")[0]
otherWords = string.split(" ")[1:]
```
Like an unpacking or something?
Thank you | 2017/05/25 | [
"https://Stackoverflow.com/questions/44175800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2131783/"
] | Since Python 3 and [PEP 3132](https://www.python.org/dev/peps/pep-3132/), you can use extended unpacking.
This way, you can unpack an arbitrary string containing any number of words. The first will be stored in the variable `first`, and the others will belong to the (possibly empty) list `others`.
```
first, *others = string.split()
```
Also, note that `.split()` splits on whitespace by default, so you do not need to specify the separator explicitly. | From [Extended Iterable Unpacking](https://www.python.org/dev/peps/pep-3132/).
Many algorithms require splitting a sequence into a "first, rest" pair. If you're using Python 2.x, you need to do this:
```
seq = string.split()
first, rest = seq[0], seq[1:]
```
In `Python 3.x`, it is replaced by the cleaner and probably more efficient:
```
first, *rest = seq
```
For more complex unpacking patterns, the new syntax looks even cleaner, and the clumsy index handling is not necessary anymore. |
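Another stdlib option, when you only need the first word and the remainder as a single string rather than a list, is `str.partition`:

```
string = "Word1 Word2 Word3 ... WordN"
first, _, rest = string.partition(" ")
# first == "Word1", rest == "Word2 Word3 ... WordN"
```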
28,717,067 | I am trying to place a condition in the for loop. It will print the word "Available" if the retrieved rows are not equal to zero; however, if I enter a value which is not stored in my database, it should return a message. My problem here is that, if I input a value that isn't stored in my database, it does not go to the else statement. I'm new to this. What would be my mistake in this function?
```
def search(title):
    query = "SELECT * FROM books WHERE title = %s"
    entry = (title,)
    try:
        conn = mysql.connector.connect(user='root', password='', database='python_mysql') # connect to the database server
        cursor = conn.cursor()
        cursor.execute(query, entry)
        rows = cursor.fetchall()
        for row in rows:
            if row != 0:
                print('Available')
            else:
                print('No available copies of the said book in the library')
    except Error as e:
        print(e)
    finally:
        cursor.close()
        conn.close()

def main():
    title = input("Enter book title: ")
    search(title)

if __name__ == '__main__':
    main()
``` | 2015/02/25 | [
"https://Stackoverflow.com/questions/28717067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4529171/"
] | Quite apart from the 0/NULL confusion, your logic is wrong. If there are no matching rows, you won't get a 0 as the value of a row; in fact you won't get any rows at all, and you will never even get into the for loop.
A much better way to do this would be to simply run a COUNT query, get the single result with `fetchone()`, and check that directly.
```
query = "SELECT COUNT(*) FROM books WHERE title = %s"
entry = (title,)
try:
conn = mysql.connector.connect(user='root', password='', database='python_mysql') # connect to the database server
cursor = conn.cursor()
cursor.execute(query, entry)
result = cursor.fetchone()
if result != 0:
print('Available')
else:
print('No available copies of the said book in the library')
``` | In Python you should check for `None`, not `NULL`. In your code you can just check the object; if it is not None/empty then control will go inside the `if`, otherwise the `else` will be executed:
```
for row in rows:
    if row:
        print('Available')
    else:
        print('No available copies of the said book in the library')
```
`UPDATE after the author edited the question:`
Now, in the for loop you should check the column value, not the whole `row`. If your column name is, say, `quantity`, then the `if` statement should be like this:
```
if row["quantity"] != 0:
``` |
28,717,067 | I am trying to place a condition in the for loop. It will print the word "Available" if the retrieved rows are not equal to zero; however, if I enter a value which is not stored in my database, it should return a message. My problem here is that, if I input a value that isn't stored in my database, it does not go to the else statement. I'm new to this. What would be my mistake in this function?
```
def search(title):
    query = "SELECT * FROM books WHERE title = %s"
    entry = (title,)
    try:
        conn = mysql.connector.connect(user='root', password='', database='python_mysql') # connect to the database server
        cursor = conn.cursor()
        cursor.execute(query, entry)
        rows = cursor.fetchall()
        for row in rows:
            if row != 0:
                print('Available')
            else:
                print('No available copies of the said book in the library')
    except Error as e:
        print(e)
    finally:
        cursor.close()
        conn.close()

def main():
    title = input("Enter book title: ")
    search(title)

if __name__ == '__main__':
    main()
``` | 2015/02/25 | [
"https://Stackoverflow.com/questions/28717067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4529171/"
] | Quite apart from the 0/NULL confusion, your logic is wrong. If there are no matching rows, you won't get a 0 as the value of a row; in fact you won't get any rows at all, and you will never even get into the for loop.
A much better way to do this would be to simply run a COUNT query, get the single result with `fetchone()`, and check that directly.
```
query = "SELECT COUNT(*) FROM books WHERE title = %s"
entry = (title,)
try:
conn = mysql.connector.connect(user='root', password='', database='python_mysql') # connect to the database server
cursor = conn.cursor()
cursor.execute(query, entry)
result = cursor.fetchone()
if result != 0:
print('Available')
else:
print('No available copies of the said book in the library')
``` | First of all, NULL in Python is called None.
Next, according to the documentation:
"The method fetches all (or all remaining) rows of a query result set and returns a list of tuples. If no more rows are available, it returns an empty list."
An empty list is not None:
```
>>> row = []
>>> row is None
False
```
So you need to redesign your if statement like this:
```
for i in rows:
    if i:
        blah-blah
    else:
        blah-blah-blah
``` |
65,995,857 | I'm quite new to coding and I'm working on a math problem in Python.
To solve it, I would like to extract the first 7 digits from each of one hundred 50-digit numbers given as one string (take the first 7 digits, skip 43 digits, and then take the first 7 again).
The numbers aren't separated in any way (just one long string).
Then I want to sum up those seven-digit numbers which I have extracted.
How can I do this?
(I have written this code, but it only takes the first digit; I don't know any stepping/slicing methods to make it seven.)
number = """371072875339021027987979982208375902465101357402504637693767749000712648124896970078050417018260538743249861995247410594742333095130581237266173096299194221336357416157252243056330181107240615490825023067588207539346171171980310421047513778063246676892616706966236338201363784183836841787343617267572811287981284997940806548193159262169127588983273844274228917432520321923589422876796487670272189318474514457360013064390911672168568445887116031532767038648610584302543993961982891759366568675793495162176457141856560629502157223196586755079324193331649063524627419049291014324458138226633479447581789257586771833721766196375159057923972824559883840758203565325359399008402633568948830189458628227828801811993848262820142781941399405675871511700943903539866437282711265382998724078447305319010429358686515506006295864861532075273371959191420517255829716938887077154664991155934876035329217149700569385437007057682668462462149565007647178729443837760453282654108756828443191190634694037855217779295145361232725250002960710750825638156567108852583507214587657617241097644733911060721826523687722363604517423706905851860660448207621209813287860733969412811426604180868306193284608111910615569405126896925193432545172838864191804704929321505864256304948362467221648435076201727918039944693004732956340691157324443869081257945140890577062294291971079282095503768752567877309186254074496984450833039368212618336384825330154686196124348767681297534375946515803862875928784902015216855548287172012192577669547818283375799310361474035685644909552709786479758116726320100436897842553539920931837441497806860984484030981290777917990882187953273644756755908480308708698755139271185451707854416185242432069315033259959406895756536782107074926966537676326235447210697939506796526947425977097391666937630426339870854105268470829908521139942736573411618276031500127165378607361501080857009149939512557028198746004375358290353174347173269321235781549826297425527373079495375976510530594696606768315657437716740187527588902802571733229619176668713819931811048770190271252676802760780030136786809925254634010616328665263627021854049770558562994658063623799314074625596224074486908231174977792365466257246923322810917141914302881971032885978066697608929386382850253334033441306557801612781592181500556186883646842009047023053081172816430487623791969842487255036638784583114876969321549028104240201383351244621814417734706378329949063625966649858761822122522551248676453367720186971698544312419572409913959008952310058822955482553002635207815322967962494816419538682187747608532713228572311042480345612486769706450799523637774242535411291684276865538926205024910326572967237019132757256752856532482582654630922070585965222979886027225833191312637514734199488953476574550118495701454879288984856827726077713721403798879715382982037830314735277215803481445134913732266513813482954382919991818027891652243102739225112286953940957953066405232632538044100059654939159879593635297461521855023713076422551211836938035803885849034169811622207297718615823667842468915799353296192262467957194401269043877107275048102390895523597457231897067725479150615055049539229795309011299675198618808822587531452958409925120382900940777077567211306739708304724483816533873502340845647058077308829591747671403631980081871290118754913105471265819762333104481838626951545633492636657289756340050042846280183517070527831839425882145521227251250327551216035469812005817621652128276527516912968977893223819573432933994643750190783694576588335239988675506164965184775180738168837861091527357929
7013376217784275219262340194239963916804498399317331273132924185707147349566916674687634660915035914677504995186714302352196288948901024233251169136196266227326746080059154747183079839286853520694694454072476841822524674417161514036427982273348055556214818971426179103425986472045168939894221798260880768528778364618279934631376775430780936333301898264209010848802521674670883215120185883543223812876952786713296124747824645386369930090493103636197638780396218407357239979422340623539380833965132740801111666627891981488087797941876876144230030984490851411606618262936828367647447792391803351109890697907148578694408955299065364044742557608365997664579509666024396409905389607120198219976047599490197230297649139826800329731560371200413779037855660850892521673093931987275027546890690370753941304265231501194809377245048795150954100921645863754710598436791786391670211874924319957006419179697775990283006991536871371193661495281130587638027841075444973307840789923115535562561142322423255033685442488917353448899115014406480203690680639606723221932041495354150312888033953605329934036800697771065056663195481234880673210146739058568557934581403627822703280826165707739483275922328459417065250945123252306082291880205877731971983945018088807242966198081119777158542502016545090413245809786882778948721859617721078384350691861554356628840622574736922845095162084960398013400172393067166682355524525280460972253503534226472524250874054075591789781264330331690"""
first_digits = list(number[::50])
first_digits_int = list(map(int, first_digits))
result = 0
for n in first_digits_int:
    result += n
print(result)
``` | 2021/02/01 | [
"https://Stackoverflow.com/questions/65995857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15117090/"
] | Python allows you to iterate over a range with custom step sizes, so that should allow you to do something like:
```py
your_list = []                               # `string` here is the long digit string from the question
for idx in range(0, len(string), 50):        # indexes 0, 50, 100, and so on
    first_seven_digits = string[idx:idx+7]   # say, "1234567"
    str_to_int = int(first_seven_digits)     # converts to the number 1234567
    your_list.append(str_to_int)             # add the number to the list
your_sum = sum(your_list)                    # find the sum
```
You store the numbers made up of those first 7 digits in a list, and finally sum them up. | First of all, your number string is 4999 characters long, so you'll have to add one. Secondly, if you want to use numpy, you could make a 100-by-50 array by reshaping the original 5000-long array, like this:
```
arr = np.array(list(number)).reshape(100, 50)
```
Then you can slice `arr` to take the first 7 elements along the array's second axis and all of the first axis, like this:
```
nums = arr[:, :7]
```
Then you can construct your result by iterating over every row of `nums`, joining the characters, and summing the resulting integers together:
```
res = sum([int("".join(n)) for n in nums])
```
So if we put all that together, we get:
```
import numpy as np
number = """371072875339021027987979982280837590246510135740250463769376774900071264812489697007805041701826053874324986199524741059474233309513058123726617309629919422133635741615725224305633018110724061549082502306758820753934617117198031042104751377806324667689261670696623633820136378418383684178734361726757281128798128499794080654819315926216912758898327384427422891743252032192358942287679648767027218931847451445736001306439091167216856844588711603153276703864861058430254399396198289175936656867579349516217645714185656062950215722319658675507932419333164906352462741904929101432445813822663347944758178925758677183372176619637515905792397282455988384075820356532535939900840263356894883018945862822782880181199384826282014278194139940567587151170094390353986643728271126538299872407844730531901042935868651550600629586486153207527337195919142051725582971693888707715466499115593487603532921714970056938543700705768266846246214956500764717872944383776045328265410875682844319119063469403785521777929514536123272525000296071075082563815656710885258350721458765761724109764473391106072182652368772236360451742370690585186066044820762120981328786073396941281142660418086830619328460811191061556940512689692519343254517283886419180470492932150586425630494836246722164843507620172791803994469300473295634069115732444386908125794514089057706229429197107928209550376875256787730918625407449698445083303936821261833638482533015468619612434876768129753437594651580386287592878490201521685554828717201219257766954781828337579931036147403568564490955270978647975811672632010043689784255353992093183744149780686098448403098129077791799088218795327364475675590848030870869875513927118545170785441618524243206931503325995940689575653678210707492696653767632623544721069793950679652694742597709739166693763042633987085410526847082990852113994273657341161827603150012716537860736150108085700914993951255702819874600437535829035317434717326932123578154982629742552737307949537597651053059469660676831565743771674018752758890280257173322961917666871381993181104877019027125267680276078003013678680992525463401061632866526362702185404977055856299465806362379931407462559622407448690823117497779236546625724692332281091714191430288197103288597806669760892938638285025333403344130655780161278159218150055618688364684200904702305308117281643048762379196984248725503663878458311487696932154902810424020138335124462181441773470637832994906362596664985876182212252255124867645336772018697169854431241957240991395900895231005882295548255300263520781532296796249481641953868218774760853271322857231104248034561248676970645079952363777424253541129168427686553892620502491032657296723701913275725675285653248258265463092207058596522297988602722583319131263751473419948895347657455011849570145487928898485682772607771372140379887971538298203783031473527721580348144513491373226651381348295438291999181802789165224310273922511228695394095795306640523263253804410005965493915987959363529746152185502371307642255121183693803580388584903416981162220729771861582366784246891579935329619226246795719440126904387710727504810239089552359745723189706772547915061505504953922979530901129967519861880882258753145295840992512038290094077707756721130673970830472448381653387350234084564705807730882959174767140363198008187129011875491310547126581976233310448183862695154563349263665728975634005004284628018351707052783183942588214552122725125032755121603546981200581762165212827652751691296897789322381957343293399464375019078369457658833523998867550616496518477518073816883786109152735792
97013376217784275219262340194239963916804498399317331273132924185707147349566916674687634660915035914677504995186714302352196288948901024233251169136196266227326746080059154747183079839286853520694694454072476841822524674417161514036427982273348055556214818971426179103425986472045168939894221798260880768528778364618279934631376775430780936333301898264209010848802521674670883215120185883543223812876952786713296124747824645386369930090493103636197638780396218407357239979422340623539380833965132740801111666627891981488087797941876876144230030984490851411606618262936828367647447792391803351109890697907148578694408955299065364044742557608365997664579509666024396409905389607120198219976047599490197230297649139826800329731560371200413779037855660850892521673093931987275027546890690370753941304265231501194809377245048795150954100921645863754710598436791786391670211874924319957006419179697775990283006991536871371193661495281130587638027841075444973307840789923115535562561142322423255033685442488917353448899115014406480203690680639606723221932041495354150312888033953605329934036800697771065056663195481234880673210146739058568557934581403627822703280826165707739483275922328459417065250945123252306082291880205877731971983945018088807242966198081119777158542502016545090413245809786882778948721859617721078384350691861554356628840622574736922845095162084960398013400172393067166682355524525280460972253503534226472524250874054075591789781264330331690"""
# strip the embedded newlines before turning the digit blob into a 100x50 grid
arr = np.array(list(number.replace("\n", ""))).reshape(100, 50)
nums = arr[:, :7]  # keep only the leading digits of each 50-digit number
res = sum(int("".join(n)) for n in nums)
print(res)
``` |
21,307,128 | Since I have to mock a static method, I am using **PowerMock** to test my application.
My application uses **Camel 2.12**.
I define routes in *XML* that are read by the *camel-spring* context.
There were no issues when `JUnit` alone was used for testing.
While using PowerMock, I get the error listed at the end of the post.
I have also listed the XML used.
*Camel* is unable to recognize any of its tags when PowerMock is used.
I wonder whether the byte-level manipulation done by PowerMock to mock static methods interferes with the Camel engine in some way. Let me know what could possibly be wrong.
PS:
The problem disappears if I do not use PowerMock.
+++++++++++++++++++++++++ Error +++++++++++++++++++++++++++++++++++++++++++++++++
```
[ main] CamelNamespaceHandler DEBUG Using org.apache.camel.spring.CamelContextFactoryBean as CamelContextBeanDefinitionParser
org.springframework.beans.factory.BeanDefinitionStoreException: Failed to parse JAXB element; nested exception is javax.xml.bind.UnmarshalException: unexpected element (uri:"http://camel.apache.org/schema/spring", local:"camelContext"). Expected elements are <{}aggregate>,<{}aop>,<{}avro>,<{}base64>,<{}batchResequencerConfig>,<{}bean>,<{}beanPostProcessor>,<{}beanio>,<{}bindy>,<{}camelContext>,<{}castor>,<{}choice>,<{}constant>,<{}consumerTemplate>,<{}contextScan>,<{}convertBodyTo>,<{}crypto>,<{}csv>,<{}customDataFormat>,<{}customLoadBalancer>,<{}dataFormats>,<{}delay>,<{}description>,<{}doCatch>,<{}doFinally>,<{}doTry>,<{}dynamicRouter>,<{}el>,<{}endpoint>,<{}enrich>,<{}errorHandler>,<{}export>,<{}expression>,<{}expressionDefinition>,<{}failover>,<{}filter>,<{}flatpack>,<{}from>,<{}groovy>,<{}gzip>,<{}header>,<{}hl7>,<{}idempotentConsumer>,<{}inOnly>,<{}inOut>,<{}intercept>,<{}interceptFrom>,<{}interceptToEndpoint>,<{}javaScript>,<{}jaxb>,<{}jibx>,<{}jmxAgent>,<{}json>,<{}jxpath>,<{}keyStoreParameters>,<{}language>,<{}loadBalance>,<{}log>,<{}loop>,<{}marshal>,<{}method>,<{}multicast>,<{}mvel>,<{}ognl>,<{}onCompletion>,<{}onException>,<{}optimisticLockRetryPolicy>,<{}otherwise>,<{}packageScan>,<{}pgp>,<{}php>,<{}pipeline>,<{}policy>,<{}pollEnrich>,<{}process>,<{}properties>,<{}property>,<{}propertyPlaceholder>,<{}protobuf>,<{}proxy>,<{}python>,<{}random>,<{}recipientList>,<{}redeliveryPolicy>,<{}redeliveryPolicyProfile>,<{}ref>,<{}removeHeader>,<{}removeHeaders>,<{}removeProperty>,<{}resequence>,<{}rollback>,<{}roundRobin>,<{}route>,<{}routeBuilder>,<{}routeContext>,<{}routeContextRef>,<{}routes>,<{}routingSlip>,<{}rss>,<{}ruby>,<{}sample>,<{}secureRandomParameters>,<{}secureXML>,<{}serialization>,<{}setBody>,<{}setExchangePattern>,<{}setFaultBody>,<{}setHeader>,<{}setOutHeader>,<{}setProperty>,<{}simple>,<{}soapjaxb>,<{}sort>,<{}spel>,<{}split>,<{}sql>,<{}sslContextParameters>,<{}sticky>,<{}stop>,<{}streamCaching>,<{}streamResequencerConfig>,<{}string>,<{}syslog>,<{}template>,<{}threadPool>,<{}threadPoolProfile>,<{}threads>,<{}throttle>,<{}throwException>,<{}tidyMarkup>,<{}to>,<{}tokenize>,<{}topic>,<{}transacted>,<{}transform>,<{}unmarshal>,<{}validate>,<{}vtdxml>,<{}weighted>,<{}when>,<{}wireTap>,<{}xmlBeans>,<{}xmljson>,<{}xmlrpc>,<{}xpath>,<{}xquery>,<{}xstream>,<{}zip>,<{}zipFile> at org.apache.camel.spring.handler.CamelNamespaceHandler.parseUsingJaxb(CamelNamespaceHandler.java:169)
at org.apache.camel.spring.handler.CamelNamespaceHandler$CamelContextBeanDefinitionParser.doParse(CamelNamespaceHandler.java:307)
at org.springframework.beans.factory.xml.AbstractSingleBeanDefinitionParser.parseInternal(AbstractSingleBeanDefinitionParser.java:85)
at org.springframework.beans.factory.xml.AbstractBeanDefinitionParser.parse(AbstractBeanDefinitionParser.java:59)
at org.springframework.beans.factory.xml.NamespaceHandlerSupport.parse(NamespaceHandlerSupport.java:73)
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1438)
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1428)
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:185)
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:139)
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:108)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:493)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:390)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:334)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:302)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:174)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:209)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:180)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:243)
at org.springframework.context.support.AbstractXmlApplicationContext.loadBeanDefinitions(AbstractXmlApplicationContext.java:127)
at org.springframework.context.support.AbstractXmlApplicationContext.loadBeanDefinitions(AbstractXmlApplicationContext.java:93)
at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130)
at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:537)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:451)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:83)
at org.apache.camel.spring.SpringCamelContext.springCamelContext(SpringCamelContext.java:100)
at com.ericsson.bss.edm.integrationFramework.Context.<init>(Context.java:50)
at com.ericsson.bss.edm.integrationFramework.RouteEngine.main(RouteEngine.java:55)
at com.ericsson.bss.edm.integrationFramework.RouteEngineTest.testMultiRouteCondition(RouteEngineTest.java:174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:66)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:312)
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:86)
at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:94)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:296)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:112)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:73)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:284)
at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:84)
at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:49)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:209)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:148)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:102)
at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:42)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:62)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: javax.xml.bind.UnmarshalException: unexpected element (uri:"http://camel.apache.org/schema/spring", local:"camelContext"). Expected elements are <{}aggregate>,<{}aop>,<{}avro>,<{}base64>,<{}batchResequencerConfig>,<{}bean>,<{}beanPostProcessor>,<{}beanio>,<{}bindy>,<{}camelContext>,<{}castor>,<{}choice>,<{}constant>,<{}consumerTemplate>,<{}contextScan>,<{}convertBodyTo>,<{}crypto>,<{}csv>,<{}customDataFormat>,<{}customLoadBalancer>,<{}dataFormats>,<{}delay>,<{}description>,<{}doCatch>,<{}doFinally>,<{}doTry>,<{}dynamicRouter>,<{}el>,<{}endpoint>,<{}enrich>,<{}errorHandler>,<{}export>,<{}expression>,<{}expressionDefinition>,<{}failover>,<{}filter>,<{}flatpack>,<{}from>,<{}groovy>,<{}gzip>,<{}header>,<{}hl7>,<{}idempotentConsumer>,<{}inOnly>,<{}inOut>,<{}intercept>,<{}interceptFrom>,<{}interceptToEndpoint>,<{}javaScript>,<{}jaxb>,<{}jibx>,<{}jmxAgent>,<{}json>,<{}jxpath>,<{}keyStoreParameters>,<{}language>,<{}loadBalance>,<{}log>,<{}loop>,<{}marshal>,<{}method>,<{}multicast>,<{}mvel>,<{}ognl>,<{}onCompletion>,<{}onException>,<{}optimisticLockRetryPolicy>,<{}otherwise>,<{}packageScan>,<{}pgp>,<{}php>,<{}pipeline>,<{}policy>,<{}pollEnrich>,<{}process>,<{}properties>,<{}property>,<{}propertyPlaceholder>,<{}protobuf>,<{}proxy>,<{}python>,<{}random>,<{}recipientList>,<{}redeliveryPolicy>,<{}redeliveryPolicyProfile>,<{}ref>,<{}removeHeader>,<{}removeHeaders>,<{}removeProperty>,<{}resequence>,<{}rollback>,<{}roundRobin>,<{}route>,<{}routeBuilder>,<{}routeContext>,<{}routeContextRef>,<{}routes>,<{}routingSlip>,<{}rss>,<{}ruby>,<{}sample>,<{}secureRandomParameters>,<{}secureXML>,<{}serialization>,<{}setBody>,<{}setExchangePattern>,<{}setFaultBody>,<{}setHeader>,<{}setOutHeader>,<{}setProperty>,<{}simple>,<{}soapjaxb>,<{}sort>,<{}spel>,<{}split>,<{}sql>,<{}sslContextParameters>,<{}sticky>,<{}stop>,<{}streamCaching>,<{}streamResequencerConfig>,<{}string>,<{}syslog>,<{}template>,<{}threadPool>,<{}threadPoolProfile>,<{}threads>,<{}throttle>,<{}throwException>,<{}tidyMarkup>,<{}to>,<{}tokenize>,<{}topic>,<{}transacted>,<{}transform>,<{}unmarshal>,<{}validate>,<{}vtdxml>,<{}weighted>,<{}when>,<{}wireTap>,<{}xmlBeans>,<{}xmljson>,<{}xmlrpc>,<{}xpath>,<{}xquery>,<{}xstream>,<{}zip>,<{}zipFile>
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleEvent(UnmarshallingContext.java:647)
at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:258)
at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:253)
at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportUnexpectedChildElement(Loader.java:120)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext$DefaultRootLoader.childElement(UnmarshallingContext.java:1052)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:483)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:464)
at com.sun.xml.bind.v2.runtime.unmarshaller.InterningXmlVisitor.startElement(InterningXmlVisitor.java:75)
at com.sun.xml.bind.v2.runtime.unmarshaller.SAXConnector.startElement(SAXConnector.java:152)
at com.sun.xml.bind.unmarshaller.DOMScanner.visit(DOMScanner.java:244)
at com.sun.xml.bind.unmarshaller.DOMScanner.scan(DOMScanner.java:127)
at com.sun.xml.bind.unmarshaller.DOMScanner.scan(DOMScanner.java:105)
at com.sun.xml.bind.v2.runtime.BinderImpl.associativeUnmarshal(BinderImpl.java:161)
at com.sun.xml.bind.v2.runtime.BinderImpl.unmarshal(BinderImpl.java:132)
at org.apache.camel.spring.handler.CamelNamespaceHandler.parseUsingJaxb(CamelNamespaceHandler.java:167)
... 72 more
```
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++ Route.xml +++++++++++++++++++++++++++++++++++++++++++++
```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://camel.apache.org/schema/spring
http://camel.apache.org/schema/spring/camel-spring.xsd">
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route id="simpleroute">
<from uri="ftp://admin@x.y.z.a:2121/?password=admin&noop=true&maximumReconnectAttempts=3&download=false&delay=2000&throwExceptionOnConnectFailed=true;"/>
<to uri="file:/home/emeensa/NetBeansProjects/CamelFileCopier/output" />
</route>
</camelContext>
</beans>
```
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ | 2014/01/23 | [
"https://Stackoverflow.com/questions/21307128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2345966/"
] | This error message usually means that your specified truststore cannot be read. What I would check:
* Is the path correct? (I'm sure you checked this...)
* Does the user who started the JVM have enough access privileges to read the
truststore?
* When do you set the system properties? Are they already set when the webservice is invoked?
* Perhaps another component has overridden the values. Are the system properties still set when the webservice is invoked?
* Does the truststore contain the Salesforce certificate, and is the file not corrupt (e.g. check with `keytool -list`)?
**Edit:**
* Don't use `System.setProperty` but set the options when starting the Java process with `-Djavax.net.ssl.XXX`. The reason for this advice is as follows: The IBM security framework may read the options **before** you set the property (e.g. in a `static` block of a class). Of course this is framework specific and may change from version to version. | ```
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
```
>
> * In my case, I have 2 duplicate Java installations (OpenJDK and
> JDK-17).
> * I installed JDK-17 after configuring the environment variable for OpenJDK and before uninstalling OpenJDK.
> * So, maybe that is the problem.
>
>
>
This is how I SOLVED it **in my case:**
* First, I have completely removed openJDK and JDK-17 from my computer (including JDK-17/lib/security/cacerts).
* Then, I deleted the java environment variable and restarted the computer.
* Next, I thoroughly checked that there aren't any JDKs on the computer anymore.
* Finally, I just reinstalled JDK-17 (JDK-17/lib/security/cacerts is default). And it worked fine for me.
**Note:** kill any Java runtime tasks before uninstalling them. |
49,059,660 | I am looking for a simple way to constantly monitor a log file, and send me an email notification every time this log file has changed (new lines have been added to it).
The system runs on a Raspberry Pi 2 (OS Raspbian / Debian Stretch) and the log tracks a GPIO Python script running as a daemon.
I need something very simple and lightweight; I don't even need the text of the new log entry, because I know what it says: it is always the same 24 lines of text at the end.
Also, the log.txt file gets recreated every day at midnight, so that might represent another issue.
I already have a working Python script to send me a simple email via Gmail (I called it sendmail.py).
What I tried so far was creating and running the following bash script:
monitorlog.sh
```
#!/bin/bash
tail -F log.txt | python ./sendmail.py
```
The problem is that it just sends an email every time I execute it, but when the log actually changes, it just quits.
I am really new to Linux, so apologies if I missed something.
Cheers | 2018/03/01 | [
"https://Stackoverflow.com/questions/49059660",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9431262/"
] | You asked for simple:
```
#!/bin/bash
# poll the log every 5 seconds and send a mail whenever the line count changes
cur_line_count="$(wc -l < log.txt)"
while true
do
    new_line_count="$(wc -l < log.txt)"
    if [ "$cur_line_count" != "$new_line_count" ]
    then
        python ./sendmail.py
    fi
    cur_line_count="$new_line_count"
    sleep 5
done
``` | I've done this a bunch of different ways. One option is a cron job that runs every minute, counts the number of lines (`wc -l`), compares that to a stored count (e.g. in /tmp/myfilecounter), and sends the email when the numbers differ.
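A minimal Python sketch of that cron approach (assumptions for illustration: the counter lives in /tmp/myfilecounter and sendmail.py is the asker's existing script):

```py
#!/usr/bin/env python3
# run from cron, e.g.: * * * * * /usr/bin/python3 /home/pi/checklog.py
# (the script name and paths here are assumptions, not from the original answer)
import pathlib
import subprocess

LOG = pathlib.Path("log.txt")
COUNTER = pathlib.Path("/tmp/myfilecounter")

lines = LOG.read_text().count("\n") if LOG.exists() else 0
previous = int(COUNTER.read_text()) if COUNTER.exists() else -1

if lines != previous:
    COUNTER.write_text(str(lines))
    if previous != -1:  # skip the very first run
        subprocess.run(["python", "./sendmail.py"])
```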
If you have inotify, there are more direct ways to get "woken up" when the file changes, e.g. <https://serverfault.com/a/780522/97447> or <https://serverfault.com/search?q=inotifywait>.
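For a pure-Python take on the inotify idea, a rough sketch with the third-party `watchdog` package (my assumption; the links above describe the shell tools) could look like this:

```py
# pip install watchdog  (assumed dependency, not part of the original answer)
import subprocess
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class LogChanged(FileSystemEventHandler):
    def on_modified(self, event):
        # fire the existing notification script whenever log.txt is written to
        if event.src_path.endswith("log.txt"):
            subprocess.run(["python", "./sendmail.py"])

observer = Observer()
observer.schedule(LogChanged(), path=".", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```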
If you don't mind adding a package to the system, incron is a very convenient way to run a script whenever a file or directory is modified, and it looks like it's supported on raspbian (internally it uses inotify). <https://www.linux.com/learn/how-use-incron-monitor-important-files-and-folders>. Looks like it's as simple as:
```
sudo apt-get install incron
sudo vi /etc/incron.allow # Add your userid to this file (or just rm /etc/incron.allow to let everyone use incron)
incron -e # Add the following line to the "cron" file
/path/to/log.txt IN_MODIFY python ./sendmail.py
```
And you'd be done! |
56,794,886 | Guys! So I recently started learning about Python classes and objects.
For instance, I have the following list of strings:
```
alist = ["Four", "Three", "Five", "One", "Two"]
```
Which is comparable to a class of Numbers I have:
```
class Numbers(object):
One=1
Two=2
Three=3
Four=4
Five=5
```
How could I convert `alist` into
```
alist = [4, 3, 5, 1, 2]
```
based on the class above?
My initial thought was to create a new (empty) list and use a `for loop` that adds the corresponding object value (e.g. `Numbers.One`) to the empty list as it goes through `alist`. But I'm unsure whether that'd be the most efficient solution.
Therefore, I was wondering if there was a simpler way of completing this task using Python Classes / Inheritance.
I hope someone can help me and explain which way would work better and why!
Thank you!! | 2019/06/27 | [
"https://Stackoverflow.com/questions/56794886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10713538/"
] | If you are set on using the class, one way would be to use [`__getattribute__()`](https://docs.python.org/3/reference/datamodel.html#object.__getattribute__)
```
print([Numbers().__getattribute__(a) for a in alist])
#[4, 3, 5, 1, 2]
```
But a much better (and more pythonic IMO) way would be to use a `dict`:
```
NumbersDict = dict(
One=1,
Two=2,
Three=3,
Four=4,
Five=5
)
print([NumbersDict[a] for a in alist])
#[4, 3, 5, 1, 2]
``` | **EDIT:** I suppose the words and numbers are just a trivial example; if that's not the case, a dictionary is the right way to do it, as noted in the comments.
Your assumptions are correct: either create an empty list and populate it using a for loop, or use a list comprehension to build the new list with the required elements.
Empty list with for loop
========================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
nlist = []
numbers = Numbers()
for anumber in alist:
nlist.append(getattr(numbers, anumber))
print(nlist)
[4, 3, 5, 1, 2]
```
List comprehension with for loop
================================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
numbers = Numbers()
nlist = [getattr(numbers, anumber) for anumber in alist]
print(nlist)
[4, 3, 5, 1, 2]
``` |
56,794,886 | Guys! So I recently started learning about Python classes and objects.
For instance, I have the following list of strings:
```
alist = ["Four", "Three", "Five", "One", "Two"]
```
Which is comparable to a class of Numbers I have:
```
class Numbers(object):
One=1
Two=2
Three=3
Four=4
Five=5
```
How could I convert `alist` into
```
alist = [4, 3, 5, 1, 2]
```
based on the class above?
My initial thought was to create a new (empty) list and use a `for loop` that adds the corresponding object value (e.g. `Numbers.One`) to the empty list as it goes through `alist`. But I'm unsure whether that'd be the most efficient solution.
Therefore, I was wondering if there was a simpler way of completing this task using Python Classes / Inheritance.
I hope someone can help me and explain which way would work better and why!
Thank you!! | 2019/06/27 | [
"https://Stackoverflow.com/questions/56794886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10713538/"
] | Most objects (and hence classes) in python have the `__dict__` field, which is a mapping from attribute names to their values. You can access this field using the built-in [`vars`](https://docs.python.org/3/library/functions.html#vars), so
```
values = [vars(Numbers)[a] for a in alist]
```
will give you what you want. | **EDIT:** I suppose the words and numbers are just a trivial example; if that's not the case, a dictionary is the right way to do it, as noted in the comments.
Your assumptions are correct: either create an empty list and populate it using a for loop, or use a list comprehension to build the new list with the required elements.
Empty list with for loop
========================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
nlist = []
numbers = Numbers()
for anumber in alist:
nlist.append(getattr(numbers, anumber))
print(nlist)
[4, 3, 5, 1, 2]
```
List comprehension with for loop
================================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
numbers = Numbers()
nlist = [getattr(numbers, anumber) for anumber in alist]
print(nlist)
[4, 3, 5, 1, 2]
``` |
56,794,886 | Guys! So I recently started learning about Python classes and objects.
For instance, I have the following list of strings:
```
alist = ["Four", "Three", "Five", "One", "Two"]
```
Which is comparable to a class of Numbers I have:
```
class Numbers(object):
One=1
Two=2
Three=3
Four=4
Five=5
```
How could I convert `alist` into
```
alist = [4, 3, 5, 1, 2]
```
based on the class above?
My initial thought was to create a new (empty) list and use a `for loop` that adds the corresponding object value (e.g. `Numbers.One`) to the empty list as it goes through `alist`. But I'm unsure whether that'd be the most efficient solution.
Therefore, I was wondering if there was a simpler way of completing this task using Python Classes / Inheritance.
I hope someone can help me and explain which way would work better and why!
Thank you!! | 2019/06/27 | [
"https://Stackoverflow.com/questions/56794886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10713538/"
] | While I totally agree that using a `dict` for `Numbers` would be easier and more straightforward, here is the `Enum` way as well, since your class involves magic numbers, which is a fairly valid use case for enums.
A similar implementation using `Enum` would be:
```
from enum import Enum
class Numbers(Enum):
One = 1
Two = 2
Three = 3
Four = 4
Five = 5
```
Then you can use `getattr` and `Numbers.<attr>.value` to get the constant numbers:
```
In [592]: alist = ["Four", "Three", "Five", "One", "Two"]
In [593]: [getattr(Numbers, n).value for n in alist]
Out[593]: [4, 3, 5, 1, 2]
```
---
**Edit based on comment:**
If you want to get the names back from a number list:
```
In [952]: l = [4, 3, 5, 1, 2]
In [953]: [Numbers(num).name for num in l]
Out[953]: ['Four', 'Three', 'Five', 'One', 'Two']
``` | **EDIT:** I suppose the words and numbers are just a trivial example; if that's not the case, a dictionary is the right way to do it, as noted in the comments.
Your assumptions are correct: either create an empty list and populate it using a for loop, or use a list comprehension to build the new list with the required elements.
Empty list with for loop
========================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
nlist = []
numbers = Numbers()
for anumber in alist:
nlist.append(getattr(numbers, anumber))
print(nlist)
[4, 3, 5, 1, 2]
```
List comprehension with for loop
================================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
numbers = Numbers()
nlist = [getattr(numbers, anumber) for anumber in alist]
print(nlist)
[4, 3, 5, 1, 2]
``` |
36,108,377 | I want to count the number of times a word is repeated in the review string.
I am reading the CSV file and storing it in a pandas dataframe using the line below:
```
reviews = pd.read_csv("amazon_baby.csv")
```
The code in the lines below works when I apply it to a single review.
```
print reviews["review"][1]
a = reviews["review"][1].split("disappointed")
print a
b = len(a)
print b
```
The output for the above lines was:
```
it came early and was not disappointed. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.
['it came early and was not ', '. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.']
2
```
When I apply the same logic to the entire dataframe using the line below, I receive an error message.
```
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
```
Error message:
```
Traceback (most recent call last):
File "C:/Users/gouta/PycharmProjects/MLCourse1/Classifier.py", line 12, in <module>
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
File "C:\Users\gouta\Anaconda2\lib\site-packages\pandas\core\generic.py", line 2360, in __getattr__
(type(self).__name__, name))
AttributeError: 'Series' object has no attribute 'split'
``` | 2016/03/19 | [
"https://Stackoverflow.com/questions/36108377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2861976/"
] | You're trying to split the entire review column of the data frame (which is the Series mentioned in the error message). What you want to do is apply a function to each row of the data frame, which you can do by calling [apply](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html) on the data frame:
```
f = lambda x: len(x["review"].split("disappointed")) -1
reviews["disappointed"] = reviews.apply(f, axis=1)
``` | Well, the problem is with:
```
reviews["review"]
```
The above is a Series. In your first snippet, you are doing this:
```
reviews["review"][1].split("disappointed")
```
That is, you are indexing a single review. You could try looping over all rows of the column and performing your desired action. For example:
```
for index, row in reviews.iterrows():
print len(row['review'].split("disappointed"))
``` |
36,108,377 | I want to count the number of times a word is repeated in the review string.
I am reading the CSV file and storing it in a pandas dataframe using the line below:
```
reviews = pd.read_csv("amazon_baby.csv")
```
The code in the lines below works when I apply it to a single review.
```
print reviews["review"][1]
a = reviews["review"][1].split("disappointed")
print a
b = len(a)
print b
```
The output for the above lines was:
```
it came early and was not disappointed. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.
['it came early and was not ', '. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.']
2
```
When I apply the same logic to the entire dataframe using the line below, I receive an error message.
```
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
```
Error message:
```
Traceback (most recent call last):
File "C:/Users/gouta/PycharmProjects/MLCourse1/Classifier.py", line 12, in <module>
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
File "C:\Users\gouta\Anaconda2\lib\site-packages\pandas\core\generic.py", line 2360, in __getattr__
(type(self).__name__, name))
AttributeError: 'Series' object has no attribute 'split'
``` | 2016/03/19 | [
"https://Stackoverflow.com/questions/36108377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2861976/"
] | You're trying to split the entire review column of the data frame (which is the Series mentioned in the error message). What you want to do is apply a function to each row of the data frame, which you can do by calling [apply](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html) on the data frame:
```
f = lambda x: len(x["review"].split("disappointed")) -1
reviews["disappointed"] = reviews.apply(f, axis=1)
``` | You can use `.str` to use string methods on series of strings:
```
reviews["review"].str.split("disappointed")
``` |
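To turn the split pieces into an actual count per review, one possible follow-up (a sketch with made-up data, not part of the original answer):

```py
import pandas as pd

reviews = pd.DataFrame({"review": [
    "not disappointed at all",
    "disappointed, so disappointed",
]})

# occurrences per row = number of split pieces minus one
counts = reviews["review"].str.split("disappointed").str.len() - 1
print(counts.tolist())  # [1, 2]
```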
36,108,377 | I want to count the number of times a word is repeated in the review string.
I am reading the CSV file and storing it in a pandas dataframe using the line below:
```
reviews = pd.read_csv("amazon_baby.csv")
```
The code in the lines below works when I apply it to a single review.
```
print reviews["review"][1]
a = reviews["review"][1].split("disappointed")
print a
b = len(a)
print b
```
The output for the above lines was:
```
it came early and was not disappointed. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.
['it came early and was not ', '. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.']
2
```
When I apply the same logic to the entire dataframe using the line below, I receive an error message.
```
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
```
Error message:
```
Traceback (most recent call last):
File "C:/Users/gouta/PycharmProjects/MLCourse1/Classifier.py", line 12, in <module>
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
File "C:\Users\gouta\Anaconda2\lib\site-packages\pandas\core\generic.py", line 2360, in __getattr__
(type(self).__name__, name))
AttributeError: 'Series' object has no attribute 'split'
``` | 2016/03/19 | [
"https://Stackoverflow.com/questions/36108377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2861976/"
] | You're trying to split the entire review column of the data frame (which is the Series mentioned in the error message). What you want to do is apply a function to each row of the data frame, which you can do by calling [apply](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html) on the data frame:
```
f = lambda x: len(x["review"].split("disappointed")) -1
reviews["disappointed"] = reviews.apply(f, axis=1)
``` | pandas 0.20.3 has **pandas.Series.str.split()**, which acts on every string of the series and does the split. So you can simply split and then count the number of pieces each review produced:
```
reviews['review'].str.split('disappointed').str.len() - 1
```
[pandas.Series.str.split](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html) |
36,108,377 | I want to count the number of times a word is repeated in the review string.
I am reading the CSV file and storing it in a pandas dataframe using the line below:
```
reviews = pd.read_csv("amazon_baby.csv")
```
The code in the lines below works when I apply it to a single review.
```
print reviews["review"][1]
a = reviews["review"][1].split("disappointed")
print a
b = len(a)
print b
```
The output for the above lines was:
```
it came early and was not disappointed. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.
['it came early and was not ', '. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.']
2
```
When I apply the same logic to the entire dataframe using the line below, I receive an error message.
```
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
```
Error message:
```
Traceback (most recent call last):
File "C:/Users/gouta/PycharmProjects/MLCourse1/Classifier.py", line 12, in <module>
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
File "C:\Users\gouta\Anaconda2\lib\site-packages\pandas\core\generic.py", line 2360, in __getattr__
(type(self).__name__, name))
AttributeError: 'Series' object has no attribute 'split'
``` | 2016/03/19 | [
"https://Stackoverflow.com/questions/36108377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2861976/"
] | pandas 0.20.3 has **pandas.Series.str.split()**, which acts on every string of the series and does the split. So you can simply split and then count the number of pieces each review produced:
```
reviews['review'].str.split('disappointed').str.len() - 1
```
[pandas.Series.str.split](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html) | Well, the problem is with:
```
reviews["review"]
```
The above is a Series. In your first snippet, you are doing this:
```
reviews["review"][1].split("disappointed")
```
That is, you are indexing a single review. You could try looping over all rows of the column and performing your desired action. For example:
```
for index, row in reviews.iterrows():
print len(row['review'].split("disappointed"))
``` |
36,108,377 | I want to count the number of times a word is repeated in the review string.
I am reading the CSV file and storing it in a pandas dataframe using the line below:
```
reviews = pd.read_csv("amazon_baby.csv")
```
The code in the lines below works when I apply it to a single review.
```
print reviews["review"][1]
a = reviews["review"][1].split("disappointed")
print a
b = len(a)
print b
```
The output for the above lines was:
```
it came early and was not disappointed. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.
['it came early and was not ', '. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.']
2
```
When I apply the same logic to the entire dataframe using the line below, I receive an error message.
```
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
```
Error message:
```
Traceback (most recent call last):
File "C:/Users/gouta/PycharmProjects/MLCourse1/Classifier.py", line 12, in <module>
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
File "C:\Users\gouta\Anaconda2\lib\site-packages\pandas\core\generic.py", line 2360, in __getattr__
(type(self).__name__, name))
AttributeError: 'Series' object has no attribute 'split'
``` | 2016/03/19 | [
"https://Stackoverflow.com/questions/36108377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2861976/"
] | pandas 0.20.3 has **pandas.Series.str.split()**, which acts on every string of the series and does the split. So you can simply split and then count the number of pieces each review produced:
```
reviews['review'].str.split('disappointed').str.len() - 1
```
[pandas.Series.str.split](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html) | You can use `.str` to use string methods on series of strings:
```
reviews["review"].str.split("disappointed")
``` |
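An alternative sketch (assumed, not from the original answer) gets the per-review counts directly with `Series.str.count`:

```py
import pandas as pd

reviews = pd.DataFrame({"review": [
    "not disappointed at all",
    "disappointed, so disappointed",
]})

# str.count takes a regex; 'disappointed' contains no special characters
print(reviews["review"].str.count("disappointed").tolist())  # [1, 2]
```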
72,329,252 | Let's say we have the following list. This list contains the response times of a REST server in a traffic run.
```
[1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
```
I need the following output:
```
Percentage of the requests served within a certain time (ms)
 50%  3
 60%  4
 70%  5
 80%  6
 90%  7
100%  9
```
How can we get this done in Python? This is Apache Bench style output. So basically, let's say at 50%, we need to find the point in the list below which 50% of the list elements fall, and so on. | 2022/05/21 | [
"https://Stackoverflow.com/questions/72329252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4137009/"
] | You can try something like this:
```
responseTimes = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
for time in range(3,10):
percentage = len([x for x in responseTimes if x <= time])/(len(responseTimes))
print(f'{percentage*100}%')
```
>
> *"So basically lets say at 50%, we need to find point in list below which 50% of the list elements are present and so on"*
>
>
>
```
responseTimes = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
percentage = 0
time = 0
while(percentage <= 0.5):
percentage = len([x for x in responseTimes if x <= time])/(len(responseTimes))
time+=1
print(f'Every time under {time}(ms) occurs lower than 50% of the time')
``` | You basically need to compute the cumulative ratio of the sorted response times.
```py
from collections import Counter
values = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
frequency = Counter(values) # {1: 2, 2: 1, 3: 2, ...}
total = 0
n = len(values)
for time in sorted(frequency):
total += frequency[time]
print(time, f'{100*total/n}%')
```
This will print all times with the corresponding ratios.
```py
1 20.0%
2 30.0%
3 50.0%
4 60.0%
5 70.0%
6 80.0%
7 90.0%
9 100.0%
``` |
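For comparison, a short sketch with `numpy.percentile` reproduces the same table (note: NumPy older than 1.22 spells the keyword `interpolation='lower'` instead of `method='lower'`):

```py
import numpy as np

times = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
for p in (50, 60, 70, 80, 90, 100):
    # 'lower' picks an actual observed value rather than interpolating
    print(f"{p}% {int(np.percentile(times, p, method='lower'))}")
```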
50,239,640 | In Python I have three one-dimensional arrays of different shapes (like the ones given below):
```
a0 = np.array([5,6,7,8,9])
a1 = np.array([1,2,3,4])
a2 = np.array([11,12])
```
I am assuming that the array `a0` corresponds to an index `i=0`, `a1` corresponds to index `i=1` and `a2` corresponds to `i=2`. With these assumptions I want to construct a new two dimensional array where the rows would correspond to indices of the arrays (`i=0,1,2`) and the columns would be entries of the arrays `a0, a1, a2`.
In the example that I have given here, I would like the two-dimensional array to look like
```
result = np.array([ [0,5], [0,6], [0,7], [0,8], [0,9], [1,1], [1,2],\
[1,3], [1,4], [2,11], [2,12] ])
```
I would very much appreciate an answer as to how I can achieve this. In the actual problem I am working with, I am dealing with more than three one-dimensional arrays, so it would be nice if the answer takes this into consideration. | 2018/05/08 | [
"https://Stackoverflow.com/questions/50239640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3761166/"
] | You can use `numpy` stack functions to speed up:
```
aa = [a0, a1, a2]
np.hstack(tuple(np.vstack((np.full(ai.shape, i), ai)) for i, ai in enumerate(aa))).T
``` | One way to do this would be a simple list comprehension:
```
result = np.array([[i, arr_v] for i, arr in enumerate([a0, a1, a2])
for arr_v in arr])
>>> result
array([[ 0, 5],
[ 0, 6],
[ 0, 7],
[ 0, 8],
[ 0, 9],
[ 1, 1],
[ 1, 2],
[ 1, 3],
[ 1, 4],
[ 2, 11],
[ 2, 12]])
```
Addressing your concern about scaling this to more arrays: you can easily add as many arrays as you wish by simply creating a list of your array names and using that list as the argument to `enumerate`:
```
.... for i, arr in enumerate(my_list_of_arrays) ...
``` |
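A quick check of the stack-functions one-liner from the first answer, using the arrays from the question (expected output shown as a comment):

```py
import numpy as np

a0 = np.array([5, 6, 7, 8, 9])
a1 = np.array([1, 2, 3, 4])
a2 = np.array([11, 12])
aa = [a0, a1, a2]

result = np.hstack(tuple(np.vstack((np.full(ai.shape, i), ai))
                         for i, ai in enumerate(aa))).T
print(result.tolist())
# [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [1, 1], [1, 2], [1, 3], [1, 4], [2, 11], [2, 12]]
```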
50,239,640 | In Python I have three one-dimensional arrays of different shapes (like the ones given below):
```
a0 = np.array([5,6,7,8,9])
a1 = np.array([1,2,3,4])
a2 = np.array([11,12])
```
I am assuming that the array `a0` corresponds to an index `i=0`, `a1` corresponds to index `i=1` and `a2` corresponds to `i=2`. With these assumptions I want to construct a new two dimensional array where the rows would correspond to indices of the arrays (`i=0,1,2`) and the columns would be entries of the arrays `a0, a1, a2`.
In the example that I have given here, I would like the two-dimensional array to look like
```
result = np.array([ [0,5], [0,6], [0,7], [0,8], [0,9], [1,1], [1,2],\
[1,3], [1,4], [2,11], [2,12] ])
```
I would very much appreciate an answer as to how I can achieve this. In the actual problem I am working with, I am dealing with more than three one-dimensional arrays, so it would be nice if the answer takes this into consideration. | 2018/05/08 | [
"https://Stackoverflow.com/questions/50239640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3761166/"
] | You can use `numpy` stack functions to speed up:
```
aa = [a0, a1, a2]
np.hstack(tuple(np.vstack((np.full(ai.shape, i), ai)) for i, ai in enumerate(aa))).T
``` | Here's an almost vectorized approach -
```
L = [a0,a1,a2] # list of all arrays
lens = [len(i) for i in L] # only looping part*
out = np.dstack(( np.repeat(np.arange(len(L)), lens), np.concatenate(L)))
```
\*The looping part is simply to get the lengths of the arrays, which should have negligible impact on the total runtime.
Sample run -
```
In [19]: L = [a0,a1,a2] # list of all arrays
In [20]: lens = [len(i) for i in L]
In [21]: np.dstack(( np.repeat(np.arange(len(L)), lens), np.concatenate(L)))
Out[21]:
array([[[ 0, 5],
[ 0, 6],
[ 0, 7],
[ 0, 8],
[ 0, 9],
[ 1, 1],
[ 1, 2],
[ 1, 3],
[ 1, 4],
[ 2, 11],
[ 2, 12]]])
```
Another way could be to avoid `np.repeat` and use some array-initialization + cumsum method, which would be better for large number of arrays, as shown below -
```
col1 = np.concatenate(L)
col0 = np.zeros(len(col1), dtype=col1.dtype)
col0[np.cumsum(lens[:-1])] = 1
out = np.dstack((col0.cumsum(), col1))
```
Or use `np.maximum.accumulate` to replace the second `cumsum` -
```
col0[np.cumsum(lens[:-1])] = np.arange(1,len(L))
out = np.dstack((np.maximum.accumulate(col0), col1))
``` |
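A quick sanity check of the cumsum variant with the question's arrays (a sketch; the `np.dstack` result has shape (1, 11, 2)):

```py
import numpy as np

L = [np.array([5, 6, 7, 8, 9]), np.array([1, 2, 3, 4]), np.array([11, 12])]
lens = [len(i) for i in L]

col1 = np.concatenate(L)
col0 = np.zeros(len(col1), dtype=col1.dtype)
col0[np.cumsum(lens[:-1])] = 1  # mark where each new array starts
out = np.dstack((col0.cumsum(), col1))
print(out.shape)            # (1, 11, 2)
print(out[0].tolist()[:3])  # [[0, 5], [0, 6], [0, 7]]
```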
45,939,564 | I am accessing a Google Sheets spreadsheet via Python.
The Google Sheet looks like the following:
[![enter image description here](https://i.stack.imgur.com/eIW7v.png)](https://i.stack.imgur.com/eIW7v.png)
But when I access it via:
```
self.probe=[]
self.scope = ['https://spreadsheets.google.com/feeds']
self.creds = ServiceAccountCredentials.from_json_keyfile_name('client_secret.json', self.scope)
self.client = gspread.authorize(self.creds)
self.sheet = self.client.open('Beziehende').sheet1
self.probe = self.sheet.get_all_records()
print(self.probe)
```
it results in [![enter image description here](https://i.stack.imgur.com/2tHia.png)](https://i.stack.imgur.com/2tHia.png)
How can I get the results in the same order as they are written in the Google Sheet?
Thank you for your help.
**Edit** Sorry, here is some more information. My program has two functions:
1.) It can check if a name / address etc. is already in the database. If the name is in the database, it prints all the information about that person.
2.) It lets me add people's information to the database.
**The Problem**: I am loading the whole database into a list and later writing it all back. But when writing it back, the order gets messed up, as `get_all_records` returned it in a random order. (This is the very first program I have ever written by myself, so please forgive the bad coding.)
I wanted to know if there is a possibility to get the data in order, but if not, then I just have to find a way to only write the newest entry (which is probably more efficient anyway, I guess...).
```
def create_window(self):
self.t = Toplevel(self)
self.t.geometry("250x150")
Message(self.t, text="Name", width=100, anchor=W).grid(row=1, column=1)
self.name_entry = Entry(self.t)
self.name_entry.grid(row=1, column=2)
Message(self.t, text="Adresse", width=100, anchor=W).grid(row=2, column=1)
self.adr_entry = Entry(self.t)
self.adr_entry.grid(row=2, column=2)
Message(self.t, text="Organisation", width=100, anchor=W).grid(row=3, column=1)
self.org_entry = Entry(self.t)
self.org_entry.grid(row=3, column=2)
Message(self.t, text="Datum", width=100, anchor=W).grid(row=4, column=1)
self.date_entry = Entry(self.t)
self.date_entry.grid(row=4, column=2)
self.t.button = Button(self.t, text="Speichern", command=self.verify).grid(row=5, column=2)
#name
#window = Toplevel(self.insert_window)
def verify(self):
self.ver = Toplevel(self)
self.ver.geometry("300x150")
self.ver.grid_columnconfigure(1, minsize=100)
Message(self.ver, text=self.name_entry.get(), width=100).grid(row=1, column=1)
Message(self.ver, text=self.adr_entry.get(), width=100).grid(row=2, column=1)
Message(self.ver, text=self.org_entry.get(), width=100).grid(row=3, column=1)
Message(self.ver, text=self.date_entry.get(), width=100).grid(row=4, column=1)
confirm_button=Button(self.ver, text='Bestätigen', command=self.data_insert).grid(row=4, column=1)
cancle_button=Button(self.ver, text='Abbrechen', command=self.ver.destroy).grid(row=4, column=2)
def data_insert(self):
new_dict = collections.OrderedDict()
new_dict['name'] = self.name_entry.get()
new_dict['adresse'] = self.adr_entry.get()
new_dict['organisation'] = self.org_entry.get()
new_dict['datum'] = self.date_entry.get()
print(new_dict)
self.probe.append(new_dict)
#self.sheet.update_acell('A4',new_dict['name'])
self.update_gsheet()
self.ver.destroy()
self.t.destroy()
def update_gsheet(self):
i = 2
for dic_object in self.probe:
j = 1
for category in dic_object:
self.sheet.update_cell(i,j,dic_object[category])
j += 1
i += 1
def search(self):
print(self.probe)
self.result = []
self.var = self.entry.get() #starting index better
self.search_algo()
self.outputtext.delete('1.0', END)
for dict in self.result:
print(dict['Name'], dict['Adresse'], dict['Organisation'])
self.outputtext.insert(END, dict['Name'] + '\n')
self.outputtext.insert(END, dict['Adresse']+ '\n')
self.outputtext.insert(END, dict['Organisation']+ '\n')
self.outputtext.insert(END, 'Erhalten am '+dict['Datum']+'\n'+'\n')
if not self.result:
self.outputtext.insert(END, 'Name not found')
return FALSE
return TRUE
def search_algo(self):
category = self.v.get()
print(category)
for dict_object in self.probe:
if dict_object[category] == self.var:
self.result.append(dict_object)
``` | 2017/08/29 | [
"https://Stackoverflow.com/questions/45939564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3554329/"
] | I'm not familiar with gspread, which appears to be a third-party client for the Google Sheets API, but it looks like you should be using [`get_all_values`](https://github.com/burnash/gspread#getting-all-values-from-a-worksheet-as-a-list-of-lists) rather than `get_all_records`. That will give you a list of lists, rather than a list of dicts. | Python dictionaries are unordered. There is the [OrderedDict](https://docs.python.org/3.6/library/collections.html#collections.OrderedDict) in collections, but it's hard to say more about the best course of action without more insight into why you need this dictionary ordered... |
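A short sketch of the `get_all_values` suggestion, reusing the credentials setup from the question (the sheet name and key file are the question's own):

```py
import gspread
from oauth2client.service_account import ServiceAccountCredentials

scope = ['https://spreadsheets.google.com/feeds']
creds = ServiceAccountCredentials.from_json_keyfile_name('client_secret.json', scope)
client = gspread.authorize(creds)

# list of lists, in the same row order as the sheet
rows = client.open('Beziehende').sheet1.get_all_values()
header, data = rows[0], rows[1:]
print(header)
```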
55,508,830 | In a virtualenv with Python 3.7.2, I am trying to run Django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | I've just been through this. I had to install a separate newer version of SQLite, from
<https://www.sqlite.org/download.html>
That is in /usr/local/bin. Then I had to recompile Python, telling it to look there:
```
sudo LD_RUN_PATH=/usr/local/lib ./configure --enable-optimizations
sudo LD_RUN_PATH=/usr/local/lib make altinstall
```
To check which version of SQLite Python is using:
```
$ python
Python 3.7.3 (default, Apr 12 2019, 16:23:13)
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.27.2'
``` | In addition to the above-mentioned answers: if you experience this behaviour on Travis CI, add the `dist: xenial` directive to fix it. |
55,508,830 | In a virtualenv with Python 3.7.2, I am trying to run Django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | I've just been through this. I had to install a separate newer version of SQLite, from
<https://www.sqlite.org/download.html>
That is in /usr/local/bin. Then I had to recompile Python, telling it to look there:
```
sudo LD_RUN_PATH=/usr/local/lib ./configure --enable-optimizations
sudo LD_RUN_PATH=/usr/local/lib make altinstall
```
To check which version of SQLite Python is using:
```
$ python
Python 3.7.3 (default, Apr 12 2019, 16:23:13)
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.27.2'
``` | I have applied the following fix and it worked for my CentOS 7.x server.
Edit `/usr/lib64/python3.6/site-packages/django/db/backends/sqlite3/base.py` file as per the below example:
```
def check_sqlite_version():
# if Database.sqlite_version_info < (3, 8, 3):
# 2018-07-07, edit
if Database.sqlite_version_info < (3, 6, 3):
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
``` |
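Before (or instead of) patching Django like this, a quick check of which SQLite the interpreter is actually linked against can confirm the problem (a minimal sketch):

```py
import sqlite3

# stock Django 2.2 requires the runtime SQLite to be at least 3.8.3
print(sqlite3.sqlite_version)
assert sqlite3.sqlite_version_info >= (3, 8, 3), "Django 2.2 will refuse to start"
```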
55,508,830 | In a virtualenv with Python 3.7.2, I am trying to run Django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | If you don't want to recompile Python and you're using a virtualenv, you can do this to set it up without affecting the system as a whole (I've done this with Ubuntu 16/18):
1. Download SQLite tarball from <https://www.sqlite.org/download.html>
2. Extract the contents and cd into the folder.
3. Run the following commands:

   ```
   ./configure
   sudo make install
   ```
4. Now edit the `activate` script used to start your virtualenv so Python looks in the right place for the newly installed SQLite. Add the following line to the top of `/path/to/virtualenv/bin/activate`:

   ```
   export LD_LIBRARY_PATH="/usr/local/lib"
   ```
Now, when active, Django 2.2+ should work fine in the virtualenv. Hope that helps. | I've just been through this. I had to install a separate newer version of SQLite, from
<https://www.sqlite.org/download.html>
That is in /usr/local/bin. Then I had to recompile Python, telling it to look there:
```
sudo LD_RUN_PATH=/usr/local/lib ./configure --enable-optimizations
sudo LD_RUN_PATH=/usr/local/lib make altinstall
```
To check which version of SQLite Python is using:
```
$ python
Python 3.7.3 (default, Apr 12 2019, 16:23:13)
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.27.2'
``` |
55,508,830 | In a virtual Env with Python 3.7.2, I am trying to run django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | If you don't want to recompile Python and you're using a virtualenv, you can do this to set it up without affecting the system as a whole (I've done this with Ubuntu 16/18):
1. Download SQLite tarball from <https://www.sqlite.org/download.html>
2. Extract the contents and cd into the folder.
3. Run the following commands:
```
./configure
sudo make install
```
4. Now edit the `activate` script used to start your virtualenv so Python looks in the right place for the newly installed SQLite. Add the following line to the top of `/path/to/virtualenv/bin/activate`:
```
export LD_LIBRARY_PATH="/usr/local/lib"
```
Now, when active, Django 2.2+ should work fine in the virtualenv. Hope that helps. | In addition to the above-mentioned answers: in case you experience this behaviour on Travis CI, add the `dist: xenial` directive to fix it. |
55,508,830 | In a virtual Env with Python 3.7.2, I am trying to run django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | This error occurs because your virtual environment cannot see a new enough sqlite3 library. You have to update sqlite3 manually and then point your virtual environment at the new library. Follow the steps below:
1. Download the latest sqlite3 from the official site (<https://www.sqlite.org/download.html>): `wget http://www.sqlite.org/sqlite-autoconf-3070603.tar.gz`
2. Then go to that folder and run:
`tar xvfz sqlite-autoconf-3070603.tar.gz`
3. Go to the extracted folder: `cd sqlite-autoconf-3070603`
4. `./configure`
5. `make`
6. `make install` (this may take a while, but wait until it finishes; if it takes too long, you can terminate the process and continue with the remaining steps)
7. sqlite3 is now installed. Next, run this command from the Python source directory so Python is rebuilt against the new library: `sudo LD_RUN_PATH=/usr/local/lib ./configure --enable-optimizations`
8. Open the activate file of your virtual environment (e.g., venv/bin/activate) and add this line at the top of the file:
`export LD_LIBRARY_PATH="/usr/local/lib"`
9. To check, run these commands in your Python shell:
```py
$ python
Python 3.7.3 (default, Apr 12 2019, 16:23:13)
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.27.2'
``` | In addition to the above-mentioned answers: in case you experience this behaviour on Travis CI, add the `dist: xenial` directive to fix it. |
55,508,830 | In a virtual Env with Python 3.7.2, I am trying to run django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | If you don't want to recompile Python and you're using a virtualenv, you can do this to set it up without affecting the system as a whole (I've done this with Ubuntu 16/18):
1. Download SQLite tarball from <https://www.sqlite.org/download.html>
2. Extract the contents and cd into the folder.
3. Run the following commands:
```
./configure
sudo make install
```
4. Now edit the `activate` script used to start your virtualenv so Python looks in the right place for the newly installed SQLite. Add the following line to the top of `/path/to/virtualenv/bin/activate`:
```
export LD_LIBRARY_PATH="/usr/local/lib"
```
Now, when active, Django 2.2+ should work fine in the virtualenv. Hope that helps. | I have applied the following fix and it worked for my CentOS 7.x server.
Edit `/usr/lib64/python3.6/site-packages/django/db/backends/sqlite3/base.py` file as per the below example:
```
def check_sqlite_version():
# if Database.sqlite_version_info < (3, 8, 3):
# 2018-07-07, edit
if Database.sqlite_version_info < (3, 6, 3):
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
``` |
55,508,830 | In a virtual Env with Python 3.7.2, I am trying to run django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | This error occurs because your virtual environment cannot see a new enough sqlite3 library. You have to update sqlite3 manually and then point your virtual environment at the new library. Follow the steps below:
1. Download the latest sqlite3 from the official site (<https://www.sqlite.org/download.html>): `wget http://www.sqlite.org/sqlite-autoconf-3070603.tar.gz`
2. Then go to that folder and run:
`tar xvfz sqlite-autoconf-3070603.tar.gz`
3. Go to the extracted folder: `cd sqlite-autoconf-3070603`
4. `./configure`
5. `make`
6. `make install` (this may take a while, but wait until it finishes; if it takes too long, you can terminate the process and continue with the remaining steps)
7. sqlite3 is now installed. Next, run this command from the Python source directory so Python is rebuilt against the new library: `sudo LD_RUN_PATH=/usr/local/lib ./configure --enable-optimizations`
8. Open the activate file of your virtual environment (e.g., venv/bin/activate) and add this line at the top of the file:
`export LD_LIBRARY_PATH="/usr/local/lib"`
9. To check, run these commands in your Python shell:
```py
$ python
Python 3.7.3 (default, Apr 12 2019, 16:23:13)
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.27.2'
``` | I have applied the following fix and it worked for my CentOS 7.x server.
Edit `/usr/lib64/python3.6/site-packages/django/db/backends/sqlite3/base.py` file as per the below example:
```
def check_sqlite_version():
# if Database.sqlite_version_info < (3, 8, 3):
# 2018-07-07, edit
if Database.sqlite_version_info < (3, 6, 3):
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
``` |
55,508,830 | In a virtual Env with Python 3.7.2, I am trying to run django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | If you don't want to recompile Python and you're using a virtualenv, you can do this to set it up without affecting the system as a whole (I've done this with Ubuntu 16/18):
1. Download SQLite tarball from <https://www.sqlite.org/download.html>
2. Extract the contents and cd into the folder.
3. Run the following commands:
```
./configure
sudo make install
```
4. Now edit the `activate` script used to start your virtualenv so Python looks in the right place for the newly installed SQLite. Add the following line to the top of `/path/to/virtualenv/bin/activate`:
```
export LD_LIBRARY_PATH="/usr/local/lib"
```
Now, when active, Django 2.2+ should work fine in the virtualenv. Hope that helps. | This error occurs because your virtual environment cannot see a new enough sqlite3 library. You have to update sqlite3 manually and then point your virtual environment at the new library. Follow the steps below:
1. Download the latest sqlite3 from the official site (<https://www.sqlite.org/download.html>): `wget http://www.sqlite.org/sqlite-autoconf-3070603.tar.gz`
2. Then go to that folder and run:
`tar xvfz sqlite-autoconf-3070603.tar.gz`
3. Go to the extracted folder: `cd sqlite-autoconf-3070603`
4. `./configure`
5. `make`
6. `make install` (this may take a while, but wait until it finishes; if it takes too long, you can terminate the process and continue with the remaining steps)
7. sqlite3 is now installed. Next, run this command from the Python source directory so Python is rebuilt against the new library: `sudo LD_RUN_PATH=/usr/local/lib ./configure --enable-optimizations`
8. Open the activate file of your virtual environment (e.g., venv/bin/activate) and add this line at the top of the file:
`export LD_LIBRARY_PATH="/usr/local/lib"`
9. To check, run these commands in your Python shell:
```py
$ python
Python 3.7.3 (default, Apr 12 2019, 16:23:13)
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.27.2'
``` |
46,143,091 | I'm pretty new to python so it's a basic question.
I have data that I imported from a csv file. Each row reflects a person and his data. Two attributes are Sex and Pclass. I want to add a new column (predictions) that is fully dependent on those two, in one line. If both attributes' values are 1, it should assign 1 to the person's predictions field, and 0 otherwise.
How do I do it in one line (let's say with Pandas)? | 2017/09/10 | [
"https://Stackoverflow.com/questions/46143091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5252187/"
] | You could try adding a composite index
```
create index test on screenshot (DateTaken, id)
``` | Try running this query:
```
SELECT COUNT(*) as total
FROM screenshot
WHERE DateTaken BETWEEN '2000-05-01' AND '2000-06-10';
```
The reference to `ID` in the `SELECT` could be affecting the use of the index. |
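Incidentally, for the pandas question itself, a one-liner along these lines would do (a sketch assuming a DataFrame `df` with the `Sex` and `Pclass` columns named in the question; the sample values are invented):
```
import pandas as pd

df = pd.DataFrame({"Sex": [1, 0, 1], "Pclass": [1, 1, 0]})
# 1 where both attributes are 1, 0 otherwise -- the one-line rule asked for
df["predictions"] = ((df["Sex"] == 1) & (df["Pclass"] == 1)).astype(int)
print(df)
```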
46,143,091 | I'm pretty new to python so it's a basic question.
I have data that I imported from a csv file. Each row reflects a person and his data. Two attributes are Sex and Pclass. I want to add a new column (predictions) that is fully dependent on those two, in one line. If both attributes' values are 1, it should assign 1 to the person's predictions field, and 0 otherwise.
How do I do it in one line (let's say with Pandas)? | 2017/09/10 | [
"https://Stackoverflow.com/questions/46143091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5252187/"
] | There is no problem. Your index is fine. To explain...
The `5730138` in `EXPLAIN` is an *estimate*. It can be larger or smaller than the actual value, sometimes by a large amount. Do not be bothered by it.
You have 2.8M of screenshots in that date range, correct? Well, it could take 15 seconds to scan the index to count that many rows.
If you would like further analysis, please provide:
* RAM size
* `innodb_buffer_pool_size`
* `SHOW CREATE TABLE screenshot;` (this will show the Engine)
* How big the table is (GB)
* What type of disk you have (spinning versus SSD)
With those, we can discuss further the impact of caching and I/O and engine. And it may help explain the "15 seconds" versus "20".
(And, yes, use `COUNT(*)`, not `COUNT(x)` unless you need to test `x` for NULL.)
If you are using InnoDB, then `INDEX(DateTaken, id)` is identical to `INDEX(DateTaken)`, so I suggest you were hasty at accepting that answer.
**Buffer pool**
`innodb_buffer_pool_size` should be set to about 70% of RAM. What you have is so tiny (the old 16M default), that not even the suggested index can fit in cache. Hence, the query will always be hitting the disk, at least some of the time. Increasing the buffer pool should significantly improve the speed, perhaps down to 2 seconds. | Try running this query:
```
SELECT COUNT(*) as total
FROM screenshot
WHERE DateTaken BETWEEN '2000-05-01' AND '2000-06-10';
```
The reference to `ID` in the `SELECT` could be affecting the use of the index. |
71,568,396 | We are using a beam multi-language pipeline using python and java (ref <https://beam.apache.org/documentation/sdks/python-multi-language-pipelines/>). We are creating a cross-language pipeline using java. We have some external jar files that require a java library path. The code compiles properly and produces a jar file. When I run the jar file it creates a gRPC server, but when I use the python pipeline to call the external transform, it does not pick up the java library path; it picks the default java library path.
![jni_emdq required library path to overwrite](https://i.stack.imgur.com/N24DB.png)
Tried -Djava.library.path=<path_to_dll> while running the jar file.
Tried System.setProperty("java.library.path", "/path/to/library").
(Ref <https://examples.javacodegeeks.com/java-library-path-what-is-java-library-and-how-to-use/>)
Tried JvmInitializer of beam to overwrite system property. (Ref <https://examples.javacodegeeks.com/java-library-path-what-is-java-library-and-how-to-use/>)
Tried to pull the Beam open-source code and overwrite the system property before the expansion starts. It overwrites the property, but it does not pick up the correct java path when called via the python external transform. (ref <https://github.com/apache/beam/blob/master/sdks/java/expansion-service/src/main/java/org/apache/beam/sdk/expansion/service/ExpansionService.java>) | 2022/03/22 | [
"https://Stackoverflow.com/questions/71568396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9648514/"
] | A Worksheet Change Event: Monitor Change in Column's Data
---------------------------------------------------------
* I personally would go with JvdV's suggestion in the comments.
* On each manual change of a cell, e.g. in column `A`, it will check the formula
`=SUM(A2:ALastRow)` in cell `A1` and if it is not correct it will overwrite it with the correct one.
* You can use this for multiple non-adjacent columns e.g. `"A,C:D,E"`.
* Nothing needs to be run. Just copy the code into the appropriate sheet module e.g. `Sheet1` and exit the Visual Basic Editor.
**Sheet Module e.g. `Sheet1` (not Standard Module e.g. `Module1`)**
```
Option Explicit
Private Sub Worksheet_Change(ByVal Target As Range)
UpdateFirstRowFormula Target, "A"
End Sub
Private Sub UpdateFirstRowFormula( _
ByVal Target As Range, _
ByVal ColumnList As String)
On Error GoTo ClearError
Dim ws As Worksheet: Set ws = Target.Worksheet
Dim Cols() As String: Cols = Split(ColumnList, ",")
Application.EnableEvents = False
Dim irg As Range, arg As Range, crg As Range, lCell As Range
Dim n As Long
Dim Formula As String
For n = 0 To UBound(Cols)
With ws.Columns(Cols(n))
With .Resize(.Rows.Count - 1).Offset(1)
Set irg = Intersect(.Cells, Target.EntireColumn)
End With
End With
If Not irg Is Nothing Then
For Each arg In irg.Areas
For Each crg In arg.Columns
Set lCell = crg.Find("*", , xlFormulas, , , xlPrevious)
If Not lCell Is Nothing Then
Formula = "=SUM(" & crg.Cells(1).Address(0, 0) & ":" _
& lCell.Address(0, 0) & ")"
With crg.Cells(1).Offset(-1)
If .Formula <> Formula Then .Formula = Formula
End With
End If
Next crg
Next arg
Set irg = Nothing
End If
Next n
SafeExit:
If Not Application.EnableEvents Then Application.EnableEvents = True
Exit Sub
ClearError:
Debug.Print "Run-time error '" & Err.Number & "': " & Err.Description
Resume SafeExit
End Sub
``` | Use a nested function as below:
=SUM(OFFSET(A2,,,COUNTA(A2:A26))) |
49,005,651 | This question is motivated by another question of mine: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
def __call__(self):
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
while not every(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called C `open()` procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | It all boils down to the two main challenges that asyncio is addressing:
* How to perform multiple I/O in a single thread?
* How to implement cooperative multitasking?
The answer to the first point has been around for a long while and is called a [select loop](https://en.wikipedia.org/wiki/Asynchronous_I/O#Select(/poll)_loops). In python, it is implemented in the [selectors module](https://docs.python.org/3/library/selectors.html).
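A rough sketch of that select loop idea using the `selectors` module (the socket setup here is illustrative, not taken from the answer):
```
import selectors
import socket

sel = selectors.DefaultSelector()

# A non-blocking listening socket, registered so select() can watch it.
server = socket.socket()
server.bind(("localhost", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, data="accept")

# Blocks until a registered socket is ready (or the timeout expires).
for key, mask in sel.select(timeout=1.0):
    print(key.data, "ready on", key.fileobj)
```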
The second question is related to the concept of [coroutine](https://en.wikipedia.org/wiki/Coroutine), i.e. functions that can stop their execution and be restored later on. In python, coroutines are implemented using [generators](https://wiki.python.org/moin/Generators) and the [yield from](https://www.python.org/dev/peps/pep-0380/) statement. That's what is hiding behind the [async/await syntax](https://www.python.org/dev/peps/pep-0492/).
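And a tiny illustration of the second point: a generator suspends at `yield` with its local state intact and can be resumed later with a value (plain Python, no asyncio involved):
```
def coro():
    x = yield "suspended"          # execution stops here, state is kept
    yield "resumed with %s" % x    # continues when a value is sent back

c = coro()
print(next(c))         # -> suspended
print(c.send("data"))  # -> resumed with data
```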
More resources in this [answer](https://stackoverflow.com/a/41208685/2846140).
---
**EDIT:** Addressing your comment about goroutines:
The closest equivalent to a goroutine in asyncio is actually not a coroutine but a task (see the difference in the [documentation](https://docs.python.org/3/library/asyncio-task.html)). In python, a coroutine (or a generator) knows nothing about the concepts of event loop or I/O. It simply is a function that can stop its execution using `yield` while keeping its current state, so it can be restored later on. The `yield from` syntax allows for chaining them in a transparent way.
Now, within an asyncio task, the coroutine at the very bottom of the chain always ends up yielding a [future](https://docs.python.org/3.4/library/asyncio-task.html#asyncio.Future). This future then bubbles up to the event loop, and gets integrated into the inner machinery. When the future is set to done by some other inner callback, the event loop can restore the task by sending the future back into the coroutine chain.
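A minimal sketch of that bubbling mechanism with toy classes (not asyncio's real ones): the future yielded at the bottom of the chain surfaces at whatever drives the coroutine, and a later send resumes the chain once a result has been set.
```
class ToyFuture:
    def __init__(self):
        self.result = None
    def __await__(self):
        yield self            # bubbles up to the outermost driver
        return self.result    # value of `await fut` after resumption

async def bottom(fut):
    return await fut          # suspends here until the driver resumes us

fut = ToyFuture()
coro = bottom(fut)
assert coro.send(None) is fut # the future surfaced at the driver
fut.result = 42
try:
    coro.send(None)           # resume the chain
except StopIteration as done:
    print(done.value)         # -> 42
```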
---
**EDIT:** Addressing some of the questions in your post:
>
> How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter?
>
>
>
No, nothing happens in a thread. I/O is always managed by the event loop, mostly through file descriptors. However the registration of those file descriptors is usually hidden by high-level coroutines, making the dirty work for you.
>
> What exactly is meant by I/O? If my python procedure called C open() procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening?
>
>
>
An I/O is any blocking call. In asyncio, all the I/O operations should go through the event loop, because as you said, the event loop has no way to be aware that a blocking call is being performed in some synchronous code. That means you're not supposed to use a synchronous `open` within the context of a coroutine. Instead, use a dedicated library such as [aiofiles](https://github.com/Tinche/aiofiles), which provides an asynchronous version of `open`. | Picture an airport control tower with many planes waiting to land on the same runway. The control tower can be seen as the event loop and the runway as the thread. Each plane is a separate function waiting to execute. In reality only one plane can land on the runway at a time. What asyncio basically does is allow many planes to land on the same runway, seemingly simultaneously, by using the event loop to suspend functions and allow other functions to run. When you use the await syntax, it basically means that the plane (function) can be suspended to allow other functions to proceed.
49,005,651 | This question is motivated by another question of mine: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
def __call__(self):
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
while not every(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called C `open()` procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | It all boils down to the two main challenges that asyncio is addressing:
* How to perform multiple I/O in a single thread?
* How to implement cooperative multitasking?
The answer to the first point has been around for a long while and is called a [select loop](https://en.wikipedia.org/wiki/Asynchronous_I/O#Select(/poll)_loops). In python, it is implemented in the [selectors module](https://docs.python.org/3/library/selectors.html).
The second question is related to the concept of [coroutine](https://en.wikipedia.org/wiki/Coroutine), i.e. functions that can stop their execution and be restored later on. In python, coroutines are implemented using [generators](https://wiki.python.org/moin/Generators) and the [yield from](https://www.python.org/dev/peps/pep-0380/) statement. That's what is hiding behind the [async/await syntax](https://www.python.org/dev/peps/pep-0492/).
More resources in this [answer](https://stackoverflow.com/a/41208685/2846140).
---
**EDIT:** Addressing your comment about goroutines:
The closest equivalent to a goroutine in asyncio is actually not a coroutine but a task (see the difference in the [documentation](https://docs.python.org/3/library/asyncio-task.html)). In python, a coroutine (or a generator) knows nothing about the concepts of event loop or I/O. It simply is a function that can stop its execution using `yield` while keeping its current state, so it can be restored later on. The `yield from` syntax allows for chaining them in a transparent way.
Now, within an asyncio task, the coroutine at the very bottom of the chain always ends up yielding a [future](https://docs.python.org/3.4/library/asyncio-task.html#asyncio.Future). This future then bubbles up to the event loop, and gets integrated into the inner machinery. When the future is set to done by some other inner callback, the event loop can restore the task by sending the future back into the coroutine chain.
---
**EDIT:** Addressing some of the questions in your post:
>
> How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter?
>
>
>
No, nothing happens in a thread. I/O is always managed by the event loop, mostly through file descriptors. However the registration of those file descriptors is usually hidden by high-level coroutines, making the dirty work for you.
>
> What exactly is meant by I/O? If my python procedure called C open() procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening?
>
>
>
An I/O is any blocking call. In asyncio, all the I/O operations should go through the event loop, because as you said, the event loop has no way to be aware that a blocking call is being performed in some synchronous code. That means you're not supposed to use a synchronous `open` within the context of a coroutine. Instead, use a dedicated library such as [aiofiles](https://github.com/Tinche/aiofiles), which provides an asynchronous version of `open`. | It allows you to write single-threaded asynchronous code and implement concurrency in Python. Basically, `asyncio` provides an event loop for asynchronous programming. For example, if we need to make requests without blocking the main thread, we can use the `asyncio` library.
The asyncio module allows for the implementation of asynchronous programming
using a combination of the following elements:
* Event loop: The asyncio module allows an event loop per process.
* Coroutines: A coroutine is a generator that follows certain conventions. Its most interesting feature is that it can be suspended during execution to wait for external processing (the some routine in I/O) and return from the point it had stopped when the external processing was done.
* Futures: Futures represent a process that has still not finished. A future is an object that is supposed to have a result in the future and represents uncompleted tasks.
* Tasks: This is a subclass of `asyncio.Future` that encapsulates and manages
coroutines. We can use the asyncio.Task object to encapsulate a coroutine.
The most important concept within `asyncio` is the event loop. An event loop
allows you to write asynchronous code using either callbacks or coroutines.
The keys to understanding `asyncio` are the terms coroutine and event
loop. **Coroutines** are stateful functions whose execution can be stopped while another I/O operation is being executed. An event loop is used to orchestrate the execution of the coroutines.
To run any coroutine function, we need to get an event loop. We can do this
with
```
loop = asyncio.get_event_loop()
```
This gives us a `BaseEventLoop` object. This has a `run_until_complete` method that takes in a coroutine and runs it until completion. Then, the coroutine returns a result. At a low level, an event loop executes the `BaseEventLoop.run_until_complete(future)` method.
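A minimal sketch of that pattern (the pre-`asyncio.run` style described above; assumes Python 3.5+):
```
import asyncio

async def say(text, delay):
    await asyncio.sleep(delay)   # suspend without blocking the loop
    print(text)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(say("world", 0.2), say("hello", 0.1)))
loop.close()
```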
49,005,651 | This question is motivated by another question of mine: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
def __call__(self):
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
while not every(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called C `open()` procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | How does asyncio work?
======================
Before answering this question we need to understand a few base terms, skip these if you already know any of them.
[Generators](https://wiki.python.org/moin/Generators)
-----------------------------------------------------
Generators are objects that allow us to suspend the execution of a python function. User curated generators are implemented using the keyword [**`yield`**](https://docs.python.org/3/reference/expressions.html#yield-expressions). By creating a normal function containing the `yield` keyword, we turn that function into a generator:
```
>>> def test():
... yield 1
... yield 2
...
>>> gen = test()
>>> next(gen)
1
>>> next(gen)
2
>>> next(gen)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
```
As you can see, calling [`next()`](https://docs.python.org/3/library/functions.html#next) on the generator causes the interpreter to load the test's frame, and return the `yield`ed value. Calling `next()` again, causes the frame to load again into the interpreter stack, and continues on `yield`ing another value.
By the third time `next()` is called, our generator was finished, and [`StopIteration`](https://docs.python.org/3/library/exceptions.html#StopIteration) was thrown.
### Communicating with a generator
A less-known feature of generators is the fact that you can communicate with them using two methods: [`send()`](https://docs.python.org/3/reference/expressions.html#generator.send) and [`throw()`](https://docs.python.org/3/reference/expressions.html#generator.throw).
```
>>> def test():
... val = yield 1
... print(val)
... yield 2
... yield 3
...
>>> gen = test()
>>> next(gen)
1
>>> gen.send("abc")
abc
2
>>> gen.throw(Exception())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in test
Exception
```
Upon calling `gen.send()`, the value is passed as a return value from the `yield` keyword.
`gen.throw()` on the other hand, allows throwing Exceptions inside generators, with the exception raised at the same spot `yield` was called.
### Returning values from generators
Returning a value from a generator, results in the value being put inside the `StopIteration` exception. We can later on recover the value from the exception and use it to our needs.
```
>>> def test():
... yield 1
... return "abc"
...
>>> gen = test()
>>> next(gen)
1
>>> try:
... next(gen)
... except StopIteration as exc:
... print(exc.value)
...
abc
```
Behold, a new keyword: `yield from`
-----------------------------------
Python 3.4 came with the addition of a new keyword: [`yield from`](https://docs.python.org/3/reference/expressions.html#yield-expressions). What that keyword allows us to do, is pass on any `next()`, `send()` and `throw()` into an inner-most nested generator. If the inner generator returns a value, it is also the return value of `yield from`:
```
>>> def inner():
... inner_result = yield 2
... print('inner', inner_result)
... return 3
...
>>> def outer():
... yield 1
... val = yield from inner()
... print('outer', val)
... yield 4
...
>>> gen = outer()
>>> next(gen)
1
>>> next(gen) # Goes inside inner() automatically
2
>>> gen.send("abc")
inner abc
outer 3
4
```
I've written [an article](https://towardsdatascience.com/cpython-internals-how-do-generators-work-ba1c4405b4bc) to further elaborate on this topic.
Putting it all together
-----------------------
Upon introducing the new keyword `yield from` in Python 3.4, we were now able to create generators inside generators that just like a tunnel, pass the data back and forth from the inner-most to the outer-most generators. This has spawned a new meaning for generators - *coroutines*.
**Coroutines** are functions that can be stopped and resumed while being run. In Python, they are defined using the **[`async def`](https://docs.python.org/3/reference/compound_stmts.html#coroutine-function-definition)** keyword. Much like generators, they too use their own form of `yield from` which is **[`await`](https://docs.python.org/3/reference/expressions.html#await)**. Before `async` and `await` were introduced in Python 3.5, we created coroutines in the exact same way generators were created (with `yield from` instead of `await`).
```
async def inner():
return 1
async def outer():
await inner()
```
Just like all iterators and generators implement the `__iter__()` method, all coroutines implement `__await__()` which allows them to continue on every time `await coro` is called.
There's a nice [sequence diagram](https://docs.python.org/3.5/_images/tulip_coro.png) inside the [Python docs](https://docs.python.org/3.5/library/asyncio-task.html#example-chain-coroutines) that you should check out.
In asyncio, apart from coroutine functions, we have 2 important objects: **tasks** and **futures**.
### [Futures](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future)
Futures are objects that have the `__await__()` method implemented, and their job is to hold a certain state and result. The state can be one of the following:
1. PENDING - future does not have any result or exception set.
2. CANCELLED - future was cancelled using `fut.cancel()`
3. FINISHED - future was finished, either by a result set using [`fut.set_result()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.set_result) or by an exception set using [`fut.set_exception()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.set_exception)
The result, just like you have guessed, can either be a Python object, that will be returned, or an exception which may be raised.
Another **important** feature of `future` objects, is that they contain a method called **[`add_done_callback()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.add_done_callback)**. This method allows functions to be called as soon as the task is done - whether it raised an exception or finished.
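A small runnable illustration of those states and the callback (a sketch using only public asyncio APIs):
```
import asyncio

loop = asyncio.new_event_loop()
fut = loop.create_future()                       # state: PENDING
fut.add_done_callback(lambda f: print("done:", f.result()))
loop.call_soon(fut.set_result, 42)               # moves it to FINISHED
loop.run_until_complete(fut)                     # drives the loop until done
loop.close()
```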
### [Tasks](https://docs.python.org/3/library/asyncio-task.html#task)
Task objects are special futures, which wrap around coroutines, and communicate with the inner-most and outer-most coroutines. Every time a coroutine `await`s a future, the future is passed all the way back to the task (just like in `yield from`), and the task receives it.
Next, the task binds itself to the future. It does so by calling `add_done_callback()` on the future. From now on, if the future will ever be done, by either being cancelled, passed an exception or passed a Python object as a result, the task's callback will be called, and it will rise back up to existence.
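Sketched with real asyncio objects (assuming Python 3.7+ for `asyncio.run`), a task suspends on a future and is woken when the future completes:
```
import asyncio

async def waiter(fut):
    print("waiting...")
    print("woke up with:", await fut)   # the task suspends on the future here

async def main():
    fut = asyncio.get_running_loop().create_future()
    task = asyncio.ensure_future(waiter(fut))  # task wraps the coroutine
    await asyncio.sleep(0)      # let the task run up to its await
    fut.set_result("hello")     # the done-callback wakes the task
    await task

asyncio.run(main())
```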
Asyncio
=======
The final burning question we must answer is - how is the IO implemented?
Deep inside asyncio, we have an event loop. An event loop of tasks. The event loop's job is to call tasks every time they are ready and coordinate all that effort into one single working machine.
The IO part of the event loop is built upon a single crucial function called **[`select`](https://docs.python.org/3/library/select.html#module-select)**. Select is a blocking function, implemented by the operating system underneath, that allows waiting on sockets for incoming or outgoing data. Upon receiving data it wakes up, and returns the sockets which received data, or the sockets which are ready for writing.
When you try to receive or send data over a socket through asyncio, what actually happens below is that the socket is first checked if it has any data that can be immediately read or sent. If its `.send()` buffer is full, or the `.recv()` buffer is empty, the socket is registered to the `select` function (by simply adding it to one of the lists, `rlist` for `recv` and `wlist` for `send`) and the appropriate function `await`s a newly created `future` object, tied to that socket.
When all available tasks are waiting for futures, the event loop calls `select` and waits. When one of the sockets has incoming data, or its `send` buffer has drained, asyncio checks for the future object tied to that socket, and sets it to done.
Now all the magic happens. The future is set to done, the task that added itself before with `add_done_callback()` rises up back to life, and calls `.send()` on the coroutine which resumes the inner-most coroutine (because of the `await` chain) and you read the newly received data from a nearby buffer it was spilled into.
**Method chain again, in case of `recv()`:**
1. `select.select` waits.
2. A ready socket, with data is returned.
3. Data from the socket is moved into a buffer.
4. `future.set_result()` is called.
5. Task that added itself with `add_done_callback()` is now woken up.
6. Task calls `.send()` on the coroutine which goes all the way into the inner-most coroutine and wakes it up.
7. Data is being read from the buffer and returned to our humble user.
In summary, asyncio uses generator capabilities, that allow pausing and resuming functions. It uses `yield from` capabilities that allow passing data back and forth from the inner-most generator to the outer-most. It uses all of those in order to halt function execution while it's waiting for IO to complete (by using the OS `select` function).
And the best of all? While one function is paused, another may run and interleave with the delicate fabric, which is asyncio. | Talking about `async/await` and `asyncio` is not the same thing. The first is a fundamental, low-level construct (coroutines) while the latter is a library using these constructs. Conversely, there is no single ultimate answer.
The following is a general description of how `async/await` and `asyncio`-like libraries work. That is, there may be other tricks on top (there are...) but they are inconsequential unless you build them yourself. The difference should be negligible unless you already know enough to not have to ask such a question.
1. Coroutines versus subroutines in a nutshell
===============================================
Just like **subroutines** (functions, procedures, ...), **coroutines** (generators, ...) are an abstraction of call stack and instruction pointer: there is a stack of executing code pieces, and each is at a specific instruction.
The distinction of `def` versus `async def` is merely for clarity. The actual difference is `return` versus `yield`. From this, `await` or `yield from` take the difference from individual calls to entire stacks.
1.1. Subroutines
----------------
A subroutine represents a new stack level to hold local variables, and a single traversal of its instructions to reach an end. Consider a subroutine like this:
```
def subfoo(bar):
qux = 3
return qux * bar
```
When you run it, that means
1. allocate stack space for `bar` and `qux`
2. recursively execute the first statement and jump to the next statement
3. once at a `return`, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)
Notably, 4. means that a subroutine always starts at the same state. Everything exclusive to the function itself is lost upon completion. A function cannot be resumed, even if there are instructions after `return`.
```
root -\
: \- subfoo --\
:/--<---return --/
|
V
```
1.2. Coroutines as persistent subroutines
-----------------------------------------
A coroutine is like a subroutine, but can exit *without* destroying its state. Consider a coroutine like this:
```
def cofoo(bar):
qux = yield bar # yield marks a break point
return qux
```
When you run it, that means
1. allocate stack space for `bar` and `qux`
2. recursively execute the first statement and jump to the next statement
1. once at a `yield`, push its value to the calling stack *but store the stack and instruction pointer*
2. once calling into `yield`, restore stack and instruction pointer and push arguments to `qux`
3. once at a `return`, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)
Note the addition of 2.1 and 2.2 - a coroutine can be suspended and resumed at predefined points. This is similar to how a subroutine is suspended during calling another subroutine. The difference is that the active coroutine is not strictly bound to its calling stack. Instead, a suspended coroutine is part of a separate, isolated stack.
```
root -\
: \- cofoo --\
:/--<+--yield --/
| :
V :
```
This means that suspended coroutines can be freely stored or moved between stacks. Any call stack that has access to a coroutine can decide to resume it.
1.3. Traversing the call stack
------------------------------
So far, our coroutine only goes down the call stack with `yield`. A subroutine can go down *and up* the call stack with `return` and `()`. For completeness, coroutines also need a mechanism to go up the call stack. Consider a coroutine like this:
```
def wrap():
yield 'before'
yield from cofoo()
yield 'after'
```
When you run it, that means it still allocates the stack and instruction pointer like a subroutine. When it suspends, that still is like storing a subroutine.
However, `yield from` does *both*. It suspends stack and instruction pointer of `wrap` *and* runs `cofoo`. Note that `wrap` stays suspended until `cofoo` finishes completely. Whenever `cofoo` suspends or something is sent, `cofoo` is directly connected to the calling stack.
1.4. Coroutines all the way down
--------------------------------
As established, `yield from` allows us to connect two scopes across another intermediate one. When applied recursively, that means the *top* of the stack can be connected to the *bottom* of the stack.
```
root -\
: \-> coro_a -yield-from-> coro_b --\
:/ <-+------------------------yield ---/
| :
:\ --+-- coro_a.send----------yield ---\
: coro_b <-/
```
Note that `root` and `coro_b` do not know about each other. This makes coroutines much cleaner than callbacks: coroutines still built on a 1:1 relation like subroutines. Coroutines suspend and resume their entire existing execution stack up until a regular call point.
Notably, `root` could have an arbitrary number of coroutines to resume. Yet, it can never resume more than one at the same time. Coroutines of the same root are concurrent but not parallel!
1.5. Python's `async` and `await`
---------------------------------
The explanation has so far explicitly used the `yield` and `yield from` vocabulary of generators - the underlying functionality is the same. The new Python3.5 syntax `async` and `await` exists mainly for clarity.
```
def foo(): # subroutine?
return None
def foo(): # coroutine?
yield from foofoo() # generator? coroutine?
async def foo(): # coroutine!
await foofoo() # coroutine!
return None
```
The `async for` and `async with` statements are needed because you would break the `yield from/await` chain with the bare `for` and `with` statements.
2. Anatomy of a simple event loop
=================================
By itself, a coroutine has no concept of yielding control to *another* coroutine. It can only yield control to the caller at the bottom of a coroutine stack. This caller can then switch to another coroutine and run it.
This root node of several coroutines is commonly an **event loop**: on suspension, a coroutine yields an **event** on which it wants resume. In turn, the event loop is capable of efficiently waiting for these events to occur. This allows it to decide which coroutine to run next, or how to wait before resuming.
Such a design implies that there is a set of pre-defined events that the loop understands. Several coroutines `await` each other, until finally an event is `await`ed. This event can communicate *directly* with the event loop by `yield`ing control.
```
loop -\
: \-> coroutine --await--> event --\
:/ <-+----------------------- yield --/
| :
| : # loop waits for event to happen
| :
:\ --+-- send(reply) -------- yield --\
: coroutine <--yield-- event <-/
```
The key is that coroutine suspension allows the event loop and events to directly communicate. The intermediate coroutine stack does not require *any* knowledge about which loop is running it, nor how events work.
2.1.1. Events in time
---------------------
The simplest event to handle is reaching a point in time. This is a fundamental block of threaded code as well: a thread repeatedly `sleep`s until a condition is true.
However, a regular `sleep` blocks execution by itself - we want other coroutines to not be blocked. Instead, we want to tell the event loop when it should resume the current coroutine stack.
2.1.2. Defining an Event
------------------------
An event is simply a value we can identify - be it via an enum, a type or other identity. We can define this with a simple class that stores our target time. In addition to *storing* the event information, we can allow instances of this class to be `await`ed directly.
```
class AsyncSleep:
"""Event to sleep until a point in time"""
def __init__(self, until: float):
self.until = until
# used whenever someone ``await``s an instance of this Event
def __await__(self):
# yield this Event to the loop
yield self
def __repr__(self):
return '%s(until=%.1f)' % (self.__class__.__name__, self.until)
```
This class only *stores* the event - it does not say how to actually handle it.
The only special feature is `__await__` - it is what the `await` keyword looks for. Practically, it is an iterator, just not one exposed to the regular iteration machinery.
2.2.1. Awaiting an event
------------------------
Now that we have an event, how do coroutines react to it? We should be able to express the equivalent of `sleep` by `await`ing our event. To better see what is going on, we wait twice for half the time:
```
import time
async def asleep(duration: float):
"""await that ``duration`` seconds pass"""
await AsyncSleep(time.time() + duration / 2)
await AsyncSleep(time.time() + duration / 2)
```
We can directly instantiate and run this coroutine. Similar to a generator, using `coroutine.send` runs the coroutine until it `yield`s a result.
```
coroutine = asleep(100)
while True:
print(coroutine.send(None))
time.sleep(0.1)
```
This gives us two `AsyncSleep` events and then a `StopIteration` when the coroutine is done. Notice that the only delay is from `time.sleep` in the loop! Each `AsyncSleep` only stores an offset from the current time.
2.2.2. Event + Sleep
--------------------
At this point, we have *two* separate mechanisms at our disposal:
* `AsyncSleep` Events that can be yielded from inside a coroutine
* `time.sleep` that can wait without impacting coroutines
Notably, these two are orthogonal: neither one affects or triggers the other. As a result, we can come up with our own strategy to `sleep` to meet the delay of an `AsyncSleep`.
2.3. A naive event loop
-----------------------
If we have *several* coroutines, each can tell us when it wants to be woken up. We can then wait until the first of them wants to be resumed, then for the one after, and so on. Notably, at each point we only care about which one is *next*.
This makes for a straightforward scheduling:
1. sort coroutines by their desired wake up time
2. pick the first that wants to wake up
3. wait until this point in time
4. run this coroutine
5. repeat from 1.
A trivial implementation does not need any advanced concepts. A `list` allows us to keep coroutines sorted by wake-up time. Waiting is a regular `time.sleep`. Running coroutines works just like before with `coroutine.send`.
```
def run(*coroutines):
"""Cooperatively run all ``coroutines`` until completion"""
# store wake-up-time and coroutines
waiting = [(0, coroutine) for coroutine in coroutines]
while waiting:
# 2. pick the first coroutine that wants to wake up
until, coroutine = waiting.pop(0)
# 3. wait until this point in time
time.sleep(max(0.0, until - time.time()))
# 4. run this coroutine
try:
command = coroutine.send(None)
except StopIteration:
continue
# 1. sort coroutines by their desired suspension
if isinstance(command, AsyncSleep):
waiting.append((command.until, coroutine))
waiting.sort(key=lambda item: item[0])
```
Of course, this has ample room for improvement. We can use a heap for the wait queue or a dispatch table for events. We could also fetch return values from the `StopIteration` and assign them to the coroutine. However, the fundamental principle remains the same.
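For instance, swapping the sorted `list` for a heap might look like the following sketch of the scheduling core. It reuses the `AsyncSleep` event from above; the name `run_heap` and the tie-breaking counter (which avoids ever comparing coroutine objects) are additions for this illustration:
```
import heapq
import itertools
import time
def run_heap(*coroutines):
    """Variant of ``run`` that keeps the wait queue as a heap"""
    tie_breaker = itertools.count()
    waiting = [(0, next(tie_breaker), coroutine) for coroutine in coroutines]
    heapq.heapify(waiting)
    while waiting:
        # the heap keeps the earliest wake-up time at the front
        until, _, coroutine = heapq.heappop(waiting)
        time.sleep(max(0.0, until - time.time()))
        try:
            command = coroutine.send(None)
        except StopIteration:
            continue
        if isinstance(command, AsyncSleep):
            heapq.heappush(waiting, (command.until, next(tie_breaker), coroutine))
```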
2.4. Cooperative Waiting
------------------------
The `AsyncSleep` event and `run` event loop are a fully working implementation of timed events.
```
async def sleepy(identifier: str = "coroutine", count=5):
for i in range(count):
print(identifier, 'step', i + 1, 'at %.2f' % time.time())
await asleep(0.1)
run(*(sleepy("coroutine %d" % j) for j in range(5)))
```
This cooperatively switches between each of the five coroutines, suspending each for 0.1 seconds. Even though the event loop is synchronous, it still executes the work in 0.5 seconds instead of 2.5 seconds. Each coroutine holds state and acts independently.
3. I/O event loop
=================
An event loop that supports `sleep` is suitable for *polling*. However, waiting for I/O on a file handle can be done more efficiently: the operating system implements I/O and thus knows which handles are ready. Ideally, an event loop should support an explicit "ready for I/O" event.
3.1. The `select` call
----------------------
Python already has an interface to query the OS for read I/O handles. When called with handles to read or write, it returns the handles *ready* to read or write:
```
readable, writable, _ = select.select(rlist, wlist, xlist, timeout)
```
For example, we can `open` a file for writing and wait for it to be ready:
```
write_target = open('/tmp/foo', 'a')  # append mode: writable, created if missing
readable, writable, _ = select.select([], [write_target], [])
```
Once select returns, `writable` contains our open file.
3.2. Basic I/O event
--------------------
Similar to the `AsyncSleep` request, we need to define an event for I/O. With the underlying `select` logic, the event must refer to a readable object - say an `open` file. In addition, we store how much data to read.
```
class AsyncRead:
def __init__(self, file, amount=1):
self.file = file
self.amount = amount
self._buffer = b'' if 'b' in file.mode else ''
def __await__(self):
while len(self._buffer) < self.amount:
yield self
# we only get here if ``read`` should not block
self._buffer += self.file.read(1)
return self._buffer
def __repr__(self):
return '%s(file=%s, amount=%d, progress=%d)' % (
self.__class__.__name__, self.file, self.amount, len(self._buffer)
)
```
As with `AsyncSleep` we mostly just store the data required for the underlying system call. This time, `__await__` is capable of being resumed multiple times - until our desired `amount` has been read. In addition, we `return` the I/O result instead of just resuming.
3.3. Augmenting an event loop with read I/O
-------------------------------------------
The basis for our event loop is still the `run` defined previously. First, we need to track the read requests. This is no longer a sorted schedule; we only map read requests to coroutines.
```
# new
waiting_read = {} # type: Dict[file, coroutine]
```
Since `select.select` takes a timeout parameter, we can use it in place of `time.sleep`.
```
# old
time.sleep(max(0.0, until - time.time()))
# new
readable, _, _ = select.select(list(waiting_read), [], [])
```
This gives us all readable files - if there are any, we run the corresponding coroutine. If there are none, we have waited long enough for our current coroutine to run.
```
# new - reschedule waiting coroutine, run readable coroutine
if readable:
waiting.append((until, coroutine))
waiting.sort()
coroutine = waiting_read[readable[0]]
```
Finally, we have to actually listen for read requests.
```
# new
if isinstance(command, AsyncSleep):
...
elif isinstance(command, AsyncRead):
...
```
3.4. Putting it together
------------------------
The above was a bit of a simplification. We need to do some switching to not starve sleeping coroutines if we can always read. We need to handle having nothing to read or nothing to wait for. However, the end result still fits into 30 LOC.
```
def run(*coroutines):
"""Cooperatively run all ``coroutines`` until completion"""
waiting_read = {} # type: Dict[file, coroutine]
waiting = [(0, coroutine) for coroutine in coroutines]
while waiting or waiting_read:
# 2. wait until the next coroutine may run or read ...
try:
until, coroutine = waiting.pop(0)
except IndexError:
until, coroutine = float('inf'), None
readable, _, _ = select.select(list(waiting_read), [], [])
else:
readable, _, _ = select.select(list(waiting_read), [], [], max(0.0, until - time.time()))
# ... and select the appropriate one
if readable and time.time() < until:
if until and coroutine:
waiting.append((until, coroutine))
waiting.sort()
coroutine = waiting_read.pop(readable[0])
# 3. run this coroutine
try:
command = coroutine.send(None)
except StopIteration:
continue
# 1. sort coroutines by their desired suspension ...
if isinstance(command, AsyncSleep):
waiting.append((command.until, coroutine))
waiting.sort(key=lambda item: item[0])
# ... or register reads
elif isinstance(command, AsyncRead):
waiting_read[command.file] = coroutine
```
3.5. Cooperative I/O
--------------------
The `AsyncSleep`, `AsyncRead` and `run` implementations are now fully functional to sleep and/or read.
Same as for `sleepy`, we can define a helper to test reading:
```
async def ready(path, amount=1024*32):
print('read', path, 'at', '%d' % time.time())
with open(path, 'rb') as file:
result = await AsyncRead(file, amount)
print('done', path, 'at', '%d' % time.time())
print('got', len(result), 'B')
run(sleepy('background', 5), ready('/dev/urandom'))
```
Running this, we can see that our I/O is interleaved with the waiting task:
```
id background round 1
read /dev/urandom at 1530721148
id background round 2
id background round 3
id background round 4
id background round 5
done /dev/urandom at 1530721148
got 1024 B
```
4. Non-Blocking I/O
-------------------
While I/O on files gets the concept across, it is not really suitable for a library like `asyncio`: the `select` call [always returns for files](https://stackoverflow.com/questions/25776812/using-linux-c-select-system-call-to-monitor-files), and both `open` and `read` may [block indefinitely](http://man7.org/linux/man-pages/man2/open.2.html#DESCRIPTION). This blocks all coroutines of an event loop - which is bad. Libraries like `aiofiles` use threads and synchronization to fake non-blocking I/O and events on file.
However, sockets do allow for non-blocking I/O - and their inherent latency makes it much more critical. When used in an event loop, waiting for data and retrying can be wrapped without blocking anything.
4.1. Non-Blocking I/O event
---------------------------
Similar to our `AsyncRead`, we can define a suspend-and-read event for sockets. Instead of taking a file, we take a socket - which must be non-blocking. Also, our `__await__` uses `socket.recv` instead of `file.read`.
```
class AsyncRecv:
def __init__(self, connection, amount=1, read_buffer=1024):
assert not connection.getblocking(), 'connection must be non-blocking for async recv'
self.connection = connection
self.amount = amount
self.read_buffer = read_buffer
self._buffer = b''
def __await__(self):
while len(self._buffer) < self.amount:
try:
self._buffer += self.connection.recv(self.read_buffer)
except BlockingIOError:
yield self
return self._buffer
def __repr__(self):
return '%s(file=%s, amount=%d, progress=%d)' % (
self.__class__.__name__, self.connection, self.amount, len(self._buffer)
)
```
In contrast to `AsyncRead`, `__await__` performs truly non-blocking I/O. When data is available, it *always* reads. When no data is available, it *always* suspends. That means the event loop is only blocked while we perform useful work.
4.2. Un-Blocking the event loop
-------------------------------
As far as the event loop is concerned, nothing changes much. The event to listen for is still the same as for files - a file descriptor marked ready by `select`.
```
# old
elif isinstance(command, AsyncRead):
waiting_read[command.file] = coroutine
# new
elif isinstance(command, AsyncRead):
waiting_read[command.file] = coroutine
elif isinstance(command, AsyncRecv):
waiting_read[command.connection] = coroutine
```
At this point, it should be obvious that `AsyncRead` and `AsyncRecv` are the same kind of event. We could easily refactor them to be *one* event with an exchangeable I/O component. In effect, the event loop, coroutines and events [cleanly separate](http://sans-io.readthedocs.io) a scheduler, arbitrary intermediate code and the actual I/O.
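A possible shape of such a refactoring, as a sketch only: one generic event holding the `select`able handle and an exchangeable non-blocking read callable. The name `AsyncReadable` is invented here, and the event loop would register `command.handle` instead of a file or connection:
```
class AsyncReadable:
    """Generic read event: ``handle`` is select()'able, ``reader`` reads from it"""
    def __init__(self, handle, reader, amount=1):
        self.handle = handle      # e.g. an open file or a non-blocking socket
        self.reader = reader      # e.g. ``file.read`` or ``connection.recv``
        self.amount = amount
        self._buffer = b''
    def __await__(self):
        while len(self._buffer) < self.amount:
            try:
                self._buffer += self.reader(self.amount - len(self._buffer))
            except BlockingIOError:
                yield self        # not ready - suspend until select() says so
        return self._buffer
# both I/O flavours now map onto the same event:
# await AsyncReadable(file, file.read)
# await AsyncReadable(connection, connection.recv)
```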
4.3. The ugly side of non-blocking I/O
--------------------------------------
In principle, what you should do at this point is replicate the logic of `read` as a `recv` for `AsyncRecv`. However, this is much uglier now - you have to handle calls that would block inside the kernel but instead return early, yielding control back to you. For example, the code for opening a connection is much longer than for opening a file:
```
# file
file = open(path, 'rb')
# non-blocking socket
connection = socket.socket()
connection.setblocking(False)
# open without blocking - retry on failure
try:
connection.connect((url, port))
except BlockingIOError:
pass
```
Long story short, what remains is a few dozen lines of exception handling. The events and the event loop already work at this point.
```
id background round 1
read localhost:25000 at 1530783569
read /dev/urandom at 1530783569
done localhost:25000 at 1530783569 got 32768 B
id background round 2
id background round 3
id background round 4
done /dev/urandom at 1530783569 got 4096 B
id background round 5
```
Addendum
========
[Example code at github](https://gist.github.com/maxfischer2781/27d68e69c017d7c2605074a59ada04e5) |
49,005,651 | This question is motivated by another question of mine: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect Python syntax with the C code it translates into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
def __call__():
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
while not every(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called C `open()` procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | How does asyncio work?
======================
Before answering this question we need to understand a few base terms, skip these if you already know any of them.
[Generators](https://wiki.python.org/moin/Generators)
-----------------------------------------------------
Generators are objects that allow us to suspend the execution of a Python function. User-defined generators are implemented using the keyword [**`yield`**](https://docs.python.org/3/reference/expressions.html#yield-expressions). By creating a normal function containing the `yield` keyword, we turn that function into a generator:
```
>>> def test():
... yield 1
... yield 2
...
>>> gen = test()
>>> next(gen)
1
>>> next(gen)
2
>>> next(gen)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
```
As you can see, calling [`next()`](https://docs.python.org/3/library/functions.html#next) on the generator causes the interpreter to load `test`'s frame and return the `yield`ed value. Calling `next()` again causes the frame to load into the interpreter stack once more, continuing on to `yield` another value.
By the third time `next()` is called, our generator is finished, and [`StopIteration`](https://docs.python.org/3/library/exceptions.html#StopIteration) is raised.
### Communicating with a generator
A less-known feature of generators is the fact that you can communicate with them using two methods: [`send()`](https://docs.python.org/3/reference/expressions.html#generator.send) and [`throw()`](https://docs.python.org/3/reference/expressions.html#generator.throw).
```
>>> def test():
... val = yield 1
... print(val)
... yield 2
... yield 3
...
>>> gen = test()
>>> next(gen)
1
>>> gen.send("abc")
abc
2
>>> gen.throw(Exception())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in test
Exception
```
Upon calling `gen.send()`, the value is passed into the generator as the result of the `yield` expression.
`gen.throw()` on the other hand, allows throwing Exceptions inside generators, with the exception raised at the same spot `yield` was called.
### Returning values from generators
Returning a value from a generator results in the value being put inside the `StopIteration` exception. We can later recover the value from the exception and use it as needed.
```
>>> def test():
... yield 1
... return "abc"
...
>>> gen = test()
>>> next(gen)
1
>>> try:
... next(gen)
... except StopIteration as exc:
... print(exc.value)
...
abc
```
Behold, a new keyword: `yield from`
-----------------------------------
Python 3.3 ([PEP 380](https://www.python.org/dev/peps/pep-0380/)) came with the addition of a new keyword: [`yield from`](https://docs.python.org/3/reference/expressions.html#yield-expressions). What that keyword allows us to do, is pass on any `next()`, `send()` and `throw()` into an inner-most nested generator. If the inner generator returns a value, it is also the return value of `yield from`:
```
>>> def inner():
... inner_result = yield 2
... print('inner', inner_result)
... return 3
...
>>> def outer():
... yield 1
... val = yield from inner()
... print('outer', val)
... yield 4
...
>>> gen = outer()
>>> next(gen)
1
>>> next(gen) # Goes inside inner() automatically
2
>>> gen.send("abc")
inner abc
outer 3
4
```
I've written [an article](https://towardsdatascience.com/cpython-internals-how-do-generators-work-ba1c4405b4bc) to further elaborate on this topic.
Putting it all together
-----------------------
Upon introducing the new keyword `yield from` in Python 3.3, we were now able to create generators inside generators that, just like a tunnel, pass the data back and forth from the inner-most to the outer-most generators. This has spawned a new meaning for generators - *coroutines*.
**Coroutines** are functions that can be stopped and resumed while being run. In Python, they are defined using the **[`async def`](https://docs.python.org/3/reference/compound_stmts.html#coroutine-function-definition)** keyword. Much like generators, they too use their own form of `yield from` which is **[`await`](https://docs.python.org/3/reference/expressions.html#await)**. Before `async` and `await` were introduced in Python 3.5, we created coroutines in the exact same way generators were created (with `yield from` instead of `await`).
```
async def inner():
return 1
async def outer():
await inner()
```
Just like all iterators and generators implement the `__iter__()` method, all coroutines implement `__await__()`, which allows them to continue every time `await coro` is called.
There's a nice [sequence diagram](https://docs.python.org/3.5/_images/tulip_coro.png) inside the [Python docs](https://docs.python.org/3.5/library/asyncio-task.html#example-chain-coroutines) that you should check out.
In asyncio, apart from coroutine functions, we have 2 important objects: **tasks** and **futures**.
### [Futures](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future)
Futures are objects that have the `__await__()` method implemented, and their job is to hold a certain state and result. The state can be one of the following:
1. PENDING - future does not have any result or exception set.
2. CANCELLED - future was cancelled using `fut.cancel()`
3. FINISHED - future was finished, either by a result set using [`fut.set_result()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.set_result) or by an exception set using [`fut.set_exception()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.set_exception)
The result, just like you have guessed, can either be a Python object, that will be returned, or an exception which may be raised.
Another **important** feature of `future` objects, is that they contain a method called **[`add_done_callback()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.add_done_callback)**. This method allows functions to be called as soon as the task is done - whether it raised an exception or finished.
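To make this concrete, here is a toy future stripped of cancellation, exception handling and thread safety. The name `ToyFuture` is made up for this sketch, and the real `asyncio.Future` is considerably more involved:
```
class ToyFuture:
    """Minimal future: holds a result and fires callbacks once it is set"""
    _PENDING = object()
    def __init__(self):
        self._result = self._PENDING
        self._callbacks = []
    def add_done_callback(self, callback):
        if self._result is self._PENDING:
            self._callbacks.append(callback)
        else:
            callback(self)               # already done - fire immediately
    def set_result(self, result):
        self._result = result
        for callback in self._callbacks:
            callback(self)               # wake up whoever is waiting
    def result(self):
        return self._result
    def __await__(self):
        if self._result is self._PENDING:
            yield self                   # suspend until a result is set
        return self.result()
fut = ToyFuture()
fut.add_done_callback(lambda f: print('done with', f.result()))
fut.set_result(42)                       # prints: done with 42
```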
### [Tasks](https://docs.python.org/3/library/asyncio-task.html#task)
Task objects are special futures, which wrap around coroutines, and communicate with the inner-most and outer-most coroutines. Every time a coroutine `await`s a future, the future is passed all the way back to the task (just like in `yield from`), and the task receives it.
Next, the task binds itself to the future. It does so by calling `add_done_callback()` on the future. From now on, if the future is ever done - by either being cancelled, or having an exception or a Python object set as its result - the task's callback will be called, and the task will spring back to life.
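Continuing the toy future from above, a stripped-down task might look like this sketch: every step calls `send()` on the wrapped coroutine, and whenever a future is yielded up the chain, the task registers itself as that future's done-callback (error and cancellation handling omitted again):
```
class ToyTask:
    """Minimal task: steps a coroutine, waking up when awaited futures finish"""
    def __init__(self, coro):
        self._coro = coro
        self._step()                     # start driving the coroutine
    def _step(self, _finished_future=None):
        try:
            # run until the coroutine awaits (yields) the next future
            future = self._coro.send(None)
        except StopIteration:
            return                       # the coroutine has finished
        # wake this task up again once that future is done
        future.add_done_callback(self._step)
async def waiter(fut):
    print('waiting ...')
    value = await fut
    print('got', value)
fut = ToyFuture()
ToyTask(waiter(fut))                     # prints: waiting ...
fut.set_result('hello')                  # prints: got hello
```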
Asyncio
=======
The final burning question we must answer is - how is the IO implemented?
Deep inside asyncio, we have an event loop. An event loop of tasks. The event loop's job is to call tasks every time they are ready and coordinate all that effort into one single working machine.
The IO part of the event loop is built upon a single crucial function called **[`select`](https://docs.python.org/3/library/select.html#module-select)**. Select is a blocking function, implemented by the operating system underneath, that allows waiting on sockets for incoming or outgoing data. Upon receiving data it wakes up, and returns the sockets which received data, or the sockets which are ready for writing.
When you try to receive or send data over a socket through asyncio, what actually happens below is that the socket is first checked if it has any data that can be immediately read or sent. If its `.send()` buffer is full, or the `.recv()` buffer is empty, the socket is registered to the `select` function (by simply adding it to one of the lists, `rlist` for `recv` and `wlist` for `send`) and the appropriate function `await`s a newly created `future` object, tied to that socket.
When all available tasks are waiting for futures, the event loop calls `select` and waits. When one of the sockets has incoming data, or its `send` buffer has drained, asyncio checks for the future object tied to that socket, and sets it to done.
Now all the magic happens. The future is set to done, the task that added itself before with `add_done_callback()` rises back to life, and calls `.send()` on the coroutine, which resumes the inner-most coroutine (because of the `await` chain) - and you read the newly received data from a nearby buffer into which it was spilled.
**Method chain again, in case of `recv()`:**
1. `select.select` waits.
2. A ready socket, with data is returned.
3. Data from the socket is moved into a buffer.
4. `future.set_result()` is called.
5. Task that added itself with `add_done_callback()` is now woken up.
6. Task calls `.send()` on the coroutine which goes all the way into the inner-most coroutine and wakes it up.
7. Data is being read from the buffer and returned to our humble user.
In summary, asyncio uses generator capabilities, that allow pausing and resuming functions. It uses `yield from` capabilities that allow passing data back and forth from the inner-most generator to the outer-most. It uses all of those in order to halt function execution while it's waiting for IO to complete (by using the OS `select` function).
And the best of all? While one function is paused, another may run and interleave with the delicate fabric, which is asyncio. | Picture an airport control tower, with many planes waiting to land on the same runway. The control tower can be seen as the event loop and the runway as the thread. Each plane is a separate function waiting to execute. In reality only one plane can land on the runway at a time. What asyncio basically does is let many planes share the same runway, by using the event loop to suspend functions and allow other functions to run. When you use the `await` syntax, it essentially means that the plane (function) can be suspended to let other functions proceed.
49,005,651 | This question is motivated by another question of mine: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect Python syntax with the C code it translates into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
def __call__():
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
while not every(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called C `open()` procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | Talking about `async/await` and `asyncio` is not the same thing. The first is a fundamental, low-level construct (coroutines) while the latter is a library using these constructs. Consequently, there is no single ultimate answer.
The following is a general description of how `async/await` and `asyncio`-like libraries work. That is, there may be other tricks on top (there are...) but they are inconsequential unless you build them yourself. The difference should be negligible unless you already know enough to not have to ask such a question.
1. Coroutines versus subroutines in a nut shell
===============================================
Just like **subroutines** (functions, procedures, ...), **coroutines** (generators, ...) are an abstraction of call stack and instruction pointer: there is a stack of executing code pieces, and each is at a specific instruction.
The distinction of `def` versus `async def` is merely for clarity. The actual difference is `return` versus `yield`. From this, `await` or `yield from` lift the difference from individual calls to entire stacks.
1.1. Subroutines
----------------
A subroutine represents a new stack level to hold local variables, and a single traversal of its instructions to reach an end. Consider a subroutine like this:
```
def subfoo(bar):
qux = 3
return qux * bar
```
When you run it, that means
1. allocate stack space for `bar` and `qux`
2. recursively execute the first statement and jump to the next statement
3. once at a `return`, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)
Notably, 4. means that a subroutine always starts at the same state. Everything exclusive to the function itself is lost upon completion. A function cannot be resumed, even if there are instructions after `return`.
```
root -\
: \- subfoo --\
:/--<---return --/
|
V
```
1.2. Coroutines as persistent subroutines
-----------------------------------------
A coroutine is like a subroutine, but can exit *without* destroying its state. Consider a coroutine like this:
```
def cofoo(bar):
qux = yield bar # yield marks a break point
return qux
```
When you run it, that means
1. allocate stack space for `bar` and `qux`
2. recursively execute the first statement and jump to the next statement
1. once at a `yield`, push its value to the calling stack *but store the stack and instruction pointer*
2. once the coroutine is resumed (e.g. via ``send``), restore the stack and instruction pointer and bind the sent value to `qux`
3. once at a `return`, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)
Note the addition of 2.1 and 2.2 - a coroutine can be suspended and resumed at predefined points. This is similar to how a subroutine is suspended while calling another subroutine. The difference is that the active coroutine is not strictly bound to its calling stack. Instead, a suspended coroutine is part of a separate, isolated stack.
```
root -\
: \- cofoo --\
:/--<+--yield --/
| :
V :
```
This means that suspended coroutines can be freely stored or moved between stacks. Any call stack that has access to a coroutine can decide to resume it.
1.3. Traversing the call stack
------------------------------
So far, our coroutine only goes down the call stack with `yield`. A subroutine can go down *and up* the call stack with `return` and `()`. For completeness, coroutines also need a mechanism to go up the call stack. Consider a coroutine like this:
```
def wrap():
yield 'before'
yield from cofoo()
yield 'after'
```
When you run it, that means it still allocates the stack and instruction pointer like a subroutine. When it suspends, that still is like storing a subroutine.
However, `yield from` does *both*. It suspends stack and instruction pointer of `wrap` *and* runs `cofoo`. Note that `wrap` stays suspended until `cofoo` finishes completely. Whenever `cofoo` suspends or something is sent, `cofoo` is directly connected to the calling stack.
1.4. Coroutines all the way down
--------------------------------
As established, `yield from` allows us to connect two scopes across another intermediate one. When applied recursively, that means the *top* of the stack can be connected to the *bottom* of the stack.
```
root -\
: \-> coro_a -yield-from-> coro_b --\
:/ <-+------------------------yield ---/
| :
:\ --+-- coro_a.send----------yield ---\
: coro_b <-/
```
Note that `root` and `coro_b` do not know about each other. This makes coroutines much cleaner than callbacks: coroutines are still built on a 1:1 relation like subroutines. Coroutines suspend and resume their entire existing execution stack up until a regular call point.
Notably, `root` could have an arbitrary number of coroutines to resume. Yet, it can never resume more than one at the same time. Coroutines of the same root are concurrent but not parallel!
1.5. Python's `async` and `await`
---------------------------------
The explanation has so far explicitly used the `yield` and `yield from` vocabulary of generators - the underlying functionality is the same. The new Python3.5 syntax `async` and `await` exists mainly for clarity.
```
def foo(): # subroutine?
return None
def foo(): # coroutine?
yield from foofoo() # generator? coroutine?
async def foo(): # coroutine!
await foofoo() # coroutine!
return None
```
The `async for` and `async with` statements are needed because you would break the `yield from/await` chain with the bare `for` and `with` statements.
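As a quick illustration, here is a minimal asynchronous iterator sketch - the names `Ticker` and `consume` are made up for this example. Each `__anext__` is itself a coroutine, so `async for` can `await` it and keep the suspension chain intact, whereas a bare `for` would look for a regular `__next__`:
```
class Ticker:
    """Minimal asynchronous iterator - counts down to zero"""
    def __init__(self, count: int):
        self.count = count
    def __aiter__(self):
        return self
    # each step is a coroutine and may ``await`` before producing a value
    async def __anext__(self):
        if self.count <= 0:
            raise StopAsyncIteration
        self.count -= 1
        return self.count
async def consume():
    # ``async for`` awaits each ``__anext__`` - the await chain stays intact
    async for value in Ticker(3):
        print('tick', value)
# drive the coroutine manually, as with the event loops shown below
coroutine = consume()
try:
    coroutine.send(None)
except StopIteration:
    pass
```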
2. Anatomy of a simple event loop
=================================
By itself, a coroutine has no concept of yielding control to *another* coroutine. It can only yield control to the caller at the bottom of a coroutine stack. This caller can then switch to another coroutine and run it.
This root node of several coroutines is commonly an **event loop**: on suspension, a coroutine yields an **event** on which it wants to resume. In turn, the event loop is capable of efficiently waiting for these events to occur. This allows it to decide which coroutine to run next, or how to wait before resuming.
Such a design implies that there is a set of pre-defined events that the loop understands. Several coroutines `await` each other, until finally an event is `await`ed. This event can communicate *directly* with the event loop by `yield`ing control.
```
loop -\
: \-> coroutine --await--> event --\
:/ <-+----------------------- yield --/
| :
| : # loop waits for event to happen
| :
:\ --+-- send(reply) -------- yield --\
: coroutine <--yield-- event <-/
```
The key is that coroutine suspension allows the event loop and events to directly communicate. The intermediate coroutine stack does not require *any* knowledge about which loop is running it, nor how events work.
2.1.1. Events in time
---------------------
The simplest event to handle is reaching a point in time. This is a fundamental block of threaded code as well: a thread repeatedly `sleep`s until a condition is true.
However, a regular `sleep` blocks execution by itself - we want other coroutines to not be blocked. Instead, we want to tell the event loop when it should resume the current coroutine stack.
2.1.2. Defining an Event
------------------------
An event is simply a value we can identify - be it via an enum, a type or other identity. We can define this with a simple class that stores our target time. In addition to *storing* the event information, we can allow instances of this class to be `await`ed directly.
```
class AsyncSleep:
"""Event to sleep until a point in time"""
def __init__(self, until: float):
self.until = until
# used whenever someone ``await``s an instance of this Event
def __await__(self):
# yield this Event to the loop
yield self
def __repr__(self):
return '%s(until=%.1f)' % (self.__class__.__name__, self.until)
```
This class only *stores* the event - it does not say how to actually handle it.
The only special feature is `__await__` - it is what the `await` keyword looks for. Practically, it is an iterator, just not one exposed to the regular iteration machinery.
2.2.1. Awaiting an event
------------------------
Now that we have an event, how do coroutines react to it? We should be able to express the equivalent of `sleep` by `await`ing our event. To better see what is going on, we wait twice for half the time:
```
import time
async def asleep(duration: float):
"""await that ``duration`` seconds pass"""
await AsyncSleep(time.time() + duration / 2)
await AsyncSleep(time.time() + duration / 2)
```
We can directly instantiate and run this coroutine. Similar to a generator, using `coroutine.send` runs the coroutine until it `yield`s a result.
```
coroutine = asleep(100)
while True:
print(coroutine.send(None))
time.sleep(0.1)
```
This gives us two `AsyncSleep` events and then a `StopIteration` when the coroutine is done. Notice that the only delay is from `time.sleep` in the loop! Each `AsyncSleep` only stores an offset from the current time.
2.2.2. Event + Sleep
--------------------
At this point, we have *two* separate mechanisms at our disposal:
* `AsyncSleep` Events that can be yielded from inside a coroutine
* `time.sleep` that can wait without impacting coroutines
Notably, these two are orthogonal: neither one affects or triggers the other. As a result, we can come up with our own strategy to `sleep` to meet the delay of an `AsyncSleep`.
2.3. A naive event loop
-----------------------
If we have *several* coroutines, each can tell us when it wants to be woken up. We can then wait until the first of them wants to be resumed, then for the one after, and so on. Notably, at each point we only care about which one is *next*.
This makes for a straightforward scheduling:
1. sort coroutines by their desired wake up time
2. pick the first that wants to wake up
3. wait until this point in time
4. run this coroutine
5. repeat from 1.
A trivial implementation does not need any advanced concepts. A `list` allows us to keep coroutines sorted by wake-up time. Waiting is a regular `time.sleep`. Running coroutines works just like before with `coroutine.send`.
```
def run(*coroutines):
"""Cooperatively run all ``coroutines`` until completion"""
# store wake-up-time and coroutines
waiting = [(0, coroutine) for coroutine in coroutines]
while waiting:
# 2. pick the first coroutine that wants to wake up
until, coroutine = waiting.pop(0)
# 3. wait until this point in time
time.sleep(max(0.0, until - time.time()))
# 4. run this coroutine
try:
command = coroutine.send(None)
except StopIteration:
continue
# 1. sort coroutines by their desired suspension
if isinstance(command, AsyncSleep):
waiting.append((command.until, coroutine))
waiting.sort(key=lambda item: item[0])
```
Of course, this has ample room for improvement. We can use a heap for the wait queue or a dispatch table for events. We could also fetch return values from the `StopIteration` and assign them to the coroutine. However, the fundamental principle remains the same.
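For instance, swapping the sorted `list` for a heap might look like the following sketch of the scheduling core. It reuses the `AsyncSleep` event from above; the name `run_heap` and the tie-breaking counter (which avoids ever comparing coroutine objects) are additions for this illustration:
```
import heapq
import itertools
import time
def run_heap(*coroutines):
    """Variant of ``run`` that keeps the wait queue as a heap"""
    tie_breaker = itertools.count()
    waiting = [(0, next(tie_breaker), coroutine) for coroutine in coroutines]
    heapq.heapify(waiting)
    while waiting:
        # the heap keeps the earliest wake-up time at the front
        until, _, coroutine = heapq.heappop(waiting)
        time.sleep(max(0.0, until - time.time()))
        try:
            command = coroutine.send(None)
        except StopIteration:
            continue
        if isinstance(command, AsyncSleep):
            heapq.heappush(waiting, (command.until, next(tie_breaker), coroutine))
```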
2.4. Cooperative Waiting
------------------------
The `AsyncSleep` event and `run` event loop are a fully working implementation of timed events.
```
async def sleepy(identifier: str = "coroutine", count=5):
for i in range(count):
print(identifier, 'step', i + 1, 'at %.2f' % time.time())
await asleep(0.1)
run(*(sleepy("coroutine %d" % j) for j in range(5)))
```
This cooperatively switches between each of the five coroutines, suspending each for 0.1 seconds. Even though the event loop is synchronous, it still executes the work in 0.5 seconds instead of 2.5 seconds. Each coroutine holds state and acts independently.
3. I/O event loop
=================
An event loop that supports `sleep` is suitable for *polling*. However, waiting for I/O on a file handle can be done more efficiently: the operating system implements I/O and thus knows which handles are ready. Ideally, an event loop should support an explicit "ready for I/O" event.
3.1. The `select` call
----------------------
Python already has an interface to query the OS for read I/O handles. When called with handles to read or write, it returns the handles *ready* to read or write:
```
readable, writable, _ = select.select(rlist, wlist, xlist, timeout)
```
For example, we can `open` a file for writing and wait for it to be ready:
```
write_target = open('/tmp/foo', 'a')  # append mode: writable, created if missing
readable, writable, _ = select.select([], [write_target], [])
```
Once select returns, `writable` contains our open file.
3.2. Basic I/O event
--------------------
Similar to the `AsyncSleep` request, we need to define an event for I/O. With the underlying `select` logic, the event must refer to a readable object - say an `open` file. In addition, we store how much data to read.
```
class AsyncRead:
def __init__(self, file, amount=1):
self.file = file
self.amount = amount
self._buffer = b'' if 'b' in file.mode else ''
def __await__(self):
while len(self._buffer) < self.amount:
yield self
# we only get here if ``read`` should not block
self._buffer += self.file.read(1)
return self._buffer
def __repr__(self):
return '%s(file=%s, amount=%d, progress=%d)' % (
self.__class__.__name__, self.file, self.amount, len(self._buffer)
)
```
As with `AsyncSleep` we mostly just store the data required for the underlying system call. This time, `__await__` is capable of being resumed multiple times - until our desired `amount` has been read. In addition, we `return` the I/O result instead of just resuming.
3.3. Augmenting an event loop with read I/O
-------------------------------------------
The basis for our event loop is still the `run` defined previously. First, we need to track the read requests. This is no longer a sorted schedule; we only map read requests to coroutines.
```
# new
waiting_read = {} # type: Dict[file, coroutine]
```
Since `select.select` takes a timeout parameter, we can use it in place of `time.sleep`.
```
# old
time.sleep(max(0.0, until - time.time()))
# new
readable, _, _ = select.select(list(waiting_read), [], [])
```
This gives us all readable files - if there are any, we run the corresponding coroutine. If there are none, we have waited long enough for our current coroutine to run.
```
# new - reschedule waiting coroutine, run readable coroutine
if readable:
waiting.append((until, coroutine))
waiting.sort()
coroutine = waiting_read[readable[0]]
```
Finally, we have to actually listen for read requests.
```
# new
if isinstance(command, AsyncSleep):
...
elif isinstance(command, AsyncRead):
...
```
3.4. Putting it together
------------------------
The above was a bit of a simplification. We need to do some switching to not starve sleeping coroutines if we can always read. We need to handle having nothing to read or nothing to wait for. However, the end result still fits into 30 LOC.
```
def run(*coroutines):
"""Cooperatively run all ``coroutines`` until completion"""
waiting_read = {} # type: Dict[file, coroutine]
waiting = [(0, coroutine) for coroutine in coroutines]
while waiting or waiting_read:
# 2. wait until the next coroutine may run or read ...
try:
until, coroutine = waiting.pop(0)
except IndexError:
until, coroutine = float('inf'), None
readable, _, _ = select.select(list(waiting_read), [], [])
else:
readable, _, _ = select.select(list(waiting_read), [], [], max(0.0, until - time.time()))
# ... and select the appropriate one
if readable and time.time() < until:
if until and coroutine:
waiting.append((until, coroutine))
waiting.sort()
coroutine = waiting_read.pop(readable[0])
# 3. run this coroutine
try:
command = coroutine.send(None)
except StopIteration:
continue
# 1. sort coroutines by their desired suspension ...
if isinstance(command, AsyncSleep):
waiting.append((command.until, coroutine))
waiting.sort(key=lambda item: item[0])
# ... or register reads
elif isinstance(command, AsyncRead):
waiting_read[command.file] = coroutine
```
3.5. Cooperative I/O
--------------------
The `AsyncSleep`, `AsyncRead` and `run` implementations are now fully functional to sleep and/or read.
Same as for `sleepy`, we can define a helper to test reading:
```
async def ready(path, amount=1024*32):
print('read', path, 'at', '%d' % time.time())
with open(path, 'rb') as file:
result = await AsyncRead(file, amount)
print('done', path, 'at', '%d' % time.time())
print('got', len(result), 'B')
run(sleepy('background', 5), ready('/dev/urandom'))
```
Running this, we can see that our I/O is interleaved with the waiting task:
```
id background round 1
read /dev/urandom at 1530721148
id background round 2
id background round 3
id background round 4
id background round 5
done /dev/urandom at 1530721148
got 1024 B
```
4. Non-Blocking I/O
-------------------
While I/O on files gets the concept across, it is not really suitable for a library like `asyncio`: the `select` call [always returns for files](https://stackoverflow.com/questions/25776812/using-linux-c-select-system-call-to-monitor-files), and both `open` and `read` may [block indefinitely](http://man7.org/linux/man-pages/man2/open.2.html#DESCRIPTION). This blocks all coroutines of an event loop - which is bad. Libraries like `aiofiles` use threads and synchronization to fake non-blocking I/O and events on file.
However, sockets do allow for non-blocking I/O - and their inherent latency makes it much more critical. When used in an event loop, waiting for data and retrying can be wrapped without blocking anything.
4.1. Non-Blocking I/O event
---------------------------
Similar to our `AsyncRead`, we can define a suspend-and-read event for sockets. Instead of taking a file, we take a socket - which must be non-blocking. Also, our `__await__` uses `socket.recv` instead of `file.read`.
```
class AsyncRecv:
def __init__(self, connection, amount=1, read_buffer=1024):
assert not connection.getblocking(), 'connection must be non-blocking for async recv'
self.connection = connection
self.amount = amount
self.read_buffer = read_buffer
self._buffer = b''
def __await__(self):
while len(self._buffer) < self.amount:
try:
self._buffer += self.connection.recv(self.read_buffer)
except BlockingIOError:
yield self
return self._buffer
def __repr__(self):
return '%s(file=%s, amount=%d, progress=%d)' % (
self.__class__.__name__, self.connection, self.amount, len(self._buffer)
)
```
In contrast to `AsyncRead`, `__await__` performs truly non-blocking I/O. When data is available, it *always* reads. When no data is available, it *always* suspends. That means the event loop is only blocked while we perform useful work.
4.2. Un-Blocking the event loop
-------------------------------
As far as the event loop is concerned, nothing changes much. The event to listen for is still the same as for files - a file descriptor marked ready by `select`.
```
# old
elif isinstance(command, AsyncRead):
waiting_read[command.file] = coroutine
# new
elif isinstance(command, AsyncRead):
waiting_read[command.file] = coroutine
elif isinstance(command, AsyncRecv):
waiting_read[command.connection] = coroutine
```
At this point, it should be obvious that `AsyncRead` and `AsyncRecv` are the same kind of event. We could easily refactor them to be *one* event with an exchangeable I/O component. In effect, the event loop, coroutines and events [cleanly separate](http://sans-io.readthedocs.io) a scheduler, arbitrary intermediate code and the actual I/O.
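A possible shape of such a refactoring, as a sketch only: one generic event holding the `select`able handle and an exchangeable non-blocking read callable. The name `AsyncReadable` is invented here, and the event loop would register `command.handle` instead of a file or connection:
```
class AsyncReadable:
    """Generic read event: ``handle`` is select()'able, ``reader`` reads from it"""
    def __init__(self, handle, reader, amount=1):
        self.handle = handle      # e.g. an open file or a non-blocking socket
        self.reader = reader      # e.g. ``file.read`` or ``connection.recv``
        self.amount = amount
        self._buffer = b''
    def __await__(self):
        while len(self._buffer) < self.amount:
            try:
                self._buffer += self.reader(self.amount - len(self._buffer))
            except BlockingIOError:
                yield self        # not ready - suspend until select() says so
        return self._buffer
# both I/O flavours now map onto the same event:
# await AsyncReadable(file, file.read)
# await AsyncReadable(connection, connection.recv)
```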
4.3. The ugly side of non-blocking I/O
--------------------------------------
In principle, what you should do at this point is replicate the logic of `read` as a `recv` for `AsyncRecv`. However, this is much uglier now - you have to handle calls that would block inside the kernel but instead return early, yielding control back to you. For example, the code for opening a connection is much longer than for opening a file:
```
# file
file = open(path, 'rb')
# non-blocking socket
connection = socket.socket()
connection.setblocking(False)
# open without blocking - retry on failure
try:
connection.connect((url, port))
except BlockingIOError:
pass
```
Long story short, what remains is a few dozen lines of exception handling. The events and the event loop already work at this point.
```
id background round 1
read localhost:25000 at 1530783569
read /dev/urandom at 1530783569
done localhost:25000 at 1530783569 got 32768 B
id background round 2
id background round 3
id background round 4
done /dev/urandom at 1530783569 got 4096 B
id background round 5
```
Addendum
========
[Example code at github](https://gist.github.com/maxfischer2781/27d68e69c017d7c2605074a59ada04e5) | What is asyncio?
----------------
Asyncio stands for **asynchronous input output** and refers to a programming paradigm which achieves high concurrency using a single thread or event loop.
Asynchronous programming is a form of concurrent programming in which a unit of work is allowed to run separately from the primary application [thread](https://whatis.techtarget.com/definition/thread). When the work is complete, it notifies the main thread about the completion or failure of the worker.
Let's have a look at the image below:
[![asynchronous_flow](https://i.stack.imgur.com/1gYQT.png)](https://i.stack.imgur.com/1gYQT.png)
**Let's understand asyncio with an example:**
To understand the concept behind asyncio, let's consider a restaurant with a single waiter. Suddenly, three customers, A, B and C show up. The three of them take a varying amount of time to decide what to eat once they receive the menu from the waiter.
Let's assume A takes 5 minutes, B 10 minutes and C 1 minute to decide. If the single waiter starts with B first and takes B's order in 10 minutes, next he serves A and spends 5 minutes on noting down his order and finally spends 1 minute to know what C wants to eat.
So, in total, the waiter spends 10 + 5 + 1 = 16 minutes to take down their orders. However, notice in this sequence of events, C ends up waiting 15 minutes before the waiter gets to him, A waits 10 minutes and B waits 0 minutes.
Now consider if the waiter knew the time each customer would take to decide. He could hand out the menus and then take the orders in order of readiness: C first (after 1 minute), then A (after 5 minutes) and finally B (after 10 minutes). This way each customer experiences virtually no wait after deciding.
An **illusion** of three waiters, one dedicated to each customer, is created even though there's only one.
Lastly, the total time it takes for the waiter to take all three orders is 10 minutes, much less than the 16 minutes in the other scenario.
**Let's go through another example:**
Suppose, Chess master *Magnus Carlsen* hosts a chess exhibition in which he plays with multiple amateur players. He has two ways of conducting the exhibition: synchronously and asynchronously.
Assumptions:
* 24 opponents
* *Magnus Carlsen* makes each chess move in 5 seconds
* Opponents each take 55 seconds to make a move
* Games average 30 pair-moves (60 moves total)
**Synchronously**: Magnus Carlsen plays one game at a time, never two at the same time, until the game is complete. Each game takes *(55 + 5) \* 30 == 1800* seconds, or **30 minutes**. The entire exhibition takes *24 \* 30 == 720* minutes, or **12 hours**.
**Asynchronously**: Magnus Carlsen moves from table to table, making one move at each table. He leaves the table and lets the opponent make their next move during the wait time. One move on all 24 games takes Magnus Carlsen *24 \* 5 == 120* seconds, or **2 minutes**. The entire exhibition is now cut down to *120 \* 30 == 3600* seconds, or just **1 hour**
There is only one Magnus Carlsen, who has only two hands and makes only one move at a time by himself. But playing asynchronously cuts the exhibition time down from 12 hours to one.
**Coding Example:**
Let's try to demonstrate synchronous and asynchronous execution times using code snippets.
***Asynchronous - async\_count.py***
```
import asyncio
import time
async def count():
print("One", end=" ")
await asyncio.sleep(1)
print("Two", end=" ")
await asyncio.sleep(2)
print("Three", end=" ")
async def main():
await asyncio.gather(count(), count(), count(), count(), count())
if __name__ == "__main__":
start_time = time.perf_counter()
asyncio.run(main())
end_time = time.perf_counter()
execution_time = end_time - start_time
print(f"\nExecuting - {__file__}\nExecution Starts: {start_time}\nExecutions Ends: {end_time}\nTotals Execution Time:{execution_time:0.2f} seconds.")
```
***Asynchronous - Output*:**
```
One One One One One Two Two Two Two Two Three Three Three Three Three
Executing - async_count.py
Execution Starts: 18453.442160108
Executions Ends: 18456.444719712
Totals Execution Time:3.00 seconds.
```
***Synchronous - sync\_count.py***
```
import time
def count():
print("One", end=" ")
time.sleep(1)
print("Two", end=" ")
time.sleep(2)
print("Three", end=" ")
def main():
for _ in range(5):
count()
if __name__ == "__main__":
start_time = time.perf_counter()
main()
end_time = time.perf_counter()
execution_time = end_time - start_time
print(f"\nExecuting - {__file__}\nExecution Starts: {start_time}\nExecutions Ends: {end_time}\nTotals Execution Time:{execution_time:0.2f} seconds.")
```
***Synchronous - Output*:**
```
One Two Three One Two Three One Two Three One Two Three One Two Three
Executing - sync_count.py
Execution Starts: 18875.175965998
Executions Ends: 18890.189930292
Totals Execution Time:15.01 seconds.
```
Why use asyncio instead of multithreading in Python?
----------------------------------------------------
* It's very difficult to write code that is thread safe. With asynchronous code, you know exactly where the code will shift from one task to the next and race conditions are much harder to come by.
* Threads consume a fair amount of memory, since each thread needs to have its own stack. With async code, all the code shares the same stack, and the stack is kept small due to continuously unwinding it between tasks.
* Threads are OS structures and therefore require more memory for the platform to support. There is no such problem with asynchronous tasks.
How does asyncio work?
-----------------------
### Before going deep let's recall Python Generator
**Python Generator:**
Functions containing a `yield` statement are compiled as generators. Using a yield expression in a function's body causes that function to be a generator. These functions return an object which supports the iteration protocol methods. The generator object created automatically receives a `__next__()` method. We can invoke `__next__` directly on the generator object instead of using `next()`:
```
def asynchronous():
yield "Educative"
if __name__ == "__main__":
gen = asynchronous()
str = gen.__next__()
print(str)
```
Remember the following about generators:
* Generator functions allow you to procrastinate computing expensive values. You only compute the next value when required. This makes generators memory and compute efficient; they refrain from saving long sequences in memory or doing all expensive computations upfront.
* Generators, when suspended, retain the code location, which is the last yield statement executed, and their entire local scope. This allows them to resume execution from where they left off.
* Generator objects are nothing more than iterators.
* Remember to make a distinction between a generator function and the associated generator object which are often used interchangeably. A generator function when invoked returns a generator object and `next()` is invoked on the generator object to run the code within the generator function.
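As a small illustration of the first point (my own sketch, not from the original text), an infinite sequence costs nothing until you actually pull values from it:

```
def squares():
    n = 0
    while True:       # an infinite sequence, yet no memory blow-up
        yield n * n
        n += 1

gen = squares()
for _, value in zip(range(5), gen):
    print(value)      # 0 1 4 9 16 - each square is computed only on demand
```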
**States of a generator:**
A generator goes through the following states:
* `GEN_CREATED` when a generator object has been returned for the first time from a generator function and iteration hasn't started.
* `GEN_RUNNING` when next has been invoked on the generator object and is being executed by the python interpreter.
* `GEN_SUSPENDED` when a generator is suspended at a yield
* `GEN_CLOSED` when a generator has completed execution or has been closed.
[![generator_cycle](https://i.stack.imgur.com/M2mLY.png)](https://i.stack.imgur.com/M2mLY.png)
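These states can be observed directly with the standard library's `inspect` module; a quick sketch (my addition, not part of the original text):

```
import inspect

def gen():
    yield "suspended here"

g = gen()
print(inspect.getgeneratorstate(g))  # GEN_CREATED
next(g)                              # advance the generator to its yield
print(inspect.getgeneratorstate(g))  # GEN_SUSPENDED
g.close()
print(inspect.getgeneratorstate(g))  # GEN_CLOSED
```

(`GEN_RUNNING` is only observable from code running inside the generator itself.)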
**Methods on generator objects:**
A generator object exposes different methods that can be invoked to manipulate the generator. These are:
* `throw()`
* `send()`
* `close()`
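A brief sketch of my own showing `send()` and `close()` in action (`throw()` behaves like `close()` but raises the exception you pass instead of `GeneratorExit`):

```
def echo():
    try:
        while True:
            received = yield
            print("got:", received)
    finally:
        print("finalized")

g = echo()
next(g)           # prime the generator: run it up to the first yield
g.send("hello")   # resume it and make `received` equal to "hello"
g.close()         # raise GeneratorExit at the paused yield; the finally block runs
```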
### Let's dive deeper into the details
**The rules of asyncio:**
* The syntax `async def` introduces either a **native coroutine** or an **asynchronous generator**. The expressions `async with` and `async for` are also valid.
* The keyword `await` passes function control back to the event loop. (It suspends the execution of the surrounding coroutine.) If Python encounters an `await f()` expression in the scope of `g()`, this is how `await` tells the event loop, "Suspend execution of `g()` until whatever I'm waiting on (the result of `f()`) is returned. In the meantime, go let something else run."
In code, that second bullet point looks roughly like this:
```
async def g():
# Pause here and come back to g() when f() is ready
r = await f()
return r
```
There's also a strict set of rules around when and how you can and cannot use `async`/`await`. These can be handy whether you are still picking up the syntax or already have exposure to using `async`/`await`:
* A function that you introduce with `async def` is a coroutine. It may use `await`, `return`, or `yield`, but all of these are optional. Declaring `async def noop(): pass` is valid:
+ Using `await` and/or `return` creates a coroutine function. To call a coroutine function, you must `await` it to get its results.
+ It is less common to use `yield` in an `async def` block. This creates an [asynchronous generator](https://www.python.org/dev/peps/pep-0525/), which you iterate over with `async for`. Forget about async generators for the time being and focus on getting down the syntax for coroutine functions, which use `await` and/or `return`.
+ Anything defined with `async def` may not use `yield from`, which will raise a `SyntaxError`.
* Just like it's a `SyntaxError` to use `yield` outside of a `def` function, it is a `SyntaxError` to use `await` outside of an `async def` coroutine. You can only use `await` in the body of coroutines.
Here are some terse examples meant to summarize the above few rules:
```
async def f(x):
y = await z(x) # OK - `await` and `return` allowed in coroutines
return y
async def g(x):
yield x # OK - this is an async generator
async def m(x):
yield from gen(x) # NO - SyntaxError
def m(x):
y = await z(x) # NO - SyntaxError (no `async def` here)
return y
```
### Generator Based Coroutine
Python created a distinction between Python generators and generators that were meant to be used as coroutines. These coroutines are called generator-based coroutines and require the decorator `@asyncio.coroutine` to be added to the function definition, though this isn't strictly enforced.
Generator based coroutines use `yield from` syntax instead of `yield`. A coroutine can:
* yield from another coroutine
* yield from a future
* return an expression
* raise exception
**Coroutines in Python make cooperative multitasking possible.**
Cooperative multitasking is the approach in which the running process voluntarily gives up the CPU to other processes. A process may do so when it is logically blocked, say while waiting for user input or when it has initiated a network request and will be idle for a while.
A coroutine can be defined as a special function that can give up control to its caller without losing its state.
**So what's the difference between coroutines and generators?**
Generators are essentially iterators though they look like functions. The distinction between generators and coroutines, in general, is that:
* Generators yield back a value to the invoker whereas a coroutine yields control to another coroutine and can resume execution from the point it gives up control.
* A generator can't accept arguments once started whereas a coroutine can.
* Generators are primarily used to simplify writing iterators. They are a type of coroutine and sometimes also called semicoroutines.
### Generator Based Coroutine Example
The simplest generator based coroutine we can write is as follows:
```
@asyncio.coroutine
def do_something_important():
yield from asyncio.sleep(1)
```
The coroutine sleeps for one second. Note the decorator and the use of `yield from`.
### Native Based Coroutine Example
By native it is meant that the language introduced syntax to specifically define coroutines, making them first class citizens in the language. Native coroutines can be defined using the `async/await` syntax.
The simplest native based coroutine we can write is as follows:
```
async def do_something_important():
await asyncio.sleep(1)
```
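A minimal way to actually drive this coroutine (my addition; `asyncio.run()` requires Python 3.7+):

```
import asyncio

async def do_something_important():
    await asyncio.sleep(1)

asyncio.run(do_something_important())
```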
AsyncIO Design Patterns
-----------------------
AsyncIO comes with its own set of possible script designs, which we will discuss in this section.
**1. Event loops**
The event loop is a programming construct that waits for events to happen and then dispatches them to an event handler. An event can be a user clicking on a UI button or a process initiating a file download. **At the core of asynchronous programming, sits the event loop.**
**Example Code:**
```
import asyncio
import random
import time
from threading import Thread
from threading import current_thread
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[34m", # Blue
)
async def do_something_important(sleep_for):
print(colors[1] + f"Is event loop running in thread {current_thread().getName()} = {asyncio.get_event_loop().is_running()}" + colors[0])
await asyncio.sleep(sleep_for)
def launch_event_loops():
# get a new event loop
loop = asyncio.new_event_loop()
# set the event loop for the current thread
asyncio.set_event_loop(loop)
# run a coroutine on the event loop
loop.run_until_complete(do_something_important(random.randint(1, 5)))
# remember to close the loop
loop.close()
if __name__ == "__main__":
thread_1 = Thread(target=launch_event_loops)
thread_2 = Thread(target=launch_event_loops)
start_time = time.perf_counter()
thread_1.start()
thread_2.start()
print(colors[2] + f"Is event loop running in thread {current_thread().getName()} = {asyncio.get_event_loop().is_running()}" + colors[0])
thread_1.join()
thread_2.join()
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[3] + f"Event Loop Start Time: {start_time}\nEvent Loop End Time: {end_time}\nEvent Loop Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_event_loop.py`
**Output:**
[![async_event_loop](https://i.stack.imgur.com/w6FRy.png)](https://i.stack.imgur.com/w6FRy.png)
Try it out yourself and examine the output, and you'll realize that each spawned thread is running its own event loop.
**Types of event loops**
There are two types of event loops:
* *SelectorEventLoop*: SelectorEventLoop is based on the selectors module and is the default loop on all platforms.
* *ProactorEventLoop*: ProactorEventLoop is based on Windows' I/O Completion Ports and is only supported on Windows.
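If you need a specific implementation, the loop policy can be set explicitly; an illustrative sketch (note that the `WindowsProactorEventLoopPolicy` attribute exists only on Windows, and Python 3.8+ already defaults to the proactor loop there):

```
import asyncio
import sys

if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
else:
    asyncio.set_event_loop_policy(asyncio.DefaultEventLoopPolicy())  # selector-based
```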
**2. Futures**
Future represents a computation that is either in progress or will get scheduled in the future. It is a special low-level awaitable object that represents an eventual result of an asynchronous operation. Don't confuse `threading.Future` and `asyncio.Future`.
**Example Code:**
```
import time
import asyncio
from asyncio import Future
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[34m", # Blue
)
async def bar(future):
print(colors[1] + "bar will sleep for 3 seconds" + colors[0])
await asyncio.sleep(3)
print(colors[1] + "bar resolving the future" + colors[0])
    future.done()  # note: done() only reports the future's state; set_result() below resolves it
future.set_result("future is resolved")
async def foo(future):
print(colors[2] + "foo will await the future" + colors[0])
await future
print(colors[2] + "foo finds the future resolved" + colors[0])
async def main():
future = Future()
await asyncio.gather(foo(future), bar(future))
if __name__ == "__main__":
start_time = time.perf_counter()
asyncio.run(main())
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[3] + f"Future Start Time: {start_time}\nFuture End Time: {end_time}\nFuture Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_futures.py`
**Output:**
[![async_futures](https://i.stack.imgur.com/JGsY6.png)](https://i.stack.imgur.com/JGsY6.png)
Both the coroutines are passed the same future. The `foo()` coroutine waits for the future to get resolved, while the `bar()` coroutine resolves the future after three seconds.
**3. Tasks**
Tasks are like futures, in fact, Task is a subclass of Future and can be created using the following methods:
* `asyncio.create_task()` accepts coroutines and wraps them as tasks.
* `loop.create_task()` only accepts coroutines.
* `asyncio.ensure_future()` accepts futures, coroutines and any awaitable objects.
Tasks wrap coroutines and run them in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the Future to complete. When the Future is done, the execution of the wrapped coroutine resumes.
**Example Code:**
```
import time
import asyncio
from asyncio import Future
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[34m", # Blue
)
async def bar(future):
print(colors[1] + "bar will sleep for 3 seconds" + colors[0])
await asyncio.sleep(3)
print(colors[1] + "bar resolving the future" + colors[0])
    future.done()  # note: done() only reports the future's state; set_result() below resolves it
future.set_result("future is resolved")
async def foo(future):
print(colors[2] + "foo will await the future" + colors[0])
await future
print(colors[2] + "foo finds the future resolved" + colors[0])
async def main():
future = Future()
loop = asyncio.get_event_loop()
t1 = loop.create_task(bar(future))
t2 = loop.create_task(foo(future))
    await asyncio.gather(t1, t2)  # await both tasks; a bare "await t2, t1" would only await t2
if __name__ == "__main__":
start_time = time.perf_counter()
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[3] + f"Future Start Time: {start_time}\nFuture End Time: {end_time}\nFuture Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_tasks.py`
**Output:**
[![async_tasks](https://i.stack.imgur.com/k9tO1.png)](https://i.stack.imgur.com/k9tO1.png)
**4. Chaining Coroutines:**
A key feature of coroutines is that they can be chained together. A coroutine object is awaitable, so another coroutine can `await` it. This allows you to break programs into smaller, manageable, recyclable coroutines:
**Example Code:**
```
import sys
import asyncio
import random
import time
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[36m", # Cyan
"\033[34m", # Blue
)
async def function1(n: int) -> str:
i = random.randint(0, 10)
print(colors[1] + f"function1({n}) is sleeping for {i} seconds." + colors[0])
await asyncio.sleep(i)
result = f"result{n}-1"
print(colors[1] + f"Returning function1({n}) == {result}." + colors[0])
return result
async def function2(n: int, arg: str) -> str:
i = random.randint(0, 10)
print(colors[2] + f"function2{n, arg} is sleeping for {i} seconds." + colors[0])
await asyncio.sleep(i)
result = f"result{n}-2 derived from {arg}"
print(colors[2] + f"Returning function2{n, arg} == {result}." + colors[0])
return result
async def chain(n: int) -> None:
start = time.perf_counter()
p1 = await function1(n)
p2 = await function2(n, p1)
end = time.perf_counter() - start
print(colors[3] + f"--> Chained result{n} => {p2} (took {end:0.2f} seconds)." + colors[0])
async def main(*args):
await asyncio.gather(*(chain(n) for n in args))
if __name__ == "__main__":
random.seed(444)
args = [1, 2, 3] if len(sys.argv) == 1 else map(int, sys.argv[1:])
start_time = time.perf_counter()
asyncio.run(main(*args))
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[4] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
Pay careful attention to the output, where `function1()` sleeps for a variable amount of time, and `function2()` begins working with the results as they become available:
**Execution Command:** `python async_chained.py 11 8 5`
**Output:**
[![async_chained](https://i.stack.imgur.com/hl03R.png)](https://i.stack.imgur.com/hl03R.png)
**5. Using a Queue:**
In this design, there is no chaining of any individual consumer to a producer. The consumers don't know the number of producers, or even the cumulative number of items that will be added to the queue, in advance.
It takes an individual producer or consumer a variable amount of time to put and extract items from the queue, respectively. The queue serves as a throughput that can communicate with the producers and consumers without them talking to each other directly.
**Example Code:**
```
import asyncio
import argparse
import itertools as it
import os
import random
import time
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[36m", # Cyan
"\033[34m", # Blue
)
async def generate_item(size: int = 5) -> str:
return os.urandom(size).hex()
async def random_sleep(caller=None) -> None:
i = random.randint(0, 10)
if caller:
print(colors[1] + f"{caller} sleeping for {i} seconds." + colors[0])
await asyncio.sleep(i)
async def produce(name: int, producer_queue: asyncio.Queue) -> None:
n = random.randint(0, 10)
for _ in it.repeat(None, n): # Synchronous loop for each single producer
await random_sleep(caller=f"Producer {name}")
i = await generate_item()
t = time.perf_counter()
await producer_queue.put((i, t))
print(colors[2] + f"Producer {name} added <{i}> to queue." + colors[0])
async def consume(name: int, consumer_queue: asyncio.Queue) -> None:
while True:
await random_sleep(caller=f"Consumer {name}")
i, t = await consumer_queue.get()
now = time.perf_counter()
print(colors[3] + f"Consumer {name} got element <{i}>" f" in {now - t:0.5f} seconds." + colors[0])
consumer_queue.task_done()
async def main(no_producer: int, no_consumer: int):
q = asyncio.Queue()
producers = [asyncio.create_task(produce(n, q)) for n in range(no_producer)]
consumers = [asyncio.create_task(consume(n, q)) for n in range(no_consumer)]
await asyncio.gather(*producers)
await q.join() # Implicitly awaits consumers, too
for consumer in consumers:
consumer.cancel()
if __name__ == "__main__":
random.seed(444)
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--no_producer", type=int, default=10)
parser.add_argument("-c", "--no_consumer", type=int, default=15)
ns = parser.parse_args()
start_time = time.perf_counter()
asyncio.run(main(**ns.__dict__))
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[4] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_queue.py -p 2 -c 4`
**Output:**
[![async_queue](https://i.stack.imgur.com/8iPho.png)](https://i.stack.imgur.com/8iPho.png)
Lastly, let's have an example of how asyncio cuts down on wait time: given a coroutine `generate_random_int()` that keeps producing random integers in the range [0, 10] until one of them exceeds a threshold, you want to let multiple calls of this coroutine run without waiting for each other to complete in succession.
**Example Code:**
```
import time
import asyncio
import random
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[36m", # Cyan
"\033[35m", # Magenta
"\033[34m", # Blue
)
async def generate_random_int(indx: int, threshold: int = 5) -> int:
print(colors[indx + 1] + f"Initiated generate_random_int({indx}).")
i = random.randint(0, 10)
while i <= threshold:
print(colors[indx + 1] + f"generate_random_int({indx}) == {i} too low; retrying.")
await asyncio.sleep(indx + 1)
i = random.randint(0, 10)
print(colors[indx + 1] + f"---> Finished: generate_random_int({indx}) == {i}" + colors[0])
return i
async def main():
res = await asyncio.gather(*(generate_random_int(i, 10 - i - 1) for i in range(3)))
return res
if __name__ == "__main__":
random.seed(444)
start_time = time.perf_counter()
r1, r2, r3 = asyncio.run(main())
print(colors[4] + f"\nRandom INT 1: {r1}, Random INT 2: {r2}, Random INT 3: {r3}\n" + colors[0])
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[5] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_random.py`
**Output:**
[![async_random](https://i.stack.imgur.com/k8wAT.png)](https://i.stack.imgur.com/k8wAT.png)
>
> **Note:** If youβre writing any code yourself, prefer native coroutines
> for the sake of being explicit rather than implicit. Generator based
> coroutines will be removed in Python 3.10.
>
>
>
**GitHub Repo: <https://github.com/tssovi/asynchronous-in-python>** |
49,005,651 | This question is motivated by another question of mine: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
    def __call__(self):
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
        while not all(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called C `open()` procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | What is asyncio?
----------------
Asyncio stands for **asynchronous input output** and refers to a programming paradigm which achieves high concurrency using a single thread or event loop.
Asynchronous programming is a type of parallel programming in which a unit of work is allowed to run separately from the primary application [thread](https://whatis.techtarget.com/definition/thread). When the work is complete, it notifies the main thread about completion or failure of the worker thread.
Let's have a look in below image:
[![asynchronous_flow](https://i.stack.imgur.com/1gYQT.png)](https://i.stack.imgur.com/1gYQT.png)
**Let's understand asyncio with an example:**
To understand the concept behind asyncio, let's consider a restaurant with a single waiter. Suddenly, three customers, A, B and C show up. The three of them take a varying amount of time to decide what to eat once they receive the menu from the waiter.
Let's assume A takes 5 minutes, B 10 minutes and C 1 minute to decide. If the single waiter starts with B first and takes B's order in 10 minutes, next he serves A and spends 5 minutes on noting down his order and finally spends 1 minute to know what C wants to eat.
So, in total, the waiter spends 10 + 5 + 1 = 16 minutes to take down their orders. However, notice in this sequence of events, C ends up waiting 15 minutes before the waiter gets to him, A waits 10 minutes and B waits 0 minutes.
Now consider if the waiter knew the time each customer would take to decide. He can start with C first, then go to A and finally to B. This way each customer would experience a 0 minute wait.
An **illusion** of three waiters, one dedicated to each customer is created even though thereβs only one.
Lastly, the total time it takes for the waiter to take all three orders is 10 minutes, much less than the 16 minutes in the other scenario.
**Let's go through another example:**
Suppose chess master *Magnus Carlsen* hosts a chess exhibition in which he plays with multiple amateur players. He has two ways of conducting the exhibition: synchronously and asynchronously.
Assumptions:
* 24 opponents
* *Magnus Carlsen* makes each chess move in 5 seconds
* Opponents each take 55 seconds to make a move
* Games average 30 pair-moves (60 moves total)
**Synchronously**: Magnus Carlsen plays one game at a time, never two at the same time, until the game is complete. Each game takes *(55 + 5) \* 30 == 1800* seconds, or **30 minutes**. The entire exhibition takes *24 \* 30 == 720* minutes, or **12 hours**.
**Asynchronously**: Magnus Carlsen moves from table to table, making one move at each table. He leaves the table and lets the opponent make their next move during the wait time. One move on all 24 games takes him *24 \* 5 == 120* seconds, or **2 minutes**. The entire exhibition is now cut down to *120 \* 30 == 3600* seconds, or just **1 hour**.
There is only one Magnus Carlsen, who has only two hands and makes only one move at a time by himself. But playing asynchronously cuts the exhibition time down from 12 hours to one.
**Coding Example:**
Let's try to demonstrate synchronous and asynchronous execution times using code snippets.
***Asynchronous - async\_count.py***
```
import asyncio
import time
async def count():
print("One", end=" ")
await asyncio.sleep(1)
print("Two", end=" ")
await asyncio.sleep(2)
print("Three", end=" ")
async def main():
await asyncio.gather(count(), count(), count(), count(), count())
if __name__ == "__main__":
start_time = time.perf_counter()
asyncio.run(main())
end_time = time.perf_counter()
execution_time = end_time - start_time
print(f"\nExecuting - {__file__}\nExecution Starts: {start_time}\nExecutions Ends: {end_time}\nTotals Execution Time:{execution_time:0.2f} seconds.")
```
***Asynchronous - Output*:**
```
One One One One One Two Two Two Two Two Three Three Three Three Three
Executing - async_count.py
Execution Starts: 18453.442160108
Executions Ends: 18456.444719712
Totals Execution Time:3.00 seconds.
```
***Synchronous - sync\_count.py***
```
import time
def count():
print("One", end=" ")
time.sleep(1)
print("Two", end=" ")
time.sleep(2)
print("Three", end=" ")
def main():
for _ in range(5):
count()
if __name__ == "__main__":
start_time = time.perf_counter()
main()
end_time = time.perf_counter()
execution_time = end_time - start_time
print(f"\nExecuting - {__file__}\nExecution Starts: {start_time}\nExecutions Ends: {end_time}\nTotals Execution Time:{execution_time:0.2f} seconds.")
```
***Synchronous - Output*:**
```
One Two Three One Two Three One Two Three One Two Three One Two Three
Executing - sync_count.py
Execution Starts: 18875.175965998
Executions Ends: 18890.189930292
Totals Execution Time:15.01 seconds.
```
Why use asyncio instead of multithreading in Python?
----------------------------------------------------
* It's very difficult to write code that is thread safe. With asynchronous code, you know exactly where the code will shift from one task to the next and race conditions are much harder to come by.
* Threads consume a fair amount of data since each thread needs to have its own stack. With async code, all the code shares the same stack and the stack is kept small due to continuously unwinding the stack between tasks.
* Threads are OS structures and therefore require more memory for the platform to support. There is no such problem with asynchronous tasks.
How does asyncio work?
-----------------------
### Before going deeper, let's recall Python generators
**Python Generator:**
Functions containing a `yield` statement are compiled as generators. Using a yield expression in a function's body causes that function to be a generator. These functions return an object which supports the iteration protocol methods. The generator object created automatically receives a `__next__()` method. Going back to the example from the previous section we can invoke `__next__` directly on the generator object instead of using `next()`:
```
def asynchronous():
yield "Educative"
if __name__ == "__main__":
gen = asynchronous()
str = gen.__next__()
print(str)
```
Remember the following about generators:
* Generator functions allow you to procrastinate computing expensive values. You only compute the next value when required. This makes generators memory and compute efficient; they refrain from saving long sequences in memory or doing all expensive computations upfront.
* Generators, when suspended, retain the code location, which is the last yield statement executed, and their entire local scope. This allows them to resume execution from where they left off.
* Generator objects are nothing more than iterators.
* Remember to make a distinction between a generator function and the associated generator object which are often used interchangeably. A generator function when invoked returns a generator object and `next()` is invoked on the generator object to run the code within the generator function.
**States of a generator:**
A generator goes through the following states:
* `GEN_CREATED` when a generator object has been returned for the first time from a generator function and iteration hasn't started.
* `GEN_RUNNING` when next has been invoked on the generator object and is being executed by the python interpreter.
* `GEN_SUSPENDED` when a generator is suspended at a yield
* `GEN_CLOSED` when a generator has completed execution or has been closed.
[![generator_cycle](https://i.stack.imgur.com/M2mLY.png)](https://i.stack.imgur.com/M2mLY.png)
**Methods on generator objects:**
A generator object exposes different methods that can be invoked to manipulate the generator. These are:
* `throw()`
* `send()`
* `close()`
### Let's dive deeper into the details
**The rules of asyncio:**
* The syntax `async def` introduces either a **native coroutine** or an **asynchronous generator**. The expressions `async with` and `async for` are also valid.
* The keyword `await` passes function control back to the event loop. (It suspends the execution of the surrounding coroutine.) If Python encounters an `await f()` expression in the scope of `g()`, this is how `await` tells the event loop, "Suspend execution of `g()` until whatever I'm waiting on (the result of `f()`) is returned. In the meantime, go let something else run."
In code, that second bullet point looks roughly like this:
```
async def g():
# Pause here and come back to g() when f() is ready
r = await f()
return r
```
There's also a strict set of rules around when and how you can and cannot use `async`/`await`. These can be handy whether you are still picking up the syntax or already have exposure to using `async`/`await`:
* A function that you introduce with `async def` is a coroutine. It may use `await`, `return`, or `yield`, but all of these are optional. Declaring `async def noop(): pass` is valid:
+ Using `await` and/or `return` creates a coroutine function. To call a coroutine function, you must `await` it to get its results.
+ It is less common to use `yield` in an `async def` block. This creates an [asynchronous generator](https://www.python.org/dev/peps/pep-0525/), which you iterate over with `async for`. Forget about async generators for the time being and focus on getting down the syntax for coroutine functions, which use `await` and/or `return`.
+ Anything defined with `async def` may not use `yield from`, which will raise a `SyntaxError`.
* Just like it's a `SyntaxError` to use `yield` outside of a `def` function, it is a `SyntaxError` to use `await` outside of an `async def` coroutine. You can only use `await` in the body of coroutines.
Here are some terse examples meant to summarize the above few rules:
```
async def f(x):
y = await z(x) # OK - `await` and `return` allowed in coroutines
return y
async def g(x):
yield x # OK - this is an async generator
async def m(x):
yield from gen(x) # NO - SyntaxError
def m(x):
y = await z(x) # NO - SyntaxError (no `async def` here)
return y
```
### Generator Based Coroutine
Python created a distinction between Python generators and generators that were meant to be used as coroutines. These coroutines are called generator-based coroutines and require the decorator `@asyncio.coroutine` to be added to the function definition, though this isn't strictly enforced.
Generator based coroutines use `yield from` syntax instead of `yield`. A coroutine can:
* yield from another coroutine
* yield from a future
* return an expression
* raise exception
**Coroutines in Python make cooperative multitasking possible.**
Cooperative multitasking is the approach in which the running process voluntarily gives up the CPU to other processes. A process may do so when it is logically blocked, say while waiting for user input or when it has initiated a network request and will be idle for a while.
A coroutine can be defined as a special function that can give up control to its caller without losing its state.
**So what's the difference between coroutines and generators?**
Generators are essentially iterators though they look like functions. The distinction between generators and coroutines, in general, is that:
* Generators yield back a value to the invoker whereas a coroutine yields control to another coroutine and can resume execution from the point it gives up control.
* A generator can't accept arguments once started whereas a coroutine can.
* Generators are primarily used to simplify writing iterators. They are a type of coroutine and sometimes also called semicoroutines.
### Generator Based Coroutine Example
The simplest generator based coroutine we can write is as follows:
```
@asyncio.coroutine
def do_something_important():
yield from asyncio.sleep(1)
```
The coroutine sleeps for one second. Note the decorator and the use of `yield from`.
### Native Based Coroutine Example
By native it is meant that the language introduced syntax to specifically define coroutines, making them first class citizens in the language. Native coroutines can be defined using the `async/await` syntax.
The simplest native based coroutine we can write is as follows:
```
async def do_something_important():
await asyncio.sleep(1)
```
AsyncIO Design Patterns
-----------------------
AsyncIO comes with its own set of possible script designs, which we will discuss in this section.
**1. Event loops**
The event loop is a programming construct that waits for events to happen and then dispatches them to an event handler. An event can be a user clicking on a UI button or a process initiating a file download. **At the core of asynchronous programming, sits the event loop.**
**Example Code:**
```
import asyncio
import random
import time
from threading import Thread
from threading import current_thread
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[34m", # Blue
)
async def do_something_important(sleep_for):
print(colors[1] + f"Is event loop running in thread {current_thread().getName()} = {asyncio.get_event_loop().is_running()}" + colors[0])
await asyncio.sleep(sleep_for)
def launch_event_loops():
# get a new event loop
loop = asyncio.new_event_loop()
# set the event loop for the current thread
asyncio.set_event_loop(loop)
# run a coroutine on the event loop
loop.run_until_complete(do_something_important(random.randint(1, 5)))
# remember to close the loop
loop.close()
if __name__ == "__main__":
thread_1 = Thread(target=launch_event_loops)
thread_2 = Thread(target=launch_event_loops)
start_time = time.perf_counter()
thread_1.start()
thread_2.start()
print(colors[2] + f"Is event loop running in thread {current_thread().getName()} = {asyncio.get_event_loop().is_running()}" + colors[0])
thread_1.join()
thread_2.join()
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[3] + f"Event Loop Start Time: {start_time}\nEvent Loop End Time: {end_time}\nEvent Loop Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_event_loop.py`
**Output:**
[![async_event_loop](https://i.stack.imgur.com/w6FRy.png)](https://i.stack.imgur.com/w6FRy.png)
Try it out yourself and examine the output, and you'll realize that each spawned thread is running its own event loop.
**Types of event loops**
There are two types of event loops:
* *SelectorEventLoop*: SelectorEventLoop is based on the selectors module and is the default loop on all platforms.
* *ProactorEventLoop*: ProactorEventLoop is based on Windows' I/O Completion Ports and is only supported on Windows.
**2. Futures**
Future represents a computation that is either in progress or will get scheduled in the future. It is a special low-level awaitable object that represents an eventual result of an asynchronous operation. Don't confuse `threading.Future` and `asyncio.Future`.
**Example Code:**
```
import time
import asyncio
from asyncio import Future
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[34m", # Blue
)
async def bar(future):
print(colors[1] + "bar will sleep for 3 seconds" + colors[0])
await asyncio.sleep(3)
print(colors[1] + "bar resolving the future" + colors[0])
    future.done()  # note: done() only reports the future's state; set_result() below resolves it
future.set_result("future is resolved")
async def foo(future):
print(colors[2] + "foo will await the future" + colors[0])
await future
print(colors[2] + "foo finds the future resolved" + colors[0])
async def main():
future = Future()
await asyncio.gather(foo(future), bar(future))
if __name__ == "__main__":
start_time = time.perf_counter()
asyncio.run(main())
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[3] + f"Future Start Time: {start_time}\nFuture End Time: {end_time}\nFuture Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_futures.py`
**Output:**
[![async_futures](https://i.stack.imgur.com/JGsY6.png)](https://i.stack.imgur.com/JGsY6.png)
Both the coroutines are passed the same future. The `foo()` coroutine waits for the future to get resolved, while the `bar()` coroutine resolves the future after three seconds.
**3. Tasks**
Tasks are like futures, in fact, Task is a subclass of Future and can be created using the following methods:
* `asyncio.create_task()` accepts coroutines and wraps them as tasks.
* `loop.create_task()` only accepts coroutines.
* `asyncio.ensure_future()` accepts futures, coroutines and any awaitable objects.
Tasks wrap coroutines and run them in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the Future to complete. When the Future is done, the execution of the wrapped coroutine resumes.
**Example Code:**
```
import time
import asyncio
from asyncio import Future
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[34m", # Blue
)
async def bar(future):
print(colors[1] + "bar will sleep for 3 seconds" + colors[0])
await asyncio.sleep(3)
print(colors[1] + "bar resolving the future" + colors[0])
    future.done()  # note: done() only reports the future's state; set_result() below resolves it
future.set_result("future is resolved")
async def foo(future):
print(colors[2] + "foo will await the future" + colors[0])
await future
print(colors[2] + "foo finds the future resolved" + colors[0])
async def main():
future = Future()
loop = asyncio.get_event_loop()
t1 = loop.create_task(bar(future))
t2 = loop.create_task(foo(future))
    await asyncio.gather(t1, t2)  # await both tasks; a bare "await t2, t1" would only await t2
if __name__ == "__main__":
start_time = time.perf_counter()
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[3] + f"Future Start Time: {start_time}\nFuture End Time: {end_time}\nFuture Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_tasks.py`
**Output:**
[![async_tasks](https://i.stack.imgur.com/k9tO1.png)](https://i.stack.imgur.com/k9tO1.png)
**4. Chaining Coroutines:**
A key feature of coroutines is that they can be chained together. A coroutine object is awaitable, so another coroutine can `await` it. This allows you to break programs into smaller, manageable, recyclable coroutines:
**Example Code:**
```
import sys
import asyncio
import random
import time
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[36m", # Cyan
"\033[34m", # Blue
)
async def function1(n: int) -> str:
i = random.randint(0, 10)
print(colors[1] + f"function1({n}) is sleeping for {i} seconds." + colors[0])
await asyncio.sleep(i)
result = f"result{n}-1"
print(colors[1] + f"Returning function1({n}) == {result}." + colors[0])
return result
async def function2(n: int, arg: str) -> str:
i = random.randint(0, 10)
print(colors[2] + f"function2{n, arg} is sleeping for {i} seconds." + colors[0])
await asyncio.sleep(i)
result = f"result{n}-2 derived from {arg}"
print(colors[2] + f"Returning function2{n, arg} == {result}." + colors[0])
return result
async def chain(n: int) -> None:
start = time.perf_counter()
p1 = await function1(n)
p2 = await function2(n, p1)
end = time.perf_counter() - start
print(colors[3] + f"--> Chained result{n} => {p2} (took {end:0.2f} seconds)." + colors[0])
async def main(*args):
await asyncio.gather(*(chain(n) for n in args))
if __name__ == "__main__":
random.seed(444)
args = [1, 2, 3] if len(sys.argv) == 1 else map(int, sys.argv[1:])
start_time = time.perf_counter()
asyncio.run(main(*args))
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[4] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
Pay careful attention to the output, where `function1()` sleeps for a variable amount of time, and `function2()` begins working with the results as they become available:
**Execution Command:** `python async_chained.py 11 8 5`
**Output:**
[![async_chained](https://i.stack.imgur.com/hl03R.png)](https://i.stack.imgur.com/hl03R.png)
**5. Using a Queue:**
In this design, there is no chaining of any individual consumer to a producer. The consumers don't know the number of producers, or even the cumulative number of items that will be added to the queue, in advance.
It takes an individual producer or consumer a variable amount of time to put and extract items from the queue, respectively. The queue serves as a throughput that can communicate with the producers and consumers without them talking to each other directly.
**Example Code:**
```
import asyncio
import argparse
import itertools as it
import os
import random
import time
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[36m", # Cyan
"\033[34m", # Blue
)
async def generate_item(size: int = 5) -> str:
return os.urandom(size).hex()
async def random_sleep(caller=None) -> None:
i = random.randint(0, 10)
if caller:
print(colors[1] + f"{caller} sleeping for {i} seconds." + colors[0])
await asyncio.sleep(i)
async def produce(name: int, producer_queue: asyncio.Queue) -> None:
n = random.randint(0, 10)
for _ in it.repeat(None, n): # Synchronous loop for each single producer
await random_sleep(caller=f"Producer {name}")
i = await generate_item()
t = time.perf_counter()
await producer_queue.put((i, t))
print(colors[2] + f"Producer {name} added <{i}> to queue." + colors[0])
async def consume(name: int, consumer_queue: asyncio.Queue) -> None:
while True:
await random_sleep(caller=f"Consumer {name}")
i, t = await consumer_queue.get()
now = time.perf_counter()
print(colors[3] + f"Consumer {name} got element <{i}>" f" in {now - t:0.5f} seconds." + colors[0])
consumer_queue.task_done()
async def main(no_producer: int, no_consumer: int):
q = asyncio.Queue()
producers = [asyncio.create_task(produce(n, q)) for n in range(no_producer)]
consumers = [asyncio.create_task(consume(n, q)) for n in range(no_consumer)]
await asyncio.gather(*producers)
await q.join() # Implicitly awaits consumers, too
for consumer in consumers:
consumer.cancel()
if __name__ == "__main__":
random.seed(444)
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--no_producer", type=int, default=10)
parser.add_argument("-c", "--no_consumer", type=int, default=15)
ns = parser.parse_args()
start_time = time.perf_counter()
asyncio.run(main(**ns.__dict__))
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[4] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_queue.py -p 2 -c 4`
**Output:**
[![async_queue](https://i.stack.imgur.com/8iPho.png)](https://i.stack.imgur.com/8iPho.png)
Lastly, let's have an example of how asyncio cuts down on wait time: given a coroutine `generate_random_int()` that keeps producing random integers in the range [0, 10] until one of them exceeds a threshold, you want to let multiple calls of this coroutine run without waiting for each other to complete in succession.
**Example Code:**
```
import time
import asyncio
import random
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[36m", # Cyan
"\033[35m", # Magenta
"\033[34m", # Blue
)
async def generate_random_int(indx: int, threshold: int = 5) -> int:
print(colors[indx + 1] + f"Initiated generate_random_int({indx}).")
i = random.randint(0, 10)
while i <= threshold:
print(colors[indx + 1] + f"generate_random_int({indx}) == {i} too low; retrying.")
await asyncio.sleep(indx + 1)
i = random.randint(0, 10)
print(colors[indx + 1] + f"---> Finished: generate_random_int({indx}) == {i}" + colors[0])
return i
async def main():
res = await asyncio.gather(*(generate_random_int(i, 10 - i - 1) for i in range(3)))
return res
if __name__ == "__main__":
random.seed(444)
start_time = time.perf_counter()
r1, r2, r3 = asyncio.run(main())
print(colors[4] + f"\nRandom INT 1: {r1}, Random INT 2: {r2}, Random INT 3: {r3}\n" + colors[0])
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[5] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_random.py`
**Output:**
[![async_random](https://i.stack.imgur.com/k8wAT.png)](https://i.stack.imgur.com/k8wAT.png)
>
> **Note:** If youβre writing any code yourself, prefer native coroutines
> for the sake of being explicit rather than implicit. Generator based
> coroutines will be removed in Python 3.10.
>
>
>
**GitHub Repo: <https://github.com/tssovi/asynchronous-in-python>** | It all boils down to the two main challenges that asyncio is addressing:
* How to perform multiple I/O in a single thread?
* How to implement cooperative multitasking?
The answer to the first point has been around for a long while and is called a [select loop](https://en.wikipedia.org/wiki/Asynchronous_I/O#Select(/poll)_loops). In Python, it is implemented in the [selectors module](https://docs.python.org/3/library/selectors.html).
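To give an idea of what such a loop looks like, here is a bare-bones echo server built directly on the `selectors` module (my sketch, closely following the pattern from the module's documentation; the port number is arbitrary):

```
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)   # echo the bytes back
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("localhost", 12345))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():  # blocks until at least one socket is ready
        key.data(key.fileobj)    # call the callback registered for that socket
```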
The second question is related to the concept of [coroutine](https://en.wikipedia.org/wiki/Coroutine), i.e. functions that can stop their execution and be restored later on. In Python, coroutines are implemented using [generators](https://wiki.python.org/moin/Generators) and the [yield from](https://www.python.org/dev/peps/pep-0380/) statement. That's what is hiding behind the [async/await syntax](https://www.python.org/dev/peps/pep-0492/).
More resources in this [answer](https://stackoverflow.com/a/41208685/2846140).
---
**EDIT:** Addressing your comment about goroutines:
The closest equivalent to a goroutine in asyncio is actually not a coroutine but a task (see the difference in the [documentation](https://docs.python.org/3/library/asyncio-task.html)). In python, a coroutine (or a generator) knows nothing about the concepts of event loop or I/O. It simply is a function that can stop its execution using `yield` while keeping its current state, so it can be restored later on. The `yield from` syntax allows for chaining them in a transparent way.
Now, within an asyncio task, the coroutine at the very bottom of the chain always ends up yielding a [future](https://docs.python.org/3.4/library/asyncio-task.html#asyncio.Future). This future then bubbles up to the event loop, and gets integrated into the inner machinery. When the future is set to done by some other inner callback, the event loop can restore the task by sending the future back into the coroutine chain.
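A minimal sketch (mine, not part of the original answer) of that mechanism: a coroutine suspends on a future, and the loop resumes it once an inner callback sets the result:

```
import asyncio

async def waiter(fut):
    print(await fut)  # suspends here until the loop marks fut as done

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    loop.call_later(1, fut.set_result, "resumed by an inner callback")
    await waiter(fut)

asyncio.run(main())
```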
---
**EDIT:** Addressing some of the questions in your post:
>
> How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter?
>
>
>
No, nothing happens in a thread. I/O is always managed by the event loop, mostly through file descriptors. However, the registration of those file descriptors is usually hidden by high-level coroutines, which do the dirty work for you.
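The low-level hook behind this is the loop's file-descriptor API; a small sketch (my illustration, modeled on the `loop.add_reader()` example in the asyncio documentation):

```
import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()
    rsock, wsock = socket.socketpair()
    fut = loop.create_future()
    # resume when rsock becomes readable - this is what high-level coroutines hide
    loop.add_reader(rsock, lambda: fut.set_result(rsock.recv(100)))
    wsock.send(b"hello through the loop")
    print(await fut)
    loop.remove_reader(rsock)
    rsock.close()
    wsock.close()

asyncio.run(main())
```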
>
> What exactly is meant by I/O? If my python procedure called C open() procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening?
>
>
>
An I/O is any blocking call. In asyncio, all the I/O operations should go through the event loop, because as you said, the event loop has no way to be aware that a blocking call is being performed in some synchronous code. That means you're not supposed to use a synchronous `open` within the context of a coroutine. Instead, use a dedicated library such as [aiofiles](https://github.com/Tinche/aiofiles) which provides an asynchronous version of `open`. |
49,005,651 | This question is motivated by another question of mine: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
    def __call__(self):
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
        while not all(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called C `open()` procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | Talking about `async/await` and `asyncio` is not talking about the same thing. The first is a fundamental, low-level construct (coroutines) while the latter is a library using these constructs. Conversely, there is no single ultimate answer.
The following is a general description of how `async/await` and `asyncio`-like libraries work. That is, there may be other tricks on top (there are...) but they are inconsequential unless you build them yourself. The difference should be negligible unless you already know enough to not have to ask such a question.
1. Coroutines versus subroutines in a nut shell
===============================================
Just like **subroutines** (functions, procedures, ...), **coroutines** (generators, ...) are an abstraction of call stack and instruction pointer: there is a stack of executing code pieces, and each is at a specific instruction.
The distinction of `def` versus `async def` is merely for clarity. The actual difference is `return` versus `yield`. Building on this, `await` or `yield from` lift the difference from individual calls to entire stacks.
1.1. Subroutines
----------------
A subroutine represents a new stack level to hold local variables, and a single traversal of its instructions to reach an end. Consider a subroutine like this:
```
def subfoo(bar):
qux = 3
return qux * bar
```
When you run it, that means
1. allocate stack space for `bar` and `qux`
2. recursively execute the first statement and jump to the next statement
3. once at a `return`, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)
Notably, 4. means that a subroutine always starts in the same state. Everything exclusive to the function itself is lost upon completion. A function cannot be resumed, even if there are instructions after `return`.
```
root -\
: \- subfoo --\
:/--<---return --/
|
V
```
1.2. Coroutines as persistent subroutines
-----------------------------------------
A coroutine is like a subroutine, but can exit *without* destroying its state. Consider a coroutine like this:
```
def cofoo(bar):
qux = yield bar # yield marks a break point
return qux
```
When you run it, that means
1. allocate stack space for `bar` and `qux`
2. recursively execute the first statement and jump to the next statement
1. once at a `yield`, push its value to the calling stack *but store the stack and instruction pointer*
2. once calling into `yield`, restore stack and instruction pointer and push arguments to `qux`
3. once at a `return`, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)
Note the addition of 2.1 and 2.2 - a coroutine can be suspended and resumed at predefined points. This is similar to how a subroutine is suspended during calling another subroutine. The difference is that the active coroutine is not strictly bound to its calling stack. Instead, a suspended coroutine is part of a separate, isolated stack.
```
root -\
: \- cofoo --\
:/--<+--yield --/
| :
V :
```
This means that suspended coroutines can be freely stored or moved between stacks. Any call stack that has access to a coroutine can decide to resume it.
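A short sketch (my own) of this point - the suspended generator is an ordinary object, so a completely different call stack can resume it:

```
def cofoo(bar):
    qux = yield bar
    return qux

def make():
    g = cofoo(21)
    print(next(g))  # runs cofoo up to the yield: prints 21
    return g        # hand over the *suspended* coroutine

def finish(g):
    try:
        g.send(42)  # resume it from a different call stack
    except StopIteration as stop:
        print(stop.value)  # 42 - the coroutine's return value

finish(make())
```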
1.3. Traversing the call stack
------------------------------
So far, our coroutine only goes down the call stack with `yield`. A subroutine can go down *and up* the call stack with `return` and `()`. For completeness, coroutines also need a mechanism to go up the call stack. Consider a coroutine like this:
```
def wrap():
yield 'before'
    yield from cofoo('world')  # cofoo takes an argument (see its definition above)
yield 'after'
```
When you run it, that means it still allocates the stack and instruction pointer like a subroutine. When it suspends, that still is like storing a subroutine.
However, `yield from` does *both*. It suspends stack and instruction pointer of `wrap` *and* runs `cofoo`. Note that `wrap` stays suspended until `cofoo` finishes completely. Whenever `cofoo` suspends or something is sent, `cofoo` is directly connected to the calling stack.
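A compact sketch (mine) of this transparency - a value sent at the root travels through `wrap` straight into `cofoo`:

```
def cofoo(bar):
    qux = yield bar
    return qux

def wrap():
    yield 'before'
    result = yield from cofoo('inner')  # suspends wrap until cofoo finishes
    yield 'after: %s' % result

w = wrap()
print(next(w))     # 'before'
print(next(w))     # 'inner' - yielded by cofoo *through* wrap
print(w.send(42))  # resumes cofoo; its return value reaches wrap: 'after: 42'
```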
1.4. Coroutines all the way down
--------------------------------
As established, `yield from` allows to connect two scopes across another intermediate one. When applied recursively, that means the *top* of the stack can be connected to the *bottom* of the stack.
```
root -\
: \-> coro_a -yield-from-> coro_b --\
:/ <-+------------------------yield ---/
| :
:\ --+-- coro_a.send----------yield ---\
: coro_b <-/
```
Note that `root` and `coro_b` do not know about each other. This makes coroutines much cleaner than callbacks: coroutines are still built on a 1:1 relation, like subroutines. Coroutines suspend and resume their entire existing execution stack up until a regular call point.
Notably, `root` could have an arbitrary number of coroutines to resume. Yet, it can never resume more than one at the same time. Coroutines of the same root are concurrent but not parallel!
1.5. Python's `async` and `await`
---------------------------------
The explanation has so far explicitly used the `yield` and `yield from` vocabulary of generators - the underlying functionality is the same. The new Python 3.5 syntax `async` and `await` exists mainly for clarity.
```
def foo(): # subroutine?
return None
def foo(): # coroutine?
yield from foofoo() # generator? coroutine?
async def foo(): # coroutine!
await foofoo() # coroutine!
return None
```
The `async for` and `async with` statements are needed because you would break the `yield from/await` chain with the bare `for` and `with` statements.
2. Anatomy of a simple event loop
=================================
By itself, a coroutine has no concept of yielding control to *another* coroutine. It can only yield control to the caller at the bottom of a coroutine stack. This caller can then switch to another coroutine and run it.
This root node of several coroutines is commonly an **event loop**: on suspension, a coroutine yields an **event** on which it wants to resume. In turn, the event loop is capable of efficiently waiting for these events to occur. This allows it to decide which coroutine to run next, or how long to wait before resuming.
Such a design implies that there is a set of pre-defined events that the loop understands. Several coroutines `await` each other, until finally an event is `await`ed. This event can communicate *directly* with the event loop by `yield`ing control.
```
loop -\
: \-> coroutine --await--> event --\
:/ <-+----------------------- yield --/
| :
| : # loop waits for event to happen
| :
:\ --+-- send(reply) -------- yield --\
: coroutine <--yield-- event <-/
```
The key is that coroutine suspension allows the event loop and events to directly communicate. The intermediate coroutine stack does not require *any* knowledge about which loop is running it, nor how events work.
2.1.1. Events in time
---------------------
The simplest event to handle is reaching a point in time. This is a fundamental block of threaded code as well: a thread repeatedly `sleep`s until a condition is true.
However, a regular `sleep` blocks execution by itself - we want other coroutines to not be blocked. Instead, we want to tell the event loop when it should resume the current coroutine stack.
2.1.2. Defining an Event
------------------------
An event is simply a value we can identify - be it via an enum, a type or some other identity. We can define this with a simple class that stores our target time. In addition to *storing* the event information, we can allow instances of the class to be `await`ed directly.
```
class AsyncSleep:
"""Event to sleep until a point in time"""
def __init__(self, until: float):
self.until = until
# used whenever someone ``await``s an instance of this Event
def __await__(self):
# yield this Event to the loop
yield self
def __repr__(self):
return '%s(until=%.1f)' % (self.__class__.__name__, self.until)
```
This class only *stores* the event - it does not say how to actually handle it.
The only special feature is `__await__` - it is what the `await` keyword looks for. Practically, it is an iterator, but one that is not exposed to the regular iteration machinery.
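We can observe this by hand - the event is not iterable, but calling `__await__` directly gives us a generator that yields the event itself. A small sketch, assuming `time` is imported:
```
import time

sleep = AsyncSleep(time.time() + 1)
# iter(sleep) would raise TypeError - there is no __iter__
stack = sleep.__await__()
print(next(stack))  # AsyncSleep(until=...)
```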
2.2.1. Awaiting an event
------------------------
Now that we have an event, how do coroutines react to it? We should be able to express the equivalent of `sleep` by `await`ing our event. To better see what is going on, we wait twice for half the time:
```
import time
async def asleep(duration: float):
"""await that ``duration`` seconds pass"""
await AsyncSleep(time.time() + duration / 2)
await AsyncSleep(time.time() + duration / 2)
```
We can directly instantiate and run this coroutine. Similar to a generator, using `coroutine.send` runs the coroutine until it `yield`s a result.
```
coroutine = asleep(100)
while True:
print(coroutine.send(None))
time.sleep(0.1)
```
This gives us two `AsyncSleep` events and then a `StopIteration` when the coroutine is done. Notice that the only delay is from `time.sleep` in the loop! Each `AsyncSleep` only stores an offset from the current time.
2.2.2. Event + Sleep
--------------------
At this point, we have *two* separate mechanisms at our disposal:
* `AsyncSleep` Events that can be yielded from inside a coroutine
* `time.sleep` that can wait without impacting coroutines
Notably, these two are orthogonal: neither one affects or triggers the other. As a result, we can come up with our own strategy to `sleep` to meet the delay of an `AsyncSleep`.
2.3. A naive event loop
-----------------------
If we have *several* coroutines, each can tell us when it wants to be woken up. We can then wait until the first of them wants to be resumed, then for the one after, and so on. Notably, at each point we only care about which one is *next*.
This makes for a straightforward scheduling:
1. sort coroutines by their desired wake up time
2. pick the first that wants to wake up
3. wait until this point in time
4. run this coroutine
5. repeat from 1.
A trivial implementation does not need any advanced concepts. A `list` allows us to sort coroutines by wake-up time. Waiting is a regular `time.sleep`. Running coroutines works just like before with `coroutine.send`.
```
def run(*coroutines):
"""Cooperatively run all ``coroutines`` until completion"""
# store wake-up-time and coroutines
waiting = [(0, coroutine) for coroutine in coroutines]
while waiting:
# 2. pick the first coroutine that wants to wake up
until, coroutine = waiting.pop(0)
# 3. wait until this point in time
time.sleep(max(0.0, until - time.time()))
# 4. run this coroutine
try:
command = coroutine.send(None)
except StopIteration:
continue
# 1. sort coroutines by their desired suspension
if isinstance(command, AsyncSleep):
waiting.append((command.until, coroutine))
waiting.sort(key=lambda item: item[0])
```
Of course, this has ample room for improvement. We can use a heap for the wait queue or a dispatch table for events. We could also fetch return values from the `StopIteration` and assign them to the coroutine. However, the fundamental principle remains the same.
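For example, capturing return values only changes the `StopIteration` handling - a sketch, where the `results` mapping is purely illustrative:
```
# old
try:
    command = coroutine.send(None)
except StopIteration:
    continue
# new - ``results`` is a hypothetical dict of coroutine -> return value
try:
    command = coroutine.send(None)
except StopIteration as stop:
    results[coroutine] = stop.value
    continue
```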
2.4. Cooperative Waiting
------------------------
The `AsyncSleep` event and `run` event loop are a fully working implementation of timed events.
```
async def sleepy(identifier: str = "coroutine", count=5):
for i in range(count):
print(identifier, 'step', i + 1, 'at %.2f' % time.time())
await asleep(0.1)
run(*(sleepy("coroutine %d" % j) for j in range(5)))
```
This cooperatively switches between each of the five coroutines, suspending each for 0.1 seconds. Even though the event loop is synchronous, it still executes the work in 0.5 seconds instead of 2.5 seconds. Each coroutine holds state and acts independently.
3. I/O event loop
=================
An event loop that supports `sleep` is suitable for *polling*. However, waiting for I/O on a file handle can be done more efficiently: the operating system implements I/O and thus knows which handles are ready. Ideally, an event loop should support an explicit "ready for I/O" event.
3.1. The `select` call
----------------------
Python already has an interface to query the OS for ready I/O handles: `select.select`. When called with handles to read or write, it returns the handles *ready* to read or write:
```
readable, writable, _ = select.select(rlist, wlist, xlist, timeout)
```
For example, we can `open` a file for writing and wait for it to be ready:
```
write_target = open('/tmp/foo', 'w')
readable, writable, _ = select.select([], [write_target], [])
```
Once select returns, `writable` contains our open file.
3.2. Basic I/O event
--------------------
Similar to the `AsyncSleep` request, we need to define an event for I/O. With the underlying `select` logic, the event must refer to a readable object - say an `open` file. In addition, we store how much data to read.
```
class AsyncRead:
def __init__(self, file, amount=1):
self.file = file
self.amount = amount
self._buffer = b'' if 'b' in file.mode else ''
def __await__(self):
while len(self._buffer) < self.amount:
yield self
# we only get here if ``read`` should not block
self._buffer += self.file.read(1)
return self._buffer
def __repr__(self):
return '%s(file=%s, amount=%d, progress=%d)' % (
self.__class__.__name__, self.file, self.amount, len(self._buffer)
)
```
As with `AsyncSleep` we mostly just store the data required for the underlying system call. This time, `__await__` is capable of being resumed multiple times - until our desired `amount` has been read. In addition, we `return` the I/O result instead of just resuming.
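We can drive such an event by hand, just like the event loop will - a minimal sketch, assuming a Unix system with `/dev/urandom`:
```
with open('/dev/urandom', 'rb') as file:
    stack = AsyncRead(file, amount=4).__await__()
    try:
        while True:
            next(stack)  # each resume reads one more byte
    except StopIteration as stop:
        print(stop.value)  # 4 bytes of random data
```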
3.3. Augmenting an event loop with read I/O
-------------------------------------------
The basis for our event loop is still the `run` defined previously. First, we need to track the read requests. This is no longer a sorted schedule; we only map read requests to coroutines.
```
# new
waiting_read = {} # type: Dict[file, coroutine]
```
Since `select.select` takes a timeout parameter, we can use it in place of `time.sleep`.
```
# old
time.sleep(max(0.0, until - time.time()))
# new
readable, _, _ = select.select(list(waiting_read), [], [])
```
This gives us all readable files - if there are any, we run the corresponding coroutine. If there are none, we have waited long enough for our current coroutine to run.
```
# new - reschedule waiting coroutine, run readable coroutine
if readable:
waiting.append((until, coroutine))
waiting.sort()
coroutine = waiting_read[readable[0]]
```
Finally, we have to actually listen for read requests.
```
# new
if isinstance(command, AsyncSleep):
...
elif isinstance(command, AsyncRead):
...
```
3.4. Putting it together
------------------------
The above was a bit of a simplification. We need to do some switching to not starve sleeping coroutines if we can always read. We need to handle having nothing to read or nothing to wait for. However, the end result still fits into 30 LOC.
```
def run(*coroutines):
"""Cooperatively run all ``coroutines`` until completion"""
waiting_read = {} # type: Dict[file, coroutine]
waiting = [(0, coroutine) for coroutine in coroutines]
while waiting or waiting_read:
# 2. wait until the next coroutine may run or read ...
try:
until, coroutine = waiting.pop(0)
except IndexError:
until, coroutine = float('inf'), None
readable, _, _ = select.select(list(waiting_read), [], [])
else:
readable, _, _ = select.select(list(waiting_read), [], [], max(0.0, until - time.time()))
# ... and select the appropriate one
if readable and time.time() < until:
if until and coroutine:
waiting.append((until, coroutine))
waiting.sort()
coroutine = waiting_read.pop(readable[0])
# 3. run this coroutine
try:
command = coroutine.send(None)
except StopIteration:
continue
# 1. sort coroutines by their desired suspension ...
if isinstance(command, AsyncSleep):
waiting.append((command.until, coroutine))
waiting.sort(key=lambda item: item[0])
# ... or register reads
elif isinstance(command, AsyncRead):
waiting_read[command.file] = coroutine
```
3.5. Cooperative I/O
--------------------
The `AsyncSleep`, `AsyncRead` and `run` implementations are now fully functional to sleep and/or read.
Same as for `sleepy`, we can define a helper to test reading:
```
async def ready(path, amount=1024*32):
print('read', path, 'at', '%d' % time.time())
with open(path, 'rb') as file:
result = await AsyncRead(file, amount)
print('done', path, 'at', '%d' % time.time())
print('got', len(result), 'B')
run(sleepy('background', 5), ready('/dev/urandom'))
```
Running this, we can see that our I/O is interleaved with the waiting task:
```
id background round 1
read /dev/urandom at 1530721148
id background round 2
id background round 3
id background round 4
id background round 5
done /dev/urandom at 1530721148
got 1024 B
```
4. Non-Blocking I/O
-------------------
While I/O on files gets the concept across, it is not really suitable for a library like `asyncio`: the `select` call [always returns for files](https://stackoverflow.com/questions/25776812/using-linux-c-select-system-call-to-monitor-files), and both `open` and `read` may [block indefinitely](http://man7.org/linux/man-pages/man2/open.2.html#DESCRIPTION). This blocks all coroutines of an event loop - which is bad. Libraries like `aiofiles` use threads and synchronization to fake non-blocking I/O and events on files.
However, sockets do allow for non-blocking I/O - and their inherent latency makes it much more critical. When used in an event loop, waiting for data and retrying can be wrapped without blocking anything.
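The key primitive is that a non-blocking socket raises instead of waiting - a small sketch using a local socket pair:
```
import socket

a, b = socket.socketpair()
b.setblocking(False)
try:
    b.recv(1024)  # nothing has been sent yet
except BlockingIOError:
    print('would block - a chance to run other coroutines')
a.sendall(b'ping')
print(b.recv(1024))  # b'ping' - data is ready now
```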
4.1. Non-Blocking I/O event
---------------------------
Similar to our `AsyncRead`, we can define a suspend-and-read event for sockets. Instead of taking a file, we take a socket - which must be non-blocking. Also, our `__await__` uses `socket.recv` instead of `file.read`.
```
class AsyncRecv:
def __init__(self, connection, amount=1, read_buffer=1024):
assert not connection.getblocking(), 'connection must be non-blocking for async recv'
self.connection = connection
self.amount = amount
self.read_buffer = read_buffer
self._buffer = b''
def __await__(self):
while len(self._buffer) < self.amount:
try:
self._buffer += self.connection.recv(self.read_buffer)
except BlockingIOError:
yield self
return self._buffer
def __repr__(self):
return '%s(file=%s, amount=%d, progress=%d)' % (
self.__class__.__name__, self.connection, self.amount, len(self._buffer)
)
```
In contrast to `AsyncRead`, `__await__` performs truly non-blocking I/O. When data is available, it *always* reads. When no data is available, it *always* suspends. That means the event loop is only blocked while we perform useful work.
4.2. Un-Blocking the event loop
-------------------------------
As far as the event loop is concerned, nothing changes much. The event to listen for is still the same as for files - a file descriptor marked ready by `select`.
```
# old
elif isinstance(command, AsyncRead):
waiting_read[command.file] = coroutine
# new
elif isinstance(command, AsyncRead):
waiting_read[command.file] = coroutine
elif isinstance(command, AsyncRecv):
waiting_read[command.connection] = coroutine
```
At this point, it should be obvious that `AsyncRead` and `AsyncRecv` are the same kind of event. We could easily refactor them to be *one* event with an exchangeable I/O component. In effect, the event loop, coroutines and events [cleanly separate](http://sans-io.readthedocs.io) a scheduler, arbitrary intermediate code and the actual I/O.
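A possible shape for such a unified event - the names here are illustrative, not part of the code above:
```
class AsyncReadSome:
    """Hypothetical unified event: a selectable source plus a read_once callable"""
    def __init__(self, source, read_once, amount=1):
        self.source, self.read_once, self.amount = source, read_once, amount
        self._buffer = b''
    def __await__(self):
        while len(self._buffer) < self.amount:
            try:
                self._buffer += self.read_once()
            except BlockingIOError:
                yield self
        return self._buffer

# AsyncReadSome(file, lambda: file.read(1))     # file flavour
# AsyncReadSome(conn, lambda: conn.recv(1024))  # socket flavour
```
The event loop would then register `command.source` with `select`, regardless of the I/O flavour.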
4.3. The ugly side of non-blocking I/O
--------------------------------------
In principle, what you should do at this point is replicate the logic of `read` as a `recv` for `AsyncRecv`. However, this is much uglier now - instead of blocking inside the kernel, functions return early and yield control back to you, so you have to handle partial progress and retries yourself. For example, opening a connection is considerably more verbose than opening a file:
```
# file
file = open(path, 'rb')
# non-blocking socket
connection = socket.socket()
connection.setblocking(False)
# open without blocking - retry on failure
try:
connection.connect((url, port))
except BlockingIOError:
pass
```
Long story short, what remains is a few dozen lines of exception handling. The events and event loop already work at this point.
```
id background round 1
read localhost:25000 at 1530783569
read /dev/urandom at 1530783569
done localhost:25000 at 1530783569 got 32768 B
id background round 2
id background round 3
id background round 4
done /dev/urandom at 1530783569 got 4096 B
id background round 5
```
Addendum
========
[Example code at github](https://gist.github.com/maxfischer2781/27d68e69c017d7c2605074a59ada04e5) | If you picture an airport control tower, with many planes waiting to land on the same runway. The control tower can be seen as the event loop and runway as the thread. Each plane is a separate function waiting to execute. In reality only one plane can land on the runway at a time. What asyncio basically does it allows many planes to land simultaneously on the same runway by using the event loop to suspend functions and allow other functions to run when you use the await syntax it basically means that plane(function can be suspended and allow other functions to process |
49,005,651 | This question is motivated by another question of mine: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
    def __call__(self):
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
        while not all(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called C `open()` procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | How does asyncio work?
======================
Before answering this question we need to understand a few base terms, skip these if you already know any of them.
[Generators](https://wiki.python.org/moin/Generators)
-----------------------------------------------------
Generators are objects that allow us to suspend the execution of a Python function. User-defined generators are implemented using the keyword [**`yield`**](https://docs.python.org/3/reference/expressions.html#yield-expressions). By creating a normal function containing the `yield` keyword, we turn that function into a generator:
```
>>> def test():
... yield 1
... yield 2
...
>>> gen = test()
>>> next(gen)
1
>>> next(gen)
2
>>> next(gen)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
```
As you can see, calling [`next()`](https://docs.python.org/3/library/functions.html#next) on the generator causes the interpreter to load `test`'s frame and return the `yield`ed value. Calling `next()` again causes the frame to be loaded into the interpreter stack once more, continuing to `yield` another value.
By the third time `next()` is called, our generator is finished, and [`StopIteration`](https://docs.python.org/3/library/exceptions.html#StopIteration) is thrown.
### Communicating with a generator
A lesser-known feature of generators is the fact that you can communicate with them using two methods: [`send()`](https://docs.python.org/3/reference/expressions.html#generator.send) and [`throw()`](https://docs.python.org/3/reference/expressions.html#generator.throw).
```
>>> def test():
... val = yield 1
... print(val)
... yield 2
... yield 3
...
>>> gen = test()
>>> next(gen)
1
>>> gen.send("abc")
abc
2
>>> gen.throw(Exception())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in test
Exception
```
Upon calling `gen.send()`, the value is passed as a return value from the `yield` keyword.
`gen.throw()` on the other hand, allows throwing Exceptions inside generators, with the exception raised at the same spot `yield` was called.
### Returning values from generators
Returning a value from a generator results in the value being put inside the `StopIteration` exception. We can later recover the value from the exception and use it for our needs.
```
>>> def test():
... yield 1
... return "abc"
...
>>> gen = test()
>>> next(gen)
1
>>> try:
... next(gen)
... except StopIteration as exc:
... print(exc.value)
...
abc
```
Behold, a new keyword: `yield from`
-----------------------------------
Python 3.3 came with the addition of a new keyword: [`yield from`](https://docs.python.org/3/reference/expressions.html#yield-expressions). What that keyword allows us to do is pass on any `next()`, `send()` and `throw()` into an inner-most nested generator. If the inner generator returns a value, it is also the return value of `yield from`:
```
>>> def inner():
... inner_result = yield 2
... print('inner', inner_result)
... return 3
...
>>> def outer():
... yield 1
... val = yield from inner()
... print('outer', val)
... yield 4
...
>>> gen = outer()
>>> next(gen)
1
>>> next(gen) # Goes inside inner() automatically
2
>>> gen.send("abc")
inner abc
outer 3
4
```
I've written [an article](https://towardsdatascience.com/cpython-internals-how-do-generators-work-ba1c4405b4bc) to further elaborate on this topic.
Putting it all together
-----------------------
Upon introducing the new keyword `yield from` in Python 3.3, we were able to create generators inside generators that, just like a tunnel, pass data back and forth between the inner-most and the outer-most generators. This has spawned a new meaning for generators - *coroutines*.
**Coroutines** are functions that can be stopped and resumed while being run. In Python, they are defined using the **[`async def`](https://docs.python.org/3/reference/compound_stmts.html#coroutine-function-definition)** keyword. Much like generators, they too use their own form of `yield from` which is **[`await`](https://docs.python.org/3/reference/expressions.html#await)**. Before `async` and `await` were introduced in Python 3.5, we created coroutines in the exact same way generators were created (with `yield from` instead of `await`).
```
async def inner():
return 1
async def outer():
await inner()
```
Just like all iterators and generators implement the `__iter__()` method, all coroutines implement `__await__()` which allows them to continue on every time `await coro` is called.
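For example, we can drive a coroutine manually through `__await__`, just like `await` does under the hood:
```
async def five():
    return 5

steps = five().__await__()
try:
    next(steps)
except StopIteration as stop:
    print(stop.value)  # 5
```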
There's a nice [sequence diagram](https://docs.python.org/3.5/_images/tulip_coro.png) inside the [Python docs](https://docs.python.org/3.5/library/asyncio-task.html#example-chain-coroutines) that you should check out.
In asyncio, apart from coroutine functions, we have 2 important objects: **tasks** and **futures**.
### [Futures](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future)
Futures are objects that have the `__await__()` method implemented, and their job is to hold a certain state and result. The state can be one of the following:
1. PENDING - future does not have any result or exception set.
2. CANCELLED - future was cancelled using `fut.cancel()`
3. FINISHED - future was finished, either by a result set using [`fut.set_result()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.set_result) or by an exception set using [`fut.set_exception()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.set_exception)
The result, just as you might have guessed, can be either a Python object that will be returned, or an exception that may be raised.
Another **important** feature of `future` objects, is that they contain a method called **[`add_done_callback()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.add_done_callback)**. This method allows functions to be called as soon as the task is done - whether it raised an exception or finished.
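A small demonstration - assuming Python 3.7+, the callback fires once the event loop gets a chance to run it:
```
import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    fut.add_done_callback(lambda f: print('callback saw:', f.result()))
    fut.set_result(42)
    await asyncio.sleep(0)  # let the loop run the scheduled callback
    print('awaited:', await fut)

asyncio.run(main())
```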
### [Tasks](https://docs.python.org/3/library/asyncio-task.html#task)
Task objects are special futures, which wrap around coroutines, and communicate with the inner-most and outer-most coroutines. Every time a coroutine `await`s a future, the future is passed all the way back to the task (just like in `yield from`), and the task receives it.
Next, the task binds itself to the future by calling `add_done_callback()` on it. From now on, if the future is ever done - by being cancelled, passed an exception or passed a Python object as a result - the task's callback will be called, and the task will resume.
Asyncio
=======
The final burning question we must answer is - how is the IO implemented?
Deep inside asyncio, we have an event loop. An event loop of tasks. The event loop's job is to call tasks every time they are ready and coordinate all that effort into one single working machine.
The IO part of the event loop is built upon a single crucial function called **[`select`](https://docs.python.org/3/library/select.html#module-select)**. Select is a blocking function, implemented by the operating system underneath, that allows waiting on sockets for incoming or outgoing data. Upon receiving data it wakes up, and returns the sockets which received data, or the sockets which are ready for writing.
When you try to receive or send data over a socket through asyncio, what actually happens below is that the socket is first checked if it has any data that can be immediately read or sent. If its `.send()` buffer is full, or the `.recv()` buffer is empty, the socket is registered to the `select` function (by simply adding it to one of the lists, `rlist` for `recv` and `wlist` for `send`) and the appropriate function `await`s a newly created `future` object, tied to that socket.
When all available tasks are waiting for futures, the event loop calls `select` and waits. When one of the sockets has incoming data, or its `send` buffer has drained, asyncio checks for the future object tied to that socket, and sets it to done.
Now all the magic happens. The future is set to done, the task that added itself before with `add_done_callback()` comes back to life, and calls `.send()` on the coroutine, which resumes the inner-most coroutine (because of the `await` chain) - and you read the newly received data from a nearby buffer it was spilled into.
**Method chain again, in case of `recv()`:**
1. `select.select` waits.
2. A ready socket, with data is returned.
3. Data from the socket is moved into a buffer.
4. `future.set_result()` is called.
5. Task that added itself with `add_done_callback()` is now woken up.
6. Task calls `.send()` on the coroutine which goes all the way into the inner-most coroutine and wakes it up.
7. Data is being read from the buffer and returned to our humble user.
In summary, asyncio uses generator capabilities, that allow pausing and resuming functions. It uses `yield from` capabilities that allow passing data back and forth from the inner-most generator to the outer-most. It uses all of those in order to halt function execution while it's waiting for IO to complete (by using the OS `select` function).
And the best of all? While one function is paused, another may run and interleave with the delicate fabric, which is asyncio. | It all boils down to the two main challenges that asyncio is addressing:
* How to perform multiple I/O in a single thread?
* How to implement cooperative multitasking?
The answer to the first point has been around for a long while and is called a [select loop](https://en.wikipedia.org/wiki/Asynchronous_I/O#Select(/poll)_loops). In python, it is implemented in the [selectors module](https://docs.python.org/3/library/selectors.html).
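A minimal taste of the selectors module - register a handle, then wait for readiness (the socket pair is just for illustration):
```
import selectors, socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(b, selectors.EVENT_READ)
a.sendall(b'ping')
for key, events in sel.select(timeout=1):
    print(key.fileobj.recv(1024))  # b'ping'
```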
The second question is related to the concept of [coroutine](https://en.wikipedia.org/wiki/Coroutine), i.e. functions that can stop their execution and be restored later on. In python, coroutines are implemented using [generators](https://wiki.python.org/moin/Generators) and the [yield from](https://www.python.org/dev/peps/pep-0380/) statement. That's what is hiding behind the [async/await syntax](https://www.python.org/dev/peps/pep-0492/).
More resources in this [answer](https://stackoverflow.com/a/41208685/2846140).
---
**EDIT:** Addressing your comment about goroutines:
The closest equivalent to a goroutine in asyncio is actually not a coroutine but a task (see the difference in the [documentation](https://docs.python.org/3/library/asyncio-task.html)). In python, a coroutine (or a generator) knows nothing about the concepts of event loop or I/O. It simply is a function that can stop its execution using `yield` while keeping its current state, so it can be restored later on. The `yield from` syntax allows for chaining them in a transparent way.
Now, within an asyncio task, the coroutine at the very bottom of the chain always ends up yielding a [future](https://docs.python.org/3.4/library/asyncio-task.html#asyncio.Future). This future then bubbles up to the event loop, and gets integrated into the inner machinery. When the future is set to done by some other inner callback, the event loop can restore the task by sending the future back into the coroutine chain.
---
**EDIT:** Addressing some of the questions in your post:
>
> How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter?
>
>
>
No, nothing happens in a thread. I/O is always managed by the event loop, mostly through file descriptors. However, the registration of those file descriptors is usually hidden by high-level coroutines, which do the dirty work for you.
>
> What exactly is meant by I/O? If my python procedure called C open() procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening?
>
>
>
An I/O is any blocking call. In asyncio, all the I/O operations should go through the event loop, because as you said, the event loop has no way to be aware that a blocking call is being performed in some synchronous code. That means you're not supposed to use a synchronous `open` within the context of a coroutine. Instead, use a dedicated library such as [aiofiles](https://github.com/Tinche/aiofiles), which provides an asynchronous version of `open`.
| It allows you to write single-threaded asynchronous code and implement concurrency in Python. Basically, `asyncio` provides an event loop for asynchronous programming. For example, if we need to make requests without blocking the main thread, we can use the `asyncio` library.
The asyncio module allows for the implementation of asynchronous programming
using a combination of the following elements:
* Event loop: The asyncio module maintains one event loop per thread.
* Coroutines: A coroutine is a generator that follows certain conventions. Its most interesting feature is that it can be suspended during execution to wait for external processing (some I/O routine) and resume from the point it stopped once the external processing is done.
* Futures: Futures represent a process that has still not finished. A future is an object that is supposed to have a result in the future and represents uncompleted tasks.
* Tasks: This is a subclass of `asyncio.Future` that encapsulates and manages coroutines. We can use the `asyncio.Task` object to encapsulate a coroutine.
The most important concept within `asyncio` is the event loop. An event loop
allows you to write asynchronous code using either callbacks or coroutines.
The keys to understanding `asyncio` are the concepts of coroutines and the event loop. **Coroutines** are stateful functions whose execution can be suspended while an I/O operation is carried out. An event loop is used to orchestrate the execution of the coroutines.
To run any coroutine function, we need to get an event loop. We can do this
with
```
loop = asyncio.get_event_loop()
```
This gives us a `BaseEventLoop` object. This has a `run_until_complete` method that takes in a coroutine and runs it until completion. Then, the coroutine returns a result. At a low level, an event loop executes the `BaseEventLoop.run_until_complete(future)` method.
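Putting those pieces together - a minimal sketch in the same `get_event_loop` style used above:
```
import asyncio

async def greet():
    await asyncio.sleep(0.1)
    return 'hello'

loop = asyncio.get_event_loop()
print(loop.run_until_complete(greet()))  # 'hello'
```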
49,005,651 | This question is motivated by my another question: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.
Asycnio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
def __call__():
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
while not every(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called C `open()` procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | How does asyncio work?
======================
Before answering this question we need to understand a few base terms, skip these if you already know any of them.
[Generators](https://wiki.python.org/moin/Generators)
-----------------------------------------------------
Generators are objects that allow us to suspend the execution of a python function. User curated generators are implemented using the keyword [**`yield`**](https://docs.python.org/3/reference/expressions.html#yield-expressions). By creating a normal function containing the `yield` keyword, we turn that function into a generator:
```
>>> def test():
... yield 1
... yield 2
...
>>> gen = test()
>>> next(gen)
1
>>> next(gen)
2
>>> next(gen)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
```
As you can see, calling [`next()`](https://docs.python.org/3/library/functions.html#next) on the generator causes the interpreter to load the test's frame, and return the `yield`ed value. Calling `next()` again, causes the frame to load again into the interpreter stack, and continues on `yield`ing another value.
By the third time `next()` is called, our generator was finished, and [`StopIteration`](https://docs.python.org/3/library/exceptions.html#StopIteration) was thrown.
### Communicating with a generator
A less-known feature of generators is the fact that you can communicate with them using two methods: [`send()`](https://docs.python.org/3/reference/expressions.html#generator.send) and [`throw()`](https://docs.python.org/3/reference/expressions.html#generator.throw).
```
>>> def test():
... val = yield 1
... print(val)
... yield 2
... yield 3
...
>>> gen = test()
>>> next(gen)
1
>>> gen.send("abc")
abc
2
>>> gen.throw(Exception())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in test
Exception
```
Upon calling `gen.send()`, the value is passed as a return value from the `yield` keyword.
`gen.throw()` on the other hand, allows throwing Exceptions inside generators, with the exception raised at the same spot `yield` was called.
### Returning values from generators
Returning a value from a generator, results in the value being put inside the `StopIteration` exception. We can later on recover the value from the exception and use it to our needs.
```
>>> def test():
... yield 1
... return "abc"
...
>>> gen = test()
>>> next(gen)
1
>>> try:
... next(gen)
... except StopIteration as exc:
... print(exc.value)
...
abc
```
Behold, a new keyword: `yield from`
-----------------------------------
Python 3.4 came with the addition of a new keyword: [`yield from`](https://docs.python.org/3/reference/expressions.html#yield-expressions). What that keyword allows us to do, is pass on any `next()`, `send()` and `throw()` into an inner-most nested generator. If the inner generator returns a value, it is also the return value of `yield from`:
```
>>> def inner():
... inner_result = yield 2
... print('inner', inner_result)
... return 3
...
>>> def outer():
... yield 1
... val = yield from inner()
... print('outer', val)
... yield 4
...
>>> gen = outer()
>>> next(gen)
1
>>> next(gen) # Goes inside inner() automatically
2
>>> gen.send("abc")
inner abc
outer 3
4
```
I've written [an article](https://towardsdatascience.com/cpython-internals-how-do-generators-work-ba1c4405b4bc) to further elaborate on this topic.
Putting it all together
-----------------------
Upon introducing the new keyword `yield from` in Python 3.4, we were now able to create generators inside generators that just like a tunnel, pass the data back and forth from the inner-most to the outer-most generators. This has spawned a new meaning for generators - *coroutines*.
**Coroutines** are functions that can be stopped and resumed while being run. In Python, they are defined using the **[`async def`](https://docs.python.org/3/reference/compound_stmts.html#coroutine-function-definition)** keyword. Much like generators, they too use their own form of `yield from` which is **[`await`](https://docs.python.org/3/reference/expressions.html#await)**. Before `async` and `await` were introduced in Python 3.5, we created coroutines in the exact same way generators were created (with `yield from` instead of `await`).
```
async def inner():
return 1
async def outer():
await inner()
```
Just like all iterators and generators implement the `__iter__()` method, all coroutines implement `__await__()` which allows them to continue on every time `await coro` is called.
There's a nice [sequence diagram](https://docs.python.org/3.5/_images/tulip_coro.png) inside the [Python docs](https://docs.python.org/3.5/library/asyncio-task.html#example-chain-coroutines) that you should check out.
In asyncio, apart from coroutine functions, we have 2 important objects: **tasks** and **futures**.
### [Futures](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future)
Futures are objects that have the `__await__()` method implemented, and their job is to hold a certain state and result. The state can be one of the following:
1. PENDING - future does not have any result or exception set.
2. CANCELLED - future was cancelled using `fut.cancel()`
3. FINISHED - future was finished, either by a result set using [`fut.set_result()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.set_result) or by an exception set using [`fut.set_exception()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.set_exception)
The result, just like you have guessed, can either be a Python object, that will be returned, or an exception which may be raised.
Another **important** feature of `future` objects, is that they contain a method called **[`add_done_callback()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.add_done_callback)**. This method allows functions to be called as soon as the task is done - whether it raised an exception or finished.
### [Tasks](https://docs.python.org/3/library/asyncio-task.html#task)
Task objects are special futures, which wrap around coroutines, and communicate with the inner-most and outer-most coroutines. Every time a coroutine `await`s a future, the future is passed all the way back to the task (just like in `yield from`), and the task receives it.
Next, the task binds itself to the future. It does so by calling `add_done_callback()` on the future. From now on, if the future will ever be done, by either being cancelled, passed an exception or passed a Python object as a result, the task's callback will be called, and it will rise back up to existence.
Asyncio
=======
The final burning question we must answer is - how is the IO implemented?
Deep inside asyncio, we have an event loop. An event loop of tasks. The event loop's job is to call tasks every time they are ready and coordinate all that effort into one single working machine.
The IO part of the event loop is built upon a single crucial function called **[`select`](https://docs.python.org/3/library/select.html#module-select)**. Select is a blocking function, implemented by the operating system underneath, that allows waiting on sockets for incoming or outgoing data. Upon receiving data it wakes up, and returns the sockets which received data, or the sockets which are ready for writing.
When you try to receive or send data over a socket through asyncio, what actually happens below is that the socket is first checked if it has any data that can be immediately read or sent. If its `.send()` buffer is full, or the `.recv()` buffer is empty, the socket is registered to the `select` function (by simply adding it to one of the lists, `rlist` for `recv` and `wlist` for `send`) and the appropriate function `await`s a newly created `future` object, tied to that socket.
When all available tasks are waiting for futures, the event loop calls `select` and waits. When one of the sockets has incoming data, or its `send` buffer has drained, asyncio checks the future object tied to that socket, and sets it to done.
Now all the magic happens. The future is set to done, the task that added itself before with `add_done_callback()` rises back to life, and calls `.send()` on the coroutine which resumes the inner-most coroutine (because of the `await` chain) and you read the newly received data from the nearby buffer it was spilled into.
**Method chain again, in case of `recv()`:**
1. `select.select` waits.
2. A ready socket, with data, is returned.
3. Data from the socket is moved into a buffer.
4. `future.set_result()` is called.
5. Task that added itself with `add_done_callback()` is now woken up.
6. Task calls `.send()` on the coroutine which goes all the way into the inner-most coroutine and wakes it up.
7. Data is being read from the buffer and returned to our humble user.
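The sketch below mimics steps 1-3 of that chain with raw `select` and plain sockets (the port and timeout are arbitrary); real asyncio wraps exactly this pattern inside its event loop:

```
import select
import socket

server = socket.socket()
server.bind(("127.0.0.1", 9007))   # arbitrary port
server.listen()
server.setblocking(False)

rlist = [server]                   # sockets registered for reading
while True:
    readable, _, _ = select.select(rlist, [], [], 5.0)  # blocks, like the loop
    if not readable:
        break                      # timed out: nothing happened
    for sock in readable:
        if sock is server:
            conn, _ = server.accept()      # new client: register it too
            conn.setblocking(False)
            rlist.append(conn)
        else:
            data = sock.recv(1024)         # step 3: data moved out of the socket
            print("received:", data)       # in asyncio: future.set_result(data)
            sock.close()
            rlist.remove(sock)
```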
In summary, asyncio uses generator capabilities, that allow pausing and resuming functions. It uses `yield from` capabilities that allow passing data back and forth from the inner-most generator to the outer-most. It uses all of those in order to halt function execution while it's waiting for IO to complete (by using the OS `select` function).
And the best part of it all? While one function is paused, another may run and interleave with the delicate fabric that is asyncio. | What is asyncio?
----------------
Asyncio stands for **asynchronous input output** and refers to a programming paradigm which achieves high concurrency using a single thread or event loop.
Asynchronous programming is a type of parallel programming in which a unit of work is allowed to run separately from the primary application [thread](https://whatis.techtarget.com/definition/thread). When the work is complete, it notifies the main thread about completion or failure of the worker thread.
Let's have a look at the image below:
[![asynchronous_flow](https://i.stack.imgur.com/1gYQT.png)](https://i.stack.imgur.com/1gYQT.png)
**Let's understand asyncio with an example:**
To understand the concept behind asyncio, let's consider a restaurant with a single waiter. Suddenly, three customers, A, B and C show up. The three of them take a varying amount of time to decide what to eat once they receive the menu from the waiter.
Let's assume A takes 5 minutes, B 10 minutes and C 1 minute to decide. If the single waiter starts with B first and takes B's order in 10 minutes, next he serves A and spends 5 minutes on noting down his order and finally spends 1 minute to know what C wants to eat.
So, in total, the waiter spends 10 + 5 + 1 = 16 minutes to take down their orders. However, notice in this sequence of events, C ends up waiting 15 minutes before the waiter gets to him, A waits 10 minutes and B waits 0 minutes.
Now consider if the waiter knew the time each customer would take to decide. He can start with C first, then go to A and finally to B. This way each customer would experience a 0 minute wait.
An **illusion** of three waiters, one dedicated to each customer, is created even though there's only one.
Lastly, the total time it takes for the waiter to take all three orders is 10 minutes, much less than the 16 minutes in the other scenario.
**Let's go through another example:**
Suppose, Chess master *Magnus Carlsen* hosts a chess exhibition in which he plays with multiple amateur players. He has two ways of conducting the exhibition: synchronously and asynchronously.
Assumptions:
* 24 opponents
* *Magnus Carlsen* makes each chess move in 5 seconds
* Opponents each take 55 seconds to make a move
* Games average 30 pair-moves (60 moves total)
**Synchronously**: Magnus Carlsen plays one game at a time, never two at the same time, until the game is complete. Each game takes *(55 + 5) \* 30 == 1800* seconds, or **30 minutes**. The entire exhibition takes *24 \* 30 == 720* minutes, or **12 hours**.
**Asynchronously**: Magnus Carlsen moves from table to table, making one move at each table. He leaves the table and lets the opponent make their next move during the wait time. One move on all 24 games takes Magnus *24 \* 5 == 120* seconds, or **2 minutes**. The entire exhibition is now cut down to *120 \* 30 == 3600* seconds, or just **1 hour**
There is only one Magnus Carlsen, who has only two hands and makes only one move at a time by himself. But playing asynchronously cuts the exhibition time down from 12 hours to one.
**Coding Example:**
Let's try to demonstrate synchronous and asynchronous execution times using a code snippet.
***Asynchronous - async\_count.py***
```
import asyncio
import time
async def count():
print("One", end=" ")
await asyncio.sleep(1)
print("Two", end=" ")
await asyncio.sleep(2)
print("Three", end=" ")
async def main():
await asyncio.gather(count(), count(), count(), count(), count())
if __name__ == "__main__":
start_time = time.perf_counter()
asyncio.run(main())
end_time = time.perf_counter()
execution_time = end_time - start_time
print(f"\nExecuting - {__file__}\nExecution Starts: {start_time}\nExecutions Ends: {end_time}\nTotals Execution Time:{execution_time:0.2f} seconds.")
```
***Asynchronous - Output*:**
```
One One One One One Two Two Two Two Two Three Three Three Three Three
Executing - async_count.py
Execution Starts: 18453.442160108
Executions Ends: 18456.444719712
Totals Execution Time:3.00 seconds.
```
***Synchronous - sync\_count.py***
```
import time
def count():
print("One", end=" ")
time.sleep(1)
print("Two", end=" ")
time.sleep(2)
print("Three", end=" ")
def main():
for _ in range(5):
count()
if __name__ == "__main__":
start_time = time.perf_counter()
main()
end_time = time.perf_counter()
execution_time = end_time - start_time
print(f"\nExecuting - {__file__}\nExecution Starts: {start_time}\nExecutions Ends: {end_time}\nTotals Execution Time:{execution_time:0.2f} seconds.")
```
***Synchronous - Output*:**
```
One Two Three One Two Three One Two Three One Two Three One Two Three
Executing - sync_count.py
Execution Starts: 18875.175965998
Executions Ends: 18890.189930292
Totals Execution Time:15.01 seconds.
```
Why use asyncio instead of multithreading in Python?
----------------------------------------------------
* It's very difficult to write code that is thread safe. With asynchronous code, you know exactly where the code will shift from one task to the next and race conditions are much harder to come by.
* Threads consume a fair amount of data since each thread needs to have its own stack. With async code, all the code shares the same stack and the stack is kept small due to continuously unwinding the stack between tasks.
* Threads are OS structures and therefore require more memory for the platform to support. There is no such problem with asynchronous tasks.
How does asyncio works?
-----------------------
### Before going deep let's recall Python Generator
**Python Generator:**
Functions containing a `yield` statement are compiled as generators. Using a yield expression in a function's body causes that function to be a generator. These functions return an object which supports the iteration protocol methods. The generator object created automatically receives a `__next__()` method. Going back to the example from the previous section, we can invoke `__next__()` directly on the generator object instead of using `next()`:
```
def asynchronous():
    yield "Educative"

if __name__ == "__main__":
    gen = asynchronous()
    value = gen.__next__()  # equivalent to next(gen)
    print(value)
```
Remember the following about generators:
* Generator functions allow you to procrastinate computing expensive values. You only compute the next value when required. This makes generators memory and compute efficient; they refrain from saving long sequences in memory or doing all expensive computations upfront.
* Generators, when suspended, retain the code location, which is the last yield statement executed, and their entire local scope. This allows them to resume execution from where they left off.
* Generator objects are nothing more than iterators.
* Remember to make a distinction between a generator function and the associated generator object which are often used interchangeably. A generator function when invoked returns a generator object and `next()` is invoked on the generator object to run the code within the generator function.
**States of a generator:**
A generator goes through the following states:
* `GEN_CREATED` when a generator object has been returned for the first time from a generator function and iteration hasn't started.
* `GEN_RUNNING` when next has been invoked on the generator object and it is being executed by the Python interpreter.
* `GEN_SUSPENDED` when a generator is suspended at a yield
* `GEN_CLOSED` when a generator has completed execution or has been closed.
[![generator_cycle](https://i.stack.imgur.com/M2mLY.png)](https://i.stack.imgur.com/M2mLY.png)
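These states can be observed directly with `inspect.getgeneratorstate()`:

```
from inspect import getgeneratorstate

def gen_func():
    yield "hello"

gen = gen_func()
print(getgeneratorstate(gen))  # GEN_CREATED
next(gen)
print(getgeneratorstate(gen))  # GEN_SUSPENDED (paused at the yield)
gen.close()
print(getgeneratorstate(gen))  # GEN_CLOSED
```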
**Methods on generator objects:**
A generator object exposes different methods that can be invoked to manipulate the generator. These are:
* `throw()`
* `send()`
* `close()`
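A small sketch exercising all three:

```
def echo():
    while True:
        try:
            received = yield          # wakes up with whatever send() passed in
            print("got:", received)
        except ValueError:
            print("handled ValueError, still alive")

g = echo()
next(g)               # prime the generator up to the first yield
g.send("hi")          # got: hi
g.throw(ValueError)   # handled ValueError, still alive
g.close()             # raises GeneratorExit inside; the generator terminates
```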
### Let's dive deeper into the details
**The rules of asyncio:**
* The syntax `async def` introduces either a **native coroutine** or an **asynchronous generator**. The expressions `async with` and `async for` are also valid.
* The keyword `await` passes function control back to the event loop. (It suspends the execution of the surrounding coroutine.) If Python encounters an `await f()` expression in the scope of `g()`, this is how `await` tells the event loop, "Suspend execution of `g()` until whatever I'm waiting on (the result of `f()`) is returned. In the meantime, go let something else run."
In code, that second bullet point looks roughly like this:
```
async def g():
# Pause here and come back to g() when f() is ready
r = await f()
return r
```
There's also a strict set of rules around when and how you can and cannot use `async`/`await`. These can be handy whether you are still picking up the syntax or already have exposure to using `async`/`await`:
* A function that you introduce with `async def` is a coroutine. It may use `await`, `return`, or `yield`, but all of these are optional. Declaring `async def noop(): pass` is valid:
+ Using `await` and/or `return` creates a coroutine function. To call a coroutine function, you must `await` it to get its results.
+ It is less common to use `yield` in an `async def` block. This creates an [asynchronous generator](https://www.python.org/dev/peps/pep-0525/), which you iterate over with `async for`. Forget about async generators for the time being and focus on getting down the syntax for coroutine functions, which use `await` and/or `return`.
+ Anything defined with `async def` may not use `yield from`, which will raise a `SyntaxError`.
* Just like it's a `SyntaxError` to use `yield` outside of a `def` function, it is a `SyntaxError` to use `await` outside of an `async def` coroutine. You can only use `await` in the body of coroutines.
Here are some terse examples meant to summarize the above few rules:
```
async def f(x):
y = await z(x) # OK - `await` and `return` allowed in coroutines
return y
async def g(x):
yield x # OK - this is an async generator
async def m(x):
yield from gen(x) # NO - SyntaxError
def m(x):
y = await z(x) # NO - SyntaxError (no `async def` here)
return y
```
### Generator Based Coroutine
Python created a distinction between Python generators and generators that were meant to be used as coroutines. These coroutines are called generator-based coroutines and require the decorator `@asyncio.coroutine` to be added to the function definition, though this isn't strictly enforced.
Generator based coroutines use `yield from` syntax instead of `yield`. A coroutine can:
* yield from another coroutine
* yield from a future
* return an expression
* raise exception
**Coroutines in Python make cooperative multitasking possible.**
Cooperative multitasking is the approach in which the running process voluntarily gives up the CPU to other processes. A process may do so when it is logically blocked, say while waiting for user input or when it has initiated a network request and will be idle for a while.
A coroutine can be defined as a special function that can give up control to its caller without losing its state.
**So what's the difference between coroutines and generators?**
Generators are essentially iterators though they look like functions. The distinction between generators and coroutines, in general, is that:
* Generators yield back a value to the invoker whereas a coroutine yields control to another coroutine and can resume execution from the point it gives up control.
* A generator can't accept arguments once started whereas a coroutine can.
* Generators are primarily used to simplify writing iterators. They are a type of coroutine and sometimes also called semicoroutines.
### Generator Based Coroutine Example
The simplest generator based coroutine we can write is as follows:
```
@asyncio.coroutine
def do_something_important():
yield from asyncio.sleep(1)
```
The coroutine sleeps for one second. Note the decorator and the use of `yield from`.
### Native Based Coroutine Example
By native it is meant that the language introduced syntax to specifically define coroutines, making them first class citizens in the language. Native coroutines can be defined using the `async/await` syntax.
The simplest native based coroutine we can write is as follows:
```
async def do_something_important():
await asyncio.sleep(1)
```
AsyncIO Design Patterns
-----------------------
AsyncIO comes with its own set of possible script designs, which we will discuss in this section.
**1. Event loops**
The event loop is a programming construct that waits for events to happen and then dispatches them to an event handler. An event can be a user clicking on a UI button or a process initiating a file download. **At the core of asynchronous programming, sits the event loop.**
**Example Code:**
```
import asyncio
import random
import time
from threading import Thread
from threading import current_thread
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[34m", # Blue
)
async def do_something_important(sleep_for):
print(colors[1] + f"Is event loop running in thread {current_thread().getName()} = {asyncio.get_event_loop().is_running()}" + colors[0])
await asyncio.sleep(sleep_for)
def launch_event_loops():
# get a new event loop
loop = asyncio.new_event_loop()
# set the event loop for the current thread
asyncio.set_event_loop(loop)
# run a coroutine on the event loop
loop.run_until_complete(do_something_important(random.randint(1, 5)))
# remember to close the loop
loop.close()
if __name__ == "__main__":
thread_1 = Thread(target=launch_event_loops)
thread_2 = Thread(target=launch_event_loops)
start_time = time.perf_counter()
thread_1.start()
thread_2.start()
print(colors[2] + f"Is event loop running in thread {current_thread().getName()} = {asyncio.get_event_loop().is_running()}" + colors[0])
thread_1.join()
thread_2.join()
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[3] + f"Event Loop Start Time: {start_time}\nEvent Loop End Time: {end_time}\nEvent Loop Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_event_loop.py`
**Output:**
[![async_event_loop](https://i.stack.imgur.com/w6FRy.png)](https://i.stack.imgur.com/w6FRy.png)
Try it out yourself and examine the output and you'll realize that each spawned thread is running its own event loop.
**Types of event loops**
There are two types of event loops:
* *SelectorEventLoop*: SelectorEventLoop is based on the selectors module and is the default loop on all platforms.
* *ProactorEventLoop*: ProactorEventLoop is based on Windows' I/O Completion Ports and is only supported on Windows.
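A minimal sketch of picking a loop type explicitly (since Python 3.8 the proactor loop is already the default on Windows, so this is purely illustrative):

```
import asyncio
import sys

if sys.platform == "win32":
    loop = asyncio.ProactorEventLoop()   # IOCP-based, Windows only
else:
    loop = asyncio.SelectorEventLoop()   # selectors-based default elsewhere
asyncio.set_event_loop(loop)
print(type(loop).__name__)
loop.close()
```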
**2. Futures**
Future represents a computation that is either in progress or will get scheduled in the future. It is a special low-level awaitable object that represents an eventual result of an asynchronous operation. Don't confuse `concurrent.futures.Future` and `asyncio.Future`.
**Example Code:**
```
import time
import asyncio
from asyncio import Future
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[34m", # Blue
)
async def bar(future):
print(colors[1] + "bar will sleep for 3 seconds" + colors[0])
await asyncio.sleep(3)
print(colors[1] + "bar resolving the future" + colors[0])
future.done()
future.set_result("future is resolved")
async def foo(future):
print(colors[2] + "foo will await the future" + colors[0])
await future
print(colors[2] + "foo finds the future resolved" + colors[0])
async def main():
future = Future()
await asyncio.gather(foo(future), bar(future))
if __name__ == "__main__":
start_time = time.perf_counter()
asyncio.run(main())
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[3] + f"Future Start Time: {start_time}\nFuture End Time: {end_time}\nFuture Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_futures.py`
**Output:**
[![async_futures](https://i.stack.imgur.com/JGsY6.png)](https://i.stack.imgur.com/JGsY6.png)
Both the coroutines are passed a future. The `foo()` coroutine awaits for the future to get resolved, while the `bar()` coroutine resolves the future after three seconds.
**3. Tasks**
Tasks are like futures, in fact, Task is a subclass of Future and can be created using the following methods:
* `asyncio.create_task()` accepts coroutines and wraps them as tasks.
* `loop.create_task()` only accepts coroutines.
* `asyncio.ensure_future()` accepts futures, coroutines and any awaitable objects.
Tasks wrap coroutines and run them in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the Future to complete. When the Future is done, the execution of the wrapped coroutine resumes.
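A minimal sketch of the three creation methods listed above:

```
import asyncio

async def work():
    await asyncio.sleep(1)
    return "done"

async def main():
    t1 = asyncio.create_task(work())             # preferred since Python 3.7
    t2 = asyncio.get_event_loop().create_task(work())
    t3 = asyncio.ensure_future(work())           # also accepts any awaitable
    print(await asyncio.gather(t1, t2, t3))      # ['done', 'done', 'done']

asyncio.run(main())
```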
**Example Code:**
```
import time
import asyncio
from asyncio import Future
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[34m", # Blue
)
async def bar(future):
print(colors[1] + "bar will sleep for 3 seconds" + colors[0])
await asyncio.sleep(3)
print(colors[1] + "bar resolving the future" + colors[0])
future.done()
future.set_result("future is resolved")
async def foo(future):
print(colors[2] + "foo will await the future" + colors[0])
await future
print(colors[2] + "foo finds the future resolved" + colors[0])
async def main():
future = Future()
loop = asyncio.get_event_loop()
t1 = loop.create_task(bar(future))
t2 = loop.create_task(foo(future))
    await asyncio.gather(t1, t2)  # await both tasks explicitly
if __name__ == "__main__":
start_time = time.perf_counter()
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[3] + f"Future Start Time: {start_time}\nFuture End Time: {end_time}\nFuture Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_tasks.py`
**Output:**
[![async_tasks](https://i.stack.imgur.com/k9tO1.png)](https://i.stack.imgur.com/k9tO1.png)
**4. Chaining Coroutines:**
A key feature of coroutines is that they can be chained together. A coroutine object is awaitable, so another coroutine can `await` it. This allows you to break programs into smaller, manageable, recyclable coroutines:
**Example Code:**
```
import sys
import asyncio
import random
import time
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[36m", # Cyan
"\033[34m", # Blue
)
async def function1(n: int) -> str:
i = random.randint(0, 10)
print(colors[1] + f"function1({n}) is sleeping for {i} seconds." + colors[0])
await asyncio.sleep(i)
result = f"result{n}-1"
print(colors[1] + f"Returning function1({n}) == {result}." + colors[0])
return result
async def function2(n: int, arg: str) -> str:
i = random.randint(0, 10)
print(colors[2] + f"function2{n, arg} is sleeping for {i} seconds." + colors[0])
await asyncio.sleep(i)
result = f"result{n}-2 derived from {arg}"
print(colors[2] + f"Returning function2{n, arg} == {result}." + colors[0])
return result
async def chain(n: int) -> None:
start = time.perf_counter()
p1 = await function1(n)
p2 = await function2(n, p1)
end = time.perf_counter() - start
print(colors[3] + f"--> Chained result{n} => {p2} (took {end:0.2f} seconds)." + colors[0])
async def main(*args):
await asyncio.gather(*(chain(n) for n in args))
if __name__ == "__main__":
random.seed(444)
args = [1, 2, 3] if len(sys.argv) == 1 else map(int, sys.argv[1:])
start_time = time.perf_counter()
asyncio.run(main(*args))
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[4] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
Pay careful attention to the output, where `function1()` sleeps for a variable amount of time, and `function2()` begins working with the results as they become available:
**Execution Command:** `python async_chained.py 11 8 5`
**Output:**
[![async_chained](https://i.stack.imgur.com/hl03R.png)](https://i.stack.imgur.com/hl03R.png)
**5. Using a Queue:**
In this design, there is no chaining of any individual consumer to a producer. The consumers don't know the number of producers, or even the cumulative number of items that will be added to the queue, in advance.
It takes an individual producer or consumer a variable amount of time to put and extract items from the queue, respectively. The queue serves as a throughput that can communicate with the producers and consumers without them talking to each other directly.
**Example Code:**
```
import asyncio
import argparse
import itertools as it
import os
import random
import time
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[36m", # Cyan
"\033[34m", # Blue
)
async def generate_item(size: int = 5) -> str:
return os.urandom(size).hex()
async def random_sleep(caller=None) -> None:
i = random.randint(0, 10)
if caller:
print(colors[1] + f"{caller} sleeping for {i} seconds." + colors[0])
await asyncio.sleep(i)
async def produce(name: int, producer_queue: asyncio.Queue) -> None:
n = random.randint(0, 10)
for _ in it.repeat(None, n): # Synchronous loop for each single producer
await random_sleep(caller=f"Producer {name}")
i = await generate_item()
t = time.perf_counter()
await producer_queue.put((i, t))
print(colors[2] + f"Producer {name} added <{i}> to queue." + colors[0])
async def consume(name: int, consumer_queue: asyncio.Queue) -> None:
while True:
await random_sleep(caller=f"Consumer {name}")
i, t = await consumer_queue.get()
now = time.perf_counter()
print(colors[3] + f"Consumer {name} got element <{i}>" f" in {now - t:0.5f} seconds." + colors[0])
consumer_queue.task_done()
async def main(no_producer: int, no_consumer: int):
q = asyncio.Queue()
producers = [asyncio.create_task(produce(n, q)) for n in range(no_producer)]
consumers = [asyncio.create_task(consume(n, q)) for n in range(no_consumer)]
await asyncio.gather(*producers)
await q.join() # Implicitly awaits consumers, too
for consumer in consumers:
consumer.cancel()
if __name__ == "__main__":
random.seed(444)
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--no_producer", type=int, default=10)
parser.add_argument("-c", "--no_consumer", type=int, default=15)
ns = parser.parse_args()
start_time = time.perf_counter()
asyncio.run(main(**ns.__dict__))
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[4] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_queue.py -p 2 -c 4`
**Output:**
[![async_queue](https://i.stack.imgur.com/8iPho.png)](https://i.stack.imgur.com/8iPho.png)
Lastly, let's have an example of how asyncio cuts down on wait time: given a coroutine `generate_random_int()` that keeps producing random integers in the range [0, 10] until one of them exceeds a threshold, you want multiple calls of this coroutine to run concurrently rather than waiting for one another to complete in succession.
**Example Code:**
```
import time
import asyncio
import random
# ANSI colors
colors = (
"\033[0m", # End of color
"\033[31m", # Red
"\033[32m", # Green
"\033[36m", # Cyan
"\033[35m", # Magenta
"\033[34m", # Blue
)
async def generate_random_int(indx: int, threshold: int = 5) -> int:
print(colors[indx + 1] + f"Initiated generate_random_int({indx}).")
i = random.randint(0, 10)
while i <= threshold:
print(colors[indx + 1] + f"generate_random_int({indx}) == {i} too low; retrying.")
await asyncio.sleep(indx + 1)
i = random.randint(0, 10)
print(colors[indx + 1] + f"---> Finished: generate_random_int({indx}) == {i}" + colors[0])
return i
async def main():
res = await asyncio.gather(*(generate_random_int(i, 10 - i - 1) for i in range(3)))
return res
if __name__ == "__main__":
random.seed(444)
start_time = time.perf_counter()
r1, r2, r3 = asyncio.run(main())
print(colors[4] + f"\nRandom INT 1: {r1}, Random INT 2: {r2}, Random INT 3: {r3}\n" + colors[0])
end_time = time.perf_counter()
execution_time = end_time - start_time
print(colors[5] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
```
**Execution Command:** `python async_random.py`
**Output:**
[![async_random](https://i.stack.imgur.com/k8wAT.png)](https://i.stack.imgur.com/k8wAT.png)
>
> **Note:** If youβre writing any code yourself, prefer native coroutines
> for the sake of being explicit rather than implicit. Generator based
> coroutines will be removed in Python 3.10.
>
>
>
**GitHub Repo: <https://github.com/tssovi/asynchronous-in-python>** |
36,590,875 | How to obtain an absolute path via a relative path for 'other' project files (not the Python files in the project), like in Java?
```
D:\Workspaces\ABCPythonProject\
|- src
| |-- com/abc
| |-- conf.py
| |-- abcd.py
| |-- defg.py
| |-- installation.rst
|- resources
| |-- a.txt
| |-- b.txt
| |-- c.jpg
```
For example, I would like to access 'a.txt' or 'b.txt' in Python code like 'abcd.py' in a simple manner, with a variable like 'resources/a.txt', just like in a Java project.
In short, I want to get '**D:\Workspaces\ABCPythonProject\resources\a.txt**' from '**resources\a.txt**', which is extremely easy to do in Java, but is seemingly extremely difficult to achieve in Python.
(If I use the built-in Python methods like `os.path.join(os.path.dirname(__file__), 'resources/a.txt')`, `os.path.dirname('resources/a.txt')`, `os.path.abspath('resources/a.txt')`, etc., the result is always "**D:\Workspaces\ABCPythonProject\com\abc\resources\a.txt**", a non-existent file path.)
How to achieve this? | 2016/04/13 | [
"https://Stackoverflow.com/questions/36590875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762932/"
] | For images you'll have to use:
```
<img src="url">
``` | It should be in the following way,
```
foreach ($pdo->query($sql) as $row) {
    echo '<tr>';
    echo '<td>'. $row['u_id'] . '</td>';
    echo '<td>'. $row['u_role'] . '</td>';
    echo '<td>'. $row['u_name'] . '</td>';
    echo '<td>'. $row['u_passw'] . '</td>';
    echo '<td>'. $row['u_init'] . '</td>';
    echo '<td>'. $row['c_name'] . '</td>';
    echo '<td>'. $row['u_mail'] . '</td>';
    echo '<td><img src="'. $row['u_pic'] . '" width=45 height=45></td>'; // img is a void element, no closing tag
    echo '</tr>'; // close the table row opened above
}
``` |
36,590,875 | How to obtain an absolute path via a relative path for 'other' project files (not the Python files in the project), like in Java?
```
D:\Workspaces\ABCPythonProject\
|- src
| |-- com/abc
| |-- conf.py
| |-- abcd.py
| |-- defg.py
| |-- installation.rst
|- resources
| |-- a.txt
| |-- b.txt
| |-- c.jpg
```
For example, I would like to access 'a.txt' or 'b.txt' in Python code like 'abcd.py' in a simple manner, with a variable like 'resources/a.txt', just like in a Java project.
In short, I want to get '**D:\Workspaces\ABCPythonProject\resources\a.txt**' from '**resources\a.txt**', which is extremely easy to do in Java, but is seemingly extremely difficult to achieve in Python.
(If I use the built-in Python methods like `os.path.join(os.path.dirname(__file__), 'resources/a.txt')`, `os.path.dirname('resources/a.txt')`, `os.path.abspath('resources/a.txt')`, etc., the result is always "**D:\Workspaces\ABCPythonProject\com\abc\resources\a.txt**", a non-existent file path.)
How to achieve this? | 2016/04/13 | [
"https://Stackoverflow.com/questions/36590875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762932/"
] | For images you'll have to use:
```
<img src="url">
``` | ```
echo '<td><img src="'. $row['u_pic'] . '" width=45 height=45></td>';
``` |
36,590,875 | How to obtain an absolute path via a relative path for 'other' project files (not the Python files in the project), like in Java?
```
D:\Workspaces\ABCPythonProject\
|- src
| |-- com/abc
| |-- conf.py
| |-- abcd.py
| |-- defg.py
| |-- installation.rst
|- resources
| |-- a.txt
| |-- b.txt
| |-- c.jpg
```
For example, I would like to access 'a.txt' or 'b.txt' in Python code like 'abcd.py' in a simple manner, with a variable like 'resources/a.txt', just like in a Java project.
In short, I want to get '**D:\Workspaces\ABCPythonProject\resources\a.txt**' from '**resources\a.txt**', which is extremely easy to do in Java, but is seemingly extremely difficult to achieve in Python.
(If I use the built-in Python methods like `os.path.join(os.path.dirname(__file__), 'resources/a.txt')`, `os.path.dirname('resources/a.txt')`, `os.path.abspath('resources/a.txt')`, etc., the result is always "**D:\Workspaces\ABCPythonProject\com\abc\resources\a.txt**", a non-existent file path.)
How to achieve this? | 2016/04/13 | [
"https://Stackoverflow.com/questions/36590875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762932/"
] | ```
echo '<td><img src="'. $row['u_pic'] . '" width=45 height=45></td>';
``` | It should be in the following way,
```
foreach ($pdo->query($sql) as $row) {
    echo '<tr>';
    echo '<td>'. $row['u_id'] . '</td>';
    echo '<td>'. $row['u_role'] . '</td>';
    echo '<td>'. $row['u_name'] . '</td>';
    echo '<td>'. $row['u_passw'] . '</td>';
    echo '<td>'. $row['u_init'] . '</td>';
    echo '<td>'. $row['c_name'] . '</td>';
    echo '<td>'. $row['u_mail'] . '</td>';
    echo '<td><img src="'. $row['u_pic'] . '" width=45 height=45></td>'; // img is a void element, no closing tag
    echo '</tr>'; // close the table row opened above
}
``` |
36,215,958 | I want to filter the moment of the day using only hours and minutes.
For example, a function that returns true if now is between 9:15 and 11:20 of the day.
I tried with datetime, but handling the minutes is a little bit complicated.
```
#!/usr/bin/python
import datetime
n = datetime.datetime.now()
sta = datetime.time(19,18)
sto = datetime.time(20,19)
if sta.hour <= n.hour and n.hour <= sto.hour:
if sta.minute <= n.minute and sto.minute <= n.minute:
print str(n.hour) + ":" + str(n.minute)
```
What is the best way?
Regards | 2016/03/25 | [
"https://Stackoverflow.com/questions/36215958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/341022/"
] | You can use tuple comparisons to do any subinterval comparisons pretty easily:
```
>>> def f(dt):
... return (9, 15) <= (dt.hour, dt.minute) < (11, 21)
...
>>> d = datetime.datetime.now()
>>> str(d)
'2016-03-25 09:50:51.782718'
>>> f(d)
True
>>> f(d + datetime.timedelta(hours=2))
False
```
This accepts any datetime that has time between 9:15:00.000000 and 11:20:59.999999.
---
The above method also works if you need to check for example 5 first minutes of each hour; but for the hours of day, it might be simpler to use `.time()` to get the time part of a datetime, then compare this to the limits. The following accepts any time between 9:15:00.000000 and 11:20:00.000000 (inclusive):
```
>>> def f(dt):
... return datetime.time(9, 15) <= dt.time() <= datetime.time(11, 20)
``` | You'll need to use the combine class method:
```
import datetime
def between():
now = datetime.datetime.now()
start = datetime.datetime.combine(now.date(), datetime.time(9, 15))
end = datetime.datetime.combine(now.date(), datetime.time(11, 20))
return start <= now < end
``` |
5,965,655 | I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently using exec() and on completion I process the output. The server is running on a Windows machine, so pcntl\_fork will not work.
**LATER EDIT**:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket-based communication but I don't see how this is useful in my case (some hints, maybe?)
Thank you | 2011/05/11 | [
"https://Stackoverflow.com/questions/5965655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/748676/"
] | You want *inter-process communication*. Sockets are the first thing that comes to mind; you'd need to set up a socket to *listen* for a connection (on the same machine) in PHP and set up a socket to *connect* to the listening socket in Python and *send* it its status.
Have a look at [this socket programming overview](http://docs.python.org/howto/sockets.html) from the Python documentation and [the Python `socket` module's documentation (especially the examples at the end)](http://docs.python.org/library/socket.html). I'm sure PHP has similar resources.
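For the Python side, a minimal sketch of such a status listener (the port number and the `status` dict are illustrative; the PHP side would connect with `fsockopen()` and read the reply):

```
import socket
import threading

status = {"percent": 0}   # updated by the long-running job

def serve_status(host="127.0.0.1", port=9009):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        conn.sendall(str(status["percent"]).encode())  # reply with progress
        conn.close()

# daemon thread answers status requests while the job keeps running
threading.Thread(target=serve_status, daemon=True).start()
```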
Once you've got a more specific idea of what you want to build and need help, feel free to ask a *new* question on StackOverflow (if it isn't already answered). | I think you would have to use a meta refresh and maybe have the python write the status to a file and then have the php read from it.
You could use AJAX as well to make it more dynamic.
Also, probably shouldn't use exec()...that opens up a world of vulnerabilities. |
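A minimal sketch of that file-based handoff on the Python side (the file name and timing are illustrative; PHP would poll the file with `file_get_contents()`):

```
import json
import time

for done in range(0, 101, 10):
    with open("status.json", "w") as f:   # PHP polls this file via AJAX
        json.dump({"percent": done}, f)
    time.sleep(3)                          # stand-in for a chunk of real work
```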
5,965,655 | I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently using exec() and on completion I process the output. The server is running on a Windows machine, so pcntl\_fork will not work.
**LATER EDIT**:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket-based communication but I don't see how this is useful in my case (some hints, maybe?)
Thank you | 2011/05/11 | [
"https://Stackoverflow.com/questions/5965655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/748676/"
] | I think you would have to use a meta refresh and maybe have the python write the status to a file and then have the php read from it.
You could use AJAX as well to make it more dynamic.
Also, probably shouldn't use exec()...that opens up a world of vulnerabilities. | Unfortunately my friend, I do believe you'll need to use Sockets as you requested. :( I have little experience working with them, but [This Python Tutorial on Sockets/Network Programming](http://heather.cs.ucdavis.edu/~matloff/Python/PyNet.pdf) may help you get the Python socket interaction you need. (Beau Martinez's links seem promising as well.)
You'd also need to get some PHP socket connections, too, so it can request the status.
Continuing on that, my thoughts would be that your Python script is likely going to run in a loop. Ergo, I'd put the "Check for a status request" check inside the beginning of a part of that loop. It'd reply one status, while a later loop inside that script would reply with an increased status.. etc.
Good luck!
**Edit:** I think that the file writing recommendation from Thomas Schultz is probably the easiest to implement. The only downside is waiting for the file to be opened-- You'll need to make sure your PHP and Python scripts don't hang or return failure without trying again. |
5,965,655 | I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently using exec() and on completion I process the output. The server is running on a Windows machine, so pcntl\_fork will not work.
**LATER EDIT**:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket-based communication but I don't see how this is useful in my case (some hints, maybe?)
Thank you | 2011/05/11 | [
"https://Stackoverflow.com/questions/5965655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/748676/"
] | I think you would have to use a meta refresh and maybe have the python write the status to a file and then have the php read from it.
You could use AJAX as well to make it more dynamic.
Also, probably shouldn't use exec()...that opens up a world of vulnerabilities. | You could use a queuing service like [Gearman](http://gearman.org/), with a client in PHP and a worker in Python or vice versa.
Someone has created an example setup here.
<https://github.com/dbaltas/gearman-python-worker> |
5,965,655 | I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently using exec() and on completion I process the output. The server is running on a Windows machine, so pcntl\_fork will not work.
**LATER EDIT**:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket-based communication but I don't see how this is useful in my case (some hints, maybe?)
Thank you | 2011/05/11 | [
"https://Stackoverflow.com/questions/5965655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/748676/"
] | You want *inter-process communication*. Sockets are the first thing that comes to mind; you'd need to set up a socket to *listen* for a connection (on the same machine) in PHP and set up a socket to *connect* to the listening socket in Python and *send* it its status.
Have a look at [this socket programming overview](http://docs.python.org/howto/sockets.html) from the Python documentation and [the Python `socket` module's documentation (especially the examples at the end)](http://docs.python.org/library/socket.html). I'm sure PHP has similar resources.
Once you've got a more specific idea of what you want to build and need help, feel free to ask a *new* question on StackOverflow (if it isn't already answered). | Unfortunately my friend, I do believe you'll need to use Sockets as you requested. :( I have little experience working with them, but [This Python Tutorial on Sockets/Network Programming](http://heather.cs.ucdavis.edu/~matloff/Python/PyNet.pdf) may help you get the Python socket interaction you need. (Beau Martinez's links seem promising as well.)
You'd also need to get some PHP socket connections, too, so it can request the status.
Continuing on that, my thoughts would be that your Python script is likely going to run in a loop. Ergo, I'd put the "Check for a status request" check inside the beginning of a part of that loop. It'd reply one status, while a later loop inside that script would reply with an increased status.. etc.
Good luck!
**Edit:** I think that the file writing recommendation from Thomas Schultz is probably the easiest to implement. The only downside is waiting for the file to be opened-- You'll need to make sure your PHP and Python scripts don't hang or return failure without trying again. |
5,965,655 | I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently using exec() and on completion I process the output. The server is running on a Windows machine, so pcntl\_fork will not work.
**LATER EDIT**:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket-based communication but I don't see how this is useful in my case (some hints, maybe?)
Thank you | 2011/05/11 | [
"https://Stackoverflow.com/questions/5965655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/748676/"
] | You want *inter-process communication*. Sockets are the first thing that comes to mind; you'd need to set up a socket to *listen* for a connection (on the same machine) in PHP and set up a socket to *connect* to the listening socket in Python and *send* it its status.
Have a look at [this socket programming overview](http://docs.python.org/howto/sockets.html) from the Python documentation and [the Python `socket` module's documentation (especially the examples at the end)](http://docs.python.org/library/socket.html). I'm sure PHP has similar resources.
Once you've got a more specific idea of what you want to build and need help, feel free to ask a *new* question on StackOverflow (if it isn't already answered). | You could use a queuing service like [Gearman](http://gearman.org/), with a client in PHP and a worker in Python or vice versa.
Someone has created an example setup here.
<https://github.com/dbaltas/gearman-python-worker> |
5,965,655 | I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently using exec() and on completion I process the output. The server is running on a Windows machine, so pcntl\_fork will not work.
**LATER EDIT**:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket-based communication but I don't see how this is useful in my case (some hints, maybe?)
Thank you | 2011/05/11 | [
"https://Stackoverflow.com/questions/5965655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/748676/"
] | You could use a queuing service like [Gearman](http://gearman.org/), with a client in PHP and a worker in Python or vice versa.
Someone has created an example setup here.
<https://github.com/dbaltas/gearman-python-worker> | Unfortunately my friend, I do believe you'll need to use Sockets as you requested. :( I have little experience working with them, but [This Python Tutorial on Sockets/Network Programming](http://heather.cs.ucdavis.edu/~matloff/Python/PyNet.pdf) may help you get the Python socket interaction you need. (Beau Martinez's links seem promising as well.)
You'd also need to get some PHP socket connections, too, so it can request the status.
Continuing on that, my thoughts would be that your Python script is likely going to run in a loop. Ergo, I'd put the "Check for a status request" check inside the beginning of a part of that loop. It'd reply one status, while a later loop inside that script would reply with an increased status.. etc.
Good luck!
**Edit:** I think that the file writing recommendation from Thomas Schultz is probably the easiest to implement. The only downside is waiting for the file to be opened-- You'll need to make sure your PHP and Python scripts don't hang or return failure without trying again. |
31,480,921 | I can't seem to get the interactive tooltips powered by mpld3 to work with the fantastic lmplot-like scatter plots from seaborn.
I'd love any pointer on how to get this to work! Thanks!
Example Code:
```
# I'm running this in an ipython notebook.
%matplotlib inline
import matplotlib.pyplot as plt, mpld3
mpld3.enable_notebook()
import seaborn as sns
N=10
data = pd.DataFrame({"x": np.random.randn(N),
"y": np.random.randn(N),
"size": np.random.randint(20,200, size=N),
"label": np.arange(N)
})
scatter_sns = sns.lmplot("x", "y",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig = plt.gcf()
tooltip = mpld3.plugins.PointLabelTooltip(fig, labels=list(data.label))
mpld3.plugins.connect(fig, tooltip)
mpld3.display(fig)
```
I'm getting the seaborn plot along with the following error:
```
Javascript error adding output!
TypeError: obj.elements is not a function
See your browser Javascript console for more details.
```
The console shows:
```
TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:338 TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
``` | 2015/07/17 | [
"https://Stackoverflow.com/questions/31480921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1270151/"
] | I don't think that there is an easy way to do this currently. I can get some of the tooltips to show by replacing your `tooltip` constructor with the following:
```
ax = plt.gca()
pts = ax.get_children()[3]
tooltip = mpld3.plugins.PointLabelTooltip(pts, labels=list(data.label))
```
This only works for the points outside of the uncertainty interval, though. I think it would be possible to extend `seaborn` to make these points highest in the `zorder` and store them in the instance somewhere so that you don't need to pull them out of the axis children list. Perhaps worth a feature request. | Your code works for me on `ipython` (no notebook) when saving the figure to file with `mpld3.save_html(fig,"./out.html")`. May be an issue with `ipython` `notebook`/`mpld3` compatibility or `mpld3.display` (which causes an error for me, although I think this is related to an old version of matplotlib on my computer).
The full code which worked for me is,
```
import numpy as np
import matplotlib.pyplot as plt, mpld3
import seaborn as sns
import pandas as pd
N=10
data = pd.DataFrame({"x": np.random.randn(N),
"y": np.random.randn(N),
"size": np.random.randint(20,200, size=N),
"label": np.arange(N)
})
scatter_sns = sns.lmplot("x", "y",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig = plt.gcf()
tooltip = mpld3.plugins.PointLabelTooltip(fig, labels=list(data.label))
mpld3.plugins.connect(fig, tooltip)
mpld3.save_html(fig,"./out.html")
``` |
31,480,921 | I can't seem to get the interactive tooltips powered by mpld3 to work with the fantastic lmplot-like scatter plots from seaborn.
I'd love any pointer on how to get this to work! Thanks!
Example Code:
```
# I'm running this in an ipython notebook.
%matplotlib inline
import matplotlib.pyplot as plt, mpld3
mpld3.enable_notebook()
import seaborn as sns
N=10
data = pd.DataFrame({"x": np.random.randn(N),
"y": np.random.randn(N),
"size": np.random.randint(20,200, size=N),
"label": np.arange(N)
})
scatter_sns = sns.lmplot("x", "y",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig = plt.gcf()
tooltip = mpld3.plugins.PointLabelTooltip(fig, labels=list(data.label))
mpld3.plugins.connect(fig, tooltip)
mpld3.display(fig)
```
I'm getting the seaborn plot along with the following error:
```
Javascript error adding output!
TypeError: obj.elements is not a function
See your browser Javascript console for more details.
```
The console shows:
```
TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:338 TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
``` | 2015/07/17 | [
"https://Stackoverflow.com/questions/31480921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1270151/"
] | I was able to get the tooltips to work by using the standard matplotlib scatter on top of the seaborn plot and very low alpha (you can't use zero)
```
data_tip_points = ax.scatter(x_points, y_points, alpha=0.001)
tooltip = plugins.PointLabelTooltip(data_tip_points, labels)
```
It's a bit of a hack, but it works as seen here.
<http://nbviewer.ipython.org/urls/bitbucket.org/jeff_mcgehee/cds_presentation_intro/raw/49cc7808ec26adebec94ffa83973bb5db13017d7/CDS%20Intro%20Presentation.ipynb> | Your code works for me on `ipython` (no notebook) when saving the figure to file with `mpld3.save_html(fig,"./out.html")`. May be an issue with `ipython` `notebook`/`mpld3` compatibility or `mpld3.display` (which causes an error for me, although I think this is related to an old version of matplotlib on my computer).
The full code which worked for me is:
```
import numpy as np
import matplotlib.pyplot as plt, mpld3
import seaborn as sns
import pandas as pd
N=10
data = pd.DataFrame({"x": np.random.randn(N),
"y": np.random.randn(N),
"size": np.random.randint(20,200, size=N),
"label": np.arange(N)
})
scatter_sns = sns.lmplot("x", "y",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig = plt.gcf()
tooltip = mpld3.plugins.PointLabelTooltip(fig, labels=list(data.label))
mpld3.plugins.connect(fig, tooltip)
mpld3.save_html(fig,"./out.html")
``` |
31,480,921 | I can't seem to get the interactive tooltips powered by mpld3 to work with the fantastic lmplot-like scatter plots from seaborn.
I'd love any pointer on how to get this to work! Thanks!
Example Code:
```
# I'm running this in an ipython notebook.
%matplotlib inline
import matplotlib.pyplot as plt, mpld3
mpld3.enable_notebook()
import seaborn as sns
N=10
data = pd.DataFrame({"x": np.random.randn(N),
"y": np.random.randn(N),
"size": np.random.randint(20,200, size=N),
"label": np.arange(N)
})
scatter_sns = sns.lmplot("x", "y",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig = plt.gcf()
tooltip = mpld3.plugins.PointLabelTooltip(fig, labels=list(data.label))
mpld3.plugins.connect(fig, tooltip)
mpld3.display(fig)
```
I'm getting the seaborn plot along with the following error:
```
Javascript error adding output!
TypeError: obj.elements is not a function
See your browser Javascript console for more details.
```
The console shows:
```
TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:338 TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
``` | 2015/07/17 | [
"https://Stackoverflow.com/questions/31480921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1270151/"
] | I don't think that there is an easy way to do this currently. I can get some of the tooltips to show by replacing your `tooltip` constructor with the following:
```
ax = plt.gca()
pts = ax.get_children()[3]
tooltip = mpld3.plugins.PointLabelTooltip(pts, labels=list(data.label))
```
This only works for the points outside of the uncertainty interval, though. I think it would be possible to extend `seaborn` to make these points highest in the `zorder` and store them in the instance somewhere so that you don't need to pull them out of the axis children list. Perhaps worth a feature request. | I was able to get the tooltips to work by using the standard matplotlib scatter on top of the seaborn plot and very low alpha (you can't use zero)
```
data_tip_points = ax.scatter(x_points, y_points, alpha=0.001)
tooltip = plugins.PointLabelTooltip(data_tip_points, labels)
```
It's a bit of a hack, but it works as seen here.
<http://nbviewer.ipython.org/urls/bitbucket.org/jeff_mcgehee/cds_presentation_intro/raw/49cc7808ec26adebec94ffa83973bb5db13017d7/CDS%20Intro%20Presentation.ipynb> |
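For illustration, a fuller self-contained sketch of that invisible-scatter hack, assuming the same era of seaborn/mpld3 with the positional `lmplot` signature used in the question (the data and labels here are illustrative):
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import mpld3
from mpld3 import plugins

N = 10
data = pd.DataFrame({"x": np.random.randn(N),
                     "y": np.random.randn(N),
                     "label": np.arange(N)})

sns.lmplot("x", "y", data=data)
ax = plt.gca()
# nearly invisible points drawn on top, used only to carry the tooltips
pts = ax.scatter(data["x"], data["y"], alpha=0.001)
tooltip = plugins.PointLabelTooltip(pts, labels=list(data["label"]))
mpld3.plugins.connect(plt.gcf(), tooltip)
mpld3.save_html(plt.gcf(), "out.html")
```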
28,180,252 | I am trying to create a quiver plot from a NetCDF file in Python using this code:
```
import matplotlib.pyplot as plt
import numpy as np
import netCDF4
ncfile = netCDF4.Dataset('30JUNE2012_0300UTC.cdf', 'r')
dbZ = ncfile.variables['MAXDBZF']
data = dbZ[0,0]
U = ncfile.variables['UNEW'][:]
V = ncfile.variables['VNEW'][:]
x, y= np.arange(0,2*np.pi,.2), np.arange(0,2*np.pi,.2)
X,Y = np.meshgrid(x,y)
plt.quiver(X,Y,U,V)
plt.show()
```
and I am getting the following errors
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-109-b449c540a7ea> in <module>()
11 X,Y = np.meshgrid(x,y)
12
---> 13 plt.quiver(X,Y,U,V)
14
15 plt.show()
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/pyplot.pyc in quiver(*args, **kw)
3152 ax.hold(hold)
3153 try:
-> 3154 ret = ax.quiver(*args, **kw)
3155 draw_if_interactive()
3156 finally:
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/axes/_axes.pyc in quiver(self, *args, **kw)
4162 if not self._hold:
4163 self.cla()
-> 4164 q = mquiver.Quiver(self, *args, **kw)
4165
4166 self.add_collection(q, autolim=True)
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/quiver.pyc in __init__(self, ax, *args, **kw)
415 """
416 self.ax = ax
--> 417 X, Y, U, V, C = _parse_args(*args)
418 self.X = X
419 self.Y = Y
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/quiver.pyc in _parse_args(*args)
377 nr, nc = 1, U.shape[0]
378 else:
--> 379 nr, nc = U.shape
380 if len(args) == 2: # remaining after removing U,V,C
381 X, Y = [np.array(a).ravel() for a in args]
ValueError: too many values to unpack
```
What does this error mean? | 2015/01/27 | [
"https://Stackoverflow.com/questions/28180252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4500459/"
] | `ValueError: too many values to unpack` is raised because line `379` of your program is trying to unpack `U.shape` into two variables (`nr`, `nc`) while `U.shape` holds more values than that.
Look above at line `377` - that one correctly assigns two values (`1` and `U.shape[0]`) to `nr` and `nc`, but line `379` unpacks whatever `U.shape` contains into exactly two variables. If there are more than 2 values in `U.shape` you will get this error. Tuple unpacking only works when there are exactly as many values as variables (in this case two). I would print out the value of `U.shape` and check that it holds the expected number of values. If your `U.shape` can return three or more values then your code will need to adapt to this. For example, if you find that `U.shape` is a tuple of 3 values then you will need 3 variables to hold those values, like so:
`nr, nc, blah = U.shape`
Consider the following:
```
a,b,c = ["a","b","c"] #works
print a
print b
print c
a, b = ["a","b","c"] #will result in error because 3 values are trying to be assigned to only 2 variables
```
The results from the above code:
```
a
b
c
Traceback (most recent call last):
File "None", line 7, in <module>
ValueError: too many values to unpack
```
So you see it's just a matter of having enough values to assign to all of the variables that are requesting a value. | Probably more useful to solve future problems rather then author's but still:
The problem was likely that the netCDF file had a time dimension, therefore U and V were 3-dimensional arrays - you should choose a time slice or aggregate the data across the time dimension. |
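A minimal sketch of that fix, with synthetic stand-ins for the netCDF variables (the shapes are illustrative):
```
import numpy as np
import matplotlib.pyplot as plt

# stand-ins for U and V as read from the file, shaped (time, y, x)
U = np.random.randn(4, 32, 32)
V = np.random.randn(4, 32, 32)

x = np.linspace(0, 2 * np.pi, 32)
X, Y = np.meshgrid(x, x)

# take one time slice (or aggregate, e.g. U.mean(axis=0)) so quiver gets 2-D arrays
plt.quiver(X, Y, U[0], V[0])
plt.show()
```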
36,486,120 | I'm trying to centre and normalise a data set in python with the following code
```
mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
norm_train = (train - mean) / std
```
The problem is that I get a division by zero error. Two of the values in the data set end up having a zero std. The data set is of shape (3750, 55). My stats skills are not so strong so I'm not sure how to overcome this. Any suggestions? | 2016/04/07 | [
"https://Stackoverflow.com/questions/36486120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/531296/"
] | Since the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) is calculated by taking the sum of the *squared* deviations from the mean, a zero standard deviation can only be possible when all the values of a variable are the same (all equal to the mean). In this case, those variables have no discriminative power so they can be removed from the analysis. They cannot improve any classification, clustering or regression task. Many implementations will do it for you or throw an error about a matrix calculation. | You could just replace the 0 std with 1 for that feature. This would basically mean that the scaled value would be zero for all the data points for that feature. This makes sense as this implies that the feature values do not deviate even a bit from the mean (as the value is constant, the constant is the mean.)
FYI- This is what sklearn does!
<https://github.com/scikit-learn/scikit-learn/blob/7389dbac82d362f296dc2746f10e43ffa1615660/sklearn/preprocessing/data.py#L70> |
36,486,120 | I'm trying to centre and normalise a data set in python with the following code
```
mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
norm_train = (train - mean) / std
```
The problem is that I get a division by zero error. Two of the values in the data set end up having a zero std. The data set is of shape (3750, 55). My stats skills are not so strong so I'm not sure how to overcome this. Any suggestions? | 2016/04/07 | [
"https://Stackoverflow.com/questions/36486120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/531296/"
] | Since the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) is calculated by taking the sum of the *squared* deviations from the mean, a zero standard deviation can only be possible when all the values of a variable are the same (all equal to the mean). In this case, those variables have no discriminative power so they can be removed from the analysis. They cannot improve any classification, clustering or regression task. Many implementations will do it for you or throw an error about a matrix calculation. | One standard approach is to include an epsilon variable that prevents divide by zero. In theory, it is not needed because it doesn't make logical sense to do such calculations. In reality, machines are just calculators and divide by zero becomes either NaN or +/-Inf.
In short, define your function like this:
```
def z_norm(arr, epsilon=1e-100):
return (arr-arr.mean())/(arr.std()+epsilon)
```
This assumes a 1D array, but it would be easy to change to row-wise or column-wise calculation of a 2D array.
Epsilon is an intentional error added to calculations to prevent creating NaN or Inf. In the case of Inf, you will still end up with numbers that are really large, but later calculations will not propagate Inf and may still retain some meaning.
The value of 1/(1 x 10^100) is incredibly small and will not change your result much. You can go down to 1e-300 or so if you want, but you risk hitting the lowest precision value after further calculation. Be aware of the precision you use and the smallest precision it can handle. I was using float64.
**Update 2021-11-03**: Adding test code. The objective of this epsilon is to minimize damage and remove the chance of random NaNs in your data pipeline. Setting epsilon to a positive value fixes the problem.
```
for arr in [
np.array([0,0]),
np.array([1e-300,1e-300]),
np.array([1,1]),
np.array([1,2])
]:
for epi in [1e-100,0,1e100]:
stdev = arr.std()
mean = arr.mean()
result = z_norm(arr, epsilon=epi)
print(f' z_norm(np.array({str(arr):<21}),{epi:<7}) ### stdev={stdev}; mean={mean:<6}; becomes --> {str(result):<19} (float-64) --> Truncate to 32 bits. =', result.astype(np.float32))
z_norm(np.array([0 0] ),1e-100 ) ### stdev=0.0; mean=0.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([0 0] ),0 ) ### stdev=0.0; mean=0.0 ; becomes --> [nan nan] (float-64) --> Truncate to 32 bits. = [nan nan]
z_norm(np.array([0 0] ),1e+100 ) ### stdev=0.0; mean=0.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1.e-300 1.e-300] ),1e-100 ) ### stdev=0.0; mean=1e-300; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1.e-300 1.e-300] ),0 ) ### stdev=0.0; mean=1e-300; becomes --> [nan nan] (float-64) --> Truncate to 32 bits. = [nan nan]
z_norm(np.array([1.e-300 1.e-300] ),1e+100 ) ### stdev=0.0; mean=1e-300; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1 1] ),1e-100 ) ### stdev=0.0; mean=1.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1 1] ),0 ) ### stdev=0.0; mean=1.0 ; becomes --> [nan nan] (float-64) --> Truncate to 32 bits. = [nan nan]
z_norm(np.array([1 1] ),1e+100 ) ### stdev=0.0; mean=1.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1 2] ),1e-100 ) ### stdev=0.5; mean=1.5 ; becomes --> [-1. 1.] (float-64) --> Truncate to 32 bits. = [-1. 1.]
z_norm(np.array([1 2] ),0 ) ### stdev=0.5; mean=1.5 ; becomes --> [-1. 1.] (float-64) --> Truncate to 32 bits. = [-1. 1.]
z_norm(np.array([1 2] ),1e+100 ) ### stdev=0.5; mean=1.5 ; becomes --> [-5.e-101 5.e-101] (float-64) --> Truncate to 32 bits. = [-0. 0.]
``` |
36,486,120 | I'm trying to centre and normalise a data set in python with the following code
```
mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
norm_train = (train - mean) / std
```
The problem is that I get a division by zero error. Two of the values in the data set end up having a zero std. The data set is of shape (3750, 55). My stats skills are not so strong so I'm not sure how to overcome this. Any suggestions? | 2016/04/07 | [
"https://Stackoverflow.com/questions/36486120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/531296/"
] | Since the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) is calculated by taking the sum of the *squared* deviations from the mean, a zero standard deviation can only be possible when all the values of a variable are the same (all equal to the mean). In this case, those variables have no discriminative power so they can be removed from the analysis. They cannot improve any classification, clustering or regression task. Many implementations will do it for you or throw an error about a matrix calculation. | Going back to its definition, the idea behind the z\_score is to give the distance between an element and the mean of the sample in terms of standard deviations. If all elements are the same, it means that their distance to the mean is 0, and therefore the zscore is 0 times the standard deviation, since all your data points are at the mean. The division by the standard deviation is a way to relate the distance to the dispersion of the data. Visually it is easy to understand and come to this conclusion: <https://en.wikipedia.org/wiki/Standard_score#/media/File:The_Normal_Distribution.svg> |
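A short sketch of the removal suggested above: drop the zero-variance features before scaling (the data is illustrative):
```
import numpy as np

train = np.random.randn(3750, 55)
train[:, 3] = 1.0                 # a constant (zero-variance) column

std = train.std(axis=0)
reduced = train[:, std != 0]      # keep only features with nonzero spread
norm = (reduced - reduced.mean(axis=0)) / reduced.std(axis=0)
print(reduced.shape)              # (3750, 54)
```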
36,486,120 | I'm trying to centre and normalise a data set in python with the following code
```
mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
norm_train = (train - mean) / std
```
The problem is that I get a division by zero error. Two of the values in the data set end up having a zero std. The data set is of shape (3750, 55). My stats skills are not so strong so I'm not sure how to overcome this. Any suggestions? | 2016/04/07 | [
"https://Stackoverflow.com/questions/36486120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/531296/"
] | One standard approach is to include an epsilon variable that prevents divide by zero. In theory, it is not needed because it doesn't make logical sense to do such calculations. In reality, machines are just calculators and divide by zero becomes either NaN or +/-Inf.
In short, define your function like this:
```
def z_norm(arr, epsilon=1e-100):
return (arr-arr.mean())/(arr.std()+epsilon)
```
This assumes a 1D array, but it would be easy to change to row-wise or column-wise calculation of a 2D array.
Epsilon is an intentional error added to calculations to prevent creating NaN or Inf. In the case of Inf, you will still end up with numbers that are really large, but later calculations will not propagate Inf and may still retain some meaning.
The value of 1/(1 x 10^100) is incredibly small and will not change your result much. You can go down to 1e-300 or so if you want, but you risk hitting the lowest precision value after further calculation. Be aware of the precision you use and the smallest precision it can handle. I was using float64.
**Update 2021-11-03**: Adding test code. The objective of this epsilon is to minimize damage and remove the chance of random NaNs in your data pipeline. Setting epsilon to a positive value fixes the problem.
```
for arr in [
np.array([0,0]),
np.array([1e-300,1e-300]),
np.array([1,1]),
np.array([1,2])
]:
for epi in [1e-100,0,1e100]:
stdev = arr.std()
mean = arr.mean()
result = z_norm(arr, epsilon=epi)
print(f' z_norm(np.array({str(arr):<21}),{epi:<7}) ### stdev={stdev}; mean={mean:<6}; becomes --> {str(result):<19} (float-64) --> Truncate to 32 bits. =', result.astype(np.float32))
z_norm(np.array([0 0] ),1e-100 ) ### stdev=0.0; mean=0.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([0 0] ),0 ) ### stdev=0.0; mean=0.0 ; becomes --> [nan nan] (float-64) --> Truncate to 32 bits. = [nan nan]
z_norm(np.array([0 0] ),1e+100 ) ### stdev=0.0; mean=0.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1.e-300 1.e-300] ),1e-100 ) ### stdev=0.0; mean=1e-300; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1.e-300 1.e-300] ),0 ) ### stdev=0.0; mean=1e-300; becomes --> [nan nan] (float-64) --> Truncate to 32 bits. = [nan nan]
z_norm(np.array([1.e-300 1.e-300] ),1e+100 ) ### stdev=0.0; mean=1e-300; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1 1] ),1e-100 ) ### stdev=0.0; mean=1.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1 1] ),0 ) ### stdev=0.0; mean=1.0 ; becomes --> [nan nan] (float-64) --> Truncate to 32 bits. = [nan nan]
z_norm(np.array([1 1] ),1e+100 ) ### stdev=0.0; mean=1.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1 2] ),1e-100 ) ### stdev=0.5; mean=1.5 ; becomes --> [-1. 1.] (float-64) --> Truncate to 32 bits. = [-1. 1.]
z_norm(np.array([1 2] ),0 ) ### stdev=0.5; mean=1.5 ; becomes --> [-1. 1.] (float-64) --> Truncate to 32 bits. = [-1. 1.]
z_norm(np.array([1 2] ),1e+100 ) ### stdev=0.5; mean=1.5 ; becomes --> [-5.e-101 5.e-101] (float-64) --> Truncate to 32 bits. = [-0. 0.]
``` | You could just replace the 0 std with 1 for that feature. This would basically mean that the scaled value would be zero for all the data points for that feature. This makes sense as this implies that the feature values do not deviate even a bit from the mean (as the value is constant, the constant is the mean.)
FYI- This is what sklearn does!
<https://github.com/scikit-learn/scikit-learn/blob/7389dbac82d362f296dc2746f10e43ffa1615660/sklearn/preprocessing/data.py#L70> |
36,486,120 | I'm trying to centre and normalise a data set in python with the following code
```
mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
norm_train = (train - mean) / std
```
The problem is that I get a division by zero error. Two of the values in the data set end up having a zero std. The data set is of shape (3750, 55). My stats skills are not so strong so I'm not sure how to overcome this. Any suggestions? | 2016/04/07 | [
"https://Stackoverflow.com/questions/36486120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/531296/"
] | One standard approach is to include an epsilon variable that prevents divide by zero. In theory, it is not needed because it doesn't make logical sense to do such calculations. In reality, machines are just calculators and divide by zero becomes either NaN or +/-Inf.
In short, define your function like this:
```
def z_norm(arr, epsilon=1e-100):
return (arr-arr.mean())/(arr.std()+epsilon)
```
This assumes a 1D array, but it would be easy to change to row-wise or column-wise calculation of a 2D array.
Epsilon is an intentional error added to calculations to prevent creating NaN or Inf. In the case of Inf, you will still end up with numbers that are really large, but later calculations will not propagate Inf and may still retain some meaning.
The value of 1/(1 x 10^100) is incredibly small and will not change your result much. You can go down to 1e-300 or so if you want, but you risk hitting the lowest precision value after further calculation. Be aware of the precision you use and the smallest precision it can handle. I was using float64.
**Update 2021-11-03**: Adding test code. The objective of this epsilon is to minimize damage and remove the chance of random NaNs in your data pipeline. Setting epsilon to a positive value fixes the problem.
```
for arr in [
np.array([0,0]),
np.array([1e-300,1e-300]),
np.array([1,1]),
np.array([1,2])
]:
for epi in [1e-100,0,1e100]:
stdev = arr.std()
mean = arr.mean()
result = z_norm(arr, epsilon=epi)
print(f' z_norm(np.array({str(arr):<21}),{epi:<7}) ### stdev={stdev}; mean={mean:<6}; becomes --> {str(result):<19} (float-64) --> Truncate to 32 bits. =', result.astype(np.float32))
z_norm(np.array([0 0] ),1e-100 ) ### stdev=0.0; mean=0.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([0 0] ),0 ) ### stdev=0.0; mean=0.0 ; becomes --> [nan nan] (float-64) --> Truncate to 32 bits. = [nan nan]
z_norm(np.array([0 0] ),1e+100 ) ### stdev=0.0; mean=0.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1.e-300 1.e-300] ),1e-100 ) ### stdev=0.0; mean=1e-300; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1.e-300 1.e-300] ),0 ) ### stdev=0.0; mean=1e-300; becomes --> [nan nan] (float-64) --> Truncate to 32 bits. = [nan nan]
z_norm(np.array([1.e-300 1.e-300] ),1e+100 ) ### stdev=0.0; mean=1e-300; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1 1] ),1e-100 ) ### stdev=0.0; mean=1.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1 1] ),0 ) ### stdev=0.0; mean=1.0 ; becomes --> [nan nan] (float-64) --> Truncate to 32 bits. = [nan nan]
z_norm(np.array([1 1] ),1e+100 ) ### stdev=0.0; mean=1.0 ; becomes --> [0. 0.] (float-64) --> Truncate to 32 bits. = [0. 0.]
z_norm(np.array([1 2] ),1e-100 ) ### stdev=0.5; mean=1.5 ; becomes --> [-1. 1.] (float-64) --> Truncate to 32 bits. = [-1. 1.]
z_norm(np.array([1 2] ),0 ) ### stdev=0.5; mean=1.5 ; becomes --> [-1. 1.] (float-64) --> Truncate to 32 bits. = [-1. 1.]
z_norm(np.array([1 2] ),1e+100 ) ### stdev=0.5; mean=1.5 ; becomes --> [-5.e-101 5.e-101] (float-64) --> Truncate to 32 bits. = [-0. 0.]
``` | Going back to its definition, the idea behind the z\_score is to give the distance between an element and the mean of the sample in terms of standard deviations. If all elements are the same, it means that their distance to the mean is 0, and therefore the zscore is 0 times the standard deviation, since all your data points are at the mean. The division by the standard deviation is a way to relate the distance to the dispersion of the data. Visually it is easy to understand and come to this conclusion: <https://en.wikipedia.org/wiki/Standard_score#/media/File:The_Normal_Distribution.svg> |
3,950,368 | >
> **Possible Duplicate:**
>
> [What do I use for a max-heap implementation in Python?](https://stackoverflow.com/questions/2501457/what-do-i-use-for-a-max-heap-implementation-in-python)
>
>
>
Python has a min heap implemented in the heapq module. However, if one would want a max heap, would one have to build from scratch? | 2010/10/16 | [
"https://Stackoverflow.com/questions/3950368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/475790/"
] | You could multiply your numbers by -1 and use the min heap. | No need to implement a max heap from scratch. You can easily employ a bit of math to turn your min heap into a max heap!
See [this](http://www.mail-archive.com/python-list@python.org/msg238926.html) and [this](http://code.activestate.com/recipes/502295/) - but really [this SO answer](https://stackoverflow.com/questions/2501457/what-do-i-use-for-a-max-heap-implementation-in-python). |
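A minimal self-contained sketch of the negation trick with `heapq` (the values are illustrative):
```
import heapq

nums = [3, 1, 4, 1, 5, 9, 2, 6]
heap = [-n for n in nums]        # negate so the min-heap orders like a max-heap
heapq.heapify(heap)

heapq.heappush(heap, -7)         # push new values negated as well
largest = -heapq.heappop(heap)   # undo the negation when reading back
print(largest)                   # 9
```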
55,522,649 | I have installed numpy but when I import it, it doesn't work.
```
from numpy import *
arr=array([1,2,3,4])
print(arr)
```
Result:
```
C:\Users\YUVRAJ\PycharmProjects\mycode2\venv\Scripts\python.exe C:/Users/YUVRAJ/PycharmProjects/mycode2/numpy.py
Traceback (most recent call last):
File "C:/Users/YUVRAJ/PycharmProjects/mycode2/numpy.py", line 1, in <module>
from numpy import *
File "C:\Users\YUVRAJ\PycharmProjects\mycode2\numpy.py", line 2, in <module>
x=array([1,2,3,4])
NameError: name 'array' is not defined
Process finished with exit code 1
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55522649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11313285/"
] | The problem is that you named your script `numpy.py`, which conflicts with the numpy module you need to use. Just rename your script to something else and it will be fine. | Instead of using `from numpy import *`
Try using this:
```
import numpy
from numpy import array
```
And then add your code:
```
arr=array([1,2,3,4])
print(arr)
```
---
**EDIT:** Even though this is the accepted answer, this may not work under all circumstances. If this doesn't work, see [adrtam's answer](https://stackoverflow.com/a/55522733/5721784). |
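A quick diagnostic sketch to confirm the shadowing (run it from the directory that contains the offending file):
```
import numpy
print(numpy.__file__)  # if this prints your own numpy.py, that script shadows the real library
```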
24,703,432 | I am attempting to catch messages by topic by using the message\_callback\_add() function in [this library](https://pypi.python.org/pypi/paho-mqtt#usage-and-api). Below is my entire module that I am using to deal with my mqtt subscribe and publishing needs. I have been able to test that the publish works, but I can't seem to catch any incoming messages. There are no warnings/errors of any kind and the `print("position")` statements are working for 1 and 2 only.
```
import sys
import os
import time
import Things
import paho.mqtt.client as paho
global mqttclient;
global broker;
global port;
broker = "10.64.16.199";
port = 1883;
mypid = os.getpid()
client_uniq = "pubclient_"+str(mypid)
mqttclient = paho.Client(client_uniq, False) #nocleanstart
mqttclient.connect(broker, port, 60)
mqttclient.subscribe("Commands/#")
def Pump_callback(client, userdata, message):
#print("Received message '" + str(message.payload) + "' on topic '"
# + message.topic + "' with QoS " + str(message.qos))
print("position 3")
Things.set_waterPumpSpeed(int(message.payload))
def Valve_callback(client, userdata, message):
#print("Received message '" + str(message.payload) + "' on topic '"
# + message.topic + "' with QoS " + str(message.qos))
print("position 4")
Things.set_valvePosition(int(message.payload))
mqttclient.message_callback_add("Commands/PumpSpeed", Pump_callback)
mqttclient.message_callback_add("Commands/ValvePosition", Valve_callback)
print("position 1")
mqttclient.loop_start()
print("position 2")
def pub(topic, value):
mqttclient.publish(topic, value, 0, True)
``` | 2014/07/11 | [
"https://Stackoverflow.com/questions/24703432",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2851048/"
] | I called `loop_start` in the wrong place.
I moved the call to right after the connect statement and it now works.
Here is the snippet:
```
client_uniq = "pubclient_"+str(mypid)
mqttclient = paho.Client(client_uniq, False) #nocleanstart
mqttclient.connect(broker, port, 60)
mqttclient.loop_start()
mqttclient.subscribe("FM_WaterPump/Commands/#")
```
The documentation on loop\_start alludes to calling `loop_start()` before or after connect, though it would be clearer if it said immediately before or after.
Snippet of the documentation:
>
> These functions implement a threaded interface to the network loop. Calling loop\_start() once, before or after connect\*(), runs a thread in the background to call loop() automatically. This frees up the main thread for other work that may be blocking. This call also handles reconnecting to the broker. Call loop\_stop() to stop the background thread.
>
>
> | `loop_start()` will return immediately, so your program will quit before it gets a chance to do anything.
You've also called `subscribe()` before `message_callback_add()` which doesn't make sense, although in this specific example it probably doesn't matter. |
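A minimal sketch of keeping the main thread alive so the background network loop can deliver messages (the broker address is taken from the question; the client id is illustrative):
```
import time
import paho.mqtt.client as paho

client = paho.Client("watcher")
client.connect("10.64.16.199", 1883, 60)
client.loop_start()                  # background thread handles the network traffic
client.subscribe("Commands/#")
try:
    while True:
        time.sleep(1)                # keep the main thread alive
except KeyboardInterrupt:
    client.loop_stop()
```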
23,190,348 | Has the alsaaudio library been ported to python3? I have this working on python 2.7 but not on python 3.
Is there another library for python 3 if the above cannot be used? | 2014/04/21 | [
"https://Stackoverflow.com/questions/23190348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/612242/"
] | I have compiled alsaaudio for python3 manually.
You can install it by following the steps given below.
1. Make sure that the **gcc, python3-dev, libasound2-dev** packages are installed on your machine (install them using synaptic if you are using Ubuntu).
2. Download and extract the following package
<http://sourceforge.net/projects/pyalsaaudio/files/pyalsaaudio-0.7.tar.gz/download>
3. Go to the extracted folder and execute the following commands (Execute the commands as root or use sudo)
```
python3 setup.py build
python3 setup.py install
```
HTH.. | It's now called pyalsaaudio.
For me `pip install pyalsaaudio` worked. |
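A quick import check after installing, assuming the build succeeded (`cards()` lists the ALSA sound cards):
```
import alsaaudio
print(alsaaudio.cards())  # e.g. ['PCH'] if the module built and imports correctly
```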
66,929,254 | Is there a library for interpreting python code within a python program?
Sample usage might look like this..
```
code = """
def hello():
return 'hello'
hello()
"""
output = Interpreter.run(code)
print(output)
```
which then outputs
`hello` | 2021/04/03 | [
"https://Stackoverflow.com/questions/66929254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12594122/"
] | found this example from grepper
```
the_code = '''
a = 1
b = 2
return_me = a + b
'''
loc = {}
exec(the_code, globals(), loc)
return_workaround = loc['return_me']
print(return_workaround)
```
apparently you can pass global and local scope into `exec`. In your use case, you would just use a named variable instead of returning. | You can use the `exec` function. You can't get the return value from the code variable. Instead you can print it there itself.
```
code = """
def hello():
print('hello')
hello()
"""
exec(code)
``` |
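A hypothetical `run`-style helper in the spirit of the question, built on `exec`; the `output` name is an assumed convention here, not a real API:
```
def run(code):
    scope = {}
    exec(code, scope)          # the snippet assigns its result to `output`
    return scope.get("output")

code = """
def hello():
    return 'hello'

output = hello()
"""
print(run(code))  # hello
```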
65,697,374 | So I am a beginner at python, and I was trying to install packages using pip. But any time I try to install I keep getting the error:
>
> ERROR: Could not install packages due to an EnvironmentError: [WinError 2] The system cannot find the file specified: 'c:\python38\Scripts\sqlformat.exe' -> 'c:\python38\Scripts\sqlformat.exe.deleteme'
>
>
>
How do I fix this? | 2021/01/13 | [
"https://Stackoverflow.com/questions/65697374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14996295/"
] | Try running the command line as administrator. The issue looks like it's about permissions. To run as administrator, type cmd in the search bar and right-click the Command Prompt icon. There you will find an option to run as administrator. Click the option and then try to install the package. | Looks like a permissions error. You might try starting the installation with admin rights or install the package only for your current user with:
```
pip install --user package
``` |
65,697,374 | So I am a beginner at python, and I was trying to install packages using pip. But any time I try to install I keep getting the error:
>
> ERROR: Could not install packages due to an EnvironmentError: [WinError 2] The system cannot find the file specified: 'c:\python38\Scripts\sqlformat.exe' -> 'c:\python38\Scripts\sqlformat.exe.deleteme'
>
>
>
How do I fix this? | 2021/01/13 | [
"https://Stackoverflow.com/questions/65697374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14996295/"
] | Try running the command line as administrator. The issue looks like it's about permissions. To run as administrator, type cmd in the search bar and right-click the Command Prompt icon. There you will find an option to run as administrator. Click the option and then try to install the package. | Looks like the issue is with permissions. Try running the same command in the terminal as an "administrator". Let me know if that fixes the issue. |
59,662,028 | I am trying to retrieve app-related information from the Google Play store using selenium and BeautifulSoup. When I try to retrieve the information, I get a WebDriverException error. I checked the chrome version and chromedriver version (both are compatible). Here is the weblink causing the issue, the code to retrieve the information, and the error thrown by the code:
Link: <https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true>
Code:
```
driver = webdriver.Chrome('path')
driver.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
```
I am getting an error on the third line. Here are the relevant parts of the error message:
Start of the error message:
```
---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-280-4e8a1ef443f2> in <module>()
----> 1 soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in page_source(self)
676 driver.page_source
677 """
--> 678 return self.execute(Command.GET_PAGE_SOURCE)['value']
679
680 def close(self):
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
318 response = self.command_executor.execute(driver_command, params)
319 if response:
--> 320 self.error_handler.check_response(response)
321 response['value'] = self._unwrap_value(
322 response.get('value', None))
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
240 alert_text = value['alert'].get('text')
241 raise exception_class(message, screen, stacktrace, alert_text)
--> 242 raise exception_class(message, screen, stacktrace)
243
244 def _value_or_default(self, obj, key, default):
WebDriverException: Message: unknown error: bad inspector message:
```
End of the error message:
```
(Session info: chrome=79.0.3945.117)
```
Could anyone guide me how to fix the issue? | 2020/01/09 | [
"https://Stackoverflow.com/questions/59662028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2293224/"
] | I think this is due to the chromedriver encoding problem.
See <https://bugs.chromium.org/p/chromium/issues/detail?id=723592#c9> for additional information about this bug.
Instead of selenium you can get page source using BeautifulSoup as follows.
```
import requests
from bs4 import BeautifulSoup
r = requests.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
soup = BeautifulSoup(r.content, "lxml")
print(soup)
``` | try this
```
driver = webdriver.Chrome('path')
driver.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
# retrieve data you want, for example
review_user_list = driver.find_elements_by_class_name("X43Kjb")
``` |
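A self-contained sketch of that approach (the `X43Kjb` class name only matched the Play Store markup at the time, so treat it as illustrative):
```
from selenium import webdriver

driver = webdriver.Chrome('path')  # path to chromedriver, as in the question
driver.get('https://play.google.com/store/apps/details'
           '?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
for el in driver.find_elements_by_class_name("X43Kjb"):
    print(el.text)                 # reviewer names, given the markup of the time
driver.quit()
```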
59,662,028 | I am trying to retrieve app-related information from the Google Play store using selenium and BeautifulSoup. When I try to retrieve the information, I get a WebDriverException error. I checked the chrome version and chromedriver version (both are compatible). Here is the weblink causing the issue, the code to retrieve the information, and the error thrown by the code:
Link: <https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true>
Code:
```
driver = webdriver.Chrome('path')
driver.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
```
I am getting an error on the third line. Here are the relevant parts of the error message:
Start of the error message:
```
---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-280-4e8a1ef443f2> in <module>()
----> 1 soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in page_source(self)
676 driver.page_source
677 """
--> 678 return self.execute(Command.GET_PAGE_SOURCE)['value']
679
680 def close(self):
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
318 response = self.command_executor.execute(driver_command, params)
319 if response:
--> 320 self.error_handler.check_response(response)
321 response['value'] = self._unwrap_value(
322 response.get('value', None))
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
240 alert_text = value['alert'].get('text')
241 raise exception_class(message, screen, stacktrace, alert_text)
--> 242 raise exception_class(message, screen, stacktrace)
243
244 def _value_or_default(self, obj, key, default):
WebDriverException: Message: unknown error: bad inspector message:
```
End of the error message:
```
(Session info: chrome=79.0.3945.117)
```
Could anyone guide me how to fix the issue? | 2020/01/09 | [
"https://Stackoverflow.com/questions/59662028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2293224/"
] | try this
```
driver = webdriver.Chrome('path')
driver.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
# retrieve data you want, for example
review_user_list = driver.find_elements_by_class_name("X43Kjb")
``` | You can use [urllib](/questions/tagged/urllib "show questions tagged 'urllib'") with [beautifulsoup](/questions/tagged/beautifulsoup "show questions tagged 'beautifulsoup'") as follows:
* Code Block:
```
# -*- coding: UTF-8 -*
from bs4 import BeautifulSoup
from urllib.request import urlopen as uReq
url = "https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true"
uClient = uReq(url)
page_html = uClient.read()
uClient.close()
page_soup = BeautifulSoup(page_html, "html.parser")
print(page_soup)
```
* Console Output:
```
<!DOCTYPE doctype html>
<html dir="ltr" lang="en"><head><base href="https://play.google.com/"/><meta content="origin" name="referrer"/><link href="/opensearch.xml" rel="search" title="Google Play" type="application/opensearchdescription+xml"/>
.
.
.
<style nonce="96JYwPKBYhVDb+ABipwCww">@font-face{font-family:'Roboto';font-style:normal;font-weight:100;src:local('Roboto Thin'),local('Roboto-Thin'),url(//fonts.gstatic.com/s/roboto/v18/KFOkCnqEu92Fr1MmgVxIIzc.ttf)format('truetype');}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;src:local('Roboto Light'),local('Roboto-Light'),url(//fonts.gstatic.com/s/roboto/v18/KFOlCnqEu92Fr1MmSU5fBBc9.ttf)format('truetype');}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;src:local('Roboto Regular'),local('Roboto-Regular'),url(//fonts.gstatic.com/s/roboto/v18/KFOmCnqEu92Fr1Mu4mxP.ttf)format('truetype');}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;src:local('Roboto Medium'),local('Roboto-Medium'),url(//fonts.gstatic.com/s/roboto/v18/KFOlCnqEu92Fr1MmEU9fBBc9.ttf)format('truetype');}@font-face{font-family:'Roboto';font-style:normal;font-weight:700;src:local('Roboto Bold'),local('Roboto-Bold'),url(//fonts.gstatic.com/s/roboto/v18/KFOlCnqEu92Fr1MmWUlfBBc9.ttf)format('truetype');}@font-face{font-family:'Material Icons Extended';font-style:normal;font-weight:400;src:url(//fonts.gstatic.com/s/materialiconsextended/v50/kJEjBvgX7BgnkSrUwT8UnLVc38YydejYY-oE_LvM.ttf)format('truetype');}.material-icons-extended{font-family:'Material Icons Extended';font-weight:normal;font-style:normal;font-size:24px;line-height:1;letter-spacing:normal;text-transform:none;display:inline-block;white-space:nowrap;word-wrap:normal;direction:ltr;}@font-face{font-family:'Product Sans';font-style:normal;font-weight:400;src:local('Product Sans'),local('ProductSans-Regular'),url(//fonts.gstatic.com/s/productsans/v9/pxiDypQkot1TnFhsFMOfGShVF9eL.ttf)format('truetype');}</style><script nonce="96JYwPKBYhVDb+ABipwCww">(function(){/*
Copyright The Closure Library Authors.
SPDX-License-Identifier: Apache-2.0
*/
``` |
59,662,028 | I am trying to retrieve app-related information from the Google Play store using selenium and BeautifulSoup. When I try to retrieve the information, I get a WebDriverException error. I checked the chrome version and chromedriver version (both are compatible). Here is the weblink causing the issue, the code to retrieve the information, and the error thrown by the code:
Link: <https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true>
Code:
```
driver = webdriver.Chrome('path')
driver.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
```
I am getting an error on the third line. Here are the relevant parts of the error message:
Start of the error message:
```
---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-280-4e8a1ef443f2> in <module>()
----> 1 soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in page_source(self)
676 driver.page_source
677 """
--> 678 return self.execute(Command.GET_PAGE_SOURCE)['value']
679
680 def close(self):
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
318 response = self.command_executor.execute(driver_command, params)
319 if response:
--> 320 self.error_handler.check_response(response)
321 response['value'] = self._unwrap_value(
322 response.get('value', None))
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
240 alert_text = value['alert'].get('text')
241 raise exception_class(message, screen, stacktrace, alert_text)
--> 242 raise exception_class(message, screen, stacktrace)
243
244 def _value_or_default(self, obj, key, default):
WebDriverException: Message: unknown error: bad inspector message:
```
End of the error message:
```
(Session info: chrome=79.0.3945.117)
```
Could anyone guide me how to fix the issue? | 2020/01/09 | [
"https://Stackoverflow.com/questions/59662028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2293224/"
] | I think this is due to the chromedriver encoding problem.
See <https://bugs.chromium.org/p/chromium/issues/detail?id=723592#c9> for additional information about this bug.
Instead of selenium you can get page source using BeautifulSoup as follows.
```
import requests
from bs4 import BeautifulSoup
r = requests.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
soup = BeautifulSoup(r.content, "lxml")
print(soup)
``` | You can use [urllib](/questions/tagged/urllib "show questions tagged 'urllib'") with [beautifulsoup](/questions/tagged/beautifulsoup "show questions tagged 'beautifulsoup'") as follows:
* Code Block:
```
# -*- coding: UTF-8 -*
from bs4 import BeautifulSoup
from urllib.request import urlopen as uReq
url = "https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true"
uClient = uReq(url)
page_html = uClient.read()
uClient.close()
page_soup = BeautifulSoup(page_html, "html.parser")
print(page_soup)
```
* Console Output:
```
<!DOCTYPE doctype html>
<html dir="ltr" lang="en"><head><base href="https://play.google.com/"/><meta content="origin" name="referrer"/><link href="/opensearch.xml" rel="search" title="Google Play" type="application/opensearchdescription+xml"/>
.
.
.
<style nonce="96JYwPKBYhVDb+ABipwCww">@font-face{font-family:'Roboto';font-style:normal;font-weight:100;src:local('Roboto Thin'),local('Roboto-Thin'),url(//fonts.gstatic.com/s/roboto/v18/KFOkCnqEu92Fr1MmgVxIIzc.ttf)format('truetype');}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;src:local('Roboto Light'),local('Roboto-Light'),url(//fonts.gstatic.com/s/roboto/v18/KFOlCnqEu92Fr1MmSU5fBBc9.ttf)format('truetype');}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;src:local('Roboto Regular'),local('Roboto-Regular'),url(//fonts.gstatic.com/s/roboto/v18/KFOmCnqEu92Fr1Mu4mxP.ttf)format('truetype');}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;src:local('Roboto Medium'),local('Roboto-Medium'),url(//fonts.gstatic.com/s/roboto/v18/KFOlCnqEu92Fr1MmEU9fBBc9.ttf)format('truetype');}@font-face{font-family:'Roboto';font-style:normal;font-weight:700;src:local('Roboto Bold'),local('Roboto-Bold'),url(//fonts.gstatic.com/s/roboto/v18/KFOlCnqEu92Fr1MmWUlfBBc9.ttf)format('truetype');}@font-face{font-family:'Material Icons Extended';font-style:normal;font-weight:400;src:url(//fonts.gstatic.com/s/materialiconsextended/v50/kJEjBvgX7BgnkSrUwT8UnLVc38YydejYY-oE_LvM.ttf)format('truetype');}.material-icons-extended{font-family:'Material Icons Extended';font-weight:normal;font-style:normal;font-size:24px;line-height:1;letter-spacing:normal;text-transform:none;display:inline-block;white-space:nowrap;word-wrap:normal;direction:ltr;}@font-face{font-family:'Product Sans';font-style:normal;font-weight:400;src:local('Product Sans'),local('ProductSans-Regular'),url(//fonts.gstatic.com/s/productsans/v9/pxiDypQkot1TnFhsFMOfGShVF9eL.ttf)format('truetype');}</style><script nonce="96JYwPKBYhVDb+ABipwCww">(function(){/*
Copyright The Closure Library Authors.
SPDX-License-Identifier: Apache-2.0
*/
``` |
36,781,198 | I'm sending an integer from python using pySerial.
```
import serial
ser = serial.Serial('/dev/cu.usbmodem1421', 9600);
ser.write(b'5');
```
When i compile,the receiver LED on arduino blinks.However I want to cross check if the integer is received by arduino. I cannot use Serial.println() because the port is busy. I cannot run serial monitor first on arduino and then run the python script because the port is busy. How can i achieve this? | 2016/04/21 | [
"https://Stackoverflow.com/questions/36781198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6237876/"
] | A simple way to do it using the standard library :
```
import java.util.Scanner;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
public class Example {
private static final int POOL_SIZE = 5;
private static final ExecutorService WORKERS = new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 1, MILLISECONDS, new LinkedBlockingDeque<>());
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
while (true) {
System.out.print("> ");
String cmd = sc.nextLine();
switch (cmd) {
case "process":
WORKERS.submit(newExpensiveTask());
break;
case "kill":
System.exit(0);
default:
System.err.println("Unrecognized command: " + cmd);
}
}
}
private static Runnable newExpensiveTask() {
return () -> {
try {
Thread.sleep(10000);
System.out.println("Done processing");
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
};
}
}
```
This code lets you run heavy tasks asynchronously while the user terminal remains available and reactive. | I would recommend reading up on specific tutorials, such as the Java Language Tutorial (available as a book - at least, it used to be - as well as on the Java website <https://docs.oracle.com/javase/tutorial/essential/concurrency/>)
However as others have cautioned, getting into threading is a challenge and requires good knowledge of the language quite apart from the aspects of multithreading and synchronization. I'd be tempted to recommend you read some of the other tutorials - working through IO and so on - first of all. |
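Back on the pySerial question itself: one common cross-check, assuming the Arduino sketch echoes each received byte back with `Serial.println`, is to read that echo over the same connection; a minimal sketch:
```
import serial
import time

ser = serial.Serial('/dev/cu.usbmodem1421', 9600, timeout=2)
time.sleep(2)            # opening the port resets most Arduinos; give it a moment
ser.write(b'5')
echo = ser.readline()    # only works if the sketch does Serial.println(...) back
print(echo)
ser.close()
```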
34,685,486 | After installing my python project with `setup.py` and executing it in the terminal I get the following error:
```
...
from ui.mainwindow import MainWindow
File "/usr/local/lib/python2.7/dist-packages/EpiPy-0.1-py2.7.egg/epipy/ui/mainwindow.py", line 9, in <module>
from model.sir import SIR
ImportError: No module named model.sir
```
...
We assume we have the following structure of our project `cookies`:
```
.
βββ setup.py
βββ src
βββ a
βΒ Β βββ aa.py
βΒ Β βββ __init__.py
βββ b
βΒ Β βββ bb.py
βΒ Β βββ __init__.py
βββ __init__.py
βββ main.py
```
File: `cookies/src/main.py`
```
from a import aa
def main():
print aa.get_aa()
```
File `cookies/src/a/aa.py`
```
from b import bb
def get_aa():
return bb.get_bb()
```
File: `cookies/src/b/bb.py`
```
def get_bb():
return 'bb'
```
File: `cookies/setup.py`
```
#!/usr/bin/env python
import os
import sys
try:
from setuptools import setup, find_packages
except ImportError:
raise ImportError("Install setup tools")
setup(
name = "cookies",
version = "0.1",
author = "sam",
description = ("test"),
license = "MIT",
keywords = "test",
url = "asd@ads.asd",
packages=find_packages(),
classifiers=[
"""\
Development Status :: 3 - Alpha
Operating System :: Unix
"""
],
entry_points = {'console_scripts': ['cookies = src.main:main',],},
)
```
If I install `cookies` as `root` with `$ python setup.py install` and execute `cookies` I get the following error: `ImportError: No module named b`. How can I solve this problem? | 2016/01/08 | [
"https://Stackoverflow.com/questions/34685486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2609713/"
] | What I would do is to use absolute imports everywhere (from epipy import ...). That's what is recommended in [PEP 328](https://docs.python.org/2.5/whatsnew/pep-328.html).
Your imports won't work anymore if the project is not installed. You can add the project directory to your PYTHONPATH, install the package, or, what I do when I'm in the middle of developing packages, [install with the 'editable' option](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs): `pip install -e`
In editable mode, instead of installing the package code in your python distribution, a pointer to your project is created. That way it is importable, but the package uses the live code in development.
Example:
I am developing a package in /home/jbchouinard/mypackage. Inside my code, I use absolute imports, e.g. `from mypackage import subpackage`.
If I install with `pip install`, the package will be installed in my distribution, let's say in /usr/lib/python2.7/dist-packages. If I make further changes to the package, I have to upgrade or uninstall/reinstall the package. This can get tedious quickly.
If I install with `pip install -e`, a pointer (a .pth file) is created in /usr/lib/python2.7/dist-packages towards /home/jbchouinard/mypackage. I can `import mypackage` as if it was installed normally, but the code used is the code at /home/jbchouinard/mypackage; any change is reflected immediately. | I had a similar issue with one of my projects.
I've been able to solve my issue by adding this line at the start of my module (before all imports besides sys & os, which are required for this insert), so that the parent folder is added to the search path and its packages become visible (it turns out Python doesn't do that by default):
```
import sys
import os
sys.path.insert(1, os.path.join(sys.path[0], '..'))
# all other imports go here...
```
This way, your main.py will include the parent folder (epipy).
Give that a try, hope this helps :-) |
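A sketch of what the absolute imports suggested above would look like for the `cookies` layout, assuming `find_packages()` picks up `src`, `src.a` and `src.b`:
```
# cookies/src/main.py
from src.a import aa

def main():
    print aa.get_aa()

# cookies/src/a/aa.py would likewise use
# from src.b import bb
```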
42,968,543 | I have a file displayed as follows. I want to delete the lines starting from `>rev_` up to the next line beginning with `>`, without deleting that `>` line. I want Python code to do this.
input file:
```
>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>rev_name1 # delete from here
jfdsfjdlsgrgagrehdsah
fsagasfd # until here
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>rev_name2 # delete from here
jflsajgljkop
ljljasffdsa # until here
>name3
.......
```
output file:
```
>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>name3
.......
```
My code is as follows, but it does not work.
```
mark = {}
with open("human.fasta") as inf, open("human_norev.fasta",'w') as outf:
    for line in inf:
        if line[0:5] == '>rev_':
            mark[line] = 1
        elif line[0] == '>':
            mark[line] = 0
        if mark[line] == 0:
            outf.write(line)
``` | 2017/03/23 | [
"https://Stackoverflow.com/questions/42968543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4672728/"
] | I'd recommend at least trying to come up with a solution on your own before asking here. Ask yourself what different ways could lead to a solution: will parsing character by character, line by line, or with a regex be sufficient for this problem?
But in this case, since the markers that determine when to start and stop removing lines always sit at the start of a line, it made sense to just go line by line and check the first few characters.
```
i = """>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>rev_name1 # delete from here
jfdsfjdlsgrgagrehdsah
fsagasfd # until here
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>rev_name2 # delete from here"""
final_string = ""
keep_line = True
for line in i.split('\n'):
if line[0:5] == ">rev_":
keep_line = False
elif line[0] == '>':
keep_line = True
if keep_line:
final_string += line + '\n'
print(final_string)
```
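The same flag logic applied directly to the files named in the question would look roughly like this (a sketch; it assumes plain line-by-line processing of `human.fasta` is acceptable):
```
# Copy every line except the >rev_ blocks from human.fasta to human_norev.fasta
keep_line = True
with open("human.fasta") as inf, open("human_norev.fasta", "w") as outf:
    for line in inf:
        if line.startswith(">rev_"):
            keep_line = False   # entering a block to drop
        elif line.startswith(">"):
            keep_line = True    # a normal header ends the dropped block
        if keep_line:
            outf.write(line)
```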
If you wanted the lines to just go directly to the console, you could remove the print at the end and replace `final_string += line + '\n'` with a `print(line)`. | The code can also be written as follows:
```
with open("human.fasta") as inf, open("human_norev.fasta",'w') as outf:
del_start = False
for line in inf:
if line.startswith('>rev_'):
del_start = True
elif line.startswith('>'):
del_start = False
if not del_start:
outf.write(line)
``` |
42,968,543 | I have a file displayed as follows. I want to delete the lines starting from `>rev_` up to the next line with `>`, without deleting that `>` line. I want Python code that does this.
input file:
```
>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>rev_name1 # delete from here
jfdsfjdlsgrgagrehdsah
fsagasfd # until here
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>rev_name2 # delete from here
jflsajgljkop
ljljasffdsa # until here
>name3
.......
```
output file:
```
>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>name3
.......
```
My code is as follows, but it does not work.
```
mark = {}
with open("human.fasta") as inf, open("human_norev.fasta",'w') as outf:
    for line in inf:
        if line[0:5] == '>rev_':
            mark[line] = 1
        elif line[0] == '>':
            mark[line] = 0
        if mark[line] == 0:
            outf.write(line)
``` | 2017/03/23 | [
"https://Stackoverflow.com/questions/42968543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4672728/"
] | Your code isn't working because (among other things) you're not marking lines that start with neither `>rev_` nor `>`. Also, you'd need another loop to output all the lines that have been marked for output; right now you're only outputting the last one.
Alec's answer is nice, but I'll suggest a different approach using a regular expression:
```
import re
regex = re.compile(r">rev_[^>]*")
with open("human.fasta") as inf, open("human_norev.fasta", "w") as outf:
    outf.write(regex.sub("", inf.read()))
```
Since `[^>]*` also matches newlines, this pattern consumes everything from `>rev_` up to, but not including, the next `>` header. Test the regex [live on regex101.com](https://regex101.com/r/hA2ZTT/1). | The code can also be written as follows:
```
with open("human.fasta") as inf, open("human_norev.fasta",'w') as outf:
del_start = False
for line in inf:
if line.startswith('>rev_'):
del_start = True
elif line.startswith('>'):
del_start = False
if not del_start:
outf.write(line)
``` |
49,396,554 | Okay, so I have the following issue. I have a Mac, so the default Python 2.7 is installed for the OS's use. However, I also have Python 3.6 installed, and I want to install a package using Pip that is only compatible with Python version 3. How can I install a package with Python 3 and not 2? | 2018/03/21 | [
"https://Stackoverflow.com/questions/49396554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9525828/"
] | To install the package for Python 3, use
```
pip3 install package
```
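If `pip3` isn't on your PATH, an equivalent invocation (assuming your Python 3 interpreter is available as `python3`) is:
```
python3 -m pip install package
```
This guarantees the package is installed for Python 3 rather than the system Python 2.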
And to run a file with Python 3:
```
python3 file.py
``` | Why do you ask such a thing here?
<https://docs.python.org/3/using/mac.html>
>
> 4.3. Installing Additional Python Packages
> There are several methods to install additional Python packages:
>
>
> Packages can be installed via the standard Python distutils mode (python setup.py install).
> Many packages can also be installed via the setuptools extension or pip wrapper, see <https://pip.pypa.io/>.
>
>
>
<https://pip.pypa.io/en/stable/user_guide/#installing-packages>
>
> Installing Packages
> pip supports installing from PyPI, version control, local projects, and directly from distribution files.
>
>
> The most common scenario is to install from PyPI using Requirement Specifiers
>
>
> `$ pip install SomePackage` # latest version
> `$ pip install SomePackage==1.0.4` # specific version
> `$ pip install 'SomePackage>=1.0.4'` # minimum version
> For more information and examples, see the pip install reference.
>
>
> |
49,396,554 | Okay, so I have the following issue. I have a Mac, so the default Python 2.7 is installed for the OS's use. However, I also have Python 3.6 installed, and I want to install a package using Pip that is only compatible with Python version 3. How can I install a package with Python 3 and not 2? | 2018/03/21 | [
"https://Stackoverflow.com/questions/49396554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9525828/"
] | To install the package for Python 3, use
```
pip3 install package
```
and to run a file with Python 3:
```
python3 file.py
``` | Just a suggestion: before you run any command that you don't know what it is, please use `which your_cmd` or `whereis your_cmd` to find its path. |
57,754,497 | So I think tensorflow.keras and the independent keras packages are in conflict and I can't load my model, which I have made with transfer learning.
Import in the CNN ipynb:
```
!pip install tensorflow-gpu==2.0.0b1
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
```
Loading this pretrained model
```
base_model = keras.applications.xception.Xception(weights="imagenet",
                                                  include_top=False)
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
output = keras.layers.Dense(n_classes, activation="softmax")(avg)
model = keras.models.Model(inputs=base_model.input, outputs=output)
```
Saving with:
```
model.save('Leavesnet Model 2.h5')
```
Then in the new ipynb for the already trained model (the imports are the same as in the CNN ipynb):
```
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
```
I get the error:
```
AttributeError                            Traceback (most recent call last)
<ipython-input-4-77ca5a1f5f24> in <module>()
      2 from keras.models import load_model
      3
----> 4 model =load_model('Leavesnet Model.h5')

13 frames
/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in placeholder(shape, ndim, dtype, sparse, name)
    539             x = tf.sparse_placeholder(dtype, shape=shape, name=name)
    540         else:
--> 541             x = tf.placeholder(dtype, shape=shape, name=name)
    542             x._keras_shape = shape
    543             x._uses_learning_phase = False

AttributeError: module 'tensorflow' has no attribute 'placeholder'
```
I think there might be a conflict between tf.keras and the independent keras; can someone help me out? | 2019/09/02 | [
"https://Stackoverflow.com/questions/57754497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10780811/"
] | Yes, there is a conflict between the `tf.keras` and `keras` packages: you trained the model using `tf.keras`, but then you are loading it with the `keras` package. That is not supported; you should use only one version of this package.
The specific problem is that you are using TensorFlow 2.0, but the standalone `keras` package does not support TensorFlow 2.0 yet. | Try to replace
```
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
```
with
`model = tf.keras.models.load_model(model_path)`
It works for me, and I am using:
tensorflow version: 2.0.0
keras version: 2.3.1
You can check the following:
<https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model?version=stable> |
57,754,497 | So I think tensorflow.keras and the independent keras packages are in conflict and I can't load my model, which I have made with transfer learning.
Import in the CNN ipynb:
```
!pip install tensorflow-gpu==2.0.0b1
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
```
Loading this pretrained model
```
base_model = keras.applications.xception.Xception(weights="imagenet",
                                                  include_top=False)
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
output = keras.layers.Dense(n_classes, activation="softmax")(avg)
model = keras.models.Model(inputs=base_model.input, outputs=output)
```
Saving with:
```
model.save('Leavesnet Model 2.h5')
```
Then in the new ipynb for the already trained model (the imports are the same as in the CNN ipynb):
```
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
```
I get the error:
```
AttributeError                            Traceback (most recent call last)
<ipython-input-4-77ca5a1f5f24> in <module>()
      2 from keras.models import load_model
      3
----> 4 model =load_model('Leavesnet Model.h5')

13 frames
/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in placeholder(shape, ndim, dtype, sparse, name)
    539             x = tf.sparse_placeholder(dtype, shape=shape, name=name)
    540         else:
--> 541             x = tf.placeholder(dtype, shape=shape, name=name)
    542             x._keras_shape = shape
    543             x._uses_learning_phase = False

AttributeError: module 'tensorflow' has no attribute 'placeholder'
```
I think there might be a conflict between tf.keras and the independent keras; can someone help me out? | 2019/09/02 | [
"https://Stackoverflow.com/questions/57754497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10780811/"
] | Yes, there is a conflict between the `tf.keras` and `keras` packages: you trained the model using `tf.keras`, but then you are loading it with the `keras` package. That is not supported; you should use only one version of this package.
The specific problem is that you are using TensorFlow 2.0, but the standalone `keras` package does not support TensorFlow 2.0 yet. | I think downgrading your **keras** or **tensorflow** is not that suitable, because you would need to retrain your model against the changed dependencies. Why not try to load the weights instead of loading the model?
Here is a piece of code:
```
import tensorflow as tf
# import added here (not in the original answer): MobileNetV2 lives in tf.keras.applications
from tensorflow.keras.applications import MobileNetV2

{your code here}

# save the weights
model.save_weights('model_checkpoint')

# initialise the model again (example - MobileNetV2); `inputs` is your input tensor
encoder = MobileNetV2(input_tensor=inputs, weights="imagenet", include_top=False, alpha=0.35)

# load the weights
encoder.load_weights('model_checkpoint')
```
and you are good to go |
57,754,497 | So I think tensorflow.keras and the independent keras packages are in conflict and I can't load my model, which I have made with transfer learning.
Import in the CNN ipynb:
```
!pip install tensorflow-gpu==2.0.0b1
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
```
Loading this pretrained model
```
base_model = keras.applications.xception.Xception(weights="imagenet",
                                                  include_top=False)
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
output = keras.layers.Dense(n_classes, activation="softmax")(avg)
model = keras.models.Model(inputs=base_model.input, outputs=output)
```
Saving with:
```
model.save('Leavesnet Model 2.h5')
```
Then in the new ipynb for the already trained model (the imports are the same as in the CNN ipynb):
```
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
```
I get the error:
```
AttributeError                            Traceback (most recent call last)
<ipython-input-4-77ca5a1f5f24> in <module>()
      2 from keras.models import load_model
      3
----> 4 model =load_model('Leavesnet Model.h5')

13 frames
/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in placeholder(shape, ndim, dtype, sparse, name)
    539             x = tf.sparse_placeholder(dtype, shape=shape, name=name)
    540         else:
--> 541             x = tf.placeholder(dtype, shape=shape, name=name)
    542             x._keras_shape = shape
    543             x._uses_learning_phase = False

AttributeError: module 'tensorflow' has no attribute 'placeholder'
```
I think there might be a conflict between tf.keras and the independent keras; can someone help me out? | 2019/09/02 | [
"https://Stackoverflow.com/questions/57754497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10780811/"
] | Try to replace
```
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
```
with
`model = tf.keras.models.load_model(model_path)`
It works for me, and I am using:
tensorflow version: 2.0.0
keras version: 2.3.1
You can check the following:
<https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model?version=stable> | I think downgrading your **keras** or **tensorflow** is not that suitable, because you would need to retrain your model against the changed dependencies. Why not try to load the weights instead of loading the model?
Here is a piece of code:
```
import tensorflow as tf
# import added here (not in the original answer): MobileNetV2 lives in tf.keras.applications
from tensorflow.keras.applications import MobileNetV2

{your code here}

# save the weights
model.save_weights('model_checkpoint')

# initialise the model again (example - MobileNetV2); `inputs` is your input tensor
encoder = MobileNetV2(input_tensor=inputs, weights="imagenet", include_top=False, alpha=0.35)

# load the weights
encoder.load_weights('model_checkpoint')
```
and you are good to go |
66,196,791 | So take a triangle formatted as a nested list.
e.g.
```
t = [[5],[3, 6],[8, 14, 7],[4, 9, 2, 0],[9, 11, 5, 2, 9],[1, 3, 8, 5, 3, 2]]
```
and define a path to be the sum of elements, one from each row of the triangle, moving 1 to the left or right as you go down rows. Or, in Python terms: the second index either stays the same or we add 1 to it.
```
a_path = [t[0][0], t[1][1], t[2][1], t[3][1], t[4][2], t[5][3]]      # = [5, 6, 14, 9, 5, 5] is valid
not_a_path = [t[0][0], t[1][0], t[2][2], t[3][1], t[4][0], t[5][4]]  # = [5, 3, 7, 9, 9, 3] is not valid
```
For a triangle as small as this example, this can obviously be done via brute force.
I wrote a function like that; for a 20 row triangle it takes about 1 minute.
I need a function that can do this for a 100 row triangle.
I found this code on <https://rosettacode.org/wiki/Maximum_triangle_path_sum#zkl> and it agrees with all the results my terrible function outputs for the small triangles I've tried, and using %time in the console it can do the 100 line triangle in what reports as 0 ns, so it is relatively quick.
```
from functools import reduce  # needed on Python 3; not part of the snippet as found

def maxPathSum(rows):
    return reduce(
        lambda xs, ys: [
            a + max(b, c) for (a, b, c) in zip(ys, xs, xs[1:])
        ],
        reversed(rows[:-1]), rows[-1]
    )
```
So I started taking bits of this, and using print statements and the console to work out what it was doing. I get that `reversed(rows[:-1]), rows[-1]` is reversing the triangle so that we can iterate from all possible final values on the last row through the sums of their possible paths to get to that value, and that as a,b,c iterate: a is a number from the bottom row, b is the second from bottom row, c is the third from bottom row. And as they iterate I think `a + max(b,c)` seems to sum a with the greatest number on b or c, but when I try to find the max of either two lists or a nested list in the console the list returned seems completely arbitrary.
```
ys = t[-1]
xs = list(reversed(t[:-1]))
for (a, b, c) in zip(ys, xs, xs[1:]):
    print(b)
    print(c)
    print(max(b,c))
    print("")
```
prints
```
[9, 11, 5, 2, 9]
[4, 9, 2, 0]
[9, 11, 5, 2, 9]
[4, 9, 2, 0]
[8, 14, 7]
[8, 14, 7]
[8, 14, 7]
[3, 6]
[8, 14, 7]
[3, 6]
[5]
[5]
```
If max(b,c) returned the list containing max(max(b),max(c)), then b = [3, 6], c = [5] would return b, so it's not that. If max(b,c) returned the list with the greatest sum, max(sum(b),sum(c)), then the same example contradicts it. It doesn't return the list containing the minimum value or the one with the greatest mean, so my only guess is that the fact that I set `xs = list(reversed(t[:-1]))` is the problem, and that it works fine as an iterator inside the lambda function but not in the console.
Also, trying to compute `a + max(b,c)` gives me this error, which makes sense.
```
TypeError: unsupported operand type(s) for +: 'int' and 'list'
```
My best guess is again that the different definition of xs as a list is the problem. If true, I would like to know how this all works in the context of being iterators in the lambda function. I think I get what reduce() and zip() are doing, so mostly it's just the lambda function that's confusing me.
Thanks in advance for any help | 2021/02/14 | [
"https://Stackoverflow.com/questions/66196791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15208320/"
] | We can simplify the expression a bit by including all the rows in the second argument to reduce; there's no reason to pass the last row as the third parameter (the starting value) of `reduce`.
Then, it really helps to give your variables meaningful names, which the original code badly fails to do.
So, this becomes:
```
from functools import reduce
def maxPathSum(rows):
return reduce(
lambda sums, upper_row: [cell + max(sum_left, sum_right)
for (cell, sum_left, sum_right)
in zip(upper_row, sums, sums[1:])],
reversed(rows)
)
```
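For the triangle from the question, a quick sanity check of this version (assuming `t` is defined as in the question) returns the best total as a one-element list:
```
t = [[5], [3, 6], [8, 14, 7], [4, 9, 2, 0], [9, 11, 5, 2, 9], [1, 3, 8, 5, 3, 2]]
print(maxPathSum(t))  # -> [53]
```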
On the first iteration, `sums` will be the last row, and `upper_row` the one over it.
The lambda calculates the best possible sums by adding each value of the upper row to the largest of the two sums beneath it.
It zips the upper row with the sums (the last sum won't be used, as there is one too many) and with the sums shifted by one position. So, zip provides us with a triplet: the value from the upper row (`cell`), the sum underneath to its left (`sum_left`), and the sum underneath to its right (`sum_right`). The best possible sum at this point is our current cell plus the largest of these sums.
The lambda returns this new row of sums, which will be used as the first parameter of reduce (`sums`) on the next iteration, while `upper_row` becomes the next row in `reversed(rows)`.
In the end, `reduce` returns the last row of sums, which contains only one value, our best possible total:
```
[53]
``` | You can spell out the lambda function so that it can print. Does this help you understand?
```
from functools import reduce  # needed on Python 3; not in the original snippet

t = [[5],[3, 6],[8, 14, 7],[4, 9, 2, 0],[9, 11, 5, 2, 9],[1, 3, 8, 5, 3, 2]]

def g(xs, ys):
    ans = [a + max(b, c) for (a, b, c) in zip(ys, xs, xs[1:])]
    print(ans)
    return ans

def maxPathSum(rows):
    return reduce(
        g,
        reversed(rows[:-1]), rows[-1]
    )

maxPathSum(t)
``` |
2,291,176 | I need to arrange some kind of encryption for generating user-specific links. Users will be clicking this link, and at some other view the related link with the encrypted string will be decrypted and the result will be returned.
For this, I need some kind of encryption function that consumes a number (or a string) that is the primary key of my selected item bound to the user account, also consumes some kind of seed, and generates an encrypted string that will be decrypted at some other page.
So, something like this:
```
my_items_pk = 36  # primary key of an item
seed = "rsdjk324j23423j4j2"  # some string for crypting
encrypted_string = encrypt(my_items_pk, seed)
# generates some crypted string such as "dsaj2j213jasas452k41k"

# and at another page:
decrypt_input = decrypt(encrypted_string, seed)
print decrypt_input
# gives 36
```
I want my "seed" to be some kind of primary variable (not some class) for this purpose (ie some number or string).
How can I achieve this under python and django ? | 2010/02/18 | [
"https://Stackoverflow.com/questions/2291176",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/151937/"
] | There are no encryption algorithms, per se, built in to Python. However, you might want to look at the [Python Cryptography Toolkit](http://www.dlitz.net/software/pycrypto/) (PyCrypt). I've only tinkered with it, but it's referenced in Python's documentation on [cryptographic services](http://docs.python.org/library/crypto.html). Here's an example of how you could encrypt a string with AES using PyCrypt:
```
from Crypto.Cipher import AES
from urllib import quote

# Note that for AES the key length must be either 16, 24, or 32 bytes
encryption_obj = AES.new('abcdefghijklmnop')

plain = "Testing"
# The plaintext must be a multiple of 16 bytes (for AES), so here we pad it
# with spaces if necessary.
mismatch = len(plain) % 16
if mismatch != 0:
    padding = (16 - mismatch) * ' '
    plain += padding

ciph = encryption_obj.encrypt(plain)
# Finally, to make the encrypted string safe to use in a URL we quote it
quoted_ciph = quote(ciph)
```
You would then make this part of your URL, perhaps as part of a GET request.
To decrypt, just reverse the process; assuming that `encryption_obj` is created as above, and that you've retrieved the relevant part of the URL, this would do it:
```
from urllib import unquote
# We've already created encryption_obj as shown above
ciph = unquote(quoted_ciph)
plain = encryption_obj.decrypt(ciph)
```
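Since the plaintext was padded with spaces before encryption, you would strip that padding back off after decrypting (a small follow-up to the sketch above, not part of the original answer):
```
original = plain.rstrip(' ')  # undo the space padding; gives back "Testing"
```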
You also might consider a different approach: one simple method would be to hash the primary key (with a salt, if you wish) and store the hash and pk in your database. Give the user the hash as part of their link, and when they return and present the hash, look up the corresponding pk and return the appropriate object. (If you want to go this route, check out the built-in library [hashlib](http://docs.python.org/library/hashlib.html#module-hashlib).)
As an example, you'd have something like this defined in models.py:
```
class Pk_lookup(models.Model):
    # since we're using sha256, set the max_length of this field to 32
    hashed_pk = models.CharField(primary_key=True, max_length=32)
    key = models.IntegerField()
```
And you'd generate the hash in a view using something like the following:
```
import hashlib
# import your model; the exact module path depends on your app layout (myapp is a placeholder)
from myapp.models import Pk_lookup

hash = hashlib.sha256()
hash.update(str(pk))  # pk has been defined previously
pk_digest = hash.digest()

lookup = Pk_lookup(hashed_pk=pk_digest, key=pk)
lookup.save()
```
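On the way back in, the view handling the link would look the hash up again; a sketch using the model above (`myapp` is again a placeholder for your actual app):
```
from myapp.models import Pk_lookup  # hypothetical module path; adjust to your app

def pk_from_hash(hashed):
    # raises Pk_lookup.DoesNotExist if the hash is unknown
    return Pk_lookup.objects.get(hashed_pk=hashed).key
```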
Note that you'd have to quote this version as well; if you prefer, you can use `hexdigest()` instead of `digest` (you wouldn't have to quote the resulting string), but you'll have to adjust the length of the field to 64. | Django has features for this now. See <https://docs.djangoproject.com/en/dev/topics/signing/>
Quoting that page:
"Django provides both a low-level API for signing values and a high-level API for setting and reading signed cookies, one of the most common uses of signing in Web applications.
You may also find signing useful for the following:
* Generating "recover my account" URLs for sending to users who have lost their password.
* Ensuring data stored in hidden form fields has not been tampered with.
* Generating one-time secret URLs for allowing temporary access to a protected resource, for example a downloadable file that a user has paid for." |
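As a minimal sketch of the low-level API from that page (`Signer` lives in `django.core.signing` and relies on your project's SECRET_KEY being configured; applied here to the primary-key use case from the question):
```
from django.core import signing

signer = signing.Signer()
token = signer.sign(36)       # e.g. '36:...signature...', safe to embed in a URL
value = signer.unsign(token)  # '36'; raises signing.BadSignature if tampered with
```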
11,632,154 | In Python, if I have two dictionaries, specifically Counter objects that look like so
```
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
```
Can I combine these dictionaries so that the result is a dictionary of lists, as follows:
```
c3 = {'item1': [4,6], 'item2':[2,2], 'item3': [5,1], 'item4': [3], 'item5': [9]}
```
where each value is a list of all the values of the preceding dictionaries for the appropriate key, and where there are no matching keys between the two original dictionaries, a new key is added that contains a one-element list. | 2012/07/24 | [
"https://Stackoverflow.com/questions/11632154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/801348/"
] | ```
from collections import Counter
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
c3 = {}
for c in (c1, c2):
    for k, v in c.iteritems():
        c3.setdefault(k, []).append(v)
```
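A side note, in case you are on Python 3 (an assumption; the snippet above targets Python 2): `dict.iteritems()` is gone there, so use `items()` instead:
```
c3 = {}
for c in (c1, c2):
    for k, v in c.items():  # items() replaces iteritems() on Python 3
        c3.setdefault(k, []).append(v)
```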
`c3` is now: `{'item1': [4, 6], 'item2': [2, 2], 'item3': [5, 1], 'item4': [3], 'item5': [9]}` | Or with a list comprehension:
```
from collections import Counter
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
merged = {}
for k in set().union(c1, c2):
    merged[k] = [d[k] for d in [c1, c2] if k in d]
>>> merged
{'item2': [2, 2], 'item3': [5, 1], 'item1': [4, 6], 'item4': [3], 'item5': [9]}
```
Explanation
-----------
1. Throw all keys that exist into an anonymous set. (It's a set => no duplicate keys)
2. For every key, do 3.
3. For every dictionary d in the list of dictionaries `[c1, c2]`
* Check whether the key `k` currently being processed exists in `d`
+ If true: include the expression `d[k]` in the resulting list
+ If not: proceed with next iteration
[Here](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) is a detailed introduction to list comprehension with many examples. |
11,632,154 | In Python, if I have two dictionaries, specifically Counter objects that look like so
```
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
```
Can I combine these dictionaries so that the result is a dictionary of lists, as follows:
```
c3 = {'item1': [4,6], 'item2':[2,2], 'item3': [5,1], 'item4': [3], 'item5': [9]}
```
where each value is a list of all the values of the preceding dictionaries for the appropriate key, and where there are no matching keys between the two original dictionaries, a new key is added that contains a one-element list. | 2012/07/24 | [
"https://Stackoverflow.com/questions/11632154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/801348/"
] | ```
from collections import Counter
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
c3 = {}
for c in (c1, c2):
    for k, v in c.iteritems():
        c3.setdefault(k, []).append(v)
```
`c3` is now: `{'item1': [4, 6], 'item2': [2, 2], 'item3': [5, 1], 'item4': [3], 'item5': [9]}` | You can use `defaultdict`:
```
>>> from collections import Counter, defaultdict
>>> c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
>>> c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
>>> c3 = defaultdict(list)
>>> for c in c1, c2:
...     for k, v in c.items():
...         c3[k].append(v)
...
>>> c3
defaultdict(<type 'list'>, {'item2': [2, 2], 'item3': [5, 1], 'item1': [4, 6],
'item4': [3], 'item5': [9]})
``` |
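If a plain `dict` is wanted at the end (a minor follow-up, not part of the original answer), the populated `defaultdict` can simply be converted:
```
c3 = dict(c3)  # drop the defaultdict behaviour once it's filled
```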