qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---|
73,993,034 | I just started learning Python and I have been trying to learn coding for a few days. I wrote a bill calculation program, but there is a point I don't understand.
When I enter a number other than `10`, `15`, or `20`, I want the text I wrote in the `else` part to appear, but this does not work except for `20`. What should I do?
```py
print("\t Welcome to Tip Calculator")
bill = float(input("What was the total bill? $"))
people = int(input("How many people split to bill? "))
tip_percantage = int(input("What percentage tip would you like to give? 10, 15 or 20? "))
tip = bill * tip_percantage / 100
if tip_percantage == 10:
print("Each people should pay: " + str( (bill + tip) / people) )
if tip_percantage == 15:
print("Each people should pay: " + str( (bill + tip) / people) )
if tip_percantage == 20:
print("Each people should pay: " + str( (bill + tip) / people) )
else:
print(f"{tip_percantage} is not valid percentage!")
``` | 2022/10/07 | [
"https://Stackoverflow.com/questions/73993034",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20187786/"
] | You have three separate if statements here. What you're looking for is `elif`. [More here.](https://www.w3schools.com/python/gloss_python_elif.asp)
```
if tip_percantage == 10:
print("Each people should pay: " + str( (bill + tip) / people) )
elif tip_percantage == 15:
print("Each people should pay: " + str( (bill + tip) / people) )
elif tip_percantage == 20:
print("Each people should pay: " + str( (bill + tip) / people) )
else:
print(f"{tip_percantage} is not valid percentage!")
``` | This is because you use `if` multiple times, so the `else` is attached only to the last `if tip_percantage == 20:`.
The first two `if` statements have no `else` of their own, so the "not valid" message is printed for every value other than `20`.
* To fix this, change the last two `if`s to `elif`:
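As a standalone demonstration (my addition, not part of the original answer) of how `else` binds only to the nearest `if`:

```py
def describe(x):
    parts = []
    if x == 10:
        parts.append("ten")
    if x == 20:          # the else below belongs to this if only
        parts.append("twenty")
    else:
        parts.append("not twenty")   # runs for every x != 20, even when x == 10
    return parts

print(describe(10))  # ['ten', 'not twenty'] -- both the first branch and the else ran
print(describe(12))  # ['not twenty']
```

With `elif`, the checks form one chain and at most one branch runs, as in the corrected program: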
```
print("\t Welcome to Tip Calculator")
bill = float(input("What was the total bill? $"))
people = int(input("How many people split to bill? "))
tip_percantage = int(input("What percentage tip would you like to give? 10, 15 or 20? "))
tip = bill * tip_percantage / 100
if tip_percantage == 10: print("Each people should pay: " + str((bill + tip) / people))
elif tip_percantage == 15: print("Each people should pay: " + str((bill + tip) / people))
elif tip_percantage == 20:
print("Each people should pay: " + str((bill + tip) / people))
else:
print(f"{tip_percantage} is not valid percentage!")
``` |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | Yes, it is the [`substr`](http://en.cppreference.com/w/cpp/string/basic_string/substr) method:
```
basic_string substr( size_type pos = 0,
size_type count = npos ) const;
```
>
> Returns a substring [pos, pos+count). If the requested substring extends past the end of the string, or if count == npos, the returned substring is [pos, size()).
>
>
>
### Example
```
#include <iostream>
#include <string>
int main(void) {
std::string text("Apple Pear Orange");
std::cout << text.substr(6) << std::endl;
return 0;
}
```
[See it run](http://coliru.stacked-crooked.com/a/5fb5358b669ce6b1) | You can do something like this using the string class:
```
std::string text = "Apple Pear Orange";
size_t pos = text.find("Pear");      // double quotes: find a substring, not a multi-character char literal
std::string rest = text.substr(pos); // "Pear Orange"
``` |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | Yes, it is the [`substr`](http://en.cppreference.com/w/cpp/string/basic_string/substr) method:
```
basic_string substr( size_type pos = 0,
size_type count = npos ) const;
```
>
> Returns a substring [pos, pos+count). If the requested substring extends past the end of the string, or if count == npos, the returned substring is [pos, size()).
>
>
>
### Example
```
#include <iostream>
#include <string>
int main(void) {
std::string text("Apple Pear Orange");
std::cout << text.substr(6) << std::endl;
return 0;
}
```
[See it run](http://coliru.stacked-crooked.com/a/5fb5358b669ce6b1) | \*\*The first parameter is the starting index and the second parameter is the number of characters to take (a count, not an ending index); remember that string indexing starts from 0.\*\*
```
string s = "Apple";
string ans = s.substr(2);     // "ple"
string ans1 = s.substr(2, 2); // "pl"
``` |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | ```
std::string text = "Apple Pear Orange";
std::cout << std::string(text.begin() + 6, text.end()) << std::endl; // No range checking at all.
std::cout << text.substr(6) << std::endl; // Throws an exception if string isn't long enough.
```
Note that unlike python, the first doesn't do range checking: Your input string needs to be long enough. Depending on your end-use for the slice there may be other alternatives as well (such as using an iterator range directly instead of making a copy like I do here). | You can do something like this using the string class:
```
std::string text = "Apple Pear Orange";
size_t pos = text.find("Pear");      // double quotes: find a substring, not a multi-character char literal
std::string rest = text.substr(pos); // "Pear Orange"
``` |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | In C++ the closest equivalent would probably be string::substr().
Example:
```
std::string str = "Something";
printf("%s", str.substr(4).c_str());    // -> "thing" (c_str() needed: printf can't take a std::string)
printf("%s", str.substr(4, 3).c_str()); // -> "thi"
```
(first parameter is the initial position, the second is the length sliced).
Second parameter defaults to end of string (string::npos). | \*\*The first parameter is the starting index and the second parameter is the number of characters to take (a count, not an ending index); remember that string indexing starts from 0.\*\*
```
string s = "Apple";
string ans = s.substr(2);     // "ple"
string ans1 = s.substr(2, 2); // "pl"
``` |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | You can do something like this using the string class:
```
std::string text = "Apple Pear Orange";
size_t pos = text.find("Pear");      // double quotes: find a substring, not a multi-character char literal
std::string rest = text.substr(pos); // "Pear Orange"
``` | \*\*The first parameter is the starting index and the second parameter is the number of characters to take (a count, not an ending index); remember that string indexing starts from 0.\*\*
```
string s = "Apple";
string ans = s.substr(2);     // "ple"
string ans1 = s.substr(2, 2); // "pl"
``` |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | ```
std::string text = "Apple Pear Orange";
std::cout << std::string(text.begin() + 6, text.end()) << std::endl; // No range checking at all.
std::cout << text.substr(6) << std::endl; // Throws an exception if string isn't long enough.
```
Note that unlike python, the first doesn't do range checking: Your input string needs to be long enough. Depending on your end-use for the slice there may be other alternatives as well (such as using an iterator range directly instead of making a copy like I do here). | Sounds like you want [string::substr](http://www.cplusplus.com/reference/string/string/substr/):
```
std::string text = "Apple Pear Orange";
std::cout << text.substr(6, std::string::npos) << std::endl; // "Pear Orange"
```
Here [string::npos](http://www.cplusplus.com/reference/string/string/npos/) is synonymous with "until the end of the string" (and is also default but I included it for clarity). |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | It looks like C++20 will have Ranges
<https://en.cppreference.com/w/cpp/ranges>
which are designed to provide, amongst other things, python-like slicing
<http://ericniebler.com/2014/12/07/a-slice-of-python-in-c/>
So I'm waiting for it to land in my favorite compiler, and meanwhile use
<https://ericniebler.github.io/range-v3/> | \*\*The first parameter is the starting index and the second parameter is the number of characters to take (a count, not an ending index); remember that string indexing starts from 0.\*\*
```
string s = "Apple";
string ans = s.substr(2);     // "ple"
string ans1 = s.substr(2, 2); // "pl"
``` |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | Yes, it is the [`substr`](http://en.cppreference.com/w/cpp/string/basic_string/substr) method:
```
basic_string substr( size_type pos = 0,
size_type count = npos ) const;
```
>
> Returns a substring [pos, pos+count). If the requested substring extends past the end of the string, or if count == npos, the returned substring is [pos, size()).
>
>
>
### Example
```
#include <iostream>
#include <string>
int main(void) {
std::string text("Apple Pear Orange");
std::cout << text.substr(6) << std::endl;
return 0;
}
```
[See it run](http://coliru.stacked-crooked.com/a/5fb5358b669ce6b1) | Sounds like you want [string::substr](http://www.cplusplus.com/reference/string/string/substr/):
```
std::string text = "Apple Pear Orange";
std::cout << text.substr(6, std::string::npos) << std::endl; // "Pear Orange"
```
Here [string::npos](http://www.cplusplus.com/reference/string/string/npos/) is synonymous with "until the end of the string" (and is also default but I included it for clarity). |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | In C++ the closest equivalent would probably be string::substr().
Example:
```
std::string str = "Something";
printf("%s", str.substr(4).c_str());    // -> "thing" (c_str() needed: printf can't take a std::string)
printf("%s", str.substr(4, 3).c_str()); // -> "thi"
```
(first parameter is the initial position, the second is the length sliced).
Second parameter defaults to end of string (string::npos). | You can do something like this using the string class:
```
std::string text = "Apple Pear Orange";
size_t pos = text.find("Pear");      // double quotes: find a substring, not a multi-character char literal
std::string rest = text.substr(pos); // "Pear Orange"
``` |
27,992,264 | In python I was able to slice part of a string; in other words just print the characters after a certain position. Is there an equivalent to this in C++?
Python Code:
```
text= "Apple Pear Orange"
print text[6:]
```
Would print: `Pear Orange` | 2015/01/16 | [
"https://Stackoverflow.com/questions/27992264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4435481/"
] | Yes, it is the [`substr`](http://en.cppreference.com/w/cpp/string/basic_string/substr) method:
```
basic_string substr( size_type pos = 0,
size_type count = npos ) const;
```
>
> Returns a substring [pos, pos+count). If the requested substring extends past the end of the string, or if count == npos, the returned substring is [pos, size()).
>
>
>
### Example
```
#include <iostream>
#include <string>
int main(void) {
std::string text("Apple Pear Orange");
std::cout << text.substr(6) << std::endl;
return 0;
}
```
[See it run](http://coliru.stacked-crooked.com/a/5fb5358b669ce6b1) | In C++ the closest equivalent would probably be string::substr().
Example:
```
std::string str = "Something";
printf("%s", str.substr(4).c_str());    // -> "thing" (c_str() needed: printf can't take a std::string)
printf("%s", str.substr(4, 3).c_str()); // -> "thi"
```
(first parameter is the initial position, the second is the length sliced).
Second parameter defaults to end of string (string::npos). |
19,671,145 | When a Python bytearray is created (with an integer passed to it) it creates a bytearray of that many bytes, and sets them all to zero.
I want to clear the bytearray, and it could be quite large, iterating over it and setting the contents to zero that way is pretty poor.
Is there a better way?
(memoryviews and bytearrays are poorly documented IMO)
Best resources so far (but none of them answer my question)
<http://docs.python.org/dev/library/stdtypes.html#bytes-methods>
<http://docs.python.org/dev/library/functions.html#bytearray> | 2013/10/29 | [
"https://Stackoverflow.com/questions/19671145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112028/"
] | Edit: This answer is wrong. `s = s.translate('\0'*256)` is slower than `s = bytearray(256)`, so there is no point in using `translate` here. @gnibbler provides [a better solution](https://stackoverflow.com/a/19671429/190597).
---
Bytearrays have many of the same methods that strings have. You could use the translate method:
```
In [64]: s = bytearray('Hello World')
In [65]: s
Out[65]: bytearray(b'Hello World')
In [66]: import string
In [67]: zero = string.maketrans(buffer(bytearray(range(256))),buffer(bytearray(256)))
In [68]: s.translate(zero)
Out[68]: bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
```
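(A note added in editing, not part of the original answer: the snippet above is Python 2 — `buffer` and `string.maketrans` no longer exist in Python 3, where `bytes.maketrans` plays the same role.)

```py
# Python 3 sketch of the same zeroing translation table
zero = bytes.maketrans(bytes(range(256)), bytes(256))  # map every byte value to 0
s = bytearray(b"Hello World")
s = s.translate(zero)
print(s)  # 11 zero bytes
```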
By the way, Dave Beazley has written a very useful [introduction to bytearrays](http://dabeaz.blogspot.com/2010/01/few-useful-bytearray-tricks.html).
---
Or, slightly modifying millimoose's answer:
```
In [72]: s.translate('\0'*256)
Out[72]: bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
In [73]: %timeit s.translate('\0'*256)
1000000 loops, best of 3: 282 ns per loop
In [74]: %timeit s.translate(bytearray(256))
1000000 loops, best of 3: 398 ns per loop
``` | All you need to do is redeclare your bytearray:
```
b = bytearray(LEN_OF_BYTE_ARRAY)
``` |
19,671,145 | When a Python bytearray is created (with an integer passed to it) it creates a bytearray of that many bytes, and sets them all to zero.
I want to clear the bytearray, and it could be quite large, iterating over it and setting the contents to zero that way is pretty poor.
Is there a better way?
(memoryviews and bytearrays are poorly documented IMO)
Best resources so far (but none of them answer my question)
<http://docs.python.org/dev/library/stdtypes.html#bytes-methods>
<http://docs.python.org/dev/library/functions.html#bytearray> | 2013/10/29 | [
"https://Stackoverflow.com/questions/19671145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112028/"
] | Why would you assume reallocating the bytearray is so slow? It's more than 10 times faster than using `translate` on large bytearrays!
I'm deleting the original bytearray so you don't have to worry about temporarily using double the memory
```
# For small bytearray reallocation is a tiny bit faster
$ python -m timeit -s "s=bytearray('Hello World')" "s.translate('\0'*256)"
1000000 loops, best of 3: 0.672 usec per loop
$ python -m timeit -s "s=bytearray('Hello World')" "lens=len(s);del s;s=bytearray(lens)"
1000000 loops, best of 3: 0.522 usec per loop
# For large bytearray reallocation is much faster
$ python -m timeit -s "s=bytearray('Hello World'*10000)" "s.translate('\0'*256)"
1000 loops, best of 3: 225 usec per loop
$ python -m timeit -s "s=bytearray('Hello World'*10000)" "lens=len(s);del s;s=bytearray(lens)"
10000 loops, best of 3: 18.5 usec per loop
```
There's an even better way that allows `s` to keep the same reference. You simply need to call the `__init__` method on the instance.
```
>>> s=bytearray(b"hello world")
>>> id(s)
3074325152L
>>> s.__init__(len(s))
>>> s
bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
>>> id(s)
3074325152L
```
Testing the timing
```
$ python -m timeit -s "s=bytearray('Hello World'*10000)" "s.__init__(len(s))"
100000 loops, best of 3: 18.7 usec per loop
```
I ran these gigabyte tests on a different computer with more RAM
```
$ python -m timeit -s "s=bytearray('HelloWorld'*100000000)" "s.__init__(len(s))"
10 loops, best of 3: 454 msec per loop
$ python -m timeit -s "s=bytearray('HelloWorld'*100000000)" "s.translate('\0'*256)"
10 loops, best of 3: 1.43 sec per loop
``` | All you need to do is redeclare your bytearray:
```
b = bytearray(LEN_OF_BYTE_ARRAY)
``` |
19,671,145 | When a Python bytearray is created (with an integer passed to it) it creates a bytearray of that many bytes, and sets them all to zero.
I want to clear the bytearray, and it could be quite large, iterating over it and setting the contents to zero that way is pretty poor.
Is there a better way?
(memoryviews and bytearrays are poorly documented IMO)
Best resources so far (but none of them answer my question)
<http://docs.python.org/dev/library/stdtypes.html#bytes-methods>
<http://docs.python.org/dev/library/functions.html#bytearray> | 2013/10/29 | [
"https://Stackoverflow.com/questions/19671145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112028/"
] | Here are a few different ways of clearing a bytearray without changing the reference (in case other objects refer to it):
1. Using clear():
```
>>> a=bytearray(10)
>>> a
bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
>>> a.clear()
>>> a
bytearray(b'')
```
2. Using slicing:
```
>>> a=bytearray(10)
>>> a[0:10] = []
>>> a
bytearray(b'')
>>> a=bytearray(10)
>>> del a[0:10]
>>> a
bytearray(b'')
```
3. Using del:
```
>>> a=bytearray(10)
>>> b=a
>>> del a[0:10]
>>> a
bytearray(b'')
```
You can verify that if another variable, say `b`, references `a`, none of the above techniques will break this. The following technique of resetting `a`, by creating a new bytearray, will break this:
```
>>> a=bytearray(10)
>>> b=a
>>> b is a
True
>>> a=bytearray(10)
>>> b is a
False
```
However, all the above change the array size to 0. Perhaps you want to simply 0 all the items, keeping the size unchanged, and keeping any references valid:
```
>>> a=bytearray(10)
>>> b=a
>>> b is a
True
>>> a[0:10]=bytearray(10)
>>> b is a
True
```
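For instance (my own sketch, not from the original answer), the same slice assignment zeroes just part of the array while a second reference stays valid:

```py
a = bytearray(b"ABCDEFGH")
b = a                      # second reference to the same object
a[2:6] = bytearray(4)      # zero only indices 2..5; replacement has the same length
print(a)       # bytearray(b'AB\x00\x00\x00\x00GH')
print(b is a)  # True -- the reference is preserved
```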
So you can easily, with this technique, 0 any subsection of the array (in fact, of any mutable container). | All you need to do is redeclare your bytearray:
```
b = bytearray(LEN_OF_BYTE_ARRAY)
``` |
19,671,145 | When a Python bytearray is created (with an integer passed to it) it creates a bytearray of that many bytes, and sets them all to zero.
I want to clear the bytearray, and it could be quite large, iterating over it and setting the contents to zero that way is pretty poor.
Is there a better way?
(memoryviews and bytearrays are poorly documented IMO)
Best resources so far (but none of them answer my question)
<http://docs.python.org/dev/library/stdtypes.html#bytes-methods>
<http://docs.python.org/dev/library/functions.html#bytearray> | 2013/10/29 | [
"https://Stackoverflow.com/questions/19671145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112028/"
] | Edit: This answer is wrong. `s = s.translate('\0'*256)` is slower than `s = bytearray(256)`, so there is no point in using `translate` here. @gnibbler provides [a better solution](https://stackoverflow.com/a/19671429/190597).
---
Bytearrays have many of the same methods that strings have. You could use the translate method:
```
In [64]: s = bytearray('Hello World')
In [65]: s
Out[65]: bytearray(b'Hello World')
In [66]: import string
In [67]: zero = string.maketrans(buffer(bytearray(range(256))),buffer(bytearray(256)))
In [68]: s.translate(zero)
Out[68]: bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
```
By the way, Dave Beazley has written a very useful [introduction to bytearrays](http://dabeaz.blogspot.com/2010/01/few-useful-bytearray-tricks.html).
---
Or, slightly modifying millimoose's answer:
```
In [72]: s.translate('\0'*256)
Out[72]: bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
In [73]: %timeit s.translate('\0'*256)
1000000 loops, best of 3: 282 ns per loop
In [74]: %timeit s.translate(bytearray(256))
1000000 loops, best of 3: 398 ns per loop
``` | Here are a few different ways of clearing a bytearray without changing the reference (in case other objects refer to it):
1. Using clear():
```
>>> a=bytearray(10)
>>> a
bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
>>> a.clear()
>>> a
bytearray(b'')
```
2. Using slicing:
```
>>> a=bytearray(10)
>>> a[0:10] = []
>>> a
bytearray(b'')
>>> a=bytearray(10)
>>> del a[0:10]
>>> a
bytearray(b'')
```
3. Using del:
```
>>> a=bytearray(10)
>>> b=a
>>> del a[0:10]
>>> a
bytearray(b'')
```
You can verify that if another variable, say `b`, references `a`, none of the above techniques will break this. The following technique of resetting `a`, by creating a new bytearray, will break this:
```
>>> a=bytearray(10)
>>> b=a
>>> b is a
True
>>> a=bytearray(10)
>>> b is a
False
```
However, all the above change the array size to 0. Perhaps you want to simply 0 all the items, keeping the size unchanged, and keeping any references valid:
```
>>> a=bytearray(10)
>>> b=a
>>> b is a
True
>>> a[0:10]=bytearray(10)
>>> b is a
True
```
So you can easily, with this technique, 0 any subsection of the array (in fact, of any mutable container). |
19,671,145 | When a Python bytearray is created (with an integer passed to it) it creates a bytearray of that many bytes, and sets them all to zero.
I want to clear the bytearray, and it could be quite large, iterating over it and setting the contents to zero that way is pretty poor.
Is there a better way?
(memoryviews and bytearrays are poorly documented IMO)
Best resources so far (but none of them answer my question)
<http://docs.python.org/dev/library/stdtypes.html#bytes-methods>
<http://docs.python.org/dev/library/functions.html#bytearray> | 2013/10/29 | [
"https://Stackoverflow.com/questions/19671145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112028/"
] | Why would you assume reallocating the bytearray is so slow? It's more than 10 times faster than using `translate` on large bytearrays!
I'm deleting the original bytearray so you don't have to worry about temporarily using double the memory
```
# For small bytearray reallocation is a tiny bit faster
$ python -m timeit -s "s=bytearray('Hello World')" "s.translate('\0'*256)"
1000000 loops, best of 3: 0.672 usec per loop
$ python -m timeit -s "s=bytearray('Hello World')" "lens=len(s);del s;s=bytearray(lens)"
1000000 loops, best of 3: 0.522 usec per loop
# For large bytearray reallocation is much faster
$ python -m timeit -s "s=bytearray('Hello World'*10000)" "s.translate('\0'*256)"
1000 loops, best of 3: 225 usec per loop
$ python -m timeit -s "s=bytearray('Hello World'*10000)" "lens=len(s);del s;s=bytearray(lens)"
10000 loops, best of 3: 18.5 usec per loop
```
There's an even better way that allows `s` to keep the same reference. You simply need to call the `__init__` method on the instance.
```
>>> s=bytearray(b"hello world")
>>> id(s)
3074325152L
>>> s.__init__(len(s))
>>> s
bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
>>> id(s)
3074325152L
```
Testing the timing
```
$ python -m timeit -s "s=bytearray('Hello World'*10000)" "s.__init__(len(s))"
100000 loops, best of 3: 18.7 usec per loop
```
I ran these gigabyte tests on a different computer with more RAM
```
$ python -m timeit -s "s=bytearray('HelloWorld'*100000000)" "s.__init__(len(s))"
10 loops, best of 3: 454 msec per loop
$ python -m timeit -s "s=bytearray('HelloWorld'*100000000)" "s.translate('\0'*256)"
10 loops, best of 3: 1.43 sec per loop
``` | Here are a few different ways of clearing a bytearray without changing the reference (in case other objects refer to it):
1. Using clear():
```
>>> a=bytearray(10)
>>> a
bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
>>> a.clear()
>>> a
bytearray(b'')
```
2. Using slicing:
```
>>> a=bytearray(10)
>>> a[0:10] = []
>>> a
bytearray(b'')
>>> a=bytearray(10)
>>> del a[0:10]
>>> a
bytearray(b'')
```
3. Using del:
```
>>> a=bytearray(10)
>>> b=a
>>> del a[0:10]
>>> a
bytearray(b'')
```
You can verify that if another variable, say `b`, references `a`, none of the above techniques will break this. The following technique of resetting `a`, by creating a new bytearray, will break this:
```
>>> a=bytearray(10)
>>> b=a
>>> b is a
True
>>> a=bytearray(10)
>>> b is a
False
```
However, all the above change the array size to 0. Perhaps you want to simply 0 all the items, keeping the size unchanged, and keeping any references valid:
```
>>> a=bytearray(10)
>>> b=a
>>> b is a
True
>>> a[0:10]=bytearray(10)
>>> b is a
True
```
So you can easily, with this technique, 0 any subsection of the array (in fact, of any mutable container). |
47,511,885 | I've been stuck on this for many days now, and none of the solutions I've found has helped yet.
The profile image I want to show doesn't appear if I use the template variable `{{ userprofile.photo.url }}` (the result is the alt text), but it does work when I put the path to the image directly, like this: /dashboard/media/dashboard/photo/profils/user.png.
I've tried to debug; the URL seems fine, but the result given is this:
```
[27/Nov/2017 13:55:07] "GET /dashboard/ HTTP/1.1" 200 44757
Not Found: /media/dashboard/photos/profils/user.png
[27/Nov/2017 13:55:07] "GET /media/dashboard/photos/profils/user.png HTTP/1.1" 404 2295
```
Here the files of the project :
Structure of the project :
```
project_dir/
dash-app/
__init__.py
settings.py
urls.py
wsgi.py
dashboard/
__init__.py
admin.py
app.py
forms.py
models.py
urls.py
views.py
...
templates/
dashboard/
index.html
...
static/
dashboard/
images/
logo9.png
...
media/
dashboard/
photos/
profils/
user.png
...
```
### On the urls.py :
```
from django.conf.urls import url
from dashboard import models
from dashboard import views
from django.conf.urls.static import static
from django.conf import settings
urlpatterns = [
url(r'^$', views.index, name='index'),
]
if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```
### On the settings.py :
```
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static"),
]
STATIC_ROOT = os.path.join(BASE_DIR, "dashboard", "static")
#dashboard/media/
MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, "dashboard", "media", "dashboard")
```
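As an illustration (my own sketch with hypothetical paths, not part of the original post), these settings make the generated URL and the path served by `static()` disagree, which matches the 404 in the log:

```py
MEDIA_ROOT = "/project_dir/dashboard/media/dashboard"  # hypothetical absolute path
MEDIA_URL = "/media/"
name = "photos/profils/user.png"

stored_at = f"{MEDIA_ROOT}/{name}"              # where the storage writes the file
request_url = f"{MEDIA_URL}dashboard/{name}"    # URL built from the storage's base_url
# static(MEDIA_URL, document_root=MEDIA_ROOT) maps /media/<x> to MEDIA_ROOT/<x>, so
# this request is looked up at MEDIA_ROOT/dashboard/<name>, not where the file was stored:
looked_up_at = f"{MEDIA_ROOT}/dashboard/{name}"
print(request_url)                # /media/dashboard/photos/profils/user.png
print(looked_up_at == stored_at)  # False -- hence the 404
```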
### On the models.py :
```
from django.core.files.storage import FileSystemStorage
image_storage = FileSystemStorage(
# Physical file location ROOT
location='{0}/'.format(settings.MEDIA_ROOT),
# Url for file
base_url='{0}/dashboard/'.format(settings.MEDIA_URL),
)
def image_directory_path(instance, filename):
# file will be uploaded to MEDIA_ROOT/dashboard/picture/<filename>
return 'photos/profils/{0}'.format(filename)
def logo_directory_path(instance, filename):
# file will be uploaded to MEDIA_ROOT/dashboard/picture/<filename>
return 'photos/logos/{0}'.format(filename)
...
# Photos
photo = models.ImageField(blank=True,
upload_to=image_directory_path,
storage=image_storage)
# Logo de l'activité
photo = models.ImageField(null=True,
upload_to=logo_directory_path,
storage=image_storage)
```
### On the views.py :
```
def index(request):
connected = models.UserProfile.objects.get(user=request.user)
print(connected.photo.url)
context={
'userprofile':connected,
}
return render(request, 'dashboard/index.html', context)
```
### On the index.html :
```
<!-- menu profile quick info -->
<div class="profile clearfix">
<div class="profile_pic">
{% if userprofile.photo %}
<img src="{{ userprofile.photo.url }}" alt="User" class="img-circle profile_img">
{% else %}
<img src="media/dashboard/photos/profils/user.png" alt="..." class="img-circle profile_img">
{% endif %}
</div>
<div class="profile_info">
<span>Bienvenu,</span>
<h2>{{ userprofile }}</h2>
</div>
</div>
<!-- /menu profile quick info -->
```
I'm using python 3 and Django 1.11.5
Thank you for your help!
>
> **EDIT:** I've opened the application in a private browsing window to reload and check whether something was cached, and now I see that nothing works: neither `{{ userprofile.photo.url|slice:"1:" }}` nor `media/dashboard/photos/profils/user.png`. Is there something to reload the media file links in the project, or something like that? Is there some difference between adding the image manually to the folder and uploading it via the Django admin interface?
>
>
> **SOLVED :** I changed these lines :
> `MEDIA_ROOT = os.path.join(BASE_DIR, "dashboard", "media", "dashboard")` to `MEDIA_ROOT = os.path.join(BASE_DIR, "dashboard", "media")`
>
>
> `image_storage = FileSystemStorage(
> # Physical file location ROOT
> location='{0}/'.format(settings.MEDIA_ROOT),
> # Url for file
> base_url='{0}/dashboard/'.format(settings.MEDIA_URL),
> )`
>
>
> to
>
>
> `image_storage = FileSystemStorage(
> # Physical file location ROOT
> location='{0}/dashboard/'.format(settings.MEDIA_ROOT),
> # Url for file
> base_url='{0}/dashboard/'.format(settings.MEDIA_URL),
> )`
>
>
> | 2017/11/27 | [
"https://Stackoverflow.com/questions/47511885",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8832815/"
] | In your question, some relevant parts are missing (`HTML` code, `Bootstrap` files, etc.), but I tried to replicate your issue, and you can use `e[i].name` to get the names from the `json`. Also, you can use `Object.keys(e).length` to get the actual length of the `json` and parse it correctly:
```js
$.ajax({
url: "https://n2s.herokuapp.com/api/post/get_all_category/",
method: "GET",
dataType: "JSON",
success: function(e) {
console.log(e);
$('.selectpicker').selectpicker();
//replaced i < 10 with i < Object.keys(e).length
for (var i = 0; i < Object.keys(e).length; i++) {
var o = new Option(e[i].name, "value" + i);
$(".selectpicker").append(o);
}
$(".selectpicker").selectpicker('refresh');
}
});
```
```html
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/>
<link href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.12.4/css/bootstrap-select.min.css" rel="stylesheet" />
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.12.4/js/bootstrap-select.min.js"></script>
<select class='selectpicker'>
</select>
``` | Try this
```
for (var i = 0; i < e.length; i++) { // < (not <=) avoids reading past the end of the array
var o = new Option(e[i].name, "value"+i);
$(".selectpicker").append(o);
}
``` |
18,266,401 | I have a webapp written in PHP that currently creates a DB connection (using mysqli\_connect) on every page to pull data from the database.
Recently, my website has slowed down (with some traffic increase), and I was wondering if the creation of so many connections - one for every user that is on any page - is causing the slow down?
Is there any fix for this?
Is it possible to create one sharable connection for the server? I know this is possible in python, but I do not know how I would implement such a model in PHP.
Note: My site is on BlueHost...I don't know if that also makes a difference. | 2013/08/16 | [
"https://Stackoverflow.com/questions/18266401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1947492/"
] | I have just faced the same problem; here's how I've done it.
At some point when setting up the Gigya Share Button you will have to declare a variable called "shareParams", invoked in *gigya.services.socialize.showShareUI(shareParams)*.
Just add *'onSendDone' : yourFunctionName* to the shareParams object.
Example:
```
var shareParams = {
'userAction' : {0},
'onSendDone' : myNamespace.GigyaSendDone
}
gigya.services.socialize.showShareUI(shareParams);
```
When the sharing is **successfully** completed, this JavaScript action will be invoked. | So thanks to Emanuele Ciriachi, I found the JS API code in the plugin. Once I modified it, I think this will resolve my issue. |
19,934,248 | I'm teaching myself Python and was just "exploring". Google says that datetime is a global variable, but when I try to find today's date in the terminal I receive the NameError in the question title.
```
mynames-MacBook:pythonhard myname$ python
Enthought Canopy Python 2.7.3 | 64-bit | (default, Aug 8 2013, 05:37:06)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> date = datetime.date.today()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'datetime' is not defined
>>>
``` | 2013/11/12 | [
"https://Stackoverflow.com/questions/19934248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1024586/"
] | You need to import the module [`datetime`](http://docs.python.org/2/library/datetime.html) first:
```
>>> import datetime
```
After that it works:
```
>>> import datetime
>>> date = datetime.date.today()
>>> date
datetime.date(2013, 11, 12)
``` | It can also be used as below:
```
from datetime import datetime
start_date = datetime(2016,3,1)
end_date = datetime(2016,3,10)
``` |
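The two import styles in the answers above differ only in how the names are qualified; a small sketch contrasting them:

```python
import datetime

# Module-style import (first answer): qualify names through the module.
d1 = datetime.date(2013, 11, 12)

# Name-style import (second answer): pull the class in directly.
from datetime import date
d2 = date(2013, 11, 12)

print(d1 == d2)  # True
```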
42,349,626 | I want to create a list that contains the monomials up to degree *n*
```
basis = [lambda x:x**i for i in range(0,n+1)]
```
This however creates a list of *n* functions, but all the same one (of degree *n*) and not of degree 0,1,2,...*n*
I tried without list comprehension as well:
```
basis = []
for i in range(0,n+1):
basis.append(lambda x:x**i)
```
but with the same result. Substituting the lambda function by a classic function definition also did not fix it.
---
I checked [Python lambdas and scoping](https://stackoverflow.com/questions/1924214/python-lambdas-and-scoping), but that did not help, since I don't want to store function values, but the function itself. For example, I want to be able to call
```
basis[0](34)
```
and this should return
```
1
``` | 2017/02/20 | [
"https://Stackoverflow.com/questions/42349626",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7186959/"
] | As I said in the comments, take a look at [partial](https://docs.python.org/3/library/functools.html#functools.partial)
```
from functools import partial

def f(x, n):
return x**n
basis = [partial(f, n=i) for i in range(10)]
print(basis[0](34)) # 1
``` | This entry sums it up perfectly
<http://docs.python-guide.org/en/latest/writing/gotchas/#late-binding-closures> :
>
> ... you can create a closure that binds immediately to its arguments by using a default arg like so:
>
>
>
> ```
> def create_multipliers():
> return [lambda x, i=i : i * x for i in range(5)]
>
> ```
>
>
Your code would become:
```
basis = [lambda x,i=i:x**i for i in range(0,n+1)]
```
This is a bit of a hacky solution, but it works. I still highly recommend reading the link provided, as the 3 points made there are common errors when you are new to Python. |
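To see the late-binding behavior concretely, here is a small self-contained sketch (with n = 3) contrasting the plain lambda with the default-argument fix:

```python
n = 3

# Late binding: every lambda closes over the same variable i,
# which is 3 once the comprehension has finished.
broken = [lambda x: x**i for i in range(n + 1)]
print([f(2) for f in broken])  # [8, 8, 8, 8]

# Default-argument trick: i=i captures the current value of i
# at definition time, so each lambda keeps its own exponent.
basis = [lambda x, i=i: x**i for i in range(n + 1)]
print([f(2) for f in basis])   # [1, 2, 4, 8]
print(basis[0](34))            # 1
```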
7,211,296 | Happy coding weekend to everyone!
I'm stuck trying to send a JSON object via jQuery's $.load(). I want to send it with the GET method. This is the code that I have in my JavaScript; I attached the Ajax request that receives the JSON object for clarity:
```
function ajaxLoadClasses() {
$.ajax({
url: 'load_classes/',
type: 'GET',
dataType: 'json',
success: function(json) {
$.each(json, function(iterator,item) {
loadViaGet(item);
});
},
error: function(xhr, status) {
alert('Sorry, there was a problem!');
},
complete: function(xhr, status) {},
});
}
function loadViaGet(item) {
$div = $('div.myClass');
//Here is where I'm stuck, I'm not sure if this is the way to send the JSON obj
$div.load('thisAppURL/?json=' + encodeURIComponent(item), function() {
alert('Load was performed');
});
}
```
The "item" json obj received was made out of a Model of Django using
```
jsonToSendToAjax = serializers.serialize('json', obj)
```
And I don't think that I'm using the correct methods in my Django to deserialize the JSON object or to convert the JSON object into a Python object so I can handle it in my view and send it to a template:
```
def popUpForm(request):
jsonData = request.GET['json']
deser = serializers.deserialize('json', jsonData)
#This could be another way to convert the JSON object to a Python Object
#pythonObj = simplejson.loads(jsonData)
return render_to_response('class_pop_up_form.html', deser)
```
It will be very helpful if someone can help me with this!! I'm really struggling with it, but I can't find the right way to do it.
EDIT 1 :
I want to send the JSON object via GET with the $.load() function, not with the POST method. As I read in the jQuery API: <http://api.jquery.com/load/>, the $.load() method works as follows: .load( url, [data], [complete(responseText, textStatus, XMLHttpRequest)] )
The POST method is used if data is provided as an object; otherwise, GET is assumed.
EDIT 2:
Forget about sending the JSON object via the GET method; now I'm using the POST method, but I can't figure out how to use that JSON object in my Django View.py. I don't know if I need to deserialize it or not. The format of the JSON object that I'm using is the following:
```
{"pk": 1,
"model": "skedified.class",
"fields": {
"hr_three": null,
"group": 1,
"name": "Abastecimiento de agua",
"day_three": null,
"day_one": "1 , 3",
"hr_one": "10+/3",
"online_class": null,
"teacher_name": "Enrique C\\u00e1zares Rivera / ",
"day_two": null,
"class_key": "CV3009",
"hr_two": null }
}
``` | 2011/08/26 | [
"https://Stackoverflow.com/questions/7211296",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/657355/"
] | This isn't how jQuery suggests you should send the data and it's probably not a good idea to do it this way either. Your url gets very ugly and long very quick if you add the json string to it like that.
Use the second argument for $.load; "data" (see <http://api.jquery.com/load/>) instead. So something like
```
$div.load('thisAppURL', {"json": encodeURIComponent(item)});
```
Also, if you want to trace the output, I'd suggest using the third argument, the callback function, and using console instead of alert. You can get the actual return from the server that way too. So you'd get something like:
```
$div.load(
'thisAppURL',
{"json": encodeURIComponent(item)},
function(response, status, xhr){
console.log(response);
}
);
``` | The question was not clear to me, but you can send JSON via load as the second argument:
```
$div = $('div.myClass');
//Here is where I'm stuck, I'm not sure if this is the way to send the JSON obj
$div.load('thisAppURL/?json=' + encodeURIComponent(item),{"name":"john","age":"20"}, function() {
alert('Load was performed');
});
```
for converting javascript array to json see this answer [Convert array to JSON](https://stackoverflow.com/questions/2295496/convert-javascript-array-to-json/2295515#2295515)
and for deserializing json in django [Django Deserialization](https://stackoverflow.com/questions/2852583/django-deserialization/2852651#2852651) |
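On the Django side, a minimal sketch of turning the posted 'json' parameter into a plain Python object; the helper name and the field subset are illustrative, not from the original app, and the standard `json` module is used because the posted payload is plain JSON:

```python
import json

def pop_up_form_payload(raw_json):
    """Parse the posted 'json' parameter into a plain Python dict (hypothetical helper)."""
    data = json.loads(raw_json)
    # serializers.serialize() nests the model's columns under "fields"
    return data["fields"]

raw = '{"pk": 1, "model": "skedified.class", "fields": {"class_key": "CV3009"}}'
print(pop_up_form_payload(raw))  # {'class_key': 'CV3009'}
```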
45,216,886 | I was asked to create a machine learning algorithm using tensorflow and python that could detect anomalies by creating a range of 'normal' values. I have two parameters: a large array of floats around 1.5, and timestamps. I have not seen similar threads using tensorflow in a basic sense, and since I am new to the technology I am looking to make a more basic machine. However, I would like it to be unsupervised, meaning that I do not specify what an anomaly is; rather, a large amount of past data does. Thank you. I am running python 3.5 and tensorflow 1.2.1. | 2017/07/20 | [
"https://Stackoverflow.com/questions/45216886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8338758/"
] | I think there's currently only one connected service available in the Visual Studio Marketplace for MongoDB. [Link Here.](https://marketplace.visualstudio.com/items?itemName=DevartSoftware.ODBCDriverforMongoDB)
>
> ODBC Driver for MongoDB provides high-performance and feature-rich
> connectivity solution for ODBC-based applications to access MongoDB
> databases from Windows, MacOS, Linux. Full support for standard ODBC
> API functions, MongoDB data types and SQL queries implemented in our
> driver makes interaction of your database applications with MongoDB
> fast, easy and extremely handy.
>
>
>
Looks like it would handle all of the things you'd expect it to when connecting to MongoDB.
**However, it's worth noting that that is only a trial, and I've been unable to find any 'open source' versions.** | MongoDB OData Connector
<http://cdn.cdata.com/help/DGB/cd/>
It's not free: <https://www.cdata.com/drivers/mongodb/download/>
Overview
The MongoDB OData Connector application enables you to securely access data from MongoDB in popular formats like OData, JSONP, SOAP, RSS, and more.
The Getting Started section explains how to establish the connection to MongoDB. In this section, you will find a guide to setting required connection properties and allowing the OData connector to access MongoDB tables.
The Supported OData section shows the OData syntax supported by the OData connector and points out any limitations when querying live data.
The OData connector can be installed as a stand-alone application or integrated with your server. In the Server Configuration section you will find information on how to install the OData connector on an existing server configuration. System requirements are also listed here. You will also find instructions on how to manage users and deploy SSL. Logging details the available logging resources.
The OData API enables access to your data from any application with Web connectivity. The OData connector supports all major authentication schemes. This section documents HTTP methods supported by the server, server responses, and supported authentication schemes.
The Data Model section lists the tables, views, and stored procedures available for the application. |
24,898,863 | ```
#!/usr/bin/python
def map():
myList = ['g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.']
list(myList)
print(myList)
```
Why does the `list()` function not separate the list into characters? But if I put it this way then it works:
```
list('g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.')
``` | 2014/07/22 | [
"https://Stackoverflow.com/questions/24898863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3763437/"
] | The most important thing to note here is that strings are, themselves, ~~lists~~ iterables, collecting characters together. What `list()` does is take each element of one list/iterable and put it into another list. The effect this has on a string is to make another list containing every character of that string (because `list()` isn't going to re-combine all those characters into a string again).
In your first line, however, you're not giving it a string - you're giving it a list that contains a string. `list()` will look at the input, see that the first item is an entire string, and move that whole string to another list. That's why it doesn't get split up. If you instead had:
```
myList = 'g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.'
```
Then you would get the behavior you expect. | The function `list()` only goes one layer deep when creating a list out of the argument.
When you pass it a list with one element, it will observe the list with one element, and put the list into a new list.
When you pass it a string, it will observe a list of characters and put each character into a new list. |
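The one-layer behavior can be checked directly in a couple of lines (a sketch; the Caesar-shifted string from the question is shortened here):

```python
myList = ['g fmnc']  # the question's string, shortened for readability

# list() of a list copies elements one layer deep: the string stays whole.
print(list(myList))     # ['g fmnc']

# list() of a string iterates over its characters instead.
print(list(myList[0]))  # ['g', ' ', 'f', 'm', 'n', 'c']
```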
24,898,863 | ```
#!/usr/bin/python
def map():
myList = ['g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.']
list(myList)
print(myList)
```
Why does the `list()` function not separate the list into characters? But if I put it this way then it works:
```
list('g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.')
``` | 2014/07/22 | [
"https://Stackoverflow.com/questions/24898863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3763437/"
] | myList is an iterable, so the list constructor will iterate through the elements in the list. In your case there is one string in the list. Lose the square brackets when you declare myList to get the behavior you're expecting. | The most important thing to note here is that strings are, themselves, ~~lists~~ iterables, collecting characters together. What `list()` does is take each element of one list/iterable and put it into another list. The effect this has on a string is to make another list containing every character of that string (because `list()` isn't going to re-combine all those characters into a string again).
In your first line, however, you're not giving it a string - you're giving it a list that contains a string. `list()` will look at the input, see that the first item is an entire string, and move that whole string to another list. That's why it doesn't get split up. If you instead had:
```
myList = 'g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.'
```
Then you would get the behavior you expect. |
24,898,863 | ```
#!/usr/bin/python
def map():
myList = ['g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.']
list(myList)
print(myList)
```
Why does the `list()` function not separate the list into characters? But if I put it this way then it works:
```
list('g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.')
``` | 2014/07/22 | [
"https://Stackoverflow.com/questions/24898863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3763437/"
] | The most important thing to note here is that strings are, themselves, ~~lists~~ iterables, collecting characters together. What `list()` does is take each element of one list/iterable and put it into another list. The effect this has on a string is to make another list containing every character of that string (because `list()` isn't going to re-combine all those characters into a string again).
In your first line, however, you're not giving it a string - you're giving it a list that contains a string. `list()` will look at the input, see that the first item is an entire string, and move that whole string to another list. That's why it doesn't get split up. If you instead had:
```
myList = 'g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.'
```
Then you would get the behavior you expect. | `list` takes sequences in its constructor. You assign `myList` with a literal list object containing a single string element. When you call `list(myList)` its argument is already a list so the one element is extracted and referenced in a new list object. When you call `list` with a string as the argument, it interprets that as its sequence to "listify" and separates them by element - the individual characters from the string. |
24,898,863 | ```
#!/usr/bin/python
def map():
myList = ['g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.']
list(myList)
print(myList)
```
Why does the `list()` function not separate the list into characters? But if I put it this way then it works:
```
list('g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.')
``` | 2014/07/22 | [
"https://Stackoverflow.com/questions/24898863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3763437/"
] | The most important thing to note here is that strings are, themselves, ~~lists~~ iterables, collecting characters together. What `list()` does is take each element of one list/iterable and put it into another list. The effect this has on a string is to make another list containing every character of that string (because `list()` isn't going to re-combine all those characters into a string again).
In your first line, however, you're not giving it a string - you're giving it a list that contains a string. `list()` will look at the input, see that the first item is an entire string, and move that whole string to another list. That's why it doesn't get split up. If you instead had:
```
myList = 'g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.'
```
Then you would get the behavior you expect. | I would try to split them.
```
def map():
myList = ['g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.']
print list(myList[0].split())
```
myList[0] accesses the first element of your list, which is the whole string. split() then splits it up on whitespace, which gives words rather than individual characters. I think it is then already returned as a list, but to make sure it is a list, just put the 'list()' thingy around it. |
24,898,863 | ```
#!/usr/bin/python
def map():
myList = ['g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.']
list(myList)
print(myList)
```
Why does the `list()` function not separate the list into characters? But if I put it this way then it works:
```
list('g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.')
``` | 2014/07/22 | [
"https://Stackoverflow.com/questions/24898863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3763437/"
] | myList is an iterable, so the list constructor will iterate through the elements in the list. In your case there is one string in the list. Lose the square brackets when you declare myList to get the behavior you're expecting. | The function `list()` only goes one layer deep when creating a list out of the argument.
When you pass it a list with one element, it will observe the list with one element, and put the list into a new list.
When you pass it a string, it will observe a list of characters and put each character into a new list. |
24,898,863 | ```
#!/usr/bin/python
def map():
myList = ['g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.']
list(myList)
print(myList)
```
Why does the `list()` function not separate the list into characters? But if I put it this way then it works:
```
list('g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.')
``` | 2014/07/22 | [
"https://Stackoverflow.com/questions/24898863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3763437/"
] | myList is an iterable, so the list constructor will iterate through the elements in the list. In your case there is one string in the list. Lose the square brackets when you declare myList to get the behavior you're expecting. | `list` takes sequences in its constructor. You assign `myList` with a literal list object containing a single string element. When you call `list(myList)` its argument is already a list so the one element is extracted and referenced in a new list object. When you call `list` with a string as the argument, it interprets that as its sequence to "listify" and separates them by element - the individual characters from the string. |
24,898,863 | ```
#!/usr/bin/python
def map():
myList = ['g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.']
list(myList)
print(myList)
```
Why does the `list()` function not separate the list into characters? But if I put it this way then it works:
```
list('g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.')
``` | 2014/07/22 | [
"https://Stackoverflow.com/questions/24898863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3763437/"
] | myList is an iterable, so the list constructor will iterate through the elements in the list. In your case there is one string in the list. Lose the square brackets when you declare myList to get the behavior you're expecting. | I would try to split them.
```
def map():
myList = ['g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr\'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj.']
print list(myList[0].split())
```
myList[0] accesses the first element of your list, which is the whole string. split() then splits it up on whitespace, which gives words rather than individual characters. I think it is then already returned as a list, but to make sure it is a list, just put the 'list()' thingy around it. |
67,062,516 | I'm working with a huge dataframe in python and sometimes I need to add an empty row or several rows at a specific position in the dataframe. For this question I created a small dataframe df in order to show what I want to achieve.
```
cars = {'Brand': ['Honda Civic','Toyota Corolla','Ford Focus','Audi A4'],
'Price': [22000,25000,27000,35000]
}
df = pd.DataFrame(cars, columns = ['Brand', 'Price'])
```
If a row value is 27000, I want to add an empty row before it.
I can insert a row after it with concat, but I can't really think of a way of adding it before. | 2021/04/12 | [
"https://Stackoverflow.com/questions/67062516",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15613473/"
] | Create a DataFrame with the index labels based on your condition that has all null values. [Assumes `df` has a non-duplicated index]. Then concat and `sort_index` which will place the missing row before (because we concat `df` to `empty`). Then `reset_index` to remove the duplicate index labels.
```
import pandas as pd
empty = pd.DataFrame(columns=df.columns, index=df[df.Price.eq(27000)].index)
df = pd.concat([empty, df]).sort_index().reset_index(drop=True)
# Brand Price
#0 Honda Civic 22000
#1 Toyota Corolla 25000
#2 NaN NaN
#3 Ford Focus 27000
#4 Audi A4 35000
```
---
This will add a blank row before **every** 27000 row
```
cars = {'Brand': ['Honda Civic','Toyota Corolla','Ford Focus','Audi A4','Jeep'],
'Price': [22000,25000,27000,35000,27000]}
df = pd.DataFrame(cars, columns = ['Brand', 'Price'])
empty = pd.DataFrame(columns=df.columns, index=df[df.Price.eq(27000)].index)
df = pd.concat([empty, df]).sort_index().reset_index(drop=True)
# Brand Price
#0 Honda Civic 22000
#1 Toyota Corolla 25000
#2 NaN NaN
#3 Ford Focus 27000
#4 Audi A4 35000
#5 NaN NaN
#6 Jeep 27000
``` | You can create a helper cumsum key for `groupby`, then append a blank row only to the first group, and then concat:
```
out = pd.concat((g.append(pd.Series(),ignore_index=True) if i==0 else g
for i, g in df.groupby(df['Price'].eq(27000).cumsum())))
```
---
```
print(out)
Brand Price
0 Honda Civic 22000.0
1 Toyota Corolla 25000.0
2 NaN NaN
2 Ford Focus 27000.0
3 Audi A4 35000.0
``` |
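The first approach can be exercised end-to-end as a quick sanity check (a sketch that requires pandas; `kind="stable"` is added so the blank row is guaranteed to sort ahead of the matching row among equal index labels):

```python
import pandas as pd

df = pd.DataFrame({"Brand": ["Honda Civic", "Toyota Corolla", "Ford Focus", "Audi A4"],
                   "Price": [22000, 25000, 27000, 35000]})

# All-null frame indexed at the rows matching the condition...
empty = pd.DataFrame(columns=df.columns, index=df[df.Price.eq(27000)].index)
# ...so after a stable sort on the index, the blank row lands just before them.
out = pd.concat([empty, df]).sort_index(kind="stable").reset_index(drop=True)

print(out.loc[2].isna().all())  # True: the inserted blank row
print(out.loc[3, "Brand"])      # Ford Focus
```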
67,062,516 | I'm working with a huge dataframe in python and sometimes I need to add an empty row or several rows at a specific position in the dataframe. For this question I created a small dataframe df in order to show what I want to achieve.
```
cars = {'Brand': ['Honda Civic','Toyota Corolla','Ford Focus','Audi A4'],
'Price': [22000,25000,27000,35000]
}
df = pd.DataFrame(cars, columns = ['Brand', 'Price'])
```
If a row value is 27000, I want to add an empty row before it.
I can insert a row after it with concat, but I can't really think of a way of adding it before. | 2021/04/12 | [
"https://Stackoverflow.com/questions/67062516",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15613473/"
] | Create a DataFrame with the index labels based on your condition that has all null values. [Assumes `df` has a non-duplicated index]. Then concat and `sort_index` which will place the missing row before (because we concat `df` to `empty`). Then `reset_index` to remove the duplicate index labels.
```
import pandas as pd
empty = pd.DataFrame(columns=df.columns, index=df[df.Price.eq(27000)].index)
df = pd.concat([empty, df]).sort_index().reset_index(drop=True)
# Brand Price
#0 Honda Civic 22000
#1 Toyota Corolla 25000
#2 NaN NaN
#3 Ford Focus 27000
#4 Audi A4 35000
```
---
This will add a blank row before **every** 27000 row
```
cars = {'Brand': ['Honda Civic','Toyota Corolla','Ford Focus','Audi A4','Jeep'],
'Price': [22000,25000,27000,35000,27000]}
df = pd.DataFrame(cars, columns = ['Brand', 'Price'])
empty = pd.DataFrame(columns=df.columns, index=df[df.Price.eq(27000)].index)
df = pd.concat([empty, df]).sort_index().reset_index(drop=True)
# Brand Price
#0 Honda Civic 22000
#1 Toyota Corolla 25000
#2 NaN NaN
#3 Ford Focus 27000
#4 Audi A4 35000
#5 NaN NaN
#6 Jeep 27000
``` | Let us try `cummax` with `append`:
```
m = df['Price'].eq(27000).cummax()
df[~m].append(pd.Series(), ignore_index=True).append(df[m])
```
---
```
Brand Price
0 Honda Civic 22000.0
1 Toyota Corolla 25000.0
2 NaN NaN
2 Ford Focus 27000.0
3 Audi A4 35000.0
``` |
67,062,516 | I'm working with a huge dataframe in python and sometimes I need to add an empty row or several rows at a specific position in the dataframe. For this question I created a small dataframe df in order to show what I want to achieve.
```
cars = {'Brand': ['Honda Civic','Toyota Corolla','Ford Focus','Audi A4'],
'Price': [22000,25000,27000,35000]
}
df = pd.DataFrame(cars, columns = ['Brand', 'Price'])
```
If a row value is 27000, I want to add an empty row before it.
I can insert a row after it with concat, but I can't really think of a way of adding it before. | 2021/04/12 | [
"https://Stackoverflow.com/questions/67062516",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15613473/"
] | Create a DataFrame with the index labels based on your condition that has all null values. [Assumes `df` has a non-duplicated index]. Then concat and `sort_index` which will place the missing row before (because we concat `df` to `empty`). Then `reset_index` to remove the duplicate index labels.
```
import pandas as pd
empty = pd.DataFrame(columns=df.columns, index=df[df.Price.eq(27000)].index)
df = pd.concat([empty, df]).sort_index().reset_index(drop=True)
# Brand Price
#0 Honda Civic 22000
#1 Toyota Corolla 25000
#2 NaN NaN
#3 Ford Focus 27000
#4 Audi A4 35000
```
---
This will add a blank row before **every** 27000 row
```
cars = {'Brand': ['Honda Civic','Toyota Corolla','Ford Focus','Audi A4','Jeep'],
'Price': [22000,25000,27000,35000,27000]}
df = pd.DataFrame(cars, columns = ['Brand', 'Price'])
empty = pd.DataFrame(columns=df.columns, index=df[df.Price.eq(27000)].index)
df = pd.concat([empty, df]).sort_index().reset_index(drop=True)
# Brand Price
#0 Honda Civic 22000
#1 Toyota Corolla 25000
#2 NaN NaN
#3 Ford Focus 27000
#4 Audi A4 35000
#5 NaN NaN
#6 Jeep 27000
``` | You can also do this with the `concat()` and `apply()` methods:
```
result=pd.concat((df.apply(lambda x:np.nan if x['Price']==27000 else x,1),df))
```
Finally, use the `sort_index()`, `drop_duplicates()` and `reset_index()` methods:
```
result=result.sort_index(na_position='first').drop_duplicates().reset_index(drop=True)
```
Now if you print `result` you will get your desired output:
```
Brand Price
0 Honda Civic 22000.0
1 Toyota Corolla 25000.0
2 NaN NaN
3 Ford Focus 27000.0
4 Audi A4 35000.0
```
**This will add a blank row before every row where Price=27000:**
```
result=pd.concat((df.apply(lambda x:np.nan if x['Price']==27000 else x,1),df))
result=result.drop_duplicates().append(result[result.isna().all(1)].iloc[1:]).sort_index(na_position='first').reset_index(drop=True)
``` |
7,375,545 | When I install ipython on my osx and run it, I get the following warning:
```
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
site-packages/IPython/utils/rlineimpl.py:96:
RuntimeWarning: Leopard libedit detected - readline will not be wel
behaved including some crashes on tab completion, and incorrect
history navigation. It is highly recommended that you install
readline, which is easy_installable with: 'easy_install readline'
```
I have have installed readline, and do not use the system python that was originally installed in `/Library/Frameworks/Python.framework/Versions/2.7/bin/python$`. The `/usr/bin/python` points to version 2.7 as shown below
```
uname -a
Darwin macbook.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7
16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
$sudo pip install readline ipython
$ipython --version
0.11
$/usr/bin/python --version #
Python 2.7.1
$which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
```
I have read the question in [Python sys.path modification not working](https://stackoverflow.com/questions/1017909/python-sys-path-modification-not-working) - I added `/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/readline-6.2.1-py2.7.egg-info` to the `/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython` so that it now looks like this: <http://pastebin.com/raw.php?i=dVnxufbS>
but I cannot figure out why I am getting the following error:
```
File
"/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython",
line 9
sys.path.insert(0,"/Library/Frameworks/Python.framework/Versions/2.7/lib/
python2.7/site-packages/readline-6.2.1-py2.7.egg-info")
```
I do not think the above path is an issue, and my goal is to get ipython working without complaining about readline even though it is installed and imports correctly. | 2011/09/11 | [
"https://Stackoverflow.com/questions/7375545",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/881362/"
] | When pip installs readline, it will never be imported, because readline.so goes in site-packages, which ends up behind the libedit System one, located in `lib-dynload` (OSX Python path order is very odd). `easy_install -a readline` will actually install usable readline.
So you can either use easy\_install, or use pip and muck about with your PYTHONPATH/sys.path (which essentially means: DO NOT USE PIP).
A bit more detail on the IPython list (though there really isn't anything IPython-specific about this issue): <http://mail.scipy.org/pipermail/ipython-user/2011-September/008426.html>
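A quick diagnostic sketch for checking which implementation your interpreter actually loaded — libedit builds announce themselves in the module docstring, which is the same clue IPython's warning relies on:

```python
import readline

impl = "libedit" if "libedit" in (readline.__doc__ or "") else "GNU readline"
print("This Python is using:", impl)
```

If this prints `libedit`, the pip-installed `readline.so` is being shadowed by the system one, as described above.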
EDIT: extra note about virtualenv.
There is a bug in virtualenv < 1.8.3, where readline would not be properly staged when you create an env. | Additional note to future readers of this answer.
In my case -- running a MacPorts installation of IPython -- there were several versions of `easy_install` in /opt/local/bin/, but no non-versioned symlink pointing to the most current. Performing `easy_install-2.7 -a readline` worked. |
7,375,545 | When I install ipython on my osx and run it, I get the following warning:
```
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
site-packages/IPython/utils/rlineimpl.py:96:
RuntimeWarning: Leopard libedit detected - readline will not be well
behaved including some crashes on tab completion, and incorrect
history navigation. It is highly recommended that you install
readline, which is easy_installable with: 'easy_install readline'
```
I have installed readline, and do not use the system python that was originally installed in `/Library/Frameworks/Python.framework/Versions/2.7/bin/python$`. The `/usr/bin/python` points to version 2.7 as shown below
```
uname -a
Darwin macbook.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7
16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
$sudo pip install readline ipython
$ipython --version
0.11
$/usr/bin/python --version #
Python 2.7.1
$which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
```
I have read the question in [Python sys.path modification not working](https://stackoverflow.com/questions/1017909/python-sys-path-modification-not-working) - I added `/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/readline-6.2.1-py2.7.egg-info` to the `/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython` so that it now looks like this: <http://pastebin.com/raw.php?i=dVnxufbS>
but I cannot figure out why I am getting the following error:
```
File
"/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython",
line 9
sys.path.insert(0,"/Library/Frameworks/Python.framework/Versions/2.7/lib/
python2.7/site-packages/readline-6.2.1-py2.7.egg-info")
```
I do not think the above path is an issue, and my goal is to get ipython working without complaining about readline even though it is installed and imports correctly. | 2011/09/11 | [
"https://Stackoverflow.com/questions/7375545",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/881362/"
] | When pip installs readline, it will never be imported, because readline.so goes in site-packages, which ends up behind the libedit System one, located in `lib-dynload` (OSX Python path order is very odd). `easy_install -a readline` will actually install usable readline.
So you can either use easy\_install, or use pip and muck about with your PYTHONPATH/sys.path (which essentially means: DO NOT USE PIP).
A bit more detail on the IPython list (though there really isn't anything IPython-specific about this issue): <http://mail.scipy.org/pipermail/ipython-user/2011-September/008426.html>
EDIT: extra note about virtualenv.
There is a bug in virtualenv < 1.8.3, where readline would not be properly staged when you create an env. | If you don't mind mucking around with your PYTHONPATH, here's how you can get rid of that pesky warning:
```
# move site-packages to the front of your sys.path
import sys
for i in range(len(sys.path)):
if sys.path[i].endswith('site-packages'):
path = sys.path.pop(i)
sys.path.insert(0, path)
break
```
If you're using Django, you can put this in the `ipython` method of your site-packages/django/core/management/commands/shell.py so that it runs when you run `./manage.py shell`. |
7,375,545 | When I install ipython on my osx and run it, I get the following warning:
```
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
site-packages/IPython/utils/rlineimpl.py:96:
RuntimeWarning: Leopard libedit detected - readline will not be well
behaved including some crashes on tab completion, and incorrect
history navigation. It is highly recommended that you install
readline, which is easy_installable with: 'easy_install readline'
```
I have installed readline, and do not use the system python that was originally installed in `/Library/Frameworks/Python.framework/Versions/2.7/bin/python$`. The `/usr/bin/python` points to version 2.7 as shown below
```
uname -a
Darwin macbook.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7
16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
$sudo pip install readline ipython
$ipython --version
0.11
$/usr/bin/python --version #
Python 2.7.1
$which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
```
I have read the question in [Python sys.path modification not working](https://stackoverflow.com/questions/1017909/python-sys-path-modification-not-working) - I added `/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/readline-6.2.1-py2.7.egg-info` to the `/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython` so that it now looks like this: <http://pastebin.com/raw.php?i=dVnxufbS>
but I cannot figure out why I am getting the following error:
```
File
"/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython",
line 9
sys.path.insert(0,"/Library/Frameworks/Python.framework/Versions/2.7/lib/
python2.7/site-packages/readline-6.2.1-py2.7.egg-info")
```
I do not think the above path is an issue, and my goal is to get ipython working without complaining about readline even though it is installed and imports correctly. | 2011/09/11 | [
"https://Stackoverflow.com/questions/7375545",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/881362/"
] | When pip installs readline, it will never be imported, because readline.so goes in site-packages, which ends up behind the libedit System one, located in `lib-dynload` (OSX Python path order is very odd). `easy_install -a readline` will actually install usable readline.
So you can either use easy\_install, or use pip and muck about with your PYTHONPATH/sys.path (which essentially means: DO NOT USE PIP).
A bit more detail on the IPython list (though there really isn't anything IPython-specific about this issue): <http://mail.scipy.org/pipermail/ipython-user/2011-September/008426.html>
EDIT: extra note about virtualenv.
There is a bug in virtualenv < 1.8.3, where readline would not be properly staged when you create an env. | I am also using `brew` installed `ipython` and I had a similar issue.
```
⚡ easy_install-3.7 -a readline
Searching for readline
Reading https://pypi.org/simple/readline/
Download error on https://pypi.org/simple/readline/: unknown url type: https -- Some packages may not be found!
Couldn't find index page for 'readline' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.org/simple/
Download error on https://pypi.org/simple/: unknown url type: https -- Some packages may not be found!
No local packages or working download links found for readline
error: Could not find suitable distribution for Requirement.parse('readline') (--always-copy skips system and development eggs)
```
Solution:
```
⚡ brew install readline
Updating Homebrew...
Warning: readline 7.0.5 is already installed, it's just not linked
You can use `brew link readline` to link this version.
⚡ brew link readline
Warning: readline is keg-only and must be linked with --force
⚡ brew link readline --force
Linking /usr/local/Cellar/readline/7.0.5... 16 symlinks created
```
Result:
```
⚡ ipython
Python 3.7.2 (default, Dec 27 2018, 07:35:06)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.2.0 -- An enhanced Interactive Python. Type '?' for help.
>>> ~/.pyrc loaded successfully
``` |
7,375,545 | When I install ipython on my osx and run it, I get the following warning:
```
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
site-packages/IPython/utils/rlineimpl.py:96:
RuntimeWarning: Leopard libedit detected - readline will not be well
behaved including some crashes on tab completion, and incorrect
history navigation. It is highly recommended that you install
readline, which is easy_installable with: 'easy_install readline'
```
I have installed readline, and do not use the system python that was originally installed in `/Library/Frameworks/Python.framework/Versions/2.7/bin/python$`. The `/usr/bin/python` points to version 2.7 as shown below
```
uname -a
Darwin macbook.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7
16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
$sudo pip install readline ipython
$ipython --version
0.11
$/usr/bin/python --version #
Python 2.7.1
$which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
```
I have read the question in [Python sys.path modification not working](https://stackoverflow.com/questions/1017909/python-sys-path-modification-not-working) - I added `/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/readline-6.2.1-py2.7.egg-info` to the `/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython` so that it now looks like this: <http://pastebin.com/raw.php?i=dVnxufbS>
but I cannot figure out why I am getting the following error:
```
File
"/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython",
line 9
sys.path.insert(0,"/Library/Frameworks/Python.framework/Versions/2.7/lib/
python2.7/site-packages/readline-6.2.1-py2.7.egg-info")
```
I do not think the above path is an issue, and my goal is to get ipython working without complaining about readline even though it is installed and imports correctly. | 2011/09/11 | [
"https://Stackoverflow.com/questions/7375545",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/881362/"
] | If you don't mind mucking around with your PYTHONPATH, here's how you can get rid of that pesky warning:
```
# move site-packages to the front of your sys.path
import sys
for i in range(len(sys.path)):
if sys.path[i].endswith('site-packages'):
path = sys.path.pop(i)
sys.path.insert(0, path)
break
```
If you're using Django, you can put this in the `ipython` method of your site-packages/django/core/management/commands/shell.py so that it runs when you run `./manage.py shell`. | Additional note to future readers of this answer.
In my case -- running a MacPorts installation of IPython -- there were several versions of `easy_install` in /opt/local/bin/, but no non-versioned symlink pointing to the most current. Performing `easy_install-2.7 -a readline` worked. |
7,375,545 | When I install ipython on my osx and run it, I get the following warning:
```
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
site-packages/IPython/utils/rlineimpl.py:96:
RuntimeWarning: Leopard libedit detected - readline will not be well
behaved including some crashes on tab completion, and incorrect
history navigation. It is highly recommended that you install
readline, which is easy_installable with: 'easy_install readline'
```
I have installed readline, and do not use the system python that was originally installed in `/Library/Frameworks/Python.framework/Versions/2.7/bin/python$`. The `/usr/bin/python` points to version 2.7 as shown below
```
uname -a
Darwin macbook.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7
16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
$sudo pip install readline ipython
$ipython --version
0.11
$/usr/bin/python --version #
Python 2.7.1
$which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
```
I have read the question in [Python sys.path modification not working](https://stackoverflow.com/questions/1017909/python-sys-path-modification-not-working) - I added `/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/readline-6.2.1-py2.7.egg-info` to the `/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython` so that it now looks like this: <http://pastebin.com/raw.php?i=dVnxufbS>
but I cannot figure out why I am getting the following error:
```
File
"/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython",
line 9
sys.path.insert(0,"/Library/Frameworks/Python.framework/Versions/2.7/lib/
python2.7/site-packages/readline-6.2.1-py2.7.egg-info")
```
I do not think the above path is an issue, and my goal is to get ipython working without complaining about readline even though it is installed and imports correctly. | 2011/09/11 | [
"https://Stackoverflow.com/questions/7375545",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/881362/"
] | If you don't mind mucking around with your PYTHONPATH, here's how you can get rid of that pesky warning:
```
# move site-packages to the front of your sys.path
import sys
for i in range(len(sys.path)):
if sys.path[i].endswith('site-packages'):
path = sys.path.pop(i)
sys.path.insert(0, path)
break
```
If you're using Django, you can put this in the `ipython` method of your site-packages/django/core/management/commands/shell.py so that it runs when you run `./manage.py shell`. | I am also using `brew` installed `ipython` and I had a similar issue.
```
⚡ easy_install-3.7 -a readline
Searching for readline
Reading https://pypi.org/simple/readline/
Download error on https://pypi.org/simple/readline/: unknown url type: https -- Some packages may not be found!
Couldn't find index page for 'readline' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.org/simple/
Download error on https://pypi.org/simple/: unknown url type: https -- Some packages may not be found!
No local packages or working download links found for readline
error: Could not find suitable distribution for Requirement.parse('readline') (--always-copy skips system and development eggs)
```
Solution:
```
⚡ brew install readline
Updating Homebrew...
Warning: readline 7.0.5 is already installed, it's just not linked
You can use `brew link readline` to link this version.
⚡ brew link readline
Warning: readline is keg-only and must be linked with --force
⚡ brew link readline --force
Linking /usr/local/Cellar/readline/7.0.5... 16 symlinks created
```
Result:
```
⚡ ipython
Python 3.7.2 (default, Dec 27 2018, 07:35:06)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.2.0 -- An enhanced Interactive Python. Type '?' for help.
>>> ~/.pyrc loaded successfully
``` |
24,355,799 | The command `python -i script.py` will run the given script then drop me into an interactive repl with the functions and variables from the script accessible. Is there a Perl analogue?
Edit: If it helps, here's another description of `python -i` <https://docs.python.org/3.4/using/cmdline.html#cmdoption-i>
>
> When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command, even when sys.stdin does not appear to be a terminal. The PYTHONSTARTUP file is not read.
>
>
> This can be useful to inspect global variables or a stack trace when a script raises an exception
>
>
> | 2014/06/22 | [
"https://Stackoverflow.com/questions/24355799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/284795/"
] | Several things to test:
>
> * When you precompile your assets, do the required files appear in your `public` directory?
>
>
>
When you `precompile` your assets, you basically tell the Rails asset pipeline to merge & compile all your assets in the `public` directory. This means if you're trying to include a particular file in your production asset pipeline, it *should* be precompiled
I would run the `rake assets:precompile RAILS_ENV=production` command, and then look at all your assets in the `public` folder to check if they're there
--
>
> * If the assets are present in your `public` folder, are they being pushed to the server?
>
>
>
You should `SSH` into your server & browse to the `public` directory of your app (in `/current`). This will allow you to see if your assets have been uploaded to your server as you need
If they are not there, the problem is with Capistrano's deploy process; if they *are* there, it means there's a problem with your Rails server
--
>
> * If the assets *are* in the `public` folder on your server, it will mean the server is at fault somehow
>
>
>
Probably the best way to ensure it's not a server issue is to reload the server. With `apache`, you'll typically use `service apache2 reload`, but I'm not sure about `nginx`.
This should be accompanied by looking at your Rails installation - is it calling the correct `assets` as defined in your `config` files? | This is a bit off topic, but for organization's sake you might want to put your Nginx server block in /etc/nginx/sites-available/default and then make sure you have it linked to /etc/nginx/sites-enabled/default.
It should work fine with the server block in /etc/nginx/nginx.conf but it's best practice to put it in the default host or create another virtual host if you so desire.
Glad you were able to get things working. |
30,734,682 | I am attempting to extract anchor text and associated URLs from Markdown. I've seen [this](https://stackoverflow.com/q/25109307/189134) question. Unfortunately, the [answer](https://stackoverflow.com/a/25109573/189134) doesn't seem to fully answer what I want.
In Markdown, there are two ways to insert a link:
### Example 1:
```
[anchor text](http://my.url)
```
### Example 2:
```
[anchor text][2]
[1]: http://my.url
```
---
My script looks like this (note that I am using [regex](https://pypi.python.org/pypi/regex), not re):
```
import regex
body_markdown = "This is an [inline link](http://google.com). This is a [non inline link][4]\r\n\r\n [1]: http://yahoo.com"
rex = """(?|(?<txt>(?<url>(?:ht|f)tps?://\S+(?<=\P{P})))|\(([^)]+)\)\[(\g<url>)\])"""
pattern = regex.compile(rex)
matches = regex.findall(pattern, body_markdown, overlapped=True)
for m in matches:
print m
```
This produces the output:
```
('http://google.com', 'http://google.com')
('http://yahoo.com', 'http://yahoo.com')
```
My expected output is:
```
('inline link', 'http://google.com')
('non inline link', 'http://yahoo.com')
```
---
How can I properly capture the anchor text from Markdown? | 2015/06/09 | [
"https://Stackoverflow.com/questions/30734682",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/189134/"
] | >
> How can I properly capture the anchor text from Markdown?
>
>
>
Parse it into a structured format (e.g., html) and then use the appropriate tools to extract link labels and addresses.
```
import markdown
from lxml import etree
body_markdown = "This is an [inline link](http://google.com). This is a [non inline link][1]\r\n\r\n [1]: http://yahoo.com"
doc = etree.fromstring(markdown.markdown(body_markdown))
for link in doc.xpath('//a'):
print link.text, link.get('href')
```
Which gets me:
```
inline link http://google.com
non inline link http://yahoo.com
```
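If pulling in `lxml` is undesirable, the standard library's `html.parser` can do the same extraction once you have HTML. The hard-coded string below stands in for the `markdown.markdown(...)` output, so this sketch runs on its own:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (anchor text, href) pairs from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text), self._href))
            self._href = None

# Stand-in for markdown.markdown(body_markdown)
html = ('<p>This is an <a href="http://google.com">inline link</a>. '
        'This is a <a href="http://yahoo.com">non inline link</a>.</p>')
parser = LinkExtractor()
parser.feed(html)
print(parser.links)
```

The point stands either way: let a real parser do the parsing, and only extract from the parsed tree.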
The alternative is writing your own Markdown parser, which seems like the wrong place to focus your effort. | You can do it with a couple simple `re` patterns:
```
import re
INLINE_LINK_RE = re.compile(r'\[([^\]]+)\]\(([^)]+)\)')
FOOTNOTE_LINK_TEXT_RE = re.compile(r'\[([^\]]+)\]\[(\d+)\]')
FOOTNOTE_LINK_URL_RE = re.compile(r'\[(\d+)\]:\s+(\S+)')
def find_md_links(md):
""" Return dict of links in markdown """
links = dict(INLINE_LINK_RE.findall(md))
footnote_links = dict(FOOTNOTE_LINK_TEXT_RE.findall(md))
footnote_urls = dict(FOOTNOTE_LINK_URL_RE.findall(md))
for key, value in footnote_links.iteritems():
footnote_links[key] = footnote_urls[value]
links.update(footnote_links)
return links
```
Then you could use it like:
```
>>> body_markdown = """
... This is an [inline link](http://google.com).
... This is a [footnote link][1].
...
... [1]: http://yahoo.com
... """
>>> links = find_md_links(body_markdown)
>>> links
{'footnote link': 'http://yahoo.com', 'inline link': 'http://google.com'}
>>> links.values()
['http://yahoo.com', 'http://google.com']
``` |
30,734,682 | I am attempting to extract anchor text and associated URLs from Markdown. I've seen [this](https://stackoverflow.com/q/25109307/189134) question. Unfortunately, the [answer](https://stackoverflow.com/a/25109573/189134) doesn't seem to fully answer what I want.
In Markdown, there are two ways to insert a link:
### Example 1:
```
[anchor text](http://my.url)
```
### Example 2:
```
[anchor text][2]
[1]: http://my.url
```
---
My script looks like this (note that I am using [regex](https://pypi.python.org/pypi/regex), not re):
```
import regex
body_markdown = "This is an [inline link](http://google.com). This is a [non inline link][4]\r\n\r\n [1]: http://yahoo.com"
rex = """(?|(?<txt>(?<url>(?:ht|f)tps?://\S+(?<=\P{P})))|\(([^)]+)\)\[(\g<url>)\])"""
pattern = regex.compile(rex)
matches = regex.findall(pattern, body_markdown, overlapped=True)
for m in matches:
print m
```
This produces the output:
```
('http://google.com', 'http://google.com')
('http://yahoo.com', 'http://yahoo.com')
```
My expected output is:
```
('inline link', 'http://google.com')
('non inline link', 'http://yahoo.com')
```
---
How can I properly capture the anchor text from Markdown? | 2015/06/09 | [
"https://Stackoverflow.com/questions/30734682",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/189134/"
] | >
> How can I properly capture the anchor text from Markdown?
>
>
>
Parse it into a structured format (e.g., html) and then use the appropriate tools to extract link labels and addresses.
```
import markdown
from lxml import etree
body_markdown = "This is an [inline link](http://google.com). This is a [non inline link][1]\r\n\r\n [1]: http://yahoo.com"
doc = etree.fromstring(markdown.markdown(body_markdown))
for link in doc.xpath('//a'):
print link.text, link.get('href')
```
Which gets me:
```
inline link http://google.com
non inline link http://yahoo.com
```
The alternative is writing your own Markdown parser, which seems like the wrong place to focus your effort. | Modifying @mreinhardt's solution to return a list (rather than a dict) of all `(text, link)` pairs:
```
import re
INLINE_LINK_RE = re.compile(r'\[([^\]]+)\]\(([^)]+)\)')
FOOTNOTE_LINK_TEXT_RE = re.compile(r'\[([^\]]+)\]\[(\d+)\]')
FOOTNOTE_LINK_URL_RE = re.compile(r'\[(\d+)\]:\s+(\S+)')
def find_md_links(md):
    """ Return a list of (text, url) link pairs found in markdown """
links = list(INLINE_LINK_RE.findall(md))
footnote_links = dict(FOOTNOTE_LINK_TEXT_RE.findall(md))
footnote_urls = dict(FOOTNOTE_LINK_URL_RE.findall(md))
    for text, num in footnote_links.items():
        links.append((text, footnote_urls[num]))
return links
```
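As a quick sanity check that the list-based approach keeps repeated links (the inline-link regex is repeated here so the snippet is self-contained):

```python
import re

INLINE_LINK_RE = re.compile(r'\[([^\]]+)\]\(([^)]+)\)')

md = "[h](http://google.com) and [h](https://goog.e.com)"
links = INLINE_LINK_RE.findall(md)
print(links)  # both occurrences survive, unlike a dict keyed on the text
```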
I tested it in Python 3 with [repeated](https://stackoverflow.com/questions/30734682/extracting-url-and-anchor-text-from-markdown-using-python/30738268#comment112048716_30738268) links such as:
```
[h](http://google.com) and [h](https://goog.e.com)
``` |
4,762,822 | I'm writing my first desktop app and I'm struggling with class instances. This app is a simple ftp program using paramiko. What I've set up so far is a connection.py which looks like this...
```
#connect.py
import user, db
import paramiko, time, os
paramiko.util.log_to_file('paramiko-log.txt')
class Connection:
def __init__(self):
#Call DB Functions
database = db.Database()
#Set Transport
self.transport = paramiko.Transport((user.hostname, user.port))
#User Credentials
username = user.username
password = user.password
self.transport.connect(username = username, password = password)
self.sftp = paramiko.SFTPClient.from_transport(self.transport)
print "Set your credentials in user.py for now!"
msg = "Connecting as: %s, on port number %d" % (user.username, user.port)
print msg
def disconnect(self):
print "Closing connection..."
self.sftp.close()
self.transport.close()
print "Connection closed."
```
Pretty straightforward. Connect and disconnect.
This connect.py file is being imported into a main.py (which is my gui)
```
#main.py
import connect
from PySide import QtCore, QtGui
class Window(QtGui.QWidget):
def __init__(self, parent=None):
super(Window, self).__init__(parent)
windowWidth = 550
windowHeight = 350
self.establishedConnection = ""
connectButton = self.createButton("&Connect", self.conn)
disconnectButton = self.createButton("&Disconnect", self.disconnect)
grid = QtGui.QGridLayout()
grid.addWidget(connectButton, 3, 3)
grid.addWidget(disconnectButton, 4, 3)
grid.addWidget(self.createList(), 1, 0, 1, 4)
self.setLayout(grid)
self.resize(windowWidth, windowHeight)
self.setWindowTitle("FTP Program")
def conn(self):
connection = connect.Connection()
self.establishedConnection = connection
def disconnect(self):
self.establishedConnection.disconnect()
def createButton(self, text, member):
button = QtGui.QPushButton(text)
button.clicked.connect(member)
return button
if __name__ == '__main__':
import sys
app = QtGui.QApplication(sys.argv)
gui = Window()
gui.show()
sys.exit(app.exec_())
```
The issue is disconnecting.
I was thinking `__init__` would create an instance of the `Connection()` class. If you look at main.py you can see that I tried to create the variable `self.establishedConnection` in order to save the object so I could call disconnect on it later.
Where am I going wrong? I'm fairly new to python and other non-web languages (I spend most of my time writing RoR and php apps).
No errors are shown at any time and I started this app out as a terminal app so I do know that connect.py does work as intended.
Edit: So I guess Senderle got a connection closed message, which is what I'd like to see as well, but I'm not seeing it. I'll mark a best answer if I see something that solves my problem.
Edit Solved: Pushed connect.py and main.py into one file to simplify things. And for some reason that solved things. So who knows what's going on. I'm still going to hold off on 'best answer'. If someone can tell me why I can't have a split file like that, then I'm all ears. | 2011/01/21 | [
"https://Stackoverflow.com/questions/4762822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/191870/"
] | I tried the code and it ran fine. I made only a few changes.
First, I didn't know what "user" and "db" are, so I commented out
```
import user, db
```
and
```
database = db.Database()
```
and used my own data for username, password, etc.
Second, the PySide module isn't available via my package manager, so I used PyQt4 instead. It didn't like `grid.addWidget(self.createList(), 1, 0, 1, 4)` so I commented that out, and everything worked as expected.
Further thoughts: When there were connection errors, there was some console feedback consisting of stack traces, but nothing more, and `self.establishedConnection` remained a string, causing `self.establishedConnection.disconnect()` to fail. So perhaps there's a connection problem?
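One pattern that avoids the "remained a string" failure mode: initialize the attribute to `None` and guard `disconnect()`. A Qt-free sketch (the class and names are illustrative stand-ins, not the asker's exact code):

```python
class FakeConnection:
    """Stand-in for connect.Connection, just for this demo."""
    def __init__(self):
        self.open = True

    def disconnect(self):
        self.open = False

class Window:
    """Connection bookkeeping only -- the Qt plumbing is left out."""
    def __init__(self):
        self.establishedConnection = None   # None, not "", as the sentinel

    def conn(self):
        self.establishedConnection = FakeConnection()

    def disconnect(self):
        if self.establishedConnection is None:
            print("Not connected.")
            return
        self.establishedConnection.disconnect()
        self.establishedConnection = None

w = Window()
w.disconnect()   # safe no-op: prints "Not connected."
w.conn()
w.disconnect()   # closes the stored connection
```

With this guard, clicking Disconnect before Connect produces a message instead of an `AttributeError` on a string.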
EDIT: Aaaahhhhh, I just saw this: "No errors are shown at any time." Are you running this from a terminal or double-clicking an executable? If you start it from a terminal, I bet you'll see stacktraces in the terminal. The gui doesn't close when the code hits an exception.
EDIT2: If joining the files fixes the problem, then I am certain the problem cannot have anything to do with python itself. This has to be a problem with eclipse. You say that connection.py began as a terminal app, so you must be able to run python apps from the command line. Try the following: put main.py, connect.py, etc. in a directory of their own, open a terminal, and run `python main.py`. If it works as expected, then the problem has something to do with eclipse. | You are not calling conn() in the constructor. |
14,465,154 | I have a text file that includes more than 10 million lines. The lines look like this:
```
37024469;196672001;255.0000000000
37024469;196665001;396.0000000000
37024469;196664001;396.0000000000
37024469;196399002;85.0000000000
37024469;160507001;264.0000000000
37024469;160506001;264.0000000000
```
As you can see, the delimiter is ";". I would like to sort this text file using Python, according to the second element. I couldn't use the split function, because it causes a MemoryError. How can I manage it? | 2013/01/22 | [
"https://Stackoverflow.com/questions/14465154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1907576/"
] | Don't sort 10 million lines in memory. Split this up in batches instead:
* Run 100 100k line sorts (using the file as an iterator, combined with `islice()` or similar to pick a batch). Write out to separate files elsewhere.
* Merge the sorted files. Here is a merge generator that you can pass the 100 open files; it'll yield lines in sorted order. Write to a new file line by line:
```
import operator
def mergeiter(*iterables, **kwargs):
"""Given a set of sorted iterables, yield the next value in merged order
Takes an optional `key` callable to compare values by.
"""
iterables = [iter(it) for it in iterables]
iterables = {i: [next(it), i, it] for i, it in enumerate(iterables)}
if 'key' not in kwargs:
key = operator.itemgetter(0)
else:
key = lambda item, key=kwargs['key']: key(item[0])
while True:
value, i, it = min(iterables.values(), key=key)
yield value
try:
iterables[i][0] = next(it)
except StopIteration:
del iterables[i]
if not iterables:
                return  # PEP 479: end the generator instead of raising StopIteration
``` | You can do it with an `os.system()` call to the bash function `sort`
```
sort -k2 yourFile.txt
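# A hedged refinement (sketch): without -t, sort splits fields on whitespace,
# which these ';'-separated lines do not contain, so `-k2` alone has no effect.
# Naming the separator and sorting the 2nd field numerically:
printf '37024469;196672001;255\n37024469;160506001;264\n37024469;196399002;85\n' \
  | sort -t ';' -k2,2n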
``` |
14,465,154 | I have a text file includes over than 10 million lines. Lines like that:
```
37024469;196672001;255.0000000000
37024469;196665001;396.0000000000
37024469;196664001;396.0000000000
37024469;196399002;85.0000000000
37024469;160507001;264.0000000000
37024469;160506001;264.0000000000
```
As you seen, delimiter is ";". i would like to sort this text file by using python according to the second element. I couldnt use split function. Because it causes MemoryError. how can i manage it ? | 2013/01/22 | [
"https://Stackoverflow.com/questions/14465154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1907576/"
] | Don't sort 10 million lines in memory. Split this up in batches instead:
* Run 100 100k line sorts (using the file as an iterator, combined with `islice()` or similar to pick a batch). Write out to separate files elsewhere.
* Merge the sorted files. Here is a merge generator that you can pass the 100 open files; it'll yield lines in sorted order. Write to a new file line by line:
```
import operator
def mergeiter(*iterables, **kwargs):
"""Given a set of sorted iterables, yield the next value in merged order
Takes an optional `key` callable to compare values by.
"""
iterables = [iter(it) for it in iterables]
iterables = {i: [next(it), i, it] for i, it in enumerate(iterables)}
if 'key' not in kwargs:
key = operator.itemgetter(0)
else:
key = lambda item, key=kwargs['key']: key(item[0])
while True:
value, i, it = min(iterables.values(), key=key)
yield value
try:
iterables[i][0] = next(it)
except StopIteration:
del iterables[i]
if not iterables:
                return  # PEP 479: end the generator instead of raising StopIteration
``` | Based on [Sorting a million 32-bit integers in 2MB of RAM using Python](http://neopythonic.blogspot.ru/2008/10/sorting-million-32-bit-integers-in-2mb.html):
```
import sys
from functools import partial
from heapq import merge
from tempfile import TemporaryFile
# define sorting criteria
def second_column(line, default=float("inf")):
try:
return int(line.split(";", 2)[1]) # use int() for numeric sort
except (IndexError, ValueError):
return default # a key for non-integer or non-existent 2nd column
# sort lines in small batches, write intermediate results to temporary files
sorted_files = []
nbytes = 1 << 20 # load around nbytes bytes at a time
for lines in iter(partial(sys.stdin.readlines, nbytes), []):
lines.sort(key=second_column) # sort current batch
f = TemporaryFile("w+")
f.writelines(lines)
f.seek(0) # rewind
sorted_files.append(f)
# merge & write the result
sys.stdout.writelines(merge(*sorted_files, key=second_column))
# clean up
for f in sorted_files:
f.close() # temporary file is deleted when it closes
```
[`heapq.merge()` has `key` parameter since Python 3.5](https://docs.python.org/3.5/library/heapq.html#heapq.merge). You could try [`mergeiter()` from Martijn Pieters' answer](https://stackoverflow.com/a/14465236/4279) instead or do [Schwartzian transform](http://en.wikipedia.org/wiki/Schwartzian_transform) on older Python versions:
```
iters = [((second_column(line), line) for line in file)
for file in sorted_files] # note: this makes the sort unstable
sorted_lines = (line for _, line in merge(*iters))
sys.stdout.writelines(sorted_lines)
```
Usage:
```
$ python sort-k2-n.py < input.txt > output.txt
``` |
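As a tiny self-contained illustration of the merge step on Python 3.5+, where `heapq.merge` accepts `key` — the two lists stand in for already-sorted batch files:

```python
from heapq import merge

# two already-sorted batches, keyed on the second ';'-separated field
batch_a = ["1;100;x", "1;300;x"]
batch_b = ["2;200;y", "2;400;y"]

def key(line):
    return int(line.split(";")[1])

# merge() consumes the inputs lazily, so it also works on huge open files
merged = list(merge(batch_a, batch_b, key=key))
print(merged)
```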
14,465,154 | I have a text file that includes more than 10 million lines. The lines look like this:
```
37024469;196672001;255.0000000000
37024469;196665001;396.0000000000
37024469;196664001;396.0000000000
37024469;196399002;85.0000000000
37024469;160507001;264.0000000000
37024469;160506001;264.0000000000
```
As you can see, the delimiter is ";". I would like to sort this text file using Python according to the second element. I couldn't use the split function, because it causes a MemoryError. How can I manage it? | 2013/01/22 | [
"https://Stackoverflow.com/questions/14465154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1907576/"
] | Based on [Sorting a million 32-bit integers in 2MB of RAM using Python](http://neopythonic.blogspot.ru/2008/10/sorting-million-32-bit-integers-in-2mb.html):
```
import sys
from functools import partial
from heapq import merge
from tempfile import TemporaryFile
# define sorting criteria
def second_column(line, default=float("inf")):
try:
return int(line.split(";", 2)[1]) # use int() for numeric sort
except (IndexError, ValueError):
return default # a key for non-integer or non-existent 2nd column
# sort lines in small batches, write intermediate results to temporary files
sorted_files = []
nbytes = 1 << 20 # load around nbytes bytes at a time
for lines in iter(partial(sys.stdin.readlines, nbytes), []):
lines.sort(key=second_column) # sort current batch
f = TemporaryFile("w+")
f.writelines(lines)
f.seek(0) # rewind
sorted_files.append(f)
# merge & write the result
sys.stdout.writelines(merge(*sorted_files, key=second_column))
# clean up
for f in sorted_files:
f.close() # temporary file is deleted when it closes
```
[`heapq.merge()` has `key` parameter since Python 3.5](https://docs.python.org/3.5/library/heapq.html#heapq.merge). You could try [`mergeiter()` from Martijn Pieters' answer](https://stackoverflow.com/a/14465236/4279) instead or do [Schwartzian transform](http://en.wikipedia.org/wiki/Schwartzian_transform) on older Python versions:
```
iters = [((second_column(line), line) for line in file)
for file in sorted_files] # note: this makes the sort unstable
sorted_lines = (line for _, line in merge(*iters))
sys.stdout.writelines(sorted_lines)
```
Usage:
```
$ python sort-k2-n.py < input.txt > output.txt
``` | You can do it with an `os.system()` call to the shell utility `sort`
```
sort -t ';' -k2,2n yourFile.txt
``` |
47,481,294 | I am deploying a flask web app to Azure, and need to upgrade pip on Azure to the latest version. I have tried running
```
D:\python34\python.exe -m pip install --upgrade pip
```
in the Kudu console on Azure, but this didn't work and gave me the error below.
```
Access is denied: 'd:\\python34\\lib\\site-packages\\pip-1.5.6.dist-info\\description.rst'
```
I really appreciate your help. | 2017/11/24 | [
"https://Stackoverflow.com/questions/47481294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9005590/"
] | I found the right method here:
<https://stackoverflow.com/a/41843617/9005590>
I logged in to the Kudu console on Azure so that I could edit the deploy.cmd file, and added
```
env\scripts\python -m pip install --upgrade pip
``` | For upgrading the Python version, please try the following manual steps:
• Navigate to Azure portal
• Click on App Service blade of Web App, select Extensions and then Add.
• From the list of extensions, scroll down until you spot the Python logos, then choose the version you need and let us know how this goes. |
54,166,387 | I would like to see the accuracy of the speech services from Azure, specifically speech-to-text using an audio file.
I have been reading the documentation <https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-speech/?view=azure-python> and playing around with some suggested code from the MS quickstart page. The code works fine and I can get some transcription, but it only transcribes the beginning of the audio (the first utterance):
```
import azure.cognitiveservices.speech as speechsdk
speechKey = 'xxx'
service_region = 'westus'
speech_config = speechsdk.SpeechConfig(subscription=speechKey, region=service_region, speech_recognition_language="es-MX")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=False, filename='lala.wav')
sr = speechsdk.SpeechRecognizer(speech_config, audio_config)
es = speechsdk.EventSignal(sr.recognized, sr.recognized)
result = sr.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
print("Speech Recognition canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
print("Error details: {}".format(cancellation_details.error_details))
```
Based on the documentation, it looks like I have to use signals and events to capture the full audio using the start\_continuous\_recognition method (which is not documented for Python, but it looks like the method and related classes are implemented).
I tried to follow other examples from C# and Java but was not able to implement this in Python.
Has anyone been able to do this and provide some pointers?
Thank you very much! | 2019/01/13 | [
"https://Stackoverflow.com/questions/54166387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10267489/"
] | Check the Azure python sample: <https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py>
Or other language samples: <https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples>
Basically, the below:
```
def speech_recognize_continuous_from_file():
"""performs continuous speech recognition with input from an audio file"""
# <SpeechContinuousRecognitionWithFile>
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
done = False
def stop_cb(evt):
"""callback that stops continuous recognition upon receiving an event `evt`"""
print('CLOSING on {}'.format(evt))
speech_recognizer.stop_continuous_recognition()
nonlocal done
done = True
# Connect callbacks to the events fired by the speech recognizer
speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING: {}'.format(evt)))
speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt)))
speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
# stop continuous recognition on either session stopped or canceled events
speech_recognizer.session_stopped.connect(stop_cb)
speech_recognizer.canceled.connect(stop_cb)
# Start continuous speech recognition
speech_recognizer.start_continuous_recognition()
while not done:
time.sleep(.5)
# </SpeechContinuousRecognitionWithFile>
``` | You could try this:
```
import azure.cognitiveservices.speech as speechsdk
import time
speech_key, service_region = "xyz", "WestEurope"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region, speech_recognition_language="it-IT")
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
speech_recognizer.session_stopped.connect(lambda evt: print('\nSESSION STOPPED {}'.format(evt)))
speech_recognizer.recognized.connect(lambda evt: print('\n{}'.format(evt.result.text)))
print('Say a few words\n\n')
speech_recognizer.start_continuous_recognition()
time.sleep(10)
speech_recognizer.stop_continuous_recognition()
speech_recognizer.session_started.disconnect_all()
speech_recognizer.recognized.disconnect_all()
speech_recognizer.session_stopped.disconnect_all()
```
Remember to set your preferred language. It's not much, but it's a good starting point, and it works. I will continue experimenting. |
54,166,387 | I would like to see the accuracy of the speech services from Azure, specifically speech-to-text using an audio file.
I have been reading the documentation <https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-speech/?view=azure-python> and playing around with some suggested code from the MS quickstart page. The code works fine and I can get some transcription, but it only transcribes the beginning of the audio (the first utterance):
```
import azure.cognitiveservices.speech as speechsdk
speechKey = 'xxx'
service_region = 'westus'
speech_config = speechsdk.SpeechConfig(subscription=speechKey, region=service_region, speech_recognition_language="es-MX")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=False, filename='lala.wav')
sr = speechsdk.SpeechRecognizer(speech_config, audio_config)
es = speechsdk.EventSignal(sr.recognized, sr.recognized)
result = sr.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
print("Speech Recognition canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
print("Error details: {}".format(cancellation_details.error_details))
```
Based on the documentation, it looks like I have to use signals and events to capture the full audio using the start\_continuous\_recognition method (which is not documented for Python, but it looks like the method and related classes are implemented).
I tried to follow other examples from C# and Java but was not able to implement this in Python.
Has anyone been able to do this and provide some pointers?
Thank you very much! | 2019/01/13 | [
"https://Stackoverflow.com/questions/54166387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10267489/"
] | You could try this:
```
import azure.cognitiveservices.speech as speechsdk
import time
speech_key, service_region = "xyz", "WestEurope"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region, speech_recognition_language="it-IT")
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
speech_recognizer.session_stopped.connect(lambda evt: print('\nSESSION STOPPED {}'.format(evt)))
speech_recognizer.recognized.connect(lambda evt: print('\n{}'.format(evt.result.text)))
print('Say a few words\n\n')
speech_recognizer.start_continuous_recognition()
time.sleep(10)
speech_recognizer.stop_continuous_recognition()
speech_recognizer.session_started.disconnect_all()
speech_recognizer.recognized.disconnect_all()
speech_recognizer.session_stopped.disconnect_all()
```
Remember to set your preferred language. It's not much, but it's a good starting point, and it works. I will continue experimenting. | And to further assist with @David Beauchemin's solution, the following code block worked for me to get the final result in a neat list:
```
speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING:{}'.format(evt)))
speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED:{}'.format(evt)))
all_results = []
def handle_final_result(evt):
all_results.append(evt.result.text)
speech_recognizer.recognized.connect(handle_final_result)
speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED:{}'.format(evt)))
speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
speech_recognizer.session_stopped.connect(stop_cb)
speech_recognizer.canceled.connect(stop_cb)
``` |
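Stripped of the SDK, the connect-then-fire callback pattern these snippets rely on is plain Python. A minimal stand-in (the `EventSignal` class and the sample utterances here are hypothetical, for illustration only — not the SDK's real implementation):

```python
class EventSignal:
    """Minimal stand-in for the SDK's event signals."""

    def __init__(self):
        self._callbacks = []

    def connect(self, callback):
        self._callbacks.append(callback)

    def signal(self, evt):
        # fire every connected callback in connection order
        for callback in self._callbacks:
            callback(evt)


recognized = EventSignal()
all_results = []
recognized.connect(all_results.append)

# pretend these are the utterances the recognizer emits over time
for utterance in ["first utterance", "second utterance"]:
    recognized.signal(utterance)

print(all_results)
```

This is why connecting a handler to `recognized` accumulates every utterance, while `recognize_once()` returns after the first one.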
54,166,387 | I would like to see the accuracy of the speech services from Azure, specifically speech-to-text using an audio file.
I have been reading the documentation <https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-speech/?view=azure-python> and playing around with some suggested code from the MS quickstart page. The code works fine and I can get some transcription, but it only transcribes the beginning of the audio (the first utterance):
```
import azure.cognitiveservices.speech as speechsdk
speechKey = 'xxx'
service_region = 'westus'
speech_config = speechsdk.SpeechConfig(subscription=speechKey, region=service_region, speech_recognition_language="es-MX")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=False, filename='lala.wav')
sr = speechsdk.SpeechRecognizer(speech_config, audio_config)
es = speechsdk.EventSignal(sr.recognized, sr.recognized)
result = sr.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
print("Speech Recognition canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
print("Error details: {}".format(cancellation_details.error_details))
```
Based on the documentation, it looks like I have to use signals and events to capture the full audio using the start\_continuous\_recognition method (which is not documented for Python, but it looks like the method and related classes are implemented).
I tried to follow other examples from C# and Java but was not able to implement this in Python.
Has anyone been able to do this and provide some pointers?
Thank you very much! | 2019/01/13 | [
"https://Stackoverflow.com/questions/54166387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10267489/"
] | Check the Azure python sample: <https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py>
Or other language samples: <https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples>
Basically, the below:
```
def speech_recognize_continuous_from_file():
"""performs continuous speech recognition with input from an audio file"""
# <SpeechContinuousRecognitionWithFile>
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
done = False
def stop_cb(evt):
"""callback that stops continuous recognition upon receiving an event `evt`"""
print('CLOSING on {}'.format(evt))
speech_recognizer.stop_continuous_recognition()
nonlocal done
done = True
# Connect callbacks to the events fired by the speech recognizer
speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING: {}'.format(evt)))
speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt)))
speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
# stop continuous recognition on either session stopped or canceled events
speech_recognizer.session_stopped.connect(stop_cb)
speech_recognizer.canceled.connect(stop_cb)
# Start continuous speech recognition
speech_recognizer.start_continuous_recognition()
while not done:
time.sleep(.5)
# </SpeechContinuousRecognitionWithFile>
``` | And to further improve on @manyways' solution, here is one way to collect the data at the end:
```
all_results = []
def handle_final_result(evt):
all_results.append(evt.result.text)
speech_recognizer.recognized.connect(handle_final_result) # to collect data at the end
``` |
54,166,387 | I would like to see the accuracy of the speech services from Azure, specifically speech-to-text using an audio file.
I have been reading the documentation <https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-speech/?view=azure-python> and playing around with some suggested code from the MS quickstart page. The code works fine and I can get some transcription, but it only transcribes the beginning of the audio (the first utterance):
```
import azure.cognitiveservices.speech as speechsdk
speechKey = 'xxx'
service_region = 'westus'
speech_config = speechsdk.SpeechConfig(subscription=speechKey, region=service_region, speech_recognition_language="es-MX")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=False, filename='lala.wav')
sr = speechsdk.SpeechRecognizer(speech_config, audio_config)
es = speechsdk.EventSignal(sr.recognized, sr.recognized)
result = sr.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
print("Speech Recognition canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
print("Error details: {}".format(cancellation_details.error_details))
```
Based on the documentation, it looks like I have to use signals and events to capture the full audio using the start\_continuous\_recognition method (which is not documented for Python, but it looks like the method and related classes are implemented).
I tried to follow other examples from C# and Java but was not able to implement this in Python.
Has anyone been able to do this and provide some pointers?
Thank you very much! | 2019/01/13 | [
"https://Stackoverflow.com/questions/54166387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10267489/"
] | Check the Azure python sample: <https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py>
Or other language samples: <https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples>
Basically, the below:
```
def speech_recognize_continuous_from_file():
"""performs continuous speech recognition with input from an audio file"""
# <SpeechContinuousRecognitionWithFile>
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
done = False
def stop_cb(evt):
"""callback that stops continuous recognition upon receiving an event `evt`"""
print('CLOSING on {}'.format(evt))
speech_recognizer.stop_continuous_recognition()
nonlocal done
done = True
# Connect callbacks to the events fired by the speech recognizer
speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING: {}'.format(evt)))
speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt)))
speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
# stop continuous recognition on either session stopped or canceled events
speech_recognizer.session_stopped.connect(stop_cb)
speech_recognizer.canceled.connect(stop_cb)
# Start continuous speech recognition
speech_recognizer.start_continuous_recognition()
while not done:
time.sleep(.5)
# </SpeechContinuousRecognitionWithFile>
``` | And to further assist with @David Beauchemin's solution, the following code block worked for me to get the final result in a neat list:
```
speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING:{}'.format(evt)))
speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED:{}'.format(evt)))
all_results = []
def handle_final_result(evt):
all_results.append(evt.result.text)
speech_recognizer.recognized.connect(handle_final_result)
speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED:{}'.format(evt)))
speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
speech_recognizer.session_stopped.connect(stop_cb)
speech_recognizer.canceled.connect(stop_cb)
``` |
54,166,387 | I would like to see the accuracy of the speech services from Azure, specifically speech-to-text using an audio file.
I have been reading the documentation <https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-speech/?view=azure-python> and playing around with some suggested code from the MS quickstart page. The code works fine and I can get some transcription, but it only transcribes the beginning of the audio (the first utterance):
```
import azure.cognitiveservices.speech as speechsdk
speechKey = 'xxx'
service_region = 'westus'
speech_config = speechsdk.SpeechConfig(subscription=speechKey, region=service_region, speech_recognition_language="es-MX")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=False, filename='lala.wav')
sr = speechsdk.SpeechRecognizer(speech_config, audio_config)
es = speechsdk.EventSignal(sr.recognized, sr.recognized)
result = sr.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
print("Speech Recognition canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
print("Error details: {}".format(cancellation_details.error_details))
```
Based on the documentation, it looks like I have to use signals and events to capture the full audio using the start\_continuous\_recognition method (which is not documented for Python, but it looks like the method and related classes are implemented).
I tried to follow other examples from C# and Java but was not able to implement this in Python.
Has anyone been able to do this and provide some pointers?
Thank you very much! | 2019/01/13 | [
"https://Stackoverflow.com/questions/54166387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10267489/"
] | And to further improve on @manyways' solution, here is one way to collect the data at the end:
```
all_results = []
def handle_final_result(evt):
all_results.append(evt.result.text)
speech_recognizer.recognized.connect(handle_final_result) # to collect data at the end
``` | And to further assist with @David Beauchemin's solution, the following code block worked for me to get the final result in a neat list:
```
speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING:{}'.format(evt)))
speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED:{}'.format(evt)))
all_results = []
def handle_final_result(evt):
all_results.append(evt.result.text)
speech_recognizer.recognized.connect(handle_final_result)
speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED:{}'.format(evt)))
speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
speech_recognizer.session_stopped.connect(stop_cb)
speech_recognizer.canceled.connect(stop_cb)
``` |
47,792,503 | I have been having issues with the code I am trying to write for the model I am trying to build. The following error has appeared, and being a relative novice I am unsure how to resolve it.
```
ValueError Traceback (most recent call last)
<ipython-input-2-5f21a0ce8185> in <module>()
26 proposed[j] = proposed[j] + np.random.normal(0,propsigma[j])
27 if (proposed[j]>0): # automatically reject moves if proposed parameter <=0
---> 28 alpha = np.exp(logistic_loglik(proposed,time,ExRatio,sig)-logistic_loglik(par_out[i-1,],time,ExRatio,sig))
29 u = np.random.rand()
30 if (u < alpha):
<ipython-input-2-5f21a0ce8185> in logistic_loglik(params, t, data, sig)
3 # set up a function to return the log likelihood
4 def logistic_loglik(params,t,data,sig):
----> 5 return sum(norm.logpdf(logistic(data, t, params),sig))
6
7 # set standard deviations to be 10% of the population values
<ipython-input-1-c9480e66b7ef> in logistic(x, t, params)
6
7 def logistic(x,t,params):
----> 8 S, R, A = x
9 r, Nmax, delta_s, beta, gamma, delta_r, delta_a, Emax, H, MICs, MICr = params
10 N = S + R
ValueError: too many values to unpack (expected 3)
```
The model I am trying to code is an MCMC to fit some ODE's to some data I have added the code below for context.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
%matplotlib inline
def logistic(x,t,params):
S, R, A = x
r, Nmax, delta_s, beta, gamma, delta_r, delta_a, Emax, H, MICs, MICr = params
N = S + R
E_s = 1 - (Emax * A**H)/(MICs**H + A**H)
E_r = 1- (Emax * A**H)/(MICr**H + A**H)
derivs = [r * (1 - N / Nmax ) * E_s * S - delta_s * S - ((beta * S * R)/N),
r * (1 - gamma) * (1 - N/Nmax) * E_r * R - delta_r * R + ((beta * S * R)/N), - delta_a * A]
return derivs
r = 0.5
Nmax = 10**7
delta_s = 0.025
beta = 10**-2
gamma = 0.5
delta_r = 0.025
delta_a = 0.003
Emax = 2
H = 2
MICs = 8
MICr = 2000
params = [r, Nmax, delta_s, beta, gamma, delta_r, delta_a, Emax, H, MICs, MICr]
S = 9 * 10**6
R = 10**5
A = 5.6
x0 = [S, R, A]
maxt = 2000
tstep = 1
t = np.arange(0,maxt,tstep)
def logistic_resid(params,t,data):
return logistic(params,t)-data
logistic_out = odeint(logistic, x0, t, args=(params,))
time = np.array([0, 168, 336, 504, 672, 840, 1008, 1176, 1344, 1512, 1680, 1848, 2016, 2184, 2352, 2520, 2688, 2856])
ExRatio = np.array([2, 27, 43, 36, 39, 32, 27, 22, 13, 10, 14, 14, 4, 4, 7, 3, 3, 1])
ratio = 100* logistic_out[:,1]/(logistic_out[:,0]+logistic_out[:,1])
plt.plot(t,ratio)
plt.plot(time,ExRatio,'h')
plt.xlabel('Position')
plt.ylabel('Pollution')
```
New Cell
```
from scipy.stats import norm
# set up a function to return the log likelihood
def logistic_loglik(params,t,data,sig):
return sum(norm.logpdf(logistic(data, t, params),sig))
# set standard deviations to be 10% of the population values
sig = ExRatio/10
# parameters for the MCMC
reps = 50000
npars = 3
# output matrix
par_out = np.ones(shape=(reps,npars))
# acceptance
accept = np.zeros(shape=(reps,npars))
# proposal standard deviations. These have been pre-optimized.
propsigma = [0.05,20,5]
for i in range(1,reps):
# make a copy of previous parameters
par_out[i,] = par_out[i-1,]
for j in range(npars):
proposed = np.copy(par_out[i,:]) # we need to make a copy so that rejected moves don't affect the original matrix
proposed[j] = proposed[j] + np.random.normal(0,propsigma[j])
if (proposed[j]>0): # automatically reject moves if proposed parameter <=0
alpha = np.exp(logistic_loglik(proposed,time,ExRatio,sig)-logistic_loglik(par_out[i-1,],time,ExRatio,sig))
u = np.random.rand()
if (u < alpha):
par_out[i,j] = proposed[j]
accept[i,j] = 1
#print(sum(accept[range(101,reps),:])/(reps-100))
#plt.plot(par_out[:,0])
#plt.plot(par_out[range(101,reps),0])
#plt.plot(par_out[:,0],par_out[:,2])
plt.hist(par_out[range(101,reps),0],50)
print('\n')
a=np.mean(par_out[range(101,reps),0])
```
I think it's mistaking my parameters for something else, but that might be wrong.
I am using Jupyter notebook | 2017/12/13 | [
"https://Stackoverflow.com/questions/47792503",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9090239/"
You cannot use `S, R, A = x` if `x` is empty or has too few (or too many) values to unpack.
From what I see, you are trying to define the S, R and A values from the variable `x`. That works only if `x` has a length of exactly 3. If you want to assign certain `x` values to specific S, R or A variables, use a loop, or you can use:
`S, R, *A = x`,
this way the variables S and R get the first and second elements of x, and A gets the rest. You can put `*` before any variable to make it absorb the excess values stored in x. | The immediate error
--------------------------
In your call
```
---> 28 alpha = np.exp(logistic_loglik(proposed,time,ExRatio,sig)-logistic_loglik(par_out[i-1,],time,ExRatio,sig))
```
to
```
4 def logistic_loglik(params,t,data,sig):
----> 5 return sum(norm.logpdf(logistic(data, t, params),sig))
```
where finally the parameters are used as defined in
```
7 def logistic(x,t,params):
----> 8 S, R, A = x
```
the `x` that causes the error is the `data` of the previous call, which is set to `ExRatio` in the first call and is defined in your first block as an array of 18 sample values. There might be something wrong with the logic that you use, as `ExRatio` does not have the structure of 3 state variables.
---
The correct implementation of the concept of the log-likelihood sum
-------------------------------------------------------------------
What you want is to compute the log-likelihood of the computed ratios at your sample points, where the distribution for `ExTime[k]` is a normal distribution with mean `ExRatio[k]` and standard deviation `sig[k]`, which is set to `ExRatio[k]/10`. In your code you need to do exactly that: solve the ODE with the proposed initial values, compute the ratios, and sum the logs of the pdf values:
```
# set up a function to return the log likelihood
def logistic_loglik(SRA0,ExTime,ExRatio,sig):
# solve the ODE with the trial values `SRA0` and
# output the samples at the sample times `ExTime`
logistic_out = odeint(logistic, SRA0, ExTime, args=(params,))
# compute the ratios
ratio = 100* logistic_out[:,1]/(logistic_out[:,0]+logistic_out[:,1])
# return the summed log-likelihood
return sum(norm.logpdf(ratio, ExRatio, sig))
```
Trying variants of `propsigma` leads to initially rapid convergence to qualitatively reasonable fits.
```
propsigma i S, R, A = par_out[i]
[0.05,20.,5.] 59 [ 2.14767909 0.18163897 5.45312544]
[20,0.5,5.] 39 [ 56.48959836 0.50890498 5.80229728]
[5.,2.,5.] 79 [ 67.26394337 0.15865463 6.0213663 ]
``` |
20,105,738 | So I am relatively new to coding and python, and I am trying to follow examples and tutorials on youtube to help me learn more. Currently I am watching and following this tutorial "[Intro to scikit-learn](http://www.youtube.com/watch?v=uX4ZirOiWkw)"
I have visited the supplied GitHub repository (something else I am new to) and attempted to load the contents of the GitHub files in IPython, but to little effect. I was wondering if anybody could provide some assistance or instructions on how to do this?
[Here is a link](https://github.com/jakevdp/sklearn_scipy2013/blob/6fab3e5394d653f44a72c572e7ee7920c513b36b/notebooks/06.1_validation_and_testing.ipynb) to the github in question. | 2013/11/20 | [
"https://Stackoverflow.com/questions/20105738",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2444373/"
] | Below is an approximate scenario:
```
In [1]: !git clone your_desired_project
In [2]: import sys
In [3]: sys.path.insert(0, '/path/to/folder')
In [4]: import desired_module
```
Instead of `In [2]` and `In [3]` you can
```
In [2]: %cd /path/to/folder
``` | You can just clone the git repository and launch IPython from the notebooks folder:
```
git clone https://github.com/jakevdp/sklearn_scipy2013.git
cd sklearn_scipy2013/notebooks
ipython notebook
``` |
34,273,859 | I'm trying to click on multiple dropdown lists on a page but I keep getting an error saying that my list object has no attribute tag\_name'.
My code
```
def click_follow_buttons(driver):
selects = Select(driver.find_elements_by_class_name("jBa"))#jBa
print selects
for select in selects:
select.select_by_index(0)
driver.find_element_by_class_name("bA").click()
```
My traceback
```
Traceback (most recent call last):
File "google_follow.py", line 50, in <module>
if click_follow_buttons(driver) == False:
File "google_follow.py", line 18, in click_follow_buttons
selects = Select(driver.find_elements_by_class_name("jBa"))#jBa
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/support/select.py", line 35, in __init__
if webelement.tag_name.lower() != "select":
AttributeError: 'list' object has no attribute 'tag_name'
```
The HTML dropdown
```
<div class="jBa XG">
<div class="ny dl d-k-l" jslog="7128; track:impression">
``` | 2015/12/14 | [
"https://Stackoverflow.com/questions/34273859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | First of all, you are using the `find_elements_by_class_name()` method, which returns a *list of web elements* matching the class name, not a single element.
But even if you used `find_element_by_class_name()` instead, you would get a different error, since the element matching the class name is a `div`, not a `select`. | You need to pass the constructor of the `Select` class a web element whose tag name is `select`:
<https://selenium.googlecode.com/git/docs/api/py/webdriver_support/selenium.webdriver.support.select.html>
>
> Constructor. A check is made that the given element is, indeed, a
> SELECT tag. If it is not, then an UnexpectedTagNameException is
> thrown.
>
>
> |
63,379,860 | Completely new to the webhook concept and Rundeck. I have a job in Rundeck that checks the health of some servers, the code being in Python.
[Fetch 200 Ok status after running Curl Command and using that status write a condition using python in RUNDECK](https://stackoverflow.com/questions/61952749/fetch-200-ok-status-after-running-curl-command-and-using-that-status-write-a-con)
I want to use a webhook to provide updates via an email/Slack channel to 5-6 users.
I created a webhook and selected a job for it to invoke, but I didn't understand what to enter in the options section [Job Option arguments, in the form `-opt1 value -opt2 "other value"`].
When I click on the webhook URL it gives a 404 Not Found error.
These might be very basic questions; sorry, and kindly help. | 2020/08/12 | [
"https://Stackoverflow.com/questions/63379860",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13542200/"
] | >
> I want to use a webhook to provide updates via an email/Slack channel to 5-6 users
>
>
>
Webhooks are used to enable third-party applications to trigger jobs. If you just want to send notifications of job status when a job is run, you don't need to use a webhook.
When you configure the job there's a "Notifications" tab. You can select when and how to send notifications. For Slack specifically, there is a [notification plugin](https://github.com/rundeck-plugins/pagerduty-notification/) you can install. | To pass options to a webhook you need to pass them in this [way](https://stackoverflow.com/a/60400477/10426011). [Here](https://stackoverflow.com/a/63359601/10426011) you have a good example of integrating it with Python code.
UPDATE: Anyway, this [answer](https://stackoverflow.com/a/63385980/10426011) is the way to achieve your goal. |
31,054,393 | I just conducted an interesting test:
```
~$ python3 # I also conducted this on python 2.7.6, with the same result
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class Foo(object):
... def __add__(self, other):
... global add_calls
... add_calls += 1
... return Foo()
... def __iadd__(self, other):
... return self
...
>>> add_calls = 0
>>> a = list(map(lambda x:Foo(), range(6)))
>>> a[0] + a[1] + a[2]
<__main__.Foo object at 0x7fb588e6c400>
>>> add_calls
2
>>> add_calls = 0
>>> sum(a, Foo())
<__main__.Foo object at 0x7fb588e6c4a8>
>>> add_calls
6
```
Obviously, the `__iadd__` method is more efficient than the `__add__` method, not requiring the allocation of a new object. If my objects being added were sufficiently complicated, this would create unnecessary new objects, potentially creating huge bottlenecks in my code.
I would expect that, in an `a[0] + a[1] + a[2]`, the first operation would call `__add__`, and the second operation would call `__iadd__` on the newly created object.
Why doesn't python optimize this? | 2015/06/25 | [
"https://Stackoverflow.com/questions/31054393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1335431/"
] | The `__add__` method is free to return a different type of object, while `__iadd__` should, if using in-place semantics, return `self`. They are not required to return the same type of object here, so `sum()` should not rely on the special semantics of `__iadd__`.
You can use the [`functools.reduce()` function](https://docs.python.org/3/library/functools.html#functools.reduce) to implement your desired functionality yourself:
```
from functools import reduce
sum_with_inplace_semantics = reduce(Foo.__iadd__, a, Foo())
```
Demo:
```
>>> from functools import reduce
>>> class Foo(object):
... def __add__(self, other):
... global add_calls
... add_calls += 1
... return Foo()
... def __iadd__(self, other):
... global iadd_calls
... iadd_calls += 1
... return self
...
>>> a = [Foo() for _ in range(6)]
>>> result = Foo()
>>> add_calls = iadd_calls = 0
>>> reduce(Foo.__iadd__, a, result) is result
True
>>> add_calls, iadd_calls
(0, 6)
``` | [Martijn's answer](https://stackoverflow.com/a/31054540/1335431) provides an excellent workaround, but I feel the need to summarize the bits and pieces of answers scattered throughout the comments:
The `sum` function is primarily used for immutable types. Performing all additions except the first in-place would create a performance improvement on objects that had an `__iadd__` method, but checking for the `__iadd__` method would cause a performance loss in the more typical case. [Special cases aren't special enough to break the rules](https://www.python.org/dev/peps/pep-0020/).
I also stated that `__add__` should probably only be called once in `a + b + c`, where `a + b` creates a temporary variable, and then calls `tmp.__iadd__(c)` before returning it. However, this would violate the principle of least surprise. |
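A concrete illustration with built-in lists of why `sum()` switching to in-place addition behind your back would be surprising (`operator.iadd` performs `a += b`, calling `__iadd__` when it exists):

```python
from functools import reduce
import operator

start = []
total = reduce(operator.iadd, [[1], [2], [3]], start)
print(total)   # [1, 2, 3]
print(start)   # [1, 2, 3] -- the start value itself was mutated

start2 = []
total2 = sum([[1], [2], [3]], start2)
print(total2)  # [1, 2, 3]
print(start2)  # []        -- sum() left its start value untouched
```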
31,054,393 | I just conducted an interesting test:
```
~$ python3 # I also conducted this on python 2.7.6, with the same result
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class Foo(object):
... def __add__(self, other):
... global add_calls
... add_calls += 1
... return Foo()
... def __iadd__(self, other):
... return self
...
>>> add_calls = 0
>>> a = list(map(lambda x:Foo(), range(6)))
>>> a[0] + a[1] + a[2]
<__main__.Foo object at 0x7fb588e6c400>
>>> add_calls
2
>>> add_calls = 0
>>> sum(a, Foo())
<__main__.Foo object at 0x7fb588e6c4a8>
>>> add_calls
6
```
Obviously, the `__iadd__` method is more efficient than the `__add__` method, not requiring the allocation of a new object. If my objects being added were sufficiently complicated, this would create unnecessary new objects, potentially creating huge bottlenecks in my code.
I would expect that, in an `a[0] + a[1] + a[2]`, the first operation would call `__add__`, and the second operation would call `__iadd__` on the newly created object.
Why doesn't python optimize this? | 2015/06/25 | [
"https://Stackoverflow.com/questions/31054393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1335431/"
] | The `__add__` method is free to return a different type of object, while `__iadd__` should, if using in-place semantics, return `self`. They are not required to return the same type of object here, so `sum()` should not rely on the special semantics of `__iadd__`.
You can use the [`functools.reduce()` function](https://docs.python.org/3/library/functools.html#functools.reduce) to implement your desired functionality yourself:
```
from functools import reduce
sum_with_inplace_semantics = reduce(Foo.__iadd__, a, Foo())
```
Demo:
```
>>> from functools import reduce
>>> class Foo(object):
... def __add__(self, other):
... global add_calls
... add_calls += 1
... return Foo()
... def __iadd__(self, other):
... global iadd_calls
... iadd_calls += 1
... return self
...
>>> a = [Foo() for _ in range(6)]
>>> result = Foo()
>>> add_calls = iadd_calls = 0
>>> reduce(Foo.__iadd__, a, result) is result
True
>>> add_calls, iadd_calls
(0, 6)
``` | Since you are writing your class anyway, you know its `__add__` can return the same object as well, don't you?
And therefore you can make your optimized code run with both the `+` operator and the built-in `sum`:
```
>>> class Foo(object):
... def __add__(self, other):
... global add_calls
... add_calls += 1
... return self
```
(Just beware of passing your code to third party functions that expect "+" to be a new object) |
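A short sketch of that caveat: when `__add__` returns `self`, the result of `a + b` aliases the left operand (the class name here is hypothetical).

```python
class Acc(object):
    def __init__(self, v=0):
        self.v = v

    def __add__(self, other):
        self.v += other.v
        return self  # in-place "add": the result is the left operand itself

a, b = Acc(1), Acc(2)
c = a + b
print(c is a)  # True  -- surprising for code that expects a fresh object
print(a.v)     # 3     -- the left operand was silently modified
```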
55,658,189 | Two python objects have the same id but "is" operation returns false as shown below:
```
a = np.arange(12).reshape(2, -1)
c = a.reshape(12, 1)
print("id(c.data)", id(c.data))
print("id(a.data)", id(a.data))
print(c.data is a.data)
print(id(c.data) == id(a.data))
```
Here is the actual output:
```
id(c.data) 241233112
id(a.data) 241233112
False
True
```
My question is... why "c.data is a.data" returns false even though they point to the same ID, thus pointing to the same object? I thought that they point to the same object if they have same ID or am I wrong? Thank you! | 2019/04/12 | [
"https://Stackoverflow.com/questions/55658189",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1985968/"
] | `a.data` and `c.data` both produce a *transient* object, with no reference to it. As such, both are immediately garbage-collected. The same id can be used for both.
In the first comparison (`c.data is a.data`), the objects have to co-exist while `is` checks whether they are identical, which they are not.
In the second comparison, each object is released as soon as `id` returns its id.
If you save references to both objects, keeping them alive, you can see they are not the same object.
```
r0 = a.data
r1 = c.data
assert r0 is not r1
``` | ```
In [62]: a = np.arange(12).reshape(2,-1)
...: c = a.reshape(12,1)
```
`.data` returns a `memoryview` object. `id` just gives the id of that object; it's not the value of the object, nor any indication of where the data buffer of `a` is located.
```
In [63]: a.data
Out[63]: <memory at 0x7f672d1101f8>
In [64]: c.data
Out[64]: <memory at 0x7f672d1103a8>
In [65]: type(a.data)
Out[65]: memoryview
```
<https://docs.python.org/3/library/stdtypes.html#memoryview>
If you want to verify that `a` and `c` share a data buffer, I find the `__array_interface__` to be a better tool.
```
In [66]: a.__array_interface__['data']
Out[66]: (50988640, False)
In [67]: c.__array_interface__['data']
Out[67]: (50988640, False)
```
It even shows the offset produced by slicing - here 24 bytes, 3\*8
```
In [68]: c[3:].__array_interface__['data']
Out[68]: (50988664, False)
```
---
I haven't seen much use of `a.data`. It can be used as the `buffer` object when creating a new array with `ndarray`:
```
In [70]: d = np.ndarray((2,6), dtype=a.dtype, buffer=a.data)
In [71]: d
Out[71]:
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11]])
In [72]: d.__array_interface__['data']
Out[72]: (50988640, False)
```
But normally we create new arrays with shared memory with slicing or `np.array` (copy=False). |
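The transient-object effect from the first answer can be reproduced with plain Python objects; note this id reuse is CPython allocator behavior, not a language guarantee:

```python
class Foo(object):
    pass

# Each Foo() below is garbage-collected as soon as id() returns, so CPython
# may (and typically does) hand the next object the same memory address.
print(id(Foo()) == id(Foo()))  # usually True in CPython

# Keeping references alive forces two distinct, coexisting objects:
a, b = Foo(), Foo()
print(a is b, id(a) == id(b))  # False False
```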
20,237,871 | I'm new in python.
I want to read data from a csv file, and then create a graph from those data.
I have a csv file with 2 columns and 20 rows.
In the first row I have 1 1, the second row has 2 2, and so on, until 20 20.
I want to take these coordinates and make a graph.
This is what I have so far:
```
import csv
from pylab import *
with open('test.csv', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|') # open the csv file
for row in spamreader:
print ', '.join(row) # in each loop, row is getting the data,
# first row is [1,1] , then [2,2] and so on
plot()
show()
```
Now, what I was thinking is to end up with `row` holding all the data, 2 columns by 20 rows.
Then I need a parameter x for the first column and y for the second column, and pass x, y to plot.
My problem is that I don't know how to save all the values from `row`, and how to take only the first and second columns.
Thank you all! | 2013/11/27 | [
"https://Stackoverflow.com/questions/20237871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1171526/"
] | You had a problem reading the csv file:
1) **delimiter = ','**
Regarding populating the x and y values for the graph: just read the first and second values of each row and append them to the x and y lists.
**Here is the modified code:**
```
import csv
from pylab import *
with open('test.csv', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=',', quotechar='|') # open the csv file
x = []
y = []
for row in spamreader:
x.append(float(row[0])) # convert the string values to numbers
y.append(float(row[1]))
print ', '.join(row) # in each loop, row is getting the data,
# first row is [1,1] , then [2,2] and so on
plot(x, y)
show()
``` | You can use this to get your X and Y axis:
```
with open('./test4.csv', 'rb') as csvfile:
(X,Y) = zip(*[row.split() for row in csvfile])
``` |
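Note that both snippets above still hold the values as strings. A small sketch of the same `zip` idea with numeric conversion, using an in-memory stand-in for the file's space-separated lines:

```python
rows = ["1 1", "2 2", "3 3"]  # stand-in for the lines of test.csv
X, Y = zip(*(map(float, row.split()) for row in rows))
print(X)  # (1.0, 2.0, 3.0)
print(Y)  # (1.0, 2.0, 3.0)
```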
61,975,704 | I am using the following tutorial for developing a basic neural network that does feedforward and backdrop. The link to the tutorial is here : [Python Neural Network Tutorial](https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6)
```
import numpy as np
def sigmoid(x):
return 1.0/(1+ np.exp(-x))
def sigmoid_derivative(x):
return x * (1.0 - x)
class NeuralNetwork:
def __init__(self, x, y):
self.input = x
self.weights1 = np.random.rand(self.input.shape[1],4)
self.weights2 = np.random.rand(4,1)
self.y = y
self.output = np.zeros(self.y.shape)
def feedforward(self):
self.layer1 = sigmoid(np.dot(self.input, self.weights1))
self.output = sigmoid(np.dot(self.layer1, self.weights2))
def backprop(self):
# application of the chain rule to find derivative of the loss function with respect to weights2 and weights1
d_weights2 = np.dot(self.layer1.T, (2*(self.y - self.output) * sigmoid_derivative(self.output)))
d_weights1 = np.dot(self.input.T, (np.dot(2*(self.y - self.output) * sigmoid_derivative(self.output), self.weights2.T) * sigmoid_derivative(self.layer1)))
# update the weights with the derivative (slope) of the loss function
self.weights1 += d_weights1
self.weights2 += d_weights2
if __name__ == "__main__":
X = np.array([[0,0,1],
[0,1,1],
[1,0,1],
[1,1,1]])
y = np.array([[0],[1],[1],[0]])
nn = NeuralNetwork(X,y)
for i in range(1500):
nn.feedforward()
nn.backprop()
print(nn.output)
```
What I'm trying to do is change the data set and return 1 if the predicted number is even and 0 if it is odd. So I made the following changes:
```
if __name__ == "__main__":
X = np.array([[2,4,6,8,10],
[1,3,5,7,9],
[11,13,15,17,19],
[22,24,26,28,30]])
y = np.array([[1],[0],[0],[1]])
nn = NeuralNetwork(X,y)
# The output I get is:
[[0.50000001]
[0.50000002]
[0.50000001]
[0.50000001]]
```
What am I doing wrong? | 2020/05/23 | [
"https://Stackoverflow.com/questions/61975704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4382723/"
] | Basically there are two problems here:
1. Your expression of `sigmoid_derivative` is wrong; it should be `return sigmoid(x) * (1.0 - sigmoid(x))`.
2. If you take a look at the sigmoid function plot or at your network weights, you will find that your network saturated due to your large inputs. By doing something like `X = X % 5` you can get the training result you want; this is my result on your data:
[[9.99626174e-01]
[3.55126310e-04]
[3.55126310e-04]
[9.99626174e-01]]
[image](https://i.stack.imgur.com/yVtai.png) | Just add `X = X/30` and train the network 10 times longer. This converged for me. You divide `X` by 30 to make every input fall between 0 and 1. You train it longer because it is a more complex dataset.
Your derivative is fine because when you use the derivative function, the input to it is already `sigmoid(x)`. So `x*(1-x)` *is* `sigmoid(x)*(1-sigmoid(x))` |
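The saturation both answers point at can be checked numerically: for raw inputs like those in the question the sigmoid output is pinned near 1, so the gradient `s * (1 - s)` vanishes, while scaled inputs stay in the responsive range (stdlib-only sketch).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for v in (2.0, 10.0, 22.0, 30.0):
    s = sigmoid(v)
    # raw input: activation pinned near 1, gradient near 0; scaled input stays responsive
    print(v, s, s * (1.0 - s), sigmoid(v / 30.0))
```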
54,300,059 | I'm trying to validate an email address. The username, domain, and TLD must be digits and letters only, and between 3-20 in length. I've got that as
`[a-zA-Z0-9]{3,20}`
So, when I want to use that to check all 3 portions of the email, I would think to do it as follows:
`[a-zA-Z0-9]{3,20}\+@[a-zA-Z0-9]{3,20}\+\.[a-zA-Z0-9]{3,20}`
However, this does not validate. Is there a way to validate the min/max length once, instead of at 3 different points?
The email I'm attempting to validate is: `testuser@testdomain.com`
I'm writing this using python3 and the data is read in from a .csv | 2019/01/22 | [
"https://Stackoverflow.com/questions/54300059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7541680/"
] | The escaped `\+` tokens require a literal `+` character at those positions, which is why the emails do not match unless a `+` actually appears there:
This is what you want:
`[a-zA-Z0-9]{3,20}@[a-zA-Z0-9]{3,20}\.[a-zA-Z0-9]{3,20}`
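Applying that corrected pattern in Python might look like the following sketch; `re.fullmatch` anchors the pattern so partial matches cannot slip through:

```python
import re

pattern = re.compile(r"[a-zA-Z0-9]{3,20}@[a-zA-Z0-9]{3,20}\.[a-zA-Z0-9]{3,20}")

def is_valid(email):
    return pattern.fullmatch(email) is not None

print(is_valid("testuser@testdomain.com"))  # True
print(is_valid("ab@testdomain.com"))        # False: username shorter than 3
```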
To help write regex this site is a life saver:
<https://regex101.com/> | `([a-zA-Z0-9]{3,20})@([a-zA-Z0-9]{3,20})[.]([a-zA-Z0-9]{3,20})`
using <https://regex101.com> |
55,295,305 | I am facing a weird issue with the boto3 module in AWS. I am using the Serverless Framework with Lambda functions. I am using the AWS boto3 module and running the below code in Python. Code execution is successful when running locally but fails with UnknownServiceError when executed in AWS.
```
client_api = boto3.client(service_name='apigatewaymanagementapi')
```
After a lot of research, I found that local boto3 version is 1.9.119 and AWS boto3 version is 1.9.42. I am not too sure if this is the root cause for the issue.
I have tried installing boto3 in venv target and used that reference. No matter what, code execution fails in AWS.
I have checked if there is a way I can update aws boto3 version.
I have also tried adding boto3 as external dependency in requirements file
I have also tried adding layers with boto3 zip and mapped to the lambda function.
Unfortunately none of the solutions works. Please suggest alternate solution for this issue. | 2019/03/22 | [
"https://Stackoverflow.com/questions/55295305",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2442227/"
] | You are correct: the boto3 library in Lambda is older than what is on your local machine. You can create a Lambda layer that includes a newer version of boto3, or package boto3 in your Lambda deployment package.
Here are some links with step-by-step instructions. They install pymysql; you can replace that with boto3. Otherwise the instructions are exactly the same.
<https://geektopia.tech/post.php?blogpost=Create_Lambda_Layer_Python>
<https://geektopia.tech/post.php?blogpost=Create_Lambda_Package_Python> | This is what the Python 3.7 AWS Lambda environment looks like at the time of writing:
```
python: 3.7.2 (default, Mar 1 2019, 11:28:42)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)], boto3: 1.9.42, botocore: 1.12.42
```
By comparing botocore 1.12.42 (error) with 1.12.133 (working OK) I found that an outdated botocore in AWS Lambda is the culprit. One solution could be to include the latest botocore in your Lambda package, for example using the Python requirements plugin:
```
serverless plugin install -n serverless-python-requirements
```
And creating a `requirements.txt` file containing `botocore==1.12.133`
(instead of 1.12.133 you might want to use the latest version at the time you read this) |
34,211,781 | I'm having some problems logging into a certain website using Python. I'm using the POST method, but I think my parameters for the form are not right. This is my first time trying things like this, so maybe I'm doing it completely wrong; any kind of help is welcome.
This is what I got from the website:
```
<form method="post" action="/auth/login" id="login-form" novalidate>
<input type="hidden" name="_token" value="4d1964264067f1789bcbb7b01ca3f8366864ee7c" />
<div class="form-item text username">
<label>Gebruikersnaam</label>
<span><input type="email" name="username" autofocus /></span>
</div>
<div class="form-item text password">
<label>Wachtwoord</label>
<span><input type="password" name="password" /></span>
```
And this is my code,
```
import requests
import mechanize
from bs4 import BeautifulSoup
url = 'aurl.com'
br = mechanize.Browser()
br.set_handle_robots(False)
htmltext = br.open(url).read()
soup = BeautifulSoup(htmltext,"html.parser")
zoek = soup.findAll('input',attrs={'name':'_token'})
zoektekst = zoek[0]["value"]
print zoektekst
Payload = {'password':'??','_token':zoektekst,'username':'??@gmail.com'}
print Payload
r = requests.post("theurl.com",data=Payload)
print r.text
``` | 2015/12/10 | [
"https://Stackoverflow.com/questions/34211781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5666114/"
] | I've checked your code, and it seems to be working correctly. Yet without the URL and the error message I don't think anyone can answer why it does not work.
Wild guess for now: the form posts data to url/auth/login, yet you are posting to "url.com".
And a small tip:
```
#instead of
zoektekst = str(zoek[0])
_token = zoektekst.replace("input"," ").replace("<"," ").replace("name="," ") \
.replace("_token"," ").replace("type="," ") \
.replace("hidden"," ").replace("value="," ") \
.replace("/>","").replace('"','').replace(' ','')
# use
zoektekst = zoek[0]["value"]
# now you can remove this insane amount of replace's
``` | It's hard to tell exactly what is wrong since you've removed a lot of details, but assuming your `_token` is correct, I suspect your problem is:
```
r = requests.post("theurl.com",data=Payload)
```
needs to be
```
r = requests.post("theurl.com/auth/login",data=Payload)
``` |
61,450,379 | I am pretty new to programming and Python. My question is about some lines I had running, but first I'll explain. I wanted to write a program that would ask for your weight in pounds, and my program would convert it to kgs. Now here is the working version:
```
weight = input ("What is your weight in pounds? ")
converter = int(weight) * 0.45
print (converter)
```
Now I wanted it to work for decimals (lbs in decimals). So I wrote this:
```
weight = input ("What is your weight in pounds? ")
converter = int(0.45) * weight
print (converter)
```
But the second program doesn't work. Can anyone explain why? Thank you | 2020/04/27 | [
"https://Stackoverflow.com/questions/61450379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13414328/"
] | `int(0.45)` converts the 0.45 to an integer (whole number) which is truncated to 0 so you are effectively multiplying any input by 0.
In the original program you were taking the input as a string with the `input` command and then converting that string to an integer with `int(weight)`. If you want the program to work with decimals, use `float(weight)` instead. | In your second program you are casting the number 0.45 to an int, which evaluates to 0. To make this work with floats, just remove the `int()` around 0.45; since 0.45 is a floating-point number, the whole expression will be a float. |
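Putting both answers together, a minimal corrected sketch (the function name is just for illustration; 0.45 is the factor from the question):

```python
def pounds_to_kg(pounds):
    # float() accepts decimal input such as "150.5"; int() would reject it
    return float(pounds) * 0.45

print(pounds_to_kg("150.5"))  # about 67.725
```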
61,450,379 | I am pretty new to programming and Python. My question is about some lines I had running, but first I'll explain. I wanted to write a program that would ask for your weight in pounds, and my program would convert it to kgs. Now here is the working version:
```
weight = input ("What is your weight in pounds? ")
converter = int(weight) * 0.45
print (converter)
```
Now I wanted it to work for decimals (lbs in decimals). So I wrote this:
```
weight = input ("What is your weight in pounds? ")
converter = int(0.45) * weight
print (converter)
```
But the second program doesn't work. Can anyone explain why? Thank you | 2020/04/27 | [
"https://Stackoverflow.com/questions/61450379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13414328/"
] | `int(0.45)` converts the 0.45 to an integer (whole number) which is truncated to 0 so you are effectively multiplying any input by 0.
In the original program you were taking the input as a string with the `input` command and then converting that string to an integer with `int(weight)`. If you want the program to work with decimals, use `float(weight)` instead. | `weight = input("What is your weight in pounds? ")`
The above code always returns a string.
If you run the following after the above line, you will notice it prints **str**, which means it's a string data type.
```
print(type(weight))
```
Now that we know the type of data stored in the variable weight is **str**, we need to ensure that we convert it into a number before using it in a mathematical equation.
In your case, I understand that in your second program you want the output of the variable **converter** in decimals.
Hence you have to rewrite the line as follows:
```
converter = 0.45 * float(weight)
```
In order to ensure that the converter variable holds a decimal value, you can try:
```
print(type(converter))
```
If the above line gives the output **float**, you have got your intended output.
For future reference, you may refer to this link, which shows all the data types available in Python: <https://docs.python.org/3/library/datatypes.html> |
32,056,548 | Scenario:
```
dirA/
__init__.py
file1.py
file2.py
dirB/
__init__.py
file11.py
file22.py
dirC/
__init__.py
file111.py
file222.py
```
I read on <https://docs.python.org/2/tutorial/modules.html#tut-standardmodules> that Contrarily, **when using syntax like import item.subitem.subsubitem, each item except for the last must be a package; the last item can be a module or a package but can’t be a class or function or variable defined in the previous item.**
Now, because of the above reading, I have a number of doubts:
1. When I write import dirA.dirB, does this statement import the modules inside the dirC package too, or only the modules inside dirB?
2. What is the use of ***import dirA.dirB***? When I use this statement, I am still unable to use the modules inside the dirB sub-package.
And I know that to use the modules inside dirB I have to use ***import dirA.dirB.file11*** | 2015/08/17 | [
"https://Stackoverflow.com/questions/32056548",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2736302/"
] | When you do `import dirA.dirB`, you only import the package `dirB`, which means you basically run the `__init__.py` under `dirB/`. If that `__init__.py` defines some functions or classes etc., you would be able to use them after `import dirA.dirB`.
Example -
My directory structure -
- shared/
  - a.py
  - pkg/
    - `__init__.py`
    - b.py
Now in `pkg/__init__.py` , I have -
```
x = 1
import pkg.b
```
My `pkg/b.py` is -
```
a = 1
```
Now in `a.py` , I can do -
```
import pkg
print(pkg.b.a)
print(pkg.x)
```
And this would work, because `__init__.py` imported `pkg.b` and hence it is accessible using `pkg.b.a` . | When you use `from dirA import dirB`, `dirB` will be a module which contains anything (variables, functions, etc) defined in or imported by `dirB/__init__.py`.
This is often used to make the "public" components of `dirB` easily accessible. For example, you might have:
```
from .file11 import Foo
from .file22 import Bar
```
So that users of your library could simply call:
```
from dirA.dirB import Foo, Bar
```
Instead of having to remember which file defines `Foo` and `Bar`. |
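A self-contained sketch of this re-export pattern; it builds a throwaway package on disk, so `dirB`, `file11`, and `Foo` are just the names from the example above:

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "dirB")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "file11.py"), "w") as f:
    f.write("class Foo(object):\n    pass\n")
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from .file11 import Foo\n")  # re-export Foo at package level

sys.path.insert(0, root)
from dirB import Foo  # no need to remember which file defines Foo

print(Foo.__name__)  # Foo
```

Running it prints `Foo`, confirming the class is reachable directly from the package.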
5,943,778 | I know how to detect if my Python script's `stdout` is being redirected (`>`) using `sys.stdout.isatty()` but is there a way to discover what it's being redirected to?
For example:
```
python my.py > somefile.txt
```
Is there a way to discover the name `somefile.txt` on both Windows and Linux? | 2011/05/10 | [
"https://Stackoverflow.com/questions/5943778",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/419/"
] | I doubt you can do that in a system-independent way. On Linux, the following works:
```
import os
my_output_file = os.readlink('/proc/%d/fd/1' % os.getpid())
``` | If you need a platform-independent way to get the name of the file, pass it as an argument and use argparse (or optparse) to read your arguments, don't rely on shell redirection at all.
Use `python my.py --output somefile.txt` with code such as:
```
parser = argparse.ArgumentParser()
parser.add_argument('--output', # nargs='?', default=sys.stdout,
type=argparse.FileType('w'),
help="write the output to FILE",
metavar="FILE")
args = parser.parse_args()
filename = args.output.name
```
If knowing the name is optional and used for some weird optimization, then use Igor Nazarenko's solution and check that `sys.platform` is `'linux2'`, otherwise assume that you don't have the name and treat it as a normal pipe. |
26,915,355 | I'm writing an app on django 1.7.1
I recently created a VM to setup a development server and moved my app to that VM (my machine was Open SUSE and the VM is CentOS7)
After setting up the database and the Python packages I ran the migrations and started the server. The server starts without any problem.
The problem is when I try to run my script to populate the DB.
I load the settings
```
sys.path.append('/home/evtdb/FLWeb/FLWeb')
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
django.setup()
```
but i got this error
```
File "scripts/populate_database.py", line 14, in <module>
django.setup()
File "/usr/lib64/python2.7/site-packages/django/__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/lib64/python2.7/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/usr/lib64/python2.7/site-packages/django/apps/config.py", line 87, in create
module = import_module(entry)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named events
```
The settings.py includes the events app in INSTALLED_APPS
```
INSTALLED_APPS = (
#'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'events',
'dh5bp',
'django_tables2',
)
```
The apps.py and the `__init__.py` are also set up
**events/apps.py**
```
from django.apps import AppConfig
class EventsConfig(AppConfig):
name = 'events'
verbose_name = "Events"
```
**events/\_\_init\_\_.py**
```
default_app_config = 'events.apps.EventsConfig'
```
I don't know why the server can start running but not the script | 2014/11/13 | [
"https://Stackoverflow.com/questions/26915355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2631891/"
] | You can match the whole string of numbers, and split them into chunks like this:
```js
var num = "1234512345671";
alert("Parsed: " + num.replace(/(\d{5})(\d{7})(\d)/, "$1-$2-$3"));
``` | **An Alternate approach without RegEx**
**Edit:** Updated to add hyphens as you type.
You could `.slice` the string and then insert the hyphen at the desired position. See the code and demo below.
```js
$('.creditCardText').keyup(function() {
var foo = $(this).val().split("-").join(""); // remove hyphens
if (foo.length > 5 && foo.length < 13) {
foo = foo.slice(0, 5) + '-' + foo.slice(5);
} else if (foo.length >= 13) {
foo = foo.slice(0, 5) + '-' + foo.slice(5);
foo = foo.slice(0, 13) + '-' + foo.slice(13);
}
$(this).val(foo);
});
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<input type="text" class="creditCardText" />
``` |
26,915,355 | I'm writing an app on django 1.7.1
I recently created a VM to set up a development server and moved my app to that VM (my machine runs openSUSE and the VM is CentOS 7).
After setting up the database and the Python packages, I ran the migrations and started the server. The server starts without any problem.
The problem is when I try to run my script to populate the DB.
I load the settings
```
sys.path.append('/home/evtdb/FLWeb/FLWeb')
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
django.setup()
```
but I got this error:
```
File "scripts/populate_database.py", line 14, in <module>
django.setup()
File "/usr/lib64/python2.7/site-packages/django/__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/lib64/python2.7/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/usr/lib64/python2.7/site-packages/django/apps/config.py", line 87, in create
module = import_module(entry)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
**ImportError: No module named events**
```
The settings.py includes the events app in INSTALLED_APPS:
```
INSTALLED_APPS = (
#'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'events',
'dh5bp',
'django_tables2',
)
```
The apps.py and the `__init__.py` are also set up:
**events/apps.py**
```
from django.apps import AppConfig
class EventsConfig(AppConfig):
name = 'events'
verbose_name = "Events"
```
**events/\_\_init\_\_.py**
```
default_app_config = 'events.apps.EventsConfig'
```
I don't know why the server can start running but not the script | 2014/11/13 | [
"https://Stackoverflow.com/questions/26915355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2631891/"
] | You can match the whole string of numbers, and split them into chunks like this:
```js
var num = "1234512345671";
alert("Parsed: " + num.replace(/(\d{5})(\d{7})(\d)/, "$1-$2-$3"));
``` | Thanks a lot @Vega, it's working perfectly. But is it possible to insert the first hyphen "-" as soon as the first 5 characters are entered, and then the second hyphen after the next 7 characters? In your code the hyphens are only inserted once all 13 characters have been entered. Thanks.
```js
$('.creditCardText').keyup(function() {
var foo = $(this).val().split("-").join(""); // remove hyphens
if (foo.length == 13) {
foo = foo.slice(0, 5) + '-' + foo.slice(5);
foo = foo.slice(0, 13) + '-' + foo.slice(13);
}
$(this).val(foo);
});
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<input type="text" class="creditCardText" />
``` |
60,563,604 | Using the python library, I'm training a GLM as part of a H2O ensemble that I'm creating:
(relevant snippet from script):
```
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
estimator = H2OGeneralizedLinearEstimator(
nfolds=5, keep_cross_validation_predictions=True,
fold_assignment='Modulo',
solver='COORDINATE_DESCENT',
alpha=[0.0, 0.2, 0.4, 0.6, 0.8, 1.0],
lambda_=[
319.3503133509223, 198.32195498930167,
123.16129399741205, 76.48525015768037,
47.49863615273374, 29.49745776759067,
18.318421016404645, 11.376049799889723,
7.064719657533626, 4.387310597042732,
2.7245942101039184, 1.6920191642541902,
1.050772567007053, 0.652547566186266,
0.4052430939917762, 0.2516628269534475,
0.1562868791824152, 0.09705679976763945,
0.060273916981481074, 0.03743112359966524,
0.023245361909429163, 0.01443576356615629,
0.008964853743724, 0.005567326056432596,
0.003457403802078978, 0.0021471063360513445,
0.0013333894107306014, 0.0008280574142025093,
0.0005142376830786764, 0.0003193503133509216],
lambda_search=True, nlambdas=30, max_iterations=300,
objective_epsilon=0.0001, gradient_epsilon=1.00E-06,
link='identity', lambda_min_ratio=1.00E-06,
max_active_predictors=5000, obj_reg=1.03E-05,
max_runtime_secs=342.6666667)
estimator.train(x=predictors, y=response, training_frame=df_h20)
```
I run this training in parallel with other dataframes containing different combinations of features
```
with futures.ThreadPoolExecutor(
max_workers=len(persona_list)) as executor:
future_list = {
executor.submit(
AVM_H2O.regressor,
area,
[x[1]],
dataset,
h20_mms_GB,
timestamp,
datestring,
S3_upload_bucket,
logfile,
54320 + x[0]): x for x in enumerate(persona_list, 1)}
for future in futures.as_completed(future_list):
future.result()
```
I do this many times over many different datasets, and the error only occurs seemingly at random. When I try to recreate the error, I can't seem to do so.
The full error message is:
```
H2OResponseError: ModelBuilderErrorV3 (water.exceptions.H2OModelBuilderIllegalArgumentException):
timestamp = 1583433807040
error_url = '/3/ModelBuilders/glm'
msg = 'Illegal argument(s) for GLM model: GLM_model_python_1583433786455_5. Details: ERRR on field: _train: Missing training frame: py_7_sid_b8c3'
dev_msg = 'Illegal argument(s) for GLM model: GLM_model_python_1583433786455_5. Details: ERRR on field: _train: Missing training frame: py_7_sid_b8c3'
http_status = 412
values = {'messages': [{'_log_level': 1, '_field_name': '_train', '_message': 'Missing training frame: py_7_sid_b8c3'}, {'_log_level': 5, '_field_name': '_balance_classes', '_message': 'Not applicable since class balancing is not required for GLM.'}, {'_log_level': 5, '_field_name': '_max_after_balance_size', '_message': 'Not applicable since class balancing is not required for GLM.'}, {'_log_level': 5, '_field_name': '_class_sampling_factors', '_message': 'Not applicable since class balancing is not required for GLM.'}, {'_log_level': 5, '_field_name': '_tweedie_variance_power', '_message': 'Only applicable with Tweedie family'}, {'_log_level': 5, '_field_name': '_tweedie_link_power', '_message': 'Only applicable with Tweedie family'}, {'_log_level': 5, '_field_name': '_theta', '_message': 'Only applicable with Negative Binomial family'}], 'algo': 'GLM', 'parameters': {'_train': {'name': 'py_7_sid_b8c3', 'type': 'Key'}, '_valid': None, '_nfolds': 5, '_keep_cross_validation_models': True, '_keep_cross_validation_predictions': True, '_keep_cross_validation_fold_assignment': False, '_parallelize_cross_validation': True, '_auto_rebalance': True, '_seed': -1, '_fold_assignment': 'Modulo', '_categorical_encoding': 'AUTO', '_max_categorical_levels': 10, '_distribution': 'AUTO', '_tweedie_power': 1.5, '_quantile_alpha': 0.5, '_huber_alpha': 0.9, '_ignored_columns': None, '_ignore_const_cols': True, '_weights_column': None, '_offset_column': None, '_fold_column': None, '_check_constant_response': True, '_is_cv_model': False, '_score_each_iteration': False, '_max_runtime_secs': 342.6666667, '_stopping_rounds': 3, '_stopping_metric': 'deviance', '_stopping_tolerance': 0.0001, '_response_column': 'price_lr', '_balance_classes': False, '_max_after_balance_size': 5.0, '_class_sampling_factors': None, '_max_confusion_matrix_size': 20, '_checkpoint': None, '_pretrained_autoencoder': None, '_custom_metric_func': None, '_custom_distribution_func': None, '_export_checkpoints_dir': 
None, '_standardize': True, '_useDispersion1': False, '_family': 'gaussian', '_rand_family': None, '_link': 'identity', '_rand_link': None, '_solver': 'COORDINATE_DESCENT', '_tweedie_variance_power': 0.0, '_tweedie_link_power': 1.0, '_theta': 1e-10, '_invTheta': 10000000000.0, '_alpha': [0.0, 0.2, 0.4, 0.6, 0.8, 1.0], '_lambda': [319.3503133509223, 198.32195498930167, 123.16129399741205, 76.48525015768037, 47.49863615273374, 29.49745776759067, 18.318421016404645, 11.376049799889723, 7.064719657533626, 4.387310597042732, 2.7245942101039184, 1.6920191642541902, 1.050772567007053, 0.652547566186266, 0.4052430939917762, 0.2516628269534475, 0.1562868791824152, 0.09705679976763945, 0.060273916981481074, 0.03743112359966524, 0.023245361909429163, 0.01443576356615629, 0.008964853743724, 0.005567326056432596, 0.003457403802078978, 0.0021471063360513445, 0.0013333894107306014, 0.0008280574142025093, 0.0005142376830786764, 0.0003193503133509216], '_startval': None, '_calc_like': False, '_random_columns': None, '_missing_values_handling': None, '_prior': -1.0, '_lambda_search': True, '_HGLM': False, '_nlambdas': 30, '_non_negative': False, '_exactLambdas': False, '_lambda_min_ratio': 1e-06, '_use_all_factor_levels': False, '_max_iterations': 300, '_intercept': True, '_beta_epsilon': 0.0001, '_objective_epsilon': 0.0001, '_gradient_epsilon': 1e-06, '_obj_reg': 1.03e-05, '_compute_p_values': False, '_remove_collinear_columns': False, '_interactions': None, '_interaction_pairs': None, '_early_stopping': True, '_beta_constraints': None, '_plug_values': None, '_max_active_predictors': 5000, '_stdOverride': False}, 'error_count': 2}
exception_msg = 'Illegal argument(s) for GLM model: GLM_model_python_1583433786455_5. Details: ERRR on field: _train: Missing training frame: py_7_sid_b8c3'
stacktrace =
water.exceptions.H2OModelBuilderIllegalArgumentException: Illegal argument(s) for GLM model: GLM_model_python_1583433786455_5. Details: ERRR on field: _train: Missing training frame: py_7_sid_b8c3
water.exceptions.H2OModelBuilderIllegalArgumentException.makeFromBuilder(H2OModelBuilderIllegalArgumentException.java:19)
hex.ModelBuilder.trainModelOnH2ONode(ModelBuilder.java:304)
water.api.ModelBuilderHandler.handle(ModelBuilderHandler.java:64)
water.api.ModelBuilderHandler.handle(ModelBuilderHandler.java:17)
water.api.RequestServer.serve(RequestServer.java:471)
water.api.RequestServer.doGeneric(RequestServer.java:301)
water.api.RequestServer.doPost(RequestServer.java:227)
javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
water.webserver.jetty8.Jetty8ServerAdapter$LoginHandler.handle(Jetty8ServerAdapter.java:119)
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
org.eclipse.jetty.server.Server.handle(Server.java:370)
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:984)
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1045)
org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:236)
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
java.base/java.lang.Thread.run(Thread.java:834)
parameters = {'__meta': {'schema_version': 3, 'schema_name': 'GLMParametersV3', 'schema_type': 'GLMParameters'}, 'model_id': None, 'training_frame': None, 'validation_frame': None, 'nfolds': 5, 'keep_cross_validation_models': True, 'keep_cross_validation_predictions': True, 'keep_cross_validation_fold_assignment': False, 'parallelize_cross_validation': True, 'distribution': 'AUTO', 'tweedie_power': 1.5, 'quantile_alpha': 0.5, 'huber_alpha': 0.9, 'response_column': {'__meta': {'schema_version': 3, 'schema_name': 'ColSpecifierV3', 'schema_type': 'VecSpecifier'}, 'column_name': 'price_lr', 'is_member_of_frames': None}, 'weights_column': None, 'offset_column': None, 'fold_column': None, 'fold_assignment': 'Modulo', 'categorical_encoding': 'AUTO', 'max_categorical_levels': 10, 'ignored_columns': None, 'ignore_const_cols': True, 'score_each_iteration': False, 'checkpoint': None, 'stopping_rounds': 3, 'max_runtime_secs': 342.6666667, 'stopping_metric': 'deviance', 'stopping_tolerance': 0.0001, 'custom_metric_func': None, 'custom_distribution_func': None, 'export_checkpoints_dir': None, 'seed': -1, 'family': 'gaussian', 'rand_family': None, 'tweedie_variance_power': 0.0, 'tweedie_link_power': 1.0, 'theta': 1e-10, 'solver': 'COORDINATE_DESCENT', 'alpha': [0.0, 0.2, 0.4, 0.6, 0.8, 1.0], 'lambda': [319.3503133509223, 198.32195498930167, 123.16129399741205, 76.48525015768037, 47.49863615273374, 29.49745776759067, 18.318421016404645, 11.376049799889723, 7.064719657533626, 4.387310597042732, 2.7245942101039184, 1.6920191642541902, 1.050772567007053, 0.652547566186266, 0.4052430939917762, 0.2516628269534475, 0.1562868791824152, 0.09705679976763945, 0.060273916981481074, 0.03743112359966524, 0.023245361909429163, 0.01443576356615629, 0.008964853743724, 0.005567326056432596, 0.003457403802078978, 0.0021471063360513445, 0.0013333894107306014, 0.0008280574142025093, 0.0005142376830786764, 0.0003193503133509216], 'lambda_search': True, 'early_stopping': True, 'nlambdas': 30, 
'standardize': True, 'missing_values_handling': 'MeanImputation', 'plug_values': None, 'non_negative': False, 'max_iterations': 300, 'beta_epsilon': 0.0001, 'objective_epsilon': 0.0001, 'gradient_epsilon': 1e-06, 'obj_reg': 1.03e-05, 'link': 'identity', 'rand_link': None, 'startval': None, 'random_columns': None, 'calc_like': False, 'intercept': True, 'HGLM': False, 'prior': -1.0, 'lambda_min_ratio': 1e-06, 'beta_constraints': None, 'max_active_predictors': 5000, 'interactions': None, 'interaction_pairs': None, 'balance_classes': False, 'class_sampling_factors': None, 'max_after_balance_size': 5.0, 'max_confusion_matrix_size': 20, 'max_hit_ratio_k': 0, 'compute_p_values': False, 'remove_collinear_columns': False}
messages = [{'__meta': {'schema_version': 3, 'schema_name': 'ValidationMessageV3', 'schema_type': 'ValidationMessage'}, 'message_type': 'ERRR', 'field_name': 'train', 'message': 'Missing training frame: py_7_sid_b8c3'}, {'__meta': {'schema_version': 3, 'schema_name': 'ValidationMessageV3', 'schema_type': 'ValidationMessage'}, 'message_type': 'TRACE', 'field_name': 'balance_classes', 'message': 'Not applicable since class balancing is not required for GLM.'}, {'__meta': {'schema_version': 3, 'schema_name': 'ValidationMessageV3', 'schema_type': 'ValidationMessage'}, 'message_type': 'TRACE', 'field_name': 'max_after_balance_size', 'message': 'Not applicable since class balancing is not required for GLM.'}, {'__meta': {'schema_version': 3, 'schema_name': 'ValidationMessageV3', 'schema_type': 'ValidationMessage'}, 'message_type': 'TRACE', 'field_name': 'class_sampling_factors', 'message': 'Not applicable since class balancing is not required for GLM.'}, {'__meta': {'schema_version': 3, 'schema_name': 'ValidationMessageV3', 'schema_type': 'ValidationMessage'}, 'message_type': 'TRACE', 'field_name': 'tweedie_variance_power', 'message': 'Only applicable with Tweedie family'}, {'__meta': {'schema_version': 3, 'schema_name': 'ValidationMessageV3', 'schema_type': 'ValidationMessage'}, 'message_type': 'TRACE', 'field_name': 'tweedie_link_power', 'message': 'Only applicable with Tweedie family'}, {'__meta': {'schema_version': 3, 'schema_name': 'ValidationMessageV3', 'schema_type': 'ValidationMessage'}, 'message_type': 'TRACE', 'field_name': 'theta', 'message': 'Only applicable with Negative Binomial family'}]
error_count = 2
```
Because I can't reproduce the error, I'm struggling to get an idea of what's causing it. Any help would be greatly appreciated. | 2020/03/06 | [
"https://Stackoverflow.com/questions/60563604",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7535311/"
] | The exception is raised [here](https://github.com/h2oai/h2o-3/blob/master/h2o-core/src/main/java/hex/ModelBuilder.java#L1182); in particular, in your case the object `py_7_sid_b8c3` is `null`. That's a (Java, not Python, despite the name) H2O object, which should encode the H2O frame passed for the training. Objects live within each node; I faced a similar issue and discovered that in my case one node was crashing because of memory issues. Of course, it could be anything else that prevents the cluster from getting the H2O frame. In any case, I suggest inspecting the stdout and stderr logged by the JVMs which run the H2O cluster. They are generally stored in `/tmp`, and their paths are shown at the moment you init the H2O cluster. The Java stacktrace generally logs more info.
**UPDATE**: I was able to trigger again this issue in another case
1. import of a CSV file into an H2O frame
2. drop of a column, like `df = df.drop('col')`
3. run then the train of an estimator: `estimator.train(x=predictors, y=response, training_frame=df)` (with `predictors` valued with a list of columns which refer to the feature of the model, and `response` valued with the label needed for the training)
I think the error occurred because H2O frames, as Python objects, refer to a Java object in the H2O backend: probably after the `drop` there was a pending reference to a null vector, which triggered the exception. There may also have been a race condition, because the exception was not raised every time.
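As a toy illustration of that stale-reference pitfall (column names invented here, no real H2O involved): rebuild the predictor list from the frame's current columns after a drop, instead of reusing a list computed before it.

```python
# Columns before the drop (hypothetical names).
columns = ['col', 'beds', 'baths', 'sqft', 'price_lr']
response = 'price_lr'

stale_predictors = [c for c in columns if c != response]  # computed pre-drop
columns.remove('col')                                     # mirrors df = df.drop('col')
predictors = [c for c in columns if c != response]        # rebuilt post-drop

# 'col' lingers in the stale list but not in the rebuilt one.
```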
I finally solved simply passing the proper list of usable features in the `predictors` variable. But as said in my previous comment, the main insight arrived looking at JVMs logs. | The question is not about `R`, but I've been consistently having the same error in `R`. If any other `R` user stumbled on this thread looking for a solution:
I restarted h2o
```
h2o.shutdown()
h2o.init(enable_assertions = FALSE)
```
and immediately recreated my h2o training data
```
train.hex <- as.h2o(train_set)
tune.hex <- as.h2o(tune_set)
test.hex <- as.h2o(test_set)
trainh2o <- h2o.assign(train.hex, key = "train")
tuneh2o <- h2o.assign(tune.hex, key = "tune")
testh2o <- h2o.assign(test.hex, key = "test")
#Set up variable names
yname = "outcome"
xnames = setdiff(names(trainh2o), yname)
``` |
54,603,348 | So I am trying to make a simple program in Python that uses matplotlib to plot some data. The problem is that I need the x-axis value to increase by one each iteration. My code looks like this:
```
import matplotlib.pyplot as plt
%matplotlib inline
d = -1
money = 40
def test():
    for testing in range(5):
        ...
        ...
        ...
        plot(d, money)

def plot(d, money):
    d = d + 1
    plt.plot(d, money, 'o')
```
The code runs, but it plots all the data points at 0 on the x-axis, whereas I would like it to plot the first point at 0, the second at 1, etc.
Thanks in advance for your time and help.
Edit: Basically I need a way so that in each loop d doesn't get reset to -1 but rather gets increased by 1 | 2019/02/09 | [
"https://Stackoverflow.com/questions/54603348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9005366/"
] | In your consumer you would be using [commitSync](https://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#commitSync--), which commits the offsets returned on the last poll. Now, when you start your 2nd consumer, since it is in the same consumer group it will read messages from the last committed offset.
Which messages your consumer consumes depends on the ConsumerGroup it belongs to. Suppose you have 2 partitions and 2 consumers in a single Consumer Group; then each consumer will read from a different partition, which helps to achieve parallelism.
So, if you want your 2nd consumer to read from beginning, you can do one of 2 things:
a) Try putting the 2nd consumer in a different consumer group. For that consumer group, there won't be any offset stored anywhere; in that case the `auto.offset.reset` config will decide the starting offset. Set `auto.offset.reset` to `earliest` (reset the offset to the earliest offset) or to `latest` (reset the offset to the latest offset).
b) Seek to the start of all partitions your consumer is assigned by using: `consumer.seekToBeginning(consumer.assignment())`
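The offset bookkeeping behind a) and b) can be sketched with a toy in-memory model (no real Kafka client, just the committed-offset logic):

```python
committed = {}  # (group, partition) -> next offset to consume

def poll(log, group, partition, max_records=2):
    # Start from the committed offset, or from the beginning for a new
    # group (the equivalent of auto.offset.reset = earliest).
    start = committed.get((group, partition), 0)
    records = log[start:start + max_records]
    committed[(group, partition)] = start + len(records)  # like commitSync()
    return records

def seek_to_beginning(group, partition):
    committed[(group, partition)] = 0  # like consumer.seekToBeginning(...)

log = ['m0', 'm1', 'm2', 'm3']
first = poll(log, 'g1', 0)   # new group starts at the earliest offset
second = poll(log, 'g1', 0)  # same group resumes at the committed offset
fresh = poll(log, 'g2', 0)   # a different group has no stored offset
seek_to_beginning('g1', 0)
rewound = poll(log, 'g1', 0)
```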
Documentation: <https://kafka.apache.org/11/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seekToBeginning-java.util.Collection->
<https://kafka.apache.org/documentation/#consumerconfigs> | A partition is always assigned to a single consumer within a consumer group, irrespective of how many consumers there are. That means only that consumer can read the partition's data, and the others won't consume it until the partition is reassigned to them. When a consumer goes down, a partition rebalance happens and the partition is assigned to another consumer. Since you are performing manual commits, the new consumer will start reading from the committed offset. |
42,349,470 | Currently trying to write a function to return the checked radiobutton from a group of radiobuttons in python, but no success so far.
PyQt Gui code:
```
self.hlw_customer = QtWidgets.QWidget(self.grb_main)
self.hlw_customer.setGeometry(QtCore.QRect(110, 26, 361, 21))
self.hlw_customer.setObjectName("hlw_customer")
self.hlb_customer = QtWidgets.QHBoxLayout(self.hlw_customer)
self.hlb_customer.setContentsMargins(0, 0, 0, 0)
self.hlb_customer.setObjectName("hlb_customer")
self.rdb_customer1 = QtWidgets.QRadioButton(self.hlw_customer)
self.rdb_customer1.setObjectName("rdb_customer1")
self.hlb_customer.addWidget(self.rdb_customer1)
self.rdb_customer2 = QtWidgets.QRadioButton(self.hlw_customer)
self.rdb_customer2.setObjectName("rdb_customer2")
self.hlb_customer.addWidget(self.rdb_customer2)
self.rdb_customer3 = QtWidgets.QRadioButton(self.hlw_customer)
self.rdb_customer3.setChecked(True)
self.rdb_customer3.setObjectName("rdb_customer3")
self.hlb_customer.addWidget(self.rdb_customer3)
self.rdb_customer4 = QtWidgets.QRadioButton(self.hlw_customer)
self.rdb_customer4.setObjectName("rdb_customer4")
self.hlb_customer.addWidget(self.rdb_customer4)
```
function to find the checked radiobutton:
```
def find_checked_radiobutton(self):
    ''' find the checked radiobutton '''
    enabled_checkbox = self.hlw_customer.findChildren(QtWidgets.QRadioButton, 'checked')
```
But sadly this returns [] | 2017/02/20 | [
"https://Stackoverflow.com/questions/42349470",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4558193/"
] | Found the solution myself:
```
self.find_checked_radiobutton(self.hlw_customer.findChildren(QtWidgets.QRadioButton))

def find_checked_radiobutton(self, radiobuttons):
    ''' find the checked radiobutton '''
    for items in radiobuttons:
        if items.isChecked():
            checked_radiobutton = items.text()
            return checked_radiobutton
``` | Use this, it worked for me. I go through the button group's buttons and check which one is checked:
```
def test(self):
    checked_btn = [button.text().lstrip()
                   for button in self.btnGrouptab1.buttons()
                   if button.isChecked()]
    print(checked_btn[0])
    return checked_btn[0]
``` |
42,349,470 | Currently trying to write a function to return the checked radiobutton from a group of radiobuttons in python, but no success so far.
PyQt Gui code:
```
self.hlw_customer = QtWidgets.QWidget(self.grb_main)
self.hlw_customer.setGeometry(QtCore.QRect(110, 26, 361, 21))
self.hlw_customer.setObjectName("hlw_customer")
self.hlb_customer = QtWidgets.QHBoxLayout(self.hlw_customer)
self.hlb_customer.setContentsMargins(0, 0, 0, 0)
self.hlb_customer.setObjectName("hlb_customer")
self.rdb_customer1 = QtWidgets.QRadioButton(self.hlw_customer)
self.rdb_customer1.setObjectName("rdb_customer1")
self.hlb_customer.addWidget(self.rdb_customer1)
self.rdb_customer2 = QtWidgets.QRadioButton(self.hlw_customer)
self.rdb_customer2.setObjectName("rdb_customer2")
self.hlb_customer.addWidget(self.rdb_customer2)
self.rdb_customer3 = QtWidgets.QRadioButton(self.hlw_customer)
self.rdb_customer3.setChecked(True)
self.rdb_customer3.setObjectName("rdb_customer3")
self.hlb_customer.addWidget(self.rdb_customer3)
self.rdb_customer4 = QtWidgets.QRadioButton(self.hlw_customer)
self.rdb_customer4.setObjectName("rdb_customer4")
self.hlb_customer.addWidget(self.rdb_customer4)
```
function to find the checked radiobutton:
```
def find_checked_radiobutton(self):
    ''' find the checked radiobutton '''
    enabled_checkbox = self.hlw_customer.findChildren(QtWidgets.QRadioButton, 'checked')
```
But sadly this returns [] | 2017/02/20 | [
"https://Stackoverflow.com/questions/42349470",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4558193/"
] | I had the same question, and figured it out this way:
```
import PyQt5.QtWidgets as qg  # the question uses PyQt5, where QRadioButton lives in QtWidgets
boxElements = self.MainWindowUI.groupBox.children()
radioButtons = [elem for elem in boxElements if isinstance(elem, qg.QRadioButton)]
for rb in radioButtons:
    if rb.isChecked():
        checkedOnRb = rb.text()
```
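The scan itself doesn't depend on Qt; a minimal sketch with stand-in button objects (the class and labels are invented here) shows the same pattern and is testable without a GUI:

```python
class FakeRadio:
    """Stand-in exposing the two methods the scan relies on."""
    def __init__(self, label, checked=False):
        self._label = label
        self._checked = checked
    def isChecked(self):
        return self._checked
    def text(self):
        return self._label

def find_checked(buttons):
    # Return the label of the first checked button, or None if none is checked.
    return next((b.text() for b in buttons if b.isChecked()), None)

group = [FakeRadio('a'), FakeRadio('b', checked=True), FakeRadio('c')]
```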
I like your solution. Here's another using `findChildren`, which I learned from the OP's solution.
```
rigTypeRadioButtons = self.MainWindowUI.groupBox_rigType.findChildren(qg.QRadioButton)
rigTypeRb = [rb.text() for rb in rigTypeRadioButtons if rb.isChecked()][0]
print('rigType is: ', rigTypeRb)
``` | Use this, it worked for me. I go through the button group's buttons and check which one is checked:
```
def test(self):
    checked_btn = [button.text().lstrip()
                   for button in self.btnGrouptab1.buttons()
                   if button.isChecked()]
    print(checked_btn[0])
    return checked_btn[0]
``` |
51,195,607 | I have a file that stores variable names and values like below:
```
name,David
age,16
score,91.2
...
```
I want to write a python script that reads the file and automatically creates variables in an object without me actually doing it, something like:
```
self.name='David'
self.age=16
self.score=91.2
```
Is it possible to do that?
The reason I do this is there might be files containing very different types of variables which I don't know beforehand. | 2018/07/05 | [
"https://Stackoverflow.com/questions/51195607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9705876/"
] | >
> The reason I do this is there might be files containing very different
> types of variables which I don't know beforehand.
>
>
>
If this is the primary reason, you should use a dictionary. There's no better way. See the marked duplicates.
---
### Original solution
While it's usually recommended to keep like variables in a dictionary, if the variables describe a class instance there is no harm in assigning them directly. If the dictionary keys are *arbitrary*, I recommend you pass the dictionary to the class instead.
As per @Delgan's comment, you can create a dictionary from your text file and then use `setattr` while iterating dictionary items. Below is a solution which uses the `csv` module and wraps the logic in a function.
```
import csv

def add_attributes(obj, fn):
    # create dictionary from text file
    with open(fn, 'r') as fin:
        d = dict(csv.reader(fin))
    # iterate dictionary and add attributes
    for name, value in d.items():
        setattr(obj, name, value)
    return obj

class myClass():
    def __init__(self):
        pass

A = myClass()
add_attributes(A, 'file.csv')
print(A.age)  # 16
``` | I recommend cleaner ways to store your data (like JSON or even Python's pickle module), but that is outside the scope of this answer. Assuming you want to keep your data format, you could read your file like this:
```
my_object = xyz()
variables_file = open("variables.txt", "r")
for line in variables_file:
    line = line.rstrip("\n")  # remove the linebreak at the end of each line
    line = line.split(",")    # split the line at the comma -> generates a list
    setattr(my_object, line[0], line[1])  # sets the variable as an attribute of my_object
variables_file.close()
print("And the name is: " + my_object.name + "!!!")
```
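For reference, here is a self-contained, runnable variant of the same loop; the file is replaced by an in-memory list of lines and `xyz` by a bare class, both invented for the sketch:

```python
class Record:
    pass

my_object = Record()
lines = ["name,David\n", "age,16\n", "score,91.2\n"]  # stands in for the file
for line in lines:
    key, value = line.rstrip("\n").split(",", 1)
    setattr(my_object, key, value)  # values stay strings unless converted
```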
A better way is to use a dictionary, which is more Pythonic than the Java-style attribute approach:
```
configs = {}
```
And assign values like:
```
configs["name"] = "David"
```
or do it dynamically by iterating through the file like in the example above, just without setattr and with:
```
configs[line[0]] = line[1]
``` |
51,195,607 | I have a file that stores variable names and values like below:
```
name,David
age,16
score,91.2
...
```
I want to write a python script that reads the file and automatically creates variables in an object without me actually doing it, something like:
```
self.name='David'
self.age=16
self.score=91.2
```
Is it possible to do that?
The reason I do this is there might be files containing very different types of variables which I don't know beforehand. | 2018/07/05 | [
"https://Stackoverflow.com/questions/51195607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9705876/"
] | I recommend cleaner ways to store your data (like JSON or even Python's pickle module), but that is outside the scope of this answer. Assuming you want to keep your data format, you could read your file like this:
```
my_object = xyz()
variables_file = open("variables.txt", "r")
for line in variables_file:
    line = line.rstrip("\n")  # remove the linebreak at the end of each line
    line = line.split(",")    # split the line at the comma -> generates a list
    setattr(my_object, line[0], line[1])  # sets the variable as an attribute of my_object
variables_file.close()
print("And the name is: " + my_object.name + "!!!")
```
A better way is to use a dictionary, which is more Pythonic than the Java-style attribute approach:
```
configs = {}
```
And assign values like:
```
configs["name"] = "David"
```
or do it dynamically by iterating through the file like in the example above, just without setattr and with:
```
configs[line[0]] = line[1]
``` | When you create a Class, you should know how many variables your Class will have.
It is a bad idea to dynamically create new variables on an object of a class, because then you can't reliably perform operations on them.
e.g. Suppose your class has the variables `name`, `age`, `marks`, `address`. You can perform operations on these because you already know the variables.
But if you try to store variables dynamically, then you have no idea which variables are present, so you can't perform operations on them.
If you only want to display the variables and values present in a file, then you can use a dictionary to store the variables with their respective values. After all the variables are stored in the dictionary, you can display them:
```
dict = {}
with open('filename.txt') as f:
    data = f.readlines()
for i in data:
    i = i.rstrip()
    inner_data = i.split(',')
    variable = inner_data[0]
    value = inner_data[1]
    dict.update({variable: value})
print(dict)
```
This will print all the variables and their values in the file.
But the file should contain variables and values in the following format:
```
name,Sam
age,10
address,Mumbai
mobile_no,199204099
....
....
``` |
51,195,607 | I have a file that stores variable names and values like below:
```
name,David
age,16
score,91.2
...
```
I want to write a python script that reads the file and automatically creates variables in an object without me actually doing it, something like:
```
self.name='David'
self.age=16
self.score=91.2
```
Is it possible to do that?
The reason I do this is there might be files containing very different types of variables which I don't know beforehand. | 2018/07/05 | [
"https://Stackoverflow.com/questions/51195607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9705876/"
] | >
> The reason I do this is there might be files containing very different
> types of variables which I don't know beforehand.
>
>
>
If this is the primary reason, you should use a dictionary. There's no better way. See the marked duplicates.
---
### Original solution
While it's usually recommended to keep like variables in a dictionary, if the variables describe a class instance there is no harm in assigning them directly. If the dictionary keys are *arbitrary*, I recommend you pass the dictionary to the class instead.
As per @Delgan's comment, you can create a dictionary from your text file and then use `setattr` while iterating dictionary items. Below is a solution which uses the `csv` module and wraps the logic in a function.
```
import csv
def add_attributes(obj, fn):
    # create dictionary from text file
    with open(fn, 'r') as fin:
        d = dict(csv.reader(fin))
    # iterate dictionary and add attributes
    for name, value in d.items():
        setattr(obj, name, value)
    return obj

class myClass():
    def __init__(self):
        pass
A = myClass()
add_attributes(A, 'file.csv')
print(A.age) # 16
``` | When you create a Class, you should know how many variables your Class will have.
It is a bad idea to dynamically create new variables for an object of a Class, because your code then has no reliable way to perform operations on them.
e.g. Suppose your class has variables `name`,`age`,`marks`,`address`. You can perform operations on this because you know the variables already.
But if you try to store variables dynamically, then you have no idea which variables are present and you can't perform operations on them.
If you only want to display the variable and its value present in a file, then you can use a dictionary to store the variables with their respective values. After all the variables are stored in the dictionary, you can display the variables and their values:
```
dict = {}
with open('filename.txt') as f:
    data = f.readlines()
for i in data:
    i = i.rstrip()
    inner_data = i.split(',')
    variable = inner_data[0]
    value = inner_data[1]
    dict.update({variable: value})
print(dict)
```
This will print all the variables and their values in the file.
But the file should contain variables and values in the following format:
```
name,Sam
age,10
address,Mumbai
mobile_no,199204099
....
....
``` |
55,857,581 | I feel like there is a gap in my understanding of async IO: **is there a benefit to wrapping small functions into coroutines, within the scope of larger coroutines?** Is there a benefit to this in signaling the event loop correctly? Does the extent of this benefit depend on whether the wrapped function is IO or CPU-bound?
Example: I have a coroutine, `download()`, which:
1. Downloads JSON-serialized bytes from an HTTP endpoint via `aiohttp`.
2. Compresses those bytes via [`bz2.compress()`](https://docs.python.org/3/library/bz2.html#one-shot-de-compression) - which is **not in itself awaitable**
3. Writes the compressed bytes to S3 via [`aioboto3`](https://aioboto3.readthedocs.io/en/latest/installation.html)
So parts 1 & 3 use predefined coroutines from those libraries; part 2 does not, by default.
Dumbed-down example:
```
import bz2
import io
import aiohttp
import aioboto3
async def download(endpoint, bucket_name, key):
    async with aiohttp.ClientSession() as session:
        async with session.request("GET", endpoint, raise_for_status=True) as resp:
            raw = await resp.read()  # payload (bytes)

    # Yikes - isn't it bad to throw a synchronous call into the middle
    # of a coroutine?
    comp = bz2.compress(raw)

    async with (
        aioboto3.session.Session()
        .resource('s3')
        .Bucket(bucket_name)
    ) as bucket:
        await bucket.upload_fileobj(io.BytesIO(comp), key)
```
As hinted by the comment above, my understanding has always been that throwing a synchronous function like `bz2.compress()` into a coroutine can mess with it. (Even if `bz2.compress()` is probably more IO-bound than CPU-bound.)
So, is there generally any benefit to this type of boilerplate?
```
async def compress(*args, **kwargs):
    return bz2.compress(*args, **kwargs)
```
(And now `comp = await compress(raw)` within `download()`.)
Voilà, this is now an awaitable coroutine, because a sole `return` is valid in a native coroutine. Is there a case to be made for using this?
Per [this answer](https://stackoverflow.com/a/55438300/7954504), I've heard justification for randomly throwing in `asyncio.sleep(0)` in a similar manner - just to signal back up to the event loop that the calling coroutine wants a break. Is this right? | 2019/04/25 | [
"https://Stackoverflow.com/questions/55857581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7954504/"
] | >
> So, is there generally any benefit to this type of boilerplate?
>
>
>
```
async def compress(*args, **kwargs):
    return bz2.compress(*args, **kwargs)
```
There is no benefit to it whatsoever. Contrary to expectations, adding an `await` [doesn't guarantee](https://stackoverflow.com/a/48816319/1600898) that the control will be passed to the event loop - that will happen only if the awaited coroutine actually suspends. Since `compress` doesn't await anything, it will never suspend, so it's a coroutine in name only.
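One way to see this concretely (a small self-contained demonstration, not from the original answer; the `ticker` task and the byte payload are made up for illustration): a concurrently scheduled task only gets a chance to run once something genuinely suspends, and `await compress(...)` never does.

```python
import asyncio
import bz2

async def compress(*args, **kwargs):
    return bz2.compress(*args, **kwargs)  # no await inside -> never suspends

async def ticker(events):
    for _ in range(3):
        events.append("tick")
        await asyncio.sleep(0)  # a real suspension point

async def main():
    events = []
    task = asyncio.ensure_future(ticker(events))
    await compress(b"x" * 10000)  # runs to completion without yielding
    events.append("compressed")   # so this is recorded before any "tick"
    await task                    # *this* await suspends, letting ticker run
    return events

events = asyncio.run(main())
print(events)  # ['compressed', 'tick', 'tick', 'tick']
```

Despite the `await`, "compressed" is recorded before any "tick": the wrapper coroutine ran to completion inline without ever handing control to the event loop.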
Note that adding `await asyncio.sleep(0)` in coroutines does not solve the problem; see [this answer](https://stackoverflow.com/a/55819648/1600898) for a more detailed discussion. If you need to run a blocking function, use [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor):
```
async def compress(*args, **kwargs):
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(None, lambda: bz2.compress(*args, **kwargs))
``` | Coroutines allow you to run something concurrently, *not* in parallel. They allow for a *single-threaded cooperative* multitasking. This makes sense in two cases:
* You need to produce results in lockstep, like two generators would.
* You want something useful be done while another coroutine is waiting for I/O.
Things like http requests or disk I/O would allow other coroutines to run while they are waiting for completion of an operation.
`bz2.compress()` is synchronous ~~and, *I suppose,* does not release GIL~~ [but does release GIL](https://github.com/python/cpython/blob/0353b4eaaf451ad463ce7eb3074f6b62d332f401/Modules/_bz2module.c#L180) while it is running. ~~This means that no meaningful work can be done while it's running.~~ That is, other coroutines would not run during its invocation, though other threads would.
If you anticipate a *large* amount of data to compress, so large that the overhead of running a coroutine is small in comparison, you can use `bz2.BZ2Compressor` and feed it with data in reasonably small blocks (like 128KB), write the result to a stream (S3 supports streaming, or you can use StringIO), and [`await asyncio.sleep(0)`](https://docs.python.org/3/library/asyncio-task.html#asyncio.sleep) between compressing blocks to yield control.
This will allow other coroutines to also run concurrently with your compression coroutine. Possibly async S3 upload will be occurring in parallel at the socket level, too, while your coroutine would be inactive.
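A rough sketch of that block-wise idea (an illustration under stated assumptions, not the answer author's code: the 128 KB block size and the in-memory `BytesIO` stand in for a real output stream such as an S3 streaming upload):

```python
import asyncio
import bz2
import io

async def compress_blockwise(raw, block_size=128 * 1024):
    """Compress `raw` incrementally, yielding to the event loop between blocks."""
    compressor = bz2.BZ2Compressor()
    out = io.BytesIO()  # placeholder for a real stream
    for start in range(0, len(raw), block_size):
        out.write(compressor.compress(raw[start:start + block_size]))
        await asyncio.sleep(0)  # give other coroutines a chance to run
    out.write(compressor.flush())  # emit any buffered compressed data
    return out.getvalue()

# usage inside a coroutine: comp = await compress_blockwise(raw)
```

Each block still compresses synchronously, but other coroutines can interleave between blocks, which is the cooperative-multitasking trade-off described above.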
BTW making your compressor explicitly an [async generator](https://www.python.org/dev/peps/pep-0525/) can be a simpler way to express the same idea. |
618,827 | My application is running on Google App Engine and most requests constantly get a yellow flag due to high CPU usage. Using a profiler I tracked the issue down to the routine of creating the `jinja2.Environment` instance.
I'm creating the instance at module level:
```
from jinja2 import Environment, FileSystemLoader
jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_DIRS))
```
Due to the Google AppEngine operation mode (CGI), this code can be run upon each and every request (their module import cache seems to cache modules for seconds rather than for minutes).
I was thinking about storing the environment instance in memcache, but it seems to be not picklable. `FileSystemLoader` instance seems to be picklable and can be cached, but I did not observe any substantial improvement in CPU usage with this approach.
Anybody can suggest a way to decrease the overhead of creating `jinja2.Environment` instance?
**Edit**: below is (relevant) part of profiler output.
```
222172 function calls (215262 primitive calls) in 8.695 CPU seconds
ncalls tottime percall cumtime percall filename:lineno(function)
33 1.073 0.033 1.083 0.033 {google3.apphosting.runtime._apphosting_runtime___python__apiproxy.Wait}
438/111 0.944 0.002 2.009 0.018 /base/python_dist/lib/python2.5/sre_parse.py:385(_parse)
4218 0.655 0.000 1.002 0.000 /base/python_dist/lib/python2.5/pickle.py:1166(load_long_binput)
1 0.611 0.611 0.679 0.679 /base/data/home/apps/with-the-flow/1.331879498764931274/jinja2/environment.py:10()
```
One call, but as far I can see (and this is consistent across all my GAE-based apps), the most expensive in the whole request processing cycle. | 2009/03/06 | [
"https://Stackoverflow.com/questions/618827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12138/"
] | OK, people, this is what I got today on #pocoo:
[20:59] zgoda: hello, i'd like to know if i could optimize my jinja2 environment creation process, the problem -> [Optimizing Jinja2 Environment creation](https://stackoverflow.com/questions/618827/optimizing-jinja2-environment-creation)
[21:00] zgoda: i have profiler output from "cold" app -> <http://paste.pocoo.org/show/107009/>
[21:01] zgoda: and for "hot" -> <http://paste.pocoo.org/show/107014/>
[21:02] zgoda: i'm wondering if i could somewhat lower the CPU cost of creating environment for "cold" requests
[21:05] mitsuhiko: zgoda: put the env creation into a module that you import
[21:05] mitsuhiko: like
[21:05] mitsuhiko: from yourapplication.utils import env
[21:05] zgoda: it's already there
[21:06] mitsuhiko: hmm
[21:06] mitsuhiko: i think the problem is that the template are re-compiled each access
[21:06] mitsuhiko: unfortunately gae is incredible limited, i don't know if there is much i can do currently
[21:07] zgoda: i tried with jinja bytecache but it does not work on prod (its on on dev server)
[21:08] mitsuhiko: i know
[21:08] mitsuhiko: appengine does not have marshal
[21:12] zgoda: mitsuhiko: thank you
[21:13] zgoda: i was hoping i'm doing something wrong and this can be optimized...
[21:13] mitsuhiko: zgoda: next release will come with improved appengine support, but i'm not sure yet how to implement improved caching for ae
It looks Armin is aware of problems with bytecode caching on AppEngine and has some plans to improve Jinja2 to allow caching on GAE. I hope things will get better over time. | According to this [google recipe](http://appengine-cookbook.appspot.com/recipe/better-performance-with-jinja2/?id=ahJhcHBlbmdpbmUtY29va2Jvb2tyqwELEgtSZWNpcGVJbmRleCJGYWhKaGNIQmxibWRwYm1VdFkyOXZhMkp2YjJ0eUhnc1NDRU5oZEdWbmIzSjVJaEJYWldKaGNIQWdSbkpoYldWM2IzSnJEQQwLEgZSZWNpcGUiSGFoSmhjSEJsYm1kcGJtVXRZMjl2YTJKdmIydHlIZ3NTQ0VOaGRHVm5iM0o1SWhCWFpXSmhjSEFnUm5KaGJXVjNiM0pyREExMAw) you can use memcache to cache bytecodes. You can also cache the template file content itself. All in the same recipe |
618,827 | My application is running on Google App Engine and most requests constantly get a yellow flag due to high CPU usage. Using a profiler I tracked the issue down to the routine of creating the `jinja2.Environment` instance.
I'm creating the instance at module level:
```
from jinja2 import Environment, FileSystemLoader
jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_DIRS))
```
Due to the Google AppEngine operation mode (CGI), this code can be run upon each and every request (their module import cache seems to cache modules for seconds rather than for minutes).
I was thinking about storing the environment instance in memcache, but it seems to be not picklable. `FileSystemLoader` instance seems to be picklable and can be cached, but I did not observe any substantial improvement in CPU usage with this approach.
Anybody can suggest a way to decrease the overhead of creating `jinja2.Environment` instance?
**Edit**: below is (relevant) part of profiler output.
```
222172 function calls (215262 primitive calls) in 8.695 CPU seconds
ncalls tottime percall cumtime percall filename:lineno(function)
33 1.073 0.033 1.083 0.033 {google3.apphosting.runtime._apphosting_runtime___python__apiproxy.Wait}
438/111 0.944 0.002 2.009 0.018 /base/python_dist/lib/python2.5/sre_parse.py:385(_parse)
4218 0.655 0.000 1.002 0.000 /base/python_dist/lib/python2.5/pickle.py:1166(load_long_binput)
1 0.611 0.611 0.679 0.679 /base/data/home/apps/with-the-flow/1.331879498764931274/jinja2/environment.py:10()
```
One call, but as far I can see (and this is consistent across all my GAE-based apps), the most expensive in the whole request processing cycle. | 2009/03/06 | [
"https://Stackoverflow.com/questions/618827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12138/"
] | Armin suggested to pre-compile Jinja2 templates to python code, and use the compiled templates in production. So I've made a compiler/loader for that, and it now renders some complex templates 13 times faster, throwing away **all** the parsing overhead. The related discussion with link to the repository is [here](http://groups.google.com/group/google-appengine-python/browse_thread/thread/04d4bf3dd615ed1e/2907cdeff922710c). | OK, people, this is what I got today on #pocoo:
[20:59] zgoda: hello, i'd like to know if i could optimize my jinja2 environment creation process, the problem -> [Optimizing Jinja2 Environment creation](https://stackoverflow.com/questions/618827/optimizing-jinja2-environment-creation)
[21:00] zgoda: i have profiler output from "cold" app -> <http://paste.pocoo.org/show/107009/>
[21:01] zgoda: and for "hot" -> <http://paste.pocoo.org/show/107014/>
[21:02] zgoda: i'm wondering if i could somewhat lower the CPU cost of creating environment for "cold" requests
[21:05] mitsuhiko: zgoda: put the env creation into a module that you import
[21:05] mitsuhiko: like
[21:05] mitsuhiko: from yourapplication.utils import env
[21:05] zgoda: it's already there
[21:06] mitsuhiko: hmm
[21:06] mitsuhiko: i think the problem is that the template are re-compiled each access
[21:06] mitsuhiko: unfortunately gae is incredible limited, i don't know if there is much i can do currently
[21:07] zgoda: i tried with jinja bytecache but it does not work on prod (its on on dev server)
[21:08] mitsuhiko: i know
[21:08] mitsuhiko: appengine does not have marshal
[21:12] zgoda: mitsuhiko: thank you
[21:13] zgoda: i was hoping i'm doing something wrong and this can be optimized...
[21:13] mitsuhiko: zgoda: next release will come with improved appengine support, but i'm not sure yet how to implement improved caching for ae
It looks like Armin is aware of problems with bytecode caching on AppEngine and has some plans to improve Jinja2 to allow caching on GAE. I hope things will get better over time. |
618,827 | My application is running on Google App Engine and most requests constantly get a yellow flag due to high CPU usage. Using a profiler I tracked the issue down to the routine of creating the `jinja2.Environment` instance.
I'm creating the instance at module level:
```
from jinja2 import Environment, FileSystemLoader
jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_DIRS))
```
Due to the Google AppEngine operation mode (CGI), this code can be run upon each and every request (their module import cache seems to cache modules for seconds rather than for minutes).
I was thinking about storing the environment instance in memcache, but it seems to be not picklable. `FileSystemLoader` instance seems to be picklable and can be cached, but I did not observe any substantial improvement in CPU usage with this approach.
Anybody can suggest a way to decrease the overhead of creating `jinja2.Environment` instance?
**Edit**: below is (relevant) part of profiler output.
```
222172 function calls (215262 primitive calls) in 8.695 CPU seconds
ncalls tottime percall cumtime percall filename:lineno(function)
33 1.073 0.033 1.083 0.033 {google3.apphosting.runtime._apphosting_runtime___python__apiproxy.Wait}
438/111 0.944 0.002 2.009 0.018 /base/python_dist/lib/python2.5/sre_parse.py:385(_parse)
4218 0.655 0.000 1.002 0.000 /base/python_dist/lib/python2.5/pickle.py:1166(load_long_binput)
1 0.611 0.611 0.679 0.679 /base/data/home/apps/with-the-flow/1.331879498764931274/jinja2/environment.py:10()
```
One call, but as far I can see (and this is consistent across all my GAE-based apps), the most expensive in the whole request processing cycle. | 2009/03/06 | [
"https://Stackoverflow.com/questions/618827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12138/"
] | Armin suggested to pre-compile Jinja2 templates to python code, and use the compiled templates in production. So I've made a compiler/loader for that, and it now renders some complex templates 13 times faster, throwing away **all** the parsing overhead. The related discussion with link to the repository is [here](http://groups.google.com/group/google-appengine-python/browse_thread/thread/04d4bf3dd615ed1e/2907cdeff922710c). | According to this [google recipe](http://appengine-cookbook.appspot.com/recipe/better-performance-with-jinja2/?id=ahJhcHBlbmdpbmUtY29va2Jvb2tyqwELEgtSZWNpcGVJbmRleCJGYWhKaGNIQmxibWRwYm1VdFkyOXZhMkp2YjJ0eUhnc1NDRU5oZEdWbmIzSjVJaEJYWldKaGNIQWdSbkpoYldWM2IzSnJEQQwLEgZSZWNpcGUiSGFoSmhjSEJsYm1kcGJtVXRZMjl2YTJKdmIydHlIZ3NTQ0VOaGRHVm5iM0o1SWhCWFpXSmhjSEFnUm5KaGJXVjNiM0pyREExMAw) you can use memcache to cache bytecodes. You can also cache the template file content itself. All in the same recipe |
57,350,335 | My python version is 3.6. And I am using this tutorial for Statecraft AI (<https://pythonprogramming.net/building-neural-network-starcraft-ii-ai-python-sc2-tutorial/>). I imported SC2 module from here (<https://github.com/daniel-kukiela/python-sc2>).
I am facing this error while I run
ModuleNotFoundError: No module named 'websockets' | 2019/08/04 | [
"https://Stackoverflow.com/questions/57350335",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5355931/"
] | try using pip3, it worked for me. | Needed to install websockets for this
<https://websockets.readthedocs.io/en/stable/intro.html#installation> |
57,350,335 | My python version is 3.6. And I am using this tutorial for Statecraft AI (<https://pythonprogramming.net/building-neural-network-starcraft-ii-ai-python-sc2-tutorial/>). I imported SC2 module from here (<https://github.com/daniel-kukiela/python-sc2>).
I am facing this error while I run
ModuleNotFoundError: No module named 'websockets' | 2019/08/04 | [
"https://Stackoverflow.com/questions/57350335",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5355931/"
] | try using pip3, it worked for me. | I've spent hours trying to install and uninstall `websocket` and `websocket-client` but the true module that I need is named `websocket_client`.
```
pip install websocket_client
``` |
42,825,138 | This import works fine, but feels dirty in a few ways. Mainly that it uses a specific number in the slice\* to get the parent path, and that it annoys the flake8 linter.
```
import os
import sys
sys.path.append(os.path.dirname(__file__)[:-5])
from codeHelpers import completion_message
```
It's in a file system that looks a bit like this:
```
parent_folder
__init__.py
codeHelpers.py
child_folder
this_file.py
```
(`child_folder` is actually called `week1`, hence the 5 in the slice)
This question is extremely similar to [Python import from parent directory](https://stackoverflow.com/questions/19668729/python-import-from-parent-directory/), but in that case the discussion focused on whether or not it was good to run tests from the end point. [In my case](https://github.com/notionparallax/code1161base), I have a series of directories that have code that uses helpers that live in the parent.
*Context:* each directory is a set of weekly exercises, so I'd like to keep them as simple as possible.
Is there a cleaner, more pythonic way to do this import?
@cco solved the number problem, but it's still upsetting the linter. | 2017/03/16 | [
"https://Stackoverflow.com/questions/42825138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1835727/"
] | You can import from a module a level up in a package by using `..`. In this\_file.py:
```
from ..codeHelpers import completion_message
```
Had you wanted to go more levels up just keep adding dots...
While I'm here, just be aware that `from ..codeHelpers` is a *relative* import, and you should always use them when importing something in the same package. `from codeHelpers` is an *absolute* import, which are ambiguous in Python 2 (should it import from in the package or from the unfortunately named `codeHelpers` module you have installed on your system?), and in Python 3 actually forbidden as a way to import from within the same module (i.e. they are always absolute). You can read the ancient [PEP 328](https://docs.python.org/2.5/whatsnew/pep-328.html) for an explanation of the difficulties. | You can remove the assumption about the length of the final directory name by applying `os.path.dirname` twice.
e.g. instead of `os.path.dirname(__file__)[:-5]`, use `os.path.dirname(os.path.dirname(__file__))` |
42,825,138 | This import works fine, but feels dirty in a few ways. Mainly that it uses a specific number in the slice\* to get the parent path, and that it annoys the flake8 linter.
```
import os
import sys
sys.path.append(os.path.dirname(__file__)[:-5])
from codeHelpers import completion_message
```
It's in a file system that looks a bit like this:
```
parent_folder
__init__.py
codeHelpers.py
child_folder
this_file.py
```
(`child_folder` is actually called `week1`, hence the 5 in the slice)
This question is extremely similar to [Python import from parent directory](https://stackoverflow.com/questions/19668729/python-import-from-parent-directory/), but in that case the discussion focused on whether or not it was good to run tests from the end point. [In my case](https://github.com/notionparallax/code1161base), I have a series of directories that have code that uses helpers that live in the parent.
*Context:* each directory is a set of weekly exercises, so I'd like to keep them as simple as possible.
Is there a cleaner, more pythonic way to do this import?
@cco solved the number problem, but it's still upsetting the linter. | 2017/03/16 | [
"https://Stackoverflow.com/questions/42825138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1835727/"
] | It might be easier to use absolute import paths, like the following:
```
from parent_folder.code_helpers import completion_message
```
But this would require you to make sure that the PYTHONPATH environment variable is set such that it can see the highest root directory (`parent_folder` in this case, I think). For instance,
```
PYTHONPATH=. python parent_directory/child_directory/this_file.py
# here the '.' current directory would contain parent_directory
```
Make sure to add an `__init__.py` to the child\_directory as well. | You can remove the assumption about the length of the final directory name by applying `os.path.dirname` twice.
e.g. instead of `os.path.dirname(__file__)[:-5]`, use `os.path.dirname(os.path.dirname(__file__))` |
42,825,138 | This import works fine, but feels dirty in a few ways. Mainly that it uses a specific number in the slice\* to get the parent path, and that it annoys the flake8 linter.
```
import os
import sys
sys.path.append(os.path.dirname(__file__)[:-5])
from codeHelpers import completion_message
```
It's in a file system that looks a bit like this:
```
parent_folder
__init__.py
codeHelpers.py
child_folder
this_file.py
```
(`child_folder` is actually called `week1`, hence the 5 in the slice)
This question is extremely similar to [Python import from parent directory](https://stackoverflow.com/questions/19668729/python-import-from-parent-directory/), but in that case the discussion focused on whether or not it was good to run tests from the end point. [In my case](https://github.com/notionparallax/code1161base), I have a series of directories that have code that uses helpers that live in the parent.
*Context:* each directory is a set of weekly exercises, so I'd like to keep them as simple as possible.
Is there a cleaner, more pythonic way to do this import?
@cco solved the number problem, but it's still upsetting the linter. | 2017/03/16 | [
"https://Stackoverflow.com/questions/42825138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1835727/"
] | First since you haven't been specific about which lint error you are getting, I am going to assume it's because you have an import after your `sys.path.append`.
The cleanest way to do it is with relative or absolute imports.
Using absolute imports:
```
from parent_path.codeHelpers import completion_message
```
Using relative imports:
```
from ..codeHelpers import completion_message
```
For the simple example listed in the original question this should be all that's required. It's simple, it's pythonic, it's reliable, and it fixes the lint issue.
You may find yourself in a situation where the above does not work for you and `sys.path` manipulation is still required. A drawback is that your IDE will likely not be able to resolve imports to modules from the new path causing issues such as automatic code completion not working and flagging the imports as errors, even though the code will run properly.
If you find you still need to use `sys.path` and want to avoid lint errors for this type of situation create a new module and do the `sys.path` manipulation in it instead. Then make sure that you import your new module before any modules that require the modified `sys.path`.
For example:
local\_imports.py
```
"""Add a path to sys.path for imports."""
import os
import sys
# Get the current directory
current_path = os.path.dirname(__file__)
# Get the parent directory
parent_path = os.path.dirname(current_path)
# Add the parent directory to sys.path
sys.path.append(parent_path)
```
Then in the target file:
```
import local_imports # now using modified sys.path
from codeHelpers import completion_message
```
The drawback to this is it requires you to include `local_imports.py` in each `child_folder` and if the folder structure changes, you would have to modify each one `local_imports` file.
Where this pattern is really useful is when you need to include external libraries in your package (for example in a `libs` folder) without requiring the user to install the libs themselves.
If you are using this pattern for a `libs` folder, you may want to make sure your included libraries are preferred over the installed libraries.
To do so, change
```
sys.path.append(custom_path)
```
to
```
sys.path.insert(1, custom_path)
```
This will make your custom path the second place the python interpreter will check (the first will still be `''` which is the local directory). | You can remove the assumption about the length of the final directory name by applying `os.path.dirname` twice.
e.g. instead of `os.path.dirname(__file__)[:-5]`, use `os.path.dirname(os.path.dirname(__file__))` |
42,825,138 | This import works fine, but feels dirty in a few ways. Mainly that it uses a specific number in the slice\* to get the parent path, and that it annoys the flake8 linter.
```
import os
import sys
sys.path.append(os.path.dirname(__file__)[:-5])
from codeHelpers import completion_message
```
It's in a file system that looks a bit like this:
```
parent_folder
__init__.py
codeHelpers.py
child_folder
this_file.py
```
(`child_folder` is actually called `week1`, hence the 5 in the slice)
This question is extremely similar to [Python import from parent directory](https://stackoverflow.com/questions/19668729/python-import-from-parent-directory/), but in that case the discussion focused on whether or not it was good to run tests from the end point. [In my case](https://github.com/notionparallax/code1161base), I have a series of directories that have code that uses helpers that live in the parent.
*Context:* each directory is a set of weekly exercises, so I'd like to keep them as simple as possible.
Is there a cleaner, more pythonic way to do this import?
@cco solved the number problem, but it's still upsetting the linter. | 2017/03/16 | [
"https://Stackoverflow.com/questions/42825138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1835727/"
] | You can import from a module a level up in a package by using `..`. In this\_file.py:
```
from ..codeHelpers import completion_message
```
Had you wanted to go more levels up just keep adding dots...
While I'm here, just be aware that `from ..codeHelpers` is a *relative* import, and you should always use them when importing something in the same package. `from codeHelpers` is an *absolute* import, which are ambiguous in Python 2 (should it import from in the package or from the unfortunately named `codeHelpers` module you have installed on your system?), and in Python 3 actually forbidden as a way to import from within the same module (i.e. they are always absolute). You can read the ancient [PEP 328](https://docs.python.org/2.5/whatsnew/pep-328.html) for an explanation of the difficulties. | Either way you have to hack around. If your main goal is to avoid flake8 warnings:
* add a `noqa` comment
* `exec(open("../codeHelpers.py").read(), globals())`
* you can pass a filename with interpreter option -c (should not bother flake8) |
42,825,138 | This import works fine, but feels dirty in a few ways. Mainly that it uses a specific number in the slice\* to get the parent path, and that it annoys the flake8 linter.
```
import os
import sys
sys.path.append(os.path.dirname(__file__)[:-5])
from codeHelpers import completion_message
```
It's in a file system that looks a bit like this:
```
parent_folder
__init__.py
codeHelpers.py
child_folder
this_file.py
```
(`child_folder` is actually called `week1`, hence the 5 in the slice)
This question is extremely similar to [Python import from parent directory](https://stackoverflow.com/questions/19668729/python-import-from-parent-directory/), but in that case the discussion focused on whether or not it was good to run tests from the end point. [In my case](https://github.com/notionparallax/code1161base), I have a series of directories that have code that uses helpers that live in the parent.
*Context:* each directory is a set of weekly exercises, so I'd like to keep them as simple as possible.
Is there a cleaner, more pythonic way to do this import?
@cco solved the number problem, but it's still upsetting the linter. | 2017/03/16 | [
"https://Stackoverflow.com/questions/42825138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1835727/"
] | First since you haven't been specific about which lint error you are getting, I am going to assume it's because you have an import after your `sys.path.append`.
The cleanest way to do it is with relative or absolute imports.
Using absolute imports:
```
from parent_path.codeHelpers import completion_message
```
Using relative imports:
```
from ..codeHelpers import completion_message
```
For the simple example listed in the original question this should be all that's required. It's simple, it's pythonic, it's reliable, and it fixes the lint issue.
You may find yourself in a situation where the above does not work for you and `sys.path` manipulation is still required. A drawback is that your IDE will likely not be able to resolve imports to modules from the new path, causing issues such as automatic code completion not working and the imports being flagged as errors, even though the code will run properly.
If you find you still need to use `sys.path` and want to avoid lint errors for this type of situation, create a new module and do the `sys.path` manipulation in it instead. Then make sure that you import your new module before any modules that require the modified `sys.path`.
For example:
local\_imports.py
```
"""Add a path to sys.path for imports."""
import os
import sys
# Get the current directory
current_path = os.path.dirname(__file__)
# Get the parent directory
parent_path = os.path.dirname(current_path)
# Add the parent directory to sys.path
sys.path.append(parent_path)
```
Then in the target file:
```
import local_imports # now using modified sys.path
from codeHelpers import completion_message
```
The drawback to this is that it requires you to include `local_imports.py` in each `child_folder`, and if the folder structure changes, you would have to modify each `local_imports` file.
Where this pattern is really useful is when you need to include external libraries in your package (for example in a `libs` folder) without requiring the user to install the libs themselves.
If you are using this pattern for a `libs` folder, you may want to make sure your included libraries are preferred over the installed libraries.
To do so, change
```
sys.path.append(custom_path)
```
to
```
sys.path.insert(1, custom_path)
```
This will make your custom path the second place the python interpreter will check (the first will still be `''` which is the local directory). | You can import from a module a level up in a package by using `..`. In this\_file.py:
```
from ..codeHelpers import completion_message
```
Had you wanted to go more levels up just keep adding dots...
While I'm here, just be aware that `from ..codeHelpers` is a *relative* import, and you should always use them when importing something in the same package. `from codeHelpers` is an *absolute* import, which is ambiguous in Python 2 (should it import from within the package, or from the unfortunately named `codeHelpers` module you have installed on your system?), and in Python 3 actually forbidden as a way to import from within the same package (i.e. imports are always absolute). You can read the ancient [PEP 328](https://docs.python.org/2.5/whatsnew/pep-328.html) for an explanation of the difficulties. |
42,825,138 | This import works fine, but feels dirty in a few ways. Mainly that it uses a specific number in the slice\* to get the parent path, and that it annoys the flake8 linter.
```
import os
import sys
sys.path.append(os.path.dirname(__file__)[:-5])
from codeHelpers import completion_message
```
It's in a file system that looks a bit like this:
```
parent_folder
__init__.py
codeHelpers.py
child_folder
this_file.py
```
(`child_folder` is actually called `week1`, hence the 5 in the slice)
This question is extremely similar to [Python import from parent directory](https://stackoverflow.com/questions/19668729/python-import-from-parent-directory/), but in that case the discussion focused on whether or not it was good to run tests from the end point. [In my case](https://github.com/notionparallax/code1161base), I have a series of directories that have code that uses helpers that live in the parent.
*Context:* each directory is a set of weekly exercises, so I'd like to keep them as simple as possible.
Is there a cleaner, more pythonic way to do this import?
@cco solved the number problem, but it's still upsetting the linter. | 2017/03/16 | [
"https://Stackoverflow.com/questions/42825138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1835727/"
] | It might be easier to use absolute import paths, like the following:
```
from parent_folder.code_helpers import completion_message
```
But this would require you to make sure that the PYTHONPATH environment variable is set such that it can see the highest root directory (`parent_folder` in this case, I think). For instance,
```
PYTHONPATH=. python parent_directory/child_directory/this_file.py
# here the '.' current directory would contain parent_directory
```
Make sure to add an `__init__.py` to the child\_directory as well. | Either way you have to hack around. If your main goal is to avoid flake8 warnings:
* add a `noqa` comment
* `exec(open("../codeHelpers.py").read(), globals())`
* you can pass a filename with interpreter option -c (should not bother flake8) |
42,825,138 | This import works fine, but feels dirty in a few ways. Mainly that it uses a specific number in the slice\* to get the parent path, and that it annoys the flake8 linter.
```
import os
import sys
sys.path.append(os.path.dirname(__file__)[:-5])
from codeHelpers import completion_message
```
It's in a file system that looks a bit like this:
```
parent_folder
__init__.py
codeHelpers.py
child_folder
this_file.py
```
(`child_folder` is actually called `week1`, hence the 5 in the slice)
This question is extremely similar to [Python import from parent directory](https://stackoverflow.com/questions/19668729/python-import-from-parent-directory/), but in that case the discussion focused on whether or not it was good to run tests from the end point. [In my case](https://github.com/notionparallax/code1161base), I have a series of directories that have code that uses helpers that live in the parent.
*Context:* each directory is a set of weekly exercises, so I'd like to keep them as simple as possible.
Is there a cleaner, more pythonic way to do this import?
@cco solved the number problem, but it's still upsetting the linter. | 2017/03/16 | [
"https://Stackoverflow.com/questions/42825138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1835727/"
] | First, since you haven't been specific about which lint error you are getting, I am going to assume it's because you have an import after your `sys.path.append`.
The cleanest way to do it is with relative or absolute imports.
Using absolute imports:
```
from parent_path.codeHelpers import completion_message
```
Using relative imports:
```
from ..codeHelpers import completion_message
```
For the simple example listed in the original question this should be all that's required. It's simple, it's pythonic, it's reliable, and it fixes the lint issue.
You may find yourself in a situation where the above does not work for you and `sys.path` manipulation is still required. A drawback is that your IDE will likely not be able to resolve imports to modules from the new path, causing issues such as automatic code completion not working and the imports being flagged as errors, even though the code will run properly.
If you find you still need to use `sys.path` and want to avoid lint errors for this type of situation, create a new module and do the `sys.path` manipulation in it instead. Then make sure that you import your new module before any modules that require the modified `sys.path`.
For example:
local\_imports.py
```
"""Add a path to sys.path for imports."""
import os
import sys
# Get the current directory
current_path = os.path.dirname(__file__)
# Get the parent directory
parent_path = os.path.dirname(current_path)
# Add the parent directory to sys.path
sys.path.append(parent_path)
```
Then in the target file:
```
import local_imports # now using modified sys.path
from codeHelpers import completion_message
```
The drawback to this is that it requires you to include `local_imports.py` in each `child_folder`, and if the folder structure changes, you would have to modify each `local_imports` file.
Where this pattern is really useful is when you need to include external libraries in your package (for example in a `libs` folder) without requiring the user to install the libs themselves.
If you are using this pattern for a `libs` folder, you may want to make sure your included libraries are preferred over the installed libraries.
To do so, change
```
sys.path.append(custom_path)
```
to
```
sys.path.insert(1, custom_path)
```
This will make your custom path the second place the python interpreter will check (the first will still be `''` which is the local directory). | It might be easier to use absolute import paths, like the following:
```
from parent_folder.code_helpers import completion_message
```
But this would require you to make sure that the PYTHONPATH environment variable is set such that it can see the highest root directory (`parent_folder` in this case, I think). For instance,
```
PYTHONPATH=. python parent_directory/child_directory/this_file.py
# here the '.' current directory would contain parent_directory
```
Make sure to add an `__init__.py` to the child\_directory as well. |
42,825,138 | This import works fine, but feels dirty in a few ways. Mainly that it uses a specific number in the slice\* to get the parent path, and that it annoys the flake8 linter.
```
import os
import sys
sys.path.append(os.path.dirname(__file__)[:-5])
from codeHelpers import completion_message
```
It's in a file system that looks a bit like this:
```
parent_folder
__init__.py
codeHelpers.py
child_folder
this_file.py
```
(`child_folder` is actually called `week1`, hence the 5 in the slice)
This question is extremely similar to [Python import from parent directory](https://stackoverflow.com/questions/19668729/python-import-from-parent-directory/), but in that case the discussion focused on whether or not it was good to run tests from the end point. [In my case](https://github.com/notionparallax/code1161base), I have a series of directories that have code that uses helpers that live in the parent.
*Context:* each directory is a set of weekly exercises, so I'd like to keep them as simple as possible.
Is there a cleaner, more pythonic way to do this import?
@cco solved the number problem, but it's still upsetting the linter. | 2017/03/16 | [
"https://Stackoverflow.com/questions/42825138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1835727/"
] | First, since you haven't been specific about which lint error you are getting, I am going to assume it's because you have an import after your `sys.path.append`.
The cleanest way to do it is with relative or absolute imports.
Using absolute imports:
```
from parent_path.codeHelpers import completion_message
```
Using relative imports:
```
from ..codeHelpers import completion_message
```
For the simple example listed in the original question this should be all that's required. It's simple, it's pythonic, it's reliable, and it fixes the lint issue.
You may find yourself in a situation where the above does not work for you and `sys.path` manipulation is still required. A drawback is that your IDE will likely not be able to resolve imports to modules from the new path, causing issues such as automatic code completion not working and the imports being flagged as errors, even though the code will run properly.
If you find you still need to use `sys.path` and want to avoid lint errors for this type of situation, create a new module and do the `sys.path` manipulation in it instead. Then make sure that you import your new module before any modules that require the modified `sys.path`.
For example:
local\_imports.py
```
"""Add a path to sys.path for imports."""
import os
import sys
# Get the current directory
current_path = os.path.dirname(__file__)
# Get the parent directory
parent_path = os.path.dirname(current_path)
# Add the parent directory to sys.path
sys.path.append(parent_path)
```
Then in the target file:
```
import local_imports # now using modified sys.path
from codeHelpers import completion_message
```
The drawback to this is that it requires you to include `local_imports.py` in each `child_folder`, and if the folder structure changes, you would have to modify each `local_imports` file.
Where this pattern is really useful is when you need to include external libraries in your package (for example in a `libs` folder) without requiring the user to install the libs themselves.
If you are using this pattern for a `libs` folder, you may want to make sure your included libraries are preferred over the installed libraries.
To do so, change
```
sys.path.append(custom_path)
```
to
```
sys.path.insert(1, custom_path)
```
This will make your custom path the second place the python interpreter will check (the first will still be `''` which is the local directory). | Either way you have to hack around. If your main goal is to avoid flake8 warnings:
* add a `noqa` comment
* `exec(open("../codeHelpers.py").read(), globals())`
* you can pass a filename with interpreter option -c (should not bother flake8) |
56,995,928 | How do I shorten my code to find the sum of numbers in an array.
```
var numbers= new double[4];
numbers[0]=12.7;
numbers[1]=10.4;
numbers[2]=9.2;
numbers[3]=8.5;
var results=numbers[0]+numbers[1]+numbers[2]+numbers[3];
Console.WriteLine(results);
```
My goal is to shorten this code to something like `results = sum(numbers[0:5])` like in python.
Please, what is the C# equivalent of this? | 2019/07/11 | [
"https://Stackoverflow.com/questions/56995928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11076921/"
] | You could use `.Sum` (from System.Linq),
e.g. to sum all:
```
var results = numbers.Sum(n => n);
```
If you only need the sum of some of the elements, e.g. of the first 3, you could do:
```
var resultsFirst3 = numbers.Take(3).Sum(n => n);
``` | Although Julian's answer is the best approach, you could also do it without the use of LINQ (not saying that you should). A more classic approach would be something like this:
```
var numbers= new double[4];
numbers[0]=12.7;
numbers[1]=10.4;
numbers[2]=9.2;
numbers[3]=8.5;
double sum = 0;
foreach (var num in numbers)
{
sum += num;
}
Console.WriteLine("The sum is: " + sum);
```
As you can see, this is quite a bit longer, less elegant, and (if you ask me) harder to read. So I would definitely go for the LINQ approach. |
66,110,794 | Say I have a list
```
List = ['bob', 'john', 'mary', 'jill']
```
I want to pass the list to a function, but I do not want john to be part of the list that is passed. I do not want to permanently remove it, just exclude it when it is passed.
Is this possible? All the examples of pop/remove I saw removed it from the list. I know I can make a copy and remove it, but I thought I'd ask to see if something like this exists within python already. | 2021/02/08 | [
"https://Stackoverflow.com/questions/66110794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/640558/"
] | It's better to use component life cycle events. You can to do something like
```
<div class="line">
@result
</div>
@code{
private string result;
protected async override Task OnInitializedAsync()
{
result = await Test(local);
}
}
```
See Blazor documentation [here](https://learn.microsoft.com/en-us/aspnet/core/blazor/components/lifecycle?view=aspnetcore-5.0) | As Shahid Said -
>
> It's better to use component life cycle events.
>
>
>
But there are some edge cases where lifecycle events won't work.
There isn't a lot of information to work off here but I'll make some assumptions.
Assuming it is absolutely necessary not to use the component's lifecycle, you can use this in your HTML body:
```
@foreach(YourRowObject Row in Table)
{
<div>
@Test(Row.Local).Result
</div>
}
```
And I assume you have a C# code block or CS file that does something like:
```
@code{
private YourTable Table {get; set;}
private async Task<string> Test(string local)
{
return await SomeApiCall(local);
}
}
```
This should display the information you are after. Tweak it to fit your application :)
EDIT: Improved code to remove unnecessary var assignment |
52,857,746 | ```
#! /usr/bin/python3
import re
my_string = 'This is the string to test. It has several Capitalized words. My name is Robert, and I am learning pYthon.'
result = re.match(r'.*', my_string)
result.group(0)
print(result)
```
Forgive me for any issues I create posting this. I am a total noob. I am trying to figure out why it is that when I run the above code,
I get the following result and not the full string.
```
<_sre.SRE_Match object; span=(0, 108), match='This is the string to test. It has several Capit>
```
Thanks in Advance. | 2018/10/17 | [
"https://Stackoverflow.com/questions/52857746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9916500/"
] | You can definitely use `mapStateToProps` with a functional component, the same way you would with a class component.
```
function MyComponent({ propOne }) {
return <p>{propOne}</p>
}
function mapStateToProps(state) {
return { propOne: state.propOne };
}
export default connect(mapStateToProps)(MyComponent);
``` | You should first `connect` the component to the store.
The connection happens via the `connect` HOC provided by the `react-redux` package. The first parameter it takes is a method that, given the global store, returns an object with only the properties you need in this component.
For instance:
```
import { connect } from 'react-redux'
const HelloComponent = ({ name }) =>
<p>{ name }</p>
export default connect(
globalState => ({ name: globalState.nestedObject.innerProperty })
)(HelloComponent)
```
To improve readability, it is common to use the method `mapStateToProps`, like this:
```
const mapStateToProps = state => ({
name: state.nestedObject.innerProperty
})
export default connect(mapStateToProps)(HelloComponent)
``` |
52,857,746 | ```
#! /usr/bin/python3
import re
my_string = 'This is the string to test. It has several Capitalized words. My name is Robert, and I am learning pYthon.'
result = re.match(r'.*', my_string)
result.group(0)
print(result)
```
Forgive me for any issues I create posting this. I am a total noob. I am trying to figure out why it is that when I run the above code,
I get the following result and not the full string.
```
<_sre.SRE_Match object; span=(0, 108), match='This is the string to test. It has several Capit>
```
Thanks in Advance. | 2018/10/17 | [
"https://Stackoverflow.com/questions/52857746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9916500/"
] | You can definitely use `mapStateToProps` with a functional component, the same way you would with a class component.
```
function MyComponent({ propOne }) {
return <p>{propOne}</p>
}
function mapStateToProps(state) {
return { propOne: state.propOne };
}
export default connect(mapStateToProps)(MyComponent);
``` | react-redux now has a useSelector method. That's a much cleaner approach for functional components that employ hooks. See: <https://react-redux.js.org/next/api/hooks#useselector> |
52,857,746 | ```
#! /usr/bin/python3
import re
my_string = 'This is the string to test. It has several Capitalized words. My name is Robert, and I am learning pYthon.'
result = re.match(r'.*', my_string)
result.group(0)
print(result)
```
Forgive me for any issues I create posting this. I am a total noob. I am trying to figure out why it is that when I run the above code,
I get the following result and not the full string.
```
<_sre.SRE_Match object; span=(0, 108), match='This is the string to test. It has several Capit>
```
Thanks in Advance. | 2018/10/17 | [
"https://Stackoverflow.com/questions/52857746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9916500/"
] | You can definitely use `mapStateToProps` with a functional component, the same way you would with a class component.
```
function MyComponent({ propOne }) {
return <p>{propOne}</p>
}
function mapStateToProps(state) {
return { propOne: state.propOne };
}
export default connect(mapStateToProps)(MyComponent);
``` | With Hooks you can use something like this
------------------------------------------
```
import React from 'react';
import {useDispatch, useSelector} from "react-redux";
const AccountDetails = () => {
const accountDetails = useSelector(state => state.accountDetails);
const dispatch = useDispatch();
return (
<div>
<h2>Your user name is: {accountDetails.username}</h2>
<button onClick={() => dispatch(logout())}>Logout</button>
</div>
);
};
export default AccountDetails;
``` |
52,857,746 | ```
#! /usr/bin/python3
import re
my_string = 'This is the string to test. It has several Capitalized words. My name is Robert, and I am learning pYthon.'
result = re.match(r'.*', my_string)
result.group(0)
print(result)
```
Forgive me for any issues I create posting this. I am a total noob. I am trying to figure out why it is that when I run the above code,
I get the following result and not the full string.
```
<_sre.SRE_Match object; span=(0, 108), match='This is the string to test. It has several Capit>
```
Thanks in Advance. | 2018/10/17 | [
"https://Stackoverflow.com/questions/52857746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9916500/"
] | You can definitely use `mapStateToProps` with a functional component, the same way you would with a class component.
```
function MyComponent({ propOne }) {
return <p>{propOne}</p>
}
function mapStateToProps(state) {
return { propOne: state.propOne };
}
export default connect(mapStateToProps)(MyComponent);
``` | ```
const CategoryList = (props) => {
return (
<div>
<h3>Category</h3>
<h5>Seçili Kategori:{props.currentCategory.categoryName}</h5>
</div>
);
}
function mapStateToProps(state) {
return {
currentCategory: state.changeCategoryReducer
}
}
export default connect(mapStateToProps)(CategoryList);
``` |
52,857,746 | ```
#! /usr/bin/python3
import re
my_string = 'This is the string to test. It has several Capitalized words. My name is Robert, and I am learning pYthon.'
result = re.match(r'.*', my_string)
result.group(0)
print(result)
```
Forgive me for any issues I create posting this. I am a total noob. I am trying to figure out why it is that when I run the above code,
I get the following result and not the full string.
```
<_sre.SRE_Match object; span=(0, 108), match='This is the string to test. It has several Capit>
```
Thanks in Advance. | 2018/10/17 | [
"https://Stackoverflow.com/questions/52857746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9916500/"
react-redux now has a useSelector method. That's a much cleaner approach for functional components that employ hooks. See: <https://react-redux.js.org/next/api/hooks#useselector> | You should first `connect` the component to the store.
The connection happens via the `connect` HOC provided by the `react-redux` package. The first parameter it takes is a method that, given the global store, returns an object with only the properties you need in this component.
For instance:
```
import { connect } from 'react-redux'
const HelloComponent = ({ name }) =>
<p>{ name }</p>
export default connect(
globalState => ({ name: globalState.nestedObject.innerProperty })
)(HelloComponent)
```
To improve readability, it is common use the method `mapStateToProps`, like this:
```
const mapStateToProps = state => ({
name: state.nestedObject.innerProperty
})
export default connect(mapStateToProps)(HelloComponent)
``` |
52,857,746 | ```
#! /usr/bin/python3
import re
my_string = 'This is the string to test. It has several Capitalized words. My name is Robert, and I am learning pYthon.'
result = re.match(r'.*', my_string)
result.group(0)
print(result)
```
Forgive me for any issues I create posting this. I am a total noob. I am trying to figure out why it is that when I run the above code,
I get the following result and not the full string.
```
<_sre.SRE_Match object; span=(0, 108), match='This is the string to test. It has several Capit>
```
Thanks in Advance. | 2018/10/17 | [
"https://Stackoverflow.com/questions/52857746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9916500/"
] | With Hooks you can use something like this
------------------------------------------
```
import React from 'react';
import {useDispatch, useSelector} from "react-redux";
const AccountDetails = () => {
const accountDetails = useSelector(state => state.accountDetails);
const dispatch = useDispatch();
return (
<div>
<h2>Your user name is: {accountDetails.username}</h2>
<button onClick={() => dispatch(logout())}>Logout</button>
</div>
);
};
export default AccountDetails;
``` | You should first `connect` the component to the store.
The connection happens via the `connect` HOC provided by the `react-redux` package. The first parameter it takes is a method that, given the global store, returns an object with only the properties you need in this component.
For instance:
```
import { connect } from 'react-redux'
const HelloComponent = ({ name }) =>
<p>{ name }</p>
export default connect(
globalState => ({ name: globalState.nestedObject.innerProperty })
)(HelloComponent)
```
To improve readability, it is common use the method `mapStateToProps`, like this:
```
const mapStateToProps = state => ({
name: state.nestedObject.innerProperty
})
export default connect(mapStateToProps)(HelloComponent)
``` |
52,857,746 | ```
#! /usr/bin/python3
import re
my_string = 'This is the string to test. It has several Capitalized words. My name is Robert, and I am learning pYthon.'
result = re.match(r'.*', my_string)
result.group(0)
print(result)
```
Forgive me for any issues I create posting this. I am a total noob. I am trying to figure out why it is that when I run the above code,
I get the following result and not the full string.
```
<_sre.SRE_Match object; span=(0, 108), match='This is the string to test. It has several Capit>
```
Thanks in Advance. | 2018/10/17 | [
"https://Stackoverflow.com/questions/52857746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9916500/"
] | You should first `connect` the component to the store.
The connection happens via the `connect` HOC provided by the `react-redux` package. The first parameter it takes is a method that, given the global store, returns an object with only the properties you need in this component.
For instance:
```
import { connect } from 'react-redux'
const HelloComponent = ({ name }) =>
<p>{ name }</p>
export default connect(
globalState => ({ name: globalState.nestedObject.innerProperty })
)(HelloComponent)
```
To improve readability, it is common to use the method `mapStateToProps`, like this:
```
const mapStateToProps = state => ({
name: state.nestedObject.innerProperty
})
export default connect(mapStateToProps)(HelloComponent)
``` | ```
const CategoryList = (props) => {
return (
<div>
<h3>Category</h3>
<h5>Seçili Kategori:{props.currentCategory.categoryName}</h5>
</div>
);
}
function mapStateToProps(state) {
return {
currentCategory: state.changeCategoryReducer
}
}
export default connect(mapStateToProps)(CategoryList);
``` |
52,857,746 | ```
#! /usr/bin/python3
import re
my_string = 'This is the string to test. It has several Capitalized words. My name is Robert, and I am learning pYthon.'
result = re.match(r'.*', my_string)
result.group(0)
print(result)
```
Forgive me for any issues I create posting this. I am a total noob. I am trying to figure out why it is that when I run the above code,
I get the following result and not the full string.
```
<_sre.SRE_Match object; span=(0, 108), match='This is the string to test. It has several Capit>
```
Thanks in Advance. | 2018/10/17 | [
"https://Stackoverflow.com/questions/52857746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9916500/"
] | react-redux now has a useSelector method. That's a much cleaner approach for functional components that employ hooks. See: <https://react-redux.js.org/next/api/hooks#useselector> | ```
const CategoryList = (props) => {
return (
<div>
<h3>Category</h3>
<h5>Seçili Kategori:{props.currentCategory.categoryName}</h5>
</div>
);
}
function mapStateToProps(state) {
return {
currentCategory: state.changeCategoryReducer
}
}
export default connect(mapStateToProps)(CategoryList);
``` |
52,857,746 | ```
#! /usr/bin/python3
import re
my_string = 'This is the string to test. It has several Capitalized words. My name is Robert, and I am learning pYthon.'
result = re.match(r'.*', my_string)
result.group(0)
print(result)
```
Forgive me for any issues I create posting this. I am a total noob. I am trying to figure out why it is that when I run the above code,
I get the following result and not the full string.
```
<_sre.SRE_Match object; span=(0, 108), match='This is the string to test. It has several Capit>
```
Thanks in Advance. | 2018/10/17 | [
"https://Stackoverflow.com/questions/52857746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9916500/"
] | With Hooks you can use something like this
------------------------------------------
```
import React from 'react';
import {useDispatch, useSelector} from "react-redux";
const AccountDetails = () => {
const accountDetails = useSelector(state => state.accountDetails);
const dispatch = useDispatch();
return (
<div>
<h2>Your user name is: {accountDetails.username}</h2>
<button onClick={() => dispatch(logout())}>Logout</button>
</div>
);
};
export default AccountDetails;
``` | ```
const CategoryList = (props) => {
return (
<div>
<h3>Category</h3>
<h5>Seçili Kategori:{props.currentCategory.categoryName}</h5>
</div>
);
}
function mapStateToProps(state) {
return {
currentCategory: state.changeCategoryReducer
}
}
export default connect(mapStateToProps)(CategoryList);
``` |