qid (int64) | question (string) | date (string) | metadata (sequence) | response_j (string) | response_k (string) |
---|---|---|---|---|---|
51,268,745 | How can I create a permutation mapping of two lists in python?
For example I have two lists `[1,2,3]` and `['A','B','C']`
Then my code should generate a list of 6 dictionaries
```
[ {1:'A',2:'B',3:'C'},
{1:'A',2:'C',3:'B'},
{1:'B',2:'A',3:'C'},
{1:'B',2:'C',3:'A'},
{1:'C',2:'A',3:'B'},
{1:'C',2:'B',3:'A'} ]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51268745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8272518/"
] | Using [`zip`](https://docs.python.org/3/library/functions.html#zip) and [`itertools.permutations`](https://docs.python.org/3/library/itertools.html#itertools.permutations) in a list comprehension:
```
>>> from itertools import permutations
>>> L1 = [1,2,3]
>>> L2 = ['A','B','C']
>>> [dict(zip(L1, p)) for p in permutations(L2)]
[{1: 'A', 2: 'B', 3: 'C'},
{1: 'A', 2: 'C', 3: 'B'},
{1: 'B', 2: 'A', 3: 'C'},
{1: 'B', 2: 'C', 3: 'A'},
{1: 'C', 2: 'A', 3: 'B'},
{1: 'C', 2: 'B', 3: 'A'}]
``` | You seem to permute only the values of the dicts, so you could do something like
```
from itertools import permutations
dicts = []
keys = [1, 2, 3]
for values in permutations(['A', 'B', 'C']):
new_dict = dict(zip(keys, values))
dicts.append(new_dict)
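# after the loop, dicts holds the six mappings listed in the question
# (len(dicts) == 6 == 3! permutations of the values)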
``` |
1,150,765 | I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:
```
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
```
prints:
```
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
```
I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "`+`" in C for example implies a function call to `__add__`. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome.
So, what is the special treatment that A-D undergo, making them come out as the same instance? | 2009/07/19 | [
"https://Stackoverflow.com/questions/1150765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/125185/"
] | Python is allowed to inline string constants; A,B,C,D are actually the same literals (if Python sees a constant expression, it treats it as a constant).
`str` is actually a class, so `str(whatever)` is calling this class' constructor, which should yield a fresh object. This explains E,F,G (note that each of these has separate identity).
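(An illustrative aside, not part of the original answer: a small Python 2 sketch, matching the question's interpreter, of the constructor handing back fresh objects unless you intern them explicitly; exact `id` values vary per run.)
```
>>> e = str(10000)
>>> f = str(10000)
>>> e == f, e is f
(True, False)              # equal values, but two distinct objects
>>> intern(e) is intern(f) # intern() is a builtin in Python 2
True
```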
As for H, I am not sure, but I'd go for the explanation that this expression is too complicated for Python to figure out it's actually a constant, so it computes a new string. | I believe short strings that can be evaluated at compile time will be interned automatically. In the last examples, the result can't be evaluated at compile time because `str` or `join` might be redefined. |
1,150,765 | I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:
```
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
```
prints:
```
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
```
I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "`+`" in C for example implies a function call to `__add__`. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome.
So, what is the special treatment that A-D undergo, making them come out as the same instance? | 2009/07/19 | [
"https://Stackoverflow.com/questions/1150765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/125185/"
] | In terms of language specification, any compliant Python compiler and runtime is fully allowed, for any instance of an immutable type, to make a new instance OR find an existing instance of the same type that's equal to the required value and use a new reference to that same instance. This means it's always incorrect to use `is` or by-id comparison among immutables, and any minor release may tweak or change strategy in this matter to enhance optimization.
In terms of implementations, the tradeoffs are pretty clear: trying to reuse an existing instance may mean time spent (perhaps wasted) trying to find such an instance, but if the attempt succeeds then some memory is saved (as well as the time to allocate and later free the memory bits needed to hold a new instance).
How to solve those implementation tradeoffs is not entirely obvious -- if you can identify heuristics that indicate that finding a suitable existing instance is likely and the search (even if it fails) will be fast, then you may want to attempt the search-and-reuse when the heuristics suggest it, but skip it otherwise.
In your observations you seem to have found a particular dot-release implementation that performs a modicum of peephole optimization when that's entirely safe, fast, and simple, so the assignments A to D all boil down to exactly the same as A (but E to F don't, as they involve named functions or methods that the optimizer's authors may reasonably have considered not 100% safe to assume semantics for -- and low-ROI if that was done -- so they're not peephole-optimized).
Thus, A to D reusing the same instance boils down to A and B doing so (as C and D get peephole-optimized to exactly the same construct).
That reuse, in turn, clearly suggests compiler tactics/optimizer heuristics whereby identical literal constants of an immutable type in the same function's local namespace are collapsed to references to just one instance in the function's `.func_code.co_consts` (to use current CPython's terminology for attributes of functions and code objects) -- reasonable tactics and heuristics, as reuse of the same immutable constant literal within one function are somewhat frequent, AND the price is only paid once (at compile time) while the advantage is accrued many times (every time the function runs, maybe within loops etc etc).
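(An illustrative aside, not part of the original answer: a minimal Python 2 sketch showing the shared constant sitting in a function's constant pool, consistent with the `dis` output quoted elsewhere on this question.)
```
def f():
    a = "10000"        # literal constant
    b = "100" + "00"   # folded by the peephole optimizer
    return a is b

print f()                     # True, matching the question's output: one shared constant object
print f.func_code.co_consts   # the constant pool where '10000' is stored
```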
(It so happens that these specific tactics and heuristics, given their clearly-positive tradeoffs, have been pervasive in all recent versions of CPython, and, I believe, IronPython, Jython, and PyPy as well;-).
This is a somewhat worthy and interesting area of study if you're planning to write compilers, runtime environments, peephole optimizers, etc etc, for Python itself or similar languages. I guess that deep study of the internals (ideally of many different correct implementations, of course, so as not to fixate on the quirks of a specific one -- good thing Python currently enjoys at least 4 separate production-worthy implementations, not to mention several versions of each!) can also help, indirectly, make one a better Python programmer -- but it's particularly important to focus on what's *guaranteed* by the language itself, which is somewhat less than what you'll find in common among separate implementations, because the parts that "just happen" to be in common right now (without being *required* to be so by the language specs) may perfectly well change under you at the next point release of one or another implementation and, if your production code was mistakenly relying on such details, that might cause nasty surprises;-). Plus -- it's hardly ever necessary, or even particularly helpful, to rely on such variable implementation details rather than on language-mandated behavior (unless you're coding something like an optimizer, debugger, profiler, or the like, of course;-). | I believe short strings that can be evaluated at compile time will be interned automatically. In the last examples, the result can't be evaluated at compile time because `str` or `join` might be redefined. |
1,150,765 | I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:
```
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
```
prints:
```
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
```
I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "`+`" in C for example implies a function call to `__add__`. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome.
So, what is the special treatment that A-D undergo, making them come out as the same instance? | 2009/07/19 | [
"https://Stackoverflow.com/questions/1150765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/125185/"
] | In terms of language specification, any compliant Python compiler and runtime is fully allowed, for any instance of an immutable type, to make a new instance OR find an existing instance of the same type that's equal to the required value and use a new reference to that same instance. This means it's always incorrect to use `is` or by-id comparison among immutables, and any minor release may tweak or change strategy in this matter to enhance optimization.
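(An illustrative aside, not part of the original answer: a tiny sketch of why identity checks on immutables are unreliable; the outcome of the `is` test depends on the CPython version and its optimizer, which is exactly the point.)
```
x = 10000
y = 10 ** 4       # may or may not end up as the very same constant object
print x == y      # True -- equality is what the language guarantees
print x is y      # unspecified: True on some builds/versions, False on others
```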
In terms of implementations, the tradeoffs are pretty clear: trying to reuse an existing instance may mean time spent (perhaps wasted) trying to find such an instance, but if the attempt succeeds then some memory is saved (as well as the time to allocate and later free the memory bits needed to hold a new instance).
How to solve those implementation tradeoffs is not entirely obvious -- if you can identify heuristics that indicate that finding a suitable existing instance is likely and the search (even if it fails) will be fast, then you may want to attempt the search-and-reuse when the heuristics suggest it, but skip it otherwise.
In your observations you seem to have found a particular dot-release implementation that performs a modicum of peephole optimization when that's entirely safe, fast, and simple, so the assignments A to D all boil down to exactly the same as A (but E to F don't, as they involve named functions or methods that the optimizer's authors may reasonably have considered not 100% safe to assume semantics for -- and low-ROI if that was done -- so they're not peephole-optimized).
Thus, A to D reusing the same instance boils down to A and B doing so (as C and D get peephole-optimized to exactly the same construct).
That reuse, in turn, clearly suggests compiler tactics/optimizer heuristics whereby identical literal constants of an immutable type in the same function's local namespace are collapsed to references to just one instance in the function's `.func_code.co_consts` (to use current CPython's terminology for attributes of functions and code objects) -- reasonable tactics and heuristics, as reuse of the same immutable constant literal within one function are somewhat frequent, AND the price is only paid once (at compile time) while the advantage is accrued many times (every time the function runs, maybe within loops etc etc).
(It so happens that these specific tactics and heuristics, given their clearly-positive tradeoffs, have been pervasive in all recent versions of CPython, and, I believe, IronPython, Jython, and PyPy as well;-).
This is a somewhat worthy and interesting area of study if you're planning to write compilers, runtime environments, peephole optimizers, etc etc, for Python itself or similar languages. I guess that deep study of the internals (ideally of many different correct implementations, of course, so as not to fixate on the quirks of a specific one -- good thing Python currently enjoys at least 4 separate production-worthy implementations, not to mention several versions of each!) can also help, indirectly, make one a better Python programmer -- but it's particularly important to focus on what's *guaranteed* by the language itself, which is somewhat less than what you'll find in common among separate implementations, because the parts that "just happen" to be in common right now (without being *required* to be so by the language specs) may perfectly well change under you at the next point release of one or another implementation and, if your production code was mistakenly relying on such details, that might cause nasty surprises;-). Plus -- it's hardly ever necessary, or even particularly helpful, to rely on such variable implementation details rather than on language-mandated behavior (unless you're coding something like an optimizer, debugger, profiler, or the like, of course;-). | Python is allowed to inline string constants; A,B,C,D are actually the same literals (if Python sees a constant expression, it treats it as a constant).
`str` is actually a class, so `str(whatever)` is calling this class' constructor, which should yield a fresh object. This explains E,F,G (note that each of these has separate identity).
As for H, I am not sure, but I'd go for the explanation that this expression is too complicated for Python to figure out it's actually a constant, so it computes a new string. |
1,150,765 | I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:
```
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
```
prints:
```
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
```
I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "`+`" in C for example implies a function call to `__add__`. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome.
So, what is the special treatment that A-D undergo, making them come out as the same instance? | 2009/07/19 | [
"https://Stackoverflow.com/questions/1150765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/125185/"
] | Python is allowed to inline string constants; A,B,C,D are actually the same literals (if Python sees a constant expression, it treats it as a constant).
`str` is actually a class, so `str(whatever)` is calling this class' constructor, which should yield a fresh object. This explains E,F,G (note that each of these has separate identity).
As for H, I am not sure, but I'd go for the explanation that this expression is too complicated for Python to figure out it's actually a constant, so it computes a new string. | in answer to S.Lott's suggestion of examining the byte code:
```
import dis
def moo():
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = "1000"+str(0)
H = "0".join(("10","00"))
I = str("10000")
for obj in (A,B,C,D,E,F,G,H, I):
print obj, id(obj), obj is A
moo()
print dis.dis(moo)
```
yields:
```
10000 4968128 True
10000 4968128 True
10000 4968128 True
10000 4968128 True
10000 2840928 False
10000 2840896 False
10000 2840864 False
10000 2840832 False
10000 4968128 True
4 0 LOAD_CONST 1 ('10000')
3 STORE_FAST 0 (A)
5 6 LOAD_CONST 1 ('10000')
9 STORE_FAST 1 (B)
6 12 LOAD_CONST 10 ('10000')
15 STORE_FAST 2 (C)
7 18 LOAD_CONST 11 ('10000')
21 STORE_FAST 3 (D)
8 24 LOAD_GLOBAL 0 (str)
27 LOAD_CONST 5 (10000)
30 CALL_FUNCTION 1
33 STORE_FAST 4 (E)
9 36 LOAD_GLOBAL 0 (str)
39 LOAD_CONST 5 (10000)
42 CALL_FUNCTION 1
45 STORE_FAST 5 (F)
10 48 LOAD_CONST 6 ('1000')
51 LOAD_GLOBAL 0 (str)
54 LOAD_CONST 7 (0)
57 CALL_FUNCTION 1
60 BINARY_ADD
61 STORE_FAST 6 (G)
11 64 LOAD_CONST 8 ('0')
67 LOAD_ATTR 1 (join)
70 LOAD_CONST 12 (('10', '00'))
73 CALL_FUNCTION 1
76 STORE_FAST 7 (H)
12 79 LOAD_GLOBAL 0 (str)
82 LOAD_CONST 1 ('10000')
85 CALL_FUNCTION 1
88 STORE_FAST 8 (I)
14 91 SETUP_LOOP 66 (to 160)
94 LOAD_FAST 0 (A)
97 LOAD_FAST 1 (B)
100 LOAD_FAST 2 (C)
103 LOAD_FAST 3 (D)
106 LOAD_FAST 4 (E)
109 LOAD_FAST 5 (F)
112 LOAD_FAST 6 (G)
115 LOAD_FAST 7 (H)
118 LOAD_FAST 8 (I)
121 BUILD_TUPLE 9
124 GET_ITER
>> 125 FOR_ITER 31 (to 159)
128 STORE_FAST 9 (obj)
15 131 LOAD_FAST 9 (obj)
134 PRINT_ITEM
135 LOAD_GLOBAL 2 (id)
138 LOAD_FAST 9 (obj)
141 CALL_FUNCTION 1
144 PRINT_ITEM
145 LOAD_FAST 9 (obj)
148 LOAD_FAST 0 (A)
151 COMPARE_OP 8 (is)
154 PRINT_ITEM
155 PRINT_NEWLINE
156 JUMP_ABSOLUTE 125
>> 159 POP_BLOCK
>> 160 LOAD_CONST 0 (None)
163 RETURN_VALUE
```
so it would seem that indeed the compiler understands A-D to mean the same thing, and so it saves memory by only generating it once (as suggested by Alex,Maciej and Greg). (added case `I` seems to just be str() realising it's trying to make a string from a string, and just passing it through.)
Thanks everyone, that's a lot clearer now. |
1,150,765 | I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:
```
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
```
prints:
```
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
```
I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "`+`" in C for example implies a function call to `__add__`. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome.
So, what is the special treatment that A-D undergo, making them come out as the same instance? | 2009/07/19 | [
"https://Stackoverflow.com/questions/1150765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/125185/"
] | In terms of language specification, any compliant Python compiler and runtime is fully allowed, for any instance of an immutable type, to make a new instance OR find an existing instance of the same type that's equal to the required value and use a new reference to that same instance. This means it's always incorrect to use `is` or by-id comparison among immutables, and any minor release may tweak or change strategy in this matter to enhance optimization.
In terms of implementations, the tradeoffs are pretty clear: trying to reuse an existing instance may mean time spent (perhaps wasted) trying to find such an instance, but if the attempt succeeds then some memory is saved (as well as the time to allocate and later free the memory bits needed to hold a new instance).
How to solve those implementation tradeoffs is not entirely obvious -- if you can identify heuristics that indicate that finding a suitable existing instance is likely and the search (even if it fails) will be fast, then you may want to attempt the search-and-reuse when the heuristics suggest it, but skip it otherwise.
In your observations you seem to have found a particular dot-release implementation that performs a modicum of peephole optimization when that's entirely safe, fast, and simple, so the assignments A to D all boil down to exactly the same as A (but E to F don't, as they involve named functions or methods that the optimizer's authors may reasonably have considered not 100% safe to assume semantics for -- and low-ROI if that was done -- so they're not peephole-optimized).
Thus, A to D reusing the same instance boils down to A and B doing so (as C and D get peephole-optimized to exactly the same construct).
That reuse, in turn, clearly suggests compiler tactics/optimizer heuristics whereby identical literal constants of an immutable type in the same function's local namespace are collapsed to references to just one instance in the function's `.func_code.co_consts` (to use current CPython's terminology for attributes of functions and code objects) -- reasonable tactics and heuristics, as reuse of the same immutable constant literal within one function are somewhat frequent, AND the price is only paid once (at compile time) while the advantage is accrued many times (every time the function runs, maybe within loops etc etc).
(It so happens that these specific tactics and heuristics, given their clearly-positive tradeoffs, have been pervasive in all recent versions of CPython, and, I believe, IronPython, Jython, and PyPy as well;-).
This is a somewhat worthy and interesting area of study if you're planning to write compilers, runtime environments, peephole optimizers, etc etc, for Python itself or similar languages. I guess that deep study of the internals (ideally of many different correct implementations, of course, so as not to fixate on the quirks of a specific one -- good thing Python currently enjoys at least 4 separate production-worthy implementations, not to mention several versions of each!) can also help, indirectly, make one a better Python programmer -- but it's particularly important to focus on what's *guaranteed* by the language itself, which is somewhat less than what you'll find in common among separate implementations, because the parts that "just happen" to be in common right now (without being *required* to be so by the language specs) may perfectly well change under you at the next point release of one or another implementation and, if your production code was mistakenly relying on such details, that might cause nasty surprises;-). Plus -- it's hardly ever necessary, or even particularly helpful, to rely on such variable implementation details rather than on language-mandated behavior (unless you're coding something like an optimizer, debugger, profiler, or the like, of course;-). | in answer to S.Lott's suggestion of examining the byte code:
```
import dis
def moo():
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = "1000"+str(0)
H = "0".join(("10","00"))
I = str("10000")
for obj in (A,B,C,D,E,F,G,H, I):
print obj, id(obj), obj is A
moo()
print dis.dis(moo)
```
yields:
```
10000 4968128 True
10000 4968128 True
10000 4968128 True
10000 4968128 True
10000 2840928 False
10000 2840896 False
10000 2840864 False
10000 2840832 False
10000 4968128 True
4 0 LOAD_CONST 1 ('10000')
3 STORE_FAST 0 (A)
5 6 LOAD_CONST 1 ('10000')
9 STORE_FAST 1 (B)
6 12 LOAD_CONST 10 ('10000')
15 STORE_FAST 2 (C)
7 18 LOAD_CONST 11 ('10000')
21 STORE_FAST 3 (D)
8 24 LOAD_GLOBAL 0 (str)
27 LOAD_CONST 5 (10000)
30 CALL_FUNCTION 1
33 STORE_FAST 4 (E)
9 36 LOAD_GLOBAL 0 (str)
39 LOAD_CONST 5 (10000)
42 CALL_FUNCTION 1
45 STORE_FAST 5 (F)
10 48 LOAD_CONST 6 ('1000')
51 LOAD_GLOBAL 0 (str)
54 LOAD_CONST 7 (0)
57 CALL_FUNCTION 1
60 BINARY_ADD
61 STORE_FAST 6 (G)
11 64 LOAD_CONST 8 ('0')
67 LOAD_ATTR 1 (join)
70 LOAD_CONST 12 (('10', '00'))
73 CALL_FUNCTION 1
76 STORE_FAST 7 (H)
12 79 LOAD_GLOBAL 0 (str)
82 LOAD_CONST 1 ('10000')
85 CALL_FUNCTION 1
88 STORE_FAST 8 (I)
14 91 SETUP_LOOP 66 (to 160)
94 LOAD_FAST 0 (A)
97 LOAD_FAST 1 (B)
100 LOAD_FAST 2 (C)
103 LOAD_FAST 3 (D)
106 LOAD_FAST 4 (E)
109 LOAD_FAST 5 (F)
112 LOAD_FAST 6 (G)
115 LOAD_FAST 7 (H)
118 LOAD_FAST 8 (I)
121 BUILD_TUPLE 9
124 GET_ITER
>> 125 FOR_ITER 31 (to 159)
128 STORE_FAST 9 (obj)
15 131 LOAD_FAST 9 (obj)
134 PRINT_ITEM
135 LOAD_GLOBAL 2 (id)
138 LOAD_FAST 9 (obj)
141 CALL_FUNCTION 1
144 PRINT_ITEM
145 LOAD_FAST 9 (obj)
148 LOAD_FAST 0 (A)
151 COMPARE_OP 8 (is)
154 PRINT_ITEM
155 PRINT_NEWLINE
156 JUMP_ABSOLUTE 125
>> 159 POP_BLOCK
>> 160 LOAD_CONST 0 (None)
163 RETURN_VALUE
```
so it would seem that indeed the compiler understands A-D to mean the same thing, and so it saves memory by only generating it once (as suggested by Alex,Maciej and Greg). (added case `I` seems to just be str() realising it's trying to make a string from a string, and just passing it through.)
Thanks everyone, that's a lot clearer now. |
20,448,734 | ```
#!/bin/sh
echo "Hello from sh"
```
---
When I run this program (hello.sh) I get a command not found error. I'm using cygwin on Windows and I looked at cy's FAQ. It said to change the permissions to 755. I already did that and still no luck. I get the same error with these other two programs.
```
#!/usr/bin/env python
print "Hello from python"
```
---
```
#!/usr/local/bin/perl
print "Hello from perl\n";
```
---
Any answers are appreciated. Thanks. | 2013/12/08 | [
"https://Stackoverflow.com/questions/20448734",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | As has already been said, you need to add the Cygwin binaries to your path. To do so, right click on "My Computer", click "Properties", then "Advanced", then "Environment Variables".
Create a new environment variable with name `CYGWIN_HOME` and value `C:\cygwin` (or wherever you installed cygwin. The default location is `C:\cygwin\` so this should probably work for you).
Then edit the environment variable named "PATH", and tack on the following to the end:
```
;%CYGWIN_HOME%\bin;%CYGWIN_HOME%\sbin;%CYGWIN_HOME%\usr\bin;%CYGWIN_HOME%\usr\sbin;%CYGWIN_HOME%\usr\local\bin;%CYGWIN_HOME%\usr\local\sbin
```
Close your command prompt, then reopen it. The cygwin binaries should now be available. You can double-check this by typing "which bash". It should report the location of your bash executable | I was getting the "command not found" error on a Perl script. That script has the shebang line: "#!/usr/bin/env perl" as the first line and my user is the owner who has execute permissions. I was trying to run the script using the command line "ppminstall.pl ?" (the script is set up to display documentation for using the script if passed the '?' argument). I was cd'd to the directory containing the script. The solution turned out to be to run it using "./ppminstall.pl ?", i.e. explicitly specify the current directory. I think that you might be able to add "." to the current path and get the original command line to work, but it does seem safer to me to use "./" to run it since it's pretty explicit about where the script that you want to run is located. It may be possible for someone to manipulate your Path variable and cause you to be running a version of the script that you didn't intend. The '.' and '/' characters are easy to type without taking your eyes off the screen, so it seems like a useful habit to get into. Of course, I don't know if that is your problem, but it was mine. |
20,448,734 | ```
#!/bin/sh
echo "Hello from sh"
```
---
When I run this program (hello.sh) I get a command not found error. I'm using cygwin on Windows and I looked at cy's FAQ. It said to change the permissions to 755. I already did that and still no luck. I get the same error with these other two programs.
```
#!/usr/bin/env python
print "Hello from python"
```
---
```
#!/usr/local/bin/perl
print "Hello from perl\n";
```
---
Any answers are appreciated. Thanks. | 2013/12/08 | [
"https://Stackoverflow.com/questions/20448734",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | As has already been said, you need to add the Cygwin binaries to your path. To do so, right click on "My Computer", click "Properties", then "Advanced", then "Environment Variables".
Create a new environment variable with name `CYGWIN_HOME` and value `C:\cygwin` (or wherever you installed cygwin. The default location is `C:\cygwin\` so this should probably work for you).
Then edit the environment variable named "PATH", and tack on the following to the end:
```
;%CYGWIN_HOME%\bin;%CYGWIN_HOME%\sbin;%CYGWIN_HOME%\usr\bin;%CYGWIN_HOME%\usr\sbin;%CYGWIN_HOME%\usr\local\bin;%CYGWIN_HOME%\usr\local\sbin
```
Close your command prompt, then reopen it. The cygwin binaries should now be available. You can double-check this by typing "which bash". It should report the location of your bash executable | Maybe caused by the wrong line break?
Open this shell in Notepad++, and then check if the line break is `UNIX(LF)` in the lower right corner.
If not, click the text `Windows (CR LF)` and then click `Convert to UNIX(LF)`.
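(Editorial aside, not part of the original answer: the same conversion can be scripted in Python; the file name below is a placeholder.)
```
# Rewrite the script with Unix (LF) line endings.
with open("hello.sh", "rb") as f:
    data = f.read()
with open("hello.sh", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))
```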
This works for me. |
67,698,235 | I wonder that is there anyway to speed up python3/numpy's `np.where` operation? I have a minimal working example as follows:
```
from time import time
import numpy as np
a = np.random.randint(0, 4, (768, 512, 512))
b = a.copy().astype(np.uint8)
c = b.copy()
print(a.shape)
ts = time()
a[a > 0] = 1
print(f'normalize in {time() - ts}s')
ts = time()
b[b > 0] = 1
print(f'normalize in {time() - ts}s')
ts = time()
c = np.where(c > 0, 1, c)
print(f'normalize in {time() - ts}s')
```
and the output is:
```
normalize in 0.9307191371917725s
normalize in 0.8891170024871826s
normalize in 0.7120938301086426s
```
as we can see that `np.where` gives the fastest result but it still pretty slow, as in my project I need to do such normalize operation about 50 times...
I am wondering that is there any faster way to do this? All I need is to convert any non-zero elements to 1. Thx! | 2021/05/26 | [
"https://Stackoverflow.com/questions/67698235",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15424596/"
] | The best way to accomplish your operation is to convert the matrix to the boolean datatype and then back to integer, especially if 8-bit integers are used:
```
import numpy as np
import timeit
a = np.random.randint(0, 4, (768, 512, 512))
a_short = a.astype(np.uint8)
# Time to initialize
%timeit b=a.copy()
201 ms ± 561 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit b=a.copy(); b=b.astype(bool).astype(int)
373 ms ± 347 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit b=a.copy(); b=np.where(b>0,1,b)
985 ms ± 150 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit b=a.copy(); b[b>0]=1
1.09 s ± 1.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# Time to initialize
%timeit b=a_short.copy()
26.7 ms ± 202 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
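(Editorial aside, not benchmarked in the original answer: for non-negative integer data, an in-place alternative worth timing on your own machine is `np.minimum`, which clips every positive value down to 1 without allocating a new array.)
```
import numpy as np

a = np.random.randint(0, 4, (768, 512, 512)).astype(np.uint8)
np.minimum(a, 1, out=a)   # in place: 0 stays 0, every positive value becomes 1
```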
The best of the best:
```
%timeit b=a_short.copy(); b=b.astype(bool).astype(np.uint8)
77.5 ms ± 47.8 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit b=a_short.copy(); b=np.where(b>0,1,b)
570 ms ± 476 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit b=a_short.copy();b[b>0]=1
844 ms ± 4.09 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
``` | You could try with `numexpr`:
```
import numpy as np
import numexpr as ne
# for timing reference
a = np.random.randint(0, 4, (768, 512, 512))
%timeit a[a > 0] = 1
```
782 ms ± 9.42 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
a = np.random.randint(0, 4, (768, 512, 512))
%timeit a = ne.evaluate("where(a > 0, 1, a)")
```
254 ms ± 2.65 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
---
Or with `Numba`:
```
import numba as nb
@nb.jit(nopython=True, fastmath=True, parallel=True)
def n(x):
for a in nb.prange(x.shape[0]):
for b in range(x.shape[1]):
for c in range(x.shape[2]):
if x[a,b,c]>0:
x[a,b,c]=1
return x
# You should do this twice as it gets compiled first time through
%timeit c = n(a)
# Also, try re-assigning back on top of original, i.e. a=n(a)
```
113 ms ± 551 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) |
66,730 | I've created a python object, but I want to send signals on it. I made it inherit from gobject.GObject, but there doesn't seem to be any way to create a new signal on my object. | 2008/09/15 | [
"https://Stackoverflow.com/questions/66730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8453/"
] | You can also define signals inside the class definition:
```
class MyGObjectClass(gobject.GObject):
__gsignals__ = {
"some-signal": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, (object, )),
}
```
The contents of the tuple are the same as the last three arguments to `gobject.signal_new`. | Here is how:
```
import gobject
class MyGObjectClass(gobject.GObject):
...
gobject.signal_new("signal-name", MyGObjectClass, gobject.SIGNAL_RUN_FIRST,
None, (str, int))
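# Hedged usage sketch (instance and handler names below are illustrative, not from the answer):
def on_signal(obj, text, number):
    print "got", text, number

instance = MyGObjectClass()
instance.connect("signal-name", on_signal)
instance.emit("signal-name", "hello", 5)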
```
Where the second to last argument is the return type and the last argument is a tuple of argument types. |
66,730 | I've created a python object, but I want to send signals on it. I made it inherit from gobject.GObject, but there doesn't seem to be any way to create a new signal on my object. | 2008/09/15 | [
"https://Stackoverflow.com/questions/66730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8453/"
] | You can also define signals inside the class definition:
```
class MyGObjectClass(gobject.GObject):
__gsignals__ = {
"some-signal": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, (object, )),
}
```
The contents of the tuple are the same as the last three arguments to `gobject.signal_new`. | If you use kiwi, available [here](http://kiwi.async.com.br/), you can just do:
```
from kiwi.utils import gsignal
class MyObject(gobject.GObject):
gsignal('signal-name')
``` |
73,069,374 | I have this table
| Stars |
| --- |
| 3 stars |
| Stars 20 |
| 901stars |
| 8 |
I'm using python to filter the table but I'm not sure of the regex to reject 8. `[^0-9]` will flag `3 stars, 901stars` as errors too but I just want to flag that 8 is incorrect based on the regex.
The regex I need would only flag out the numbers (not number+string or string+number). | 2022/07/21 | [
"https://Stackoverflow.com/questions/73069374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16125533/"
] | I'd suggest `\d+$` to match 1 or more digits at the end of the line. (or even `^\d+$` to search from beginning of line).
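(Editorial illustration of that suggestion using Python's built-in `re` module; the sample values come from the question's table.)
```
import re

values = ["3 stars", "Stars 20", "901stars", "8"]
flagged = [v for v in values if re.fullmatch(r"\d+", v)]
print(flagged)   # ['8'] -- only the purely numeric entry matches
```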
disclaimer: I don't have a clue wrt Python, but if it has a standard rx library, this should work. | Here is a way without using regex.
```
pd.to_numeric(df['Stars'],errors = 'coerce').isna()
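# this mask is True for text-like entries; invert it (or use .notna()) to flag the purely numeric "8"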
``` |
I was wondering if there is a way to automatically run commands on entering the python shell as you would with the .bash\_profile or .profile scripts with bash. I would like to automatically import some modules so I don't have to type the whole shebang every time I hop into the shell.
Thanks, | 2010/04/06 | [
"https://Stackoverflow.com/questions/2587709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/278205/"
] | Yup you can use the `PYTHONSTARTUP` environment variable to do this as outlined [here](http://docs.python.org/tutorial/interpreter.html#the-interactive-startup-file) | Also consider using [ipython](http://ipython.scipy.org/) if you're doing a lot of interactive work. Your options for this kind of automation expand significantly. |
56,902,458 | I am trying to use this example code from the PyTorch [website](https://pytorch.org/tutorials/advanced/cpp_export.html) to convert a python model for use in the PyTorch c++ api (LibTorch).
```
Converting to Torch Script via Tracing
To convert a PyTorch model to Torch Script via tracing, you must pass an instance of your model along with an example input to the torch.jit.trace function. This will produce a torch.jit.ScriptModule object with the trace of your model evaluation embedded in the module’s forward method:
import torch
import torchvision
# An instance of your model.
model = torchvision.models.resnet18()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")
```
This example works fine, and saves out the file as expected.
When i switch to this model:
`model = models.segmentation.deeplabv3_resnet101(pretrained=True)`
It gives me the following error:
```
File "convert.py", line 14, in <module>
traced_script_module = torch.jit.trace(model, example)
File "C:\Python37\lib\site-packages\torch\jit\__init__.py", line 636, in trace
raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1])
```
I assume this is because the `example` format is wrong, but how can I get the correct one?
Based on the comments below, my new code is:
```
import torch
import torchvision
from torchvision import models
model = models.segmentation.deeplabv3_resnet101(pretrained=True)
model.eval()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")
```
And i now get the error:
```
File "convert.py", line 15, in <module>
traced_script_module = torch.jit.trace(model, example)
File "C:\Python37\lib\site-packages\torch\jit\__init__.py", line 636, in trace
var_lookup_fn, _force_outplace)
RuntimeError: Only tensors and (possibly nested) tuples of tensors are supported as inputs or outputs of traced functions (toIValue at C:\a\w\1\s\windows\pytorch\torch/csrc/jit/pybind_utils.h:91)
(no backtrace available)
``` | 2019/07/05 | [
"https://Stackoverflow.com/questions/56902458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4402282/"
] | (from pytorch forums)
trace only supports modules that return a tensor or a tuple of tensors as output.
According to the deeplabv3 implementation, its output is an OrderedDict. That is a problem.
To solve this, make a wrapper module
```
class wrapper(torch.nn.Module):
def __init__(self, model):
super(wrapper, self).__init__()
self.model = model
def forward(self, input):
results = []
output = self.model(input)
for k, v in output.items():
results.append(v)
return tuple(results)
model = wrapper(deeplap_model)
#trace...
```
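A hedged sketch of the remaining tracing step (an editorial addition, not part of the original answer; it assumes `model.eval()` has already been called and reuses the names from the question's code):
```
wrapped_model = wrapper(model)   # "model" is the deeplabv3 network from the question
traced_script_module = torch.jit.trace(wrapped_model, example)
traced_script_module.save("model.pt")
```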
Has my model saving out. | Your problem originates in the BatchNorm layer. If it requires *more than one value per channel*, then your model is in training mode. Could you invoke <https://pytorch.org/cppdocs/api/classtorch_1_1nn_1_1_module.html#_CPPv4N5torch2nn6Module4evalEv> on the model and see if there's an improvement?
Otherwise you could also try to generate random data with more than one instance in a batch, i.e. `example = torch.rand(5, 3, 224, 224)`.
Furthermore, you should take care to properly normalise your data, however, this isn't causing the error here. |
14,425,833 | What I'm trying to do seems rather simple, but I can't find a way to do it.
Imagine somebody sends you a link for a dropbox folder. You can go to that URL and see all the files in the folder.
I'm trying to write a script in either python, php, or javascript to get all the download links in that folder from that URL.
How would I do that? Without logging in/authenticating into the account of the user who sent the link in the first place. | 2013/01/20 | [
"https://Stackoverflow.com/questions/14425833",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/706798/"
] | In absence of suffixes, sufficiently small numbers have `int` or `double` types
```
a = 42; /* 42 has type int */
b = 42.0; /* 42.0 has type double */
```
You can use suffixes to specify the type of the literal
```
c = 42U; /* unsigned int */
d = 42.0f; /* float */
e = 42.0L; /* long double */
f = 42ULL; /* unsigned long long (in C99; C89 didn't have long long) */
``` | >
> Will I need to cast one of the operands to (float) to make this
> condition true?
>
>
>
Yes, because integral literals are of type `int` and a division between two `int` types returns also an `int`, meaning that the fraction is omitted.
>
> Has the situation now changed, because the compiler notices one of
> the operands has to be float?
>
>
>
Yes, because one of the operands of the division is now a `double` and therefore the division returns a `double` to maintain precision. (`float` literals are followed by an `f`, as in `1.0f`, but this doesn't matter much regarding your question.) |
14,425,833 | What I'm trying to do seems rather simple, but I can't find a way to do it.
Imagine somebody sends you a link for a dropbox folder. You can go to that URL and see all the files in the folder.
I'm trying to write a script in either python, php, or javascript to get all the download links in that folder from that URL.
How would I do that? Without logging in/authenticating into the account of the user who sent the link in the first place. | 2013/01/20 | [
"https://Stackoverflow.com/questions/14425833",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/706798/"
] | >
> Will I need to cast one of the operands to (float) to make this
> condition true?
>
>
>
Yes, because integral literals are of type `int` and a division between two `int` types returns also an `int`, meaning that the fraction is omitted.
>
> Has the situation now changed, because the compiler notices one of
> the operands has to be float?
>
>
>
Yes, because one of the operands of the division is now a `double` and therefore the division returns a `double` to maintain precision. (`float` literals are followed by an `f`, as in `1.0f`, but this doesn't matter much regarding your question.) | "1", "3", "0" are all integers.
while "1.0" is a double.
What's different between the two situations is that double/integer is a double while integer/integer is an integer. |
14,425,833 | What I'm trying to do seems rather simple, but I can't find a way to do it.
Imagine somebody sends you a link for a dropbox folder. You can go to that URL and see all the files in the folder.
I'm trying to write a script in either python, php, or javascript to get all the download links in that folder from that URL.
How would I do that? Without logging in/authenticating into the account of the user who sent the link in the first place. | 2013/01/20 | [
"https://Stackoverflow.com/questions/14425833",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/706798/"
] | In absence of suffixes, sufficiently small numbers have `int` or `double` types
```
a = 42; /* 42 has type int */
b = 42.0; /* 42.0 has type double */
```
You can use suffixes to specify the type of the literal
```
c = 42U; /* unsigned int */
d = 42.0f; /* float */
e = 42.0L; /* long double */
f = 42ULL; /* unsigned long long (in C99; C89 didn't have long long) */
``` | "1", "3", "0" are all integers.
while "1.0" is a double.
What's different between the two situations is that double/integer is a double while integer/integer is an integer. |
58,464,713 | `H4sIAAAAAAAAAO1aT3PbSHaHrPHYkj1j73gn2doku3B2N7PJLjz4T0BVqQpFQiQ4BCCBoCjiomoADRIk/mhBUBT5AXJL5ZbkkqocUqVrDvkE+ijzFXJNJXkNSjLHpjyyx95JuWQfRHSjG6/f+/V7v/e6tylqi9qItimKYu5R96Jg4x82qPu1bJoWG9vUZoEGW9QnOPWHFPm3SW01owDvxWgwgcf/2aa2O+NpHFuzFOcPqXt6QP06CESZD8KAYVUkMUJFDhkk4goTYB7xvuf7LO/BuP08O8F5EeHJFvWwwGfFNMcTIsbGQ+r+IYqnmPpPPG+x7tGQDY5asT/XZXh2Omxs6aOTip4ezr2aLusJ9Dd3F1aknAaNQzFoHs7dI2Pq9eKp2zPn/Z7LthMpDmrq9VzthRm7I501Fi34a8xMx+ANxxDNxgG0aaLb6HP9xB6avdbY6O2NjORA6ieHQ6PuwjP08W7c7x3wbuNANPnuwuTtsTvqRuERp8IKtqkHQTQ5idEcdNfOcvwQGh9Rzy7OFRv/YRrBQukYn+KYlqkH0LibzYg26YvzSmeYzeh5Ns1pHOMBSn1Mz6JiSM+GUYGpn8Ibk5MoR3GUDmiU5zDwOYz8S2ivnpzEc7oYRhM6n6aYLjLag246y+lwOsHUn8M7xSyD9gEuhjinUQFvY9qeppFPPYbefRzgSYFiMuPnF+fo4jzumjXLMCzzIfWJiRJM/dnFefjtP/893SPi0J1SFDIDpnUdlv1EOytyVC2KPPKmBTHnfSLLZJN63Gvqjnbc2dftahvmv0c9JGCjPrG7pkYeP62jBA0wIKxEIcVdofDxEoWvI2+Nju9RXxJ15nhG9KOngxwHEU4LsrwQlvPdxTyCRsubREGE0jXCLwV8aO129LpeNVeFpJZCbvzTlZCbt9kqawTepH55cY7taDAsGD+O/DEx22mEZ3SO/egEl9bdvjiXQHhtX69diQ5Qkmr5nFiL3svRICGLvGkJT2t2v+NU28d7drVhaKazbiknV0t5+hb63livWABPqKUBYGwfozy+UbBHmlnX7ON9rWq318n0Qz1R6Ps+UkSfkYUKeKJACBlP9XxG9LGIKkooCzz60T3RwaLv7A3dup30eYM1EjcyR+T/4MxY6KLrHMzden/RXwwWfV47sxwzMhbGot/rz/vQZ478udXQ5mZPE2E8b9SrvOXob++JxFVPRPxJLccI9jiiBzihcRhiv6A9PIzSoPRPBJWVD+mB1iLri4tznvifBsh0G7ez2dCMW3ibjfwKaX/3ITfyDU71M2juZXFIO1lWDG/cLds9q7137FiW01y3WT6CJfz8agl/9Y5LeLBEbpZOpgmJACUWDTSIfPCTxnQyzLMsoX5B4mw2PaGzdBlqT/LolGA9msQoBb9BkB3HBLoEnuQbAFZQRgjwDgHNsFiefAnBR8ji4P0XdBuj0+U38dWE1HPoWU4KUTyOQZFJdorLV5Y76g0q/Rk0vyo6TcS+UbvPjGpDrx0b3U7TtizjuGN199ep+W9ecasb1KNu6sF2HyMvxhu30vRnZCtW6jjE6QTvEI38jmepP4U2PS1wHEcDsNxlhyTBgL+4OJf3wD/THVzQu1k6nezQNg5z5BdRlpaackAtsPsL8Bjh0nQoT0Dd/hClIP5X8EqYgxJgThZ+E4sQS7Dsb8hTgE9wGoAFSvuCaYmW/WmeQ2ikY4Lqpad7AcL8CUHJZVe77GqTLuonMB8HsVahfwvTvmB/89eXtAyaHCIQsJ6E9lEKnhCMCVgY4OA5GXYVoOmm1jY0Z5O672dxllP//V//e2XQn6zE7CaOE7w+Yl8OfMRL/A4vCju8JD2ktrI8GkSpgwbUlzW7uufoZuO4Yev145rVbms1Z4mAz68i/VIK+PJ0Cs2/VjxfDRFSGBZxEP44RWUUUcJMKAU8z8tCIAoyfKSIEuJ+kxPqifo1r37NqbS6I6j0vvHG6PzsJsbwekRGPK9glVOYCs8HjIAEjvF8TmBAPJWVEBsIiP/xIzLw/y7E2HjojiDyOvrc6hmSMTJEt16dA99nTb7PGfxh3B91IW84OINIfGZBXmD09IXZg/jcg1zBOZDMBskxDsdu3XhjRH5XGggRVQJdFeBJYJ/UgQgC7r+XD36xb1sOYMay1zLCu5h4t4S7JXx0S3j+yhI+QEJNGhvT9CSbQe53o5wPOt32frNrf6R6/t2HYVeCRBjhGnbFcexHTq++XKFXtabWcfbbVUdbR7G+XKFYtSFwGdAr0PD3TbO+uKJZL6W5plqCxFUQV1EZT8UqI8oKZhQFcQwbyh5f8X0RS69TLYlQLXlHVOjqWqr1EWyLuyXcLeEHLuF6Ff/4Sm34rVdxn/p56Ut9cMsTvEzz02niQZqfhdSvoA/yjWya+9Dnl3WwgPYuS1uQ8xOPSrwgNMGSzgTiH0ldQKCHMIiUA24sYJGacw2BfwIndXPNuVYF7wIuZp0Jf/+B4stN2bv6sYeXZyvhpa01GuDzO+uCy2rNvY0HECdAw+87tFzX6q8kuQ4snqAqiljxGFlRIYcXQ4HxOCFkFM4LIbCwouwH6wNLZYcT1ufw1G/fhRTeUPV/Ao02bJTrOttN8H5sa/XrKtWdVD9Mqq/eo1SwgcLdPJul3y/X57u21TPvJHu/kr124nWbmposewKreiIji4hlBFypgE9Q4FcQSArrS7zgK7eqqW0ckZoa6nGxL9hD76gaWSONM0fG3BzpgumMWfg90dPduce7J17jsOWSGtuoKxgNfWb2WkNj5CbGwh5ajcPITezIqu/Gfb41cutBDHPM9GgW+c3WqZvEE7crDb1eN7KiluEJrXGXV6deOo7atdYC9YJp/8jm/OSw4x7tceioFZNvBc0W53Zm0UotL7LSSdRli5YTlTVAGDMjc2ZB0569Vd2P12bmyB5bji+ZyeHY7LnD/uiANXt21F+MZ/D7zBy5I7PRnZuLg7kxGojGQhcMJ45MxzizGtrMqGtzM9HmllOFOTTWHWlvrPs9JIEVwlNllVBB8EPpnJ6hOCbn7RUIp+UphT/MIr88VZuc5GhOIuEvlqPbOFxhY8gH2lJymsuMcUwbKAWU5TQ5iOiQweVpWwfHZeHwxU1khTSWr9O1G07IH0FUgzCMShbwiVbtkMr38q4CRf0tAKsUldoiXqxqA/w3qU/nmJytQP+T5bbY6uzb1f5x7btH7FdVyMH6M+ASuPeozfg0pjaXu5qm3oUKPSI7WNXCMPJhJf6c1nW9PEV6yQ7JlYw/TEG78XxpiSLLSrZRKb81ob0488eTF0Rfqo2i1IMBesk8amhKZpgMMT4htglgE9KIzsEQ4CjIgegM5qIJYYB5Z0OckndRjoMX1FOYzQETjaMUT4C9LMXaJd8qDZlkXjnfhB7AxBHw7yKelyQuSuFTpaRRegrmyfISLc9WbUx3mlrV7lyZeouYmnx5PaUh+v10OWKbekwMgNKCFJknm9SDfLlkYpFNahtfq5Kw8U3qUfFyDaUbXef3mA9DajnpBlIrfuxHUk9XSO0upFJrGe3TF
Ua7CznZ+6ezn13R2VKGl0USTgwVXvQYkVcqjIh4CFkyChhflnhPlCsIAtqrXJYruay4Iwo3FEl+8L01XlGCioIYgav4jOBJEqMgFTEVzgtkFYUKV2F/7LMpXhchzkIUChKr3hpDrB33RwOp74y5fo/cQRvGEIvmpmPG5M6a6XTZ/sJN3HorMZLuwupB7KobPPTBX3NoNEhE87/ntshPX78tAq9vQ2sPo5Msvbq7VhuW99VIgAL0jPES4impHMN2Ihl3efCd4gSUR/wTbJAQkuvnl+PX3B+ZLT9AA/7IdiTXSOjbXyEBli3DJoBUSjPrVbu/crSuklsk7Wv5bnOXZKutN5qOCUCnbnGj5F2YXIhDVfUUgeEVjyAQdoSq4gpTkcEFqEhBSJVvh8D/uD0Cq3J7rq68KxWoJ8V9oTV004OplxyybcGOcZOwse6pyR8srHo8Mp1DYHZ9wW3onLlojc3RmDd6gDr+gDP5si02F/0zYzEW+qO+0HfgHYLIhiEYiTuynL2x27DHwKTeiL6tMuh9A+FziRgUR6d4GSjLxxzTl646ACQNv5rQQZblv6cHEGKL8pIkBNQsL0omBVqLsoC4coiLUepHXhRHBcTGZ9e3RiAnAz8NMCM++XnpZn3TcrQd2jLbfbpn2d904CftNDUaQEVqWr7eaVfNOrkEQq09pSVVaRsnKUi0DCOY1ubrq9IrbnV71+p0jjv71Z55WXW2NcOsms6xtXcMnz/W+i+rzlwgsYIcYqaCPJ4RWcgBVFn1GKwIKqtwqsCz/ncdKsd+zV86VO6V4sAdhO8g/MeHMM+rIe+TG+OeyDGiDwktYkWf8VSVq4ghL7BicAfhOwj/P4ZwKCOskvvGCgoURgyQyCicX2FYFPisKEqBLEvfgfBTgPCS1yo7vHKH4TsM/+gYZoNAxaIgAIZ5lRErQcioISsyHhIUlvXEkGPDt8HwVXb2s7fKztbWyhig8zrJdqtLbe/QnQTF5DIl5K8X5xiUZROKTtfaeu2bsrR22eXz7Lf/+u/w/PWkPKirTF5cVt6UHimlDOE1Yu/0q4Kk+/64NKRCymZ0LZsUpGggyCwpMCi1LIuDbJaWhQSOyPtL+MC3//JvdGmcyzTJzRIvwnT7sE1zZK7V81C6B0ZeudGOShnAnkRaUi34Hnu+KeF+QuYmdm1q1Ta8cm1XUVBCWUYKw7EepNyiLzFqgDnGD8Mg9Cuij9Aau3LErtIOJ792fkRR/weTySE2OjYAAA\u003d\u003d
H4sIAAAAAAAAAO1aT3PbSHaHrPHYkj1j73gn2doku3B2N7PJLjz4T0BVqQpFQiQ4BCCBoCjiomoADRIk/mhBUBT5AXJL5ZbkkqocUqVrDvkE+ijzFXJNJXkNSjLHpjyyx95JuWQfRHSjG6/f+/V7v/e6tylqi9qItimKYu5R96Jg4x82qPu1bJoWG9vUZoEGW9QnOPWHFPm3SW01owDvxWgwgcf/2aa2O+NpHFuzFOcPqXt6QP06CESZD8KAYVUkMUJFDhkk4goTYB7xvuf7LO/BuP08O8F5EeHJFvWwwGfFNMcTIsbGQ+r+IYqnmPpPPG+x7tGQDY5asT/XZXh2Omxs6aOTip4ezr2aLusJ9Dd3F1aknAaNQzFoHs7dI2Pq9eKp2zPn/Z7LthMpDmrq9VzthRm7I501Fi34a8xMx+ANxxDNxgG0aaLb6HP9xB6avdbY6O2NjORA6ieHQ6PuwjP08W7c7x3wbuNANPnuwuTtsTvqRuERp8IKtqkHQTQ5idEcdNfOcvwQGh9Rzy7OFRv/YRrBQukYn+KYlqkH0LibzYg26YvzSmeYzeh5Ns1pHOMBSn1Mz6JiSM+GUYGpn8Ibk5MoR3GUDmiU5zDwOYz8S2ivnpzEc7oYRhM6n6aYLjLag246y+lwOsHUn8M7xSyD9gEuhjinUQFvY9qeppFPPYbefRzgSYFiMuPnF+fo4jzumjXLMCzzIfWJiRJM/dnFefjtP/893SPi0J1SFDIDpnUdlv1EOytyVC2KPPKmBTHnfSLLZJN63Gvqjnbc2dftahvmv0c9JGCjPrG7pkYeP62jBA0wIKxEIcVdofDxEoWvI2+Nju9RXxJ15nhG9KOngxwHEU4LsrwQlvPdxTyCRsubREGE0jXCLwV8aO129LpeNVeFpJZCbvzTlZCbt9kqawTepH55cY7taDAsGD+O/DEx22mEZ3SO/egEl9bdvjiXQHhtX69diQ5Qkmr5nFiL3svRICGLvGkJT2t2v+NU28d7drVhaKazbiknV0t5+hb63livWABPqKUBYGwfozy+UbBHmlnX7ON9rWq318n0Qz1R6Ps+UkSfkYUKeKJACBlP9XxG9LGIKkooCzz60T3RwaLv7A3dup30eYM1EjcyR+T/4MxY6KLrHMzden/RXwwWfV47sxwzMhbGot/rz/vQZ478udXQ5mZPE2E8b9SrvOXob++JxFVPRPxJLccI9jiiBzihcRhiv6A9PIzSoPRPBJWVD+mB1iLri4tznvifBsh0G7ez2dCMW3ibjfwKaX/3ITfyDU71M2juZXFIO1lWDG/cLds9q7137FiW01y3WT6CJfz8agl/9Y5LeLBEbpZOpgmJACUWDTSIfPCTxnQyzLMsoX5B4mw2PaGzdBlqT/LolGA9msQoBb9BkB3HBLoEnuQbAFZQRgjwDgHNsFiefAnBR8ji4P0XdBuj0+U38dWE1HPoWU4KUTyOQZFJdorLV5Y76g0q/Rk0vyo6TcS+UbvPjGpDrx0b3U7TtizjuGN199ep+W9ecasb1KNu6sF2HyMvxhu30vRnZCtW6jjE6QTvEI38jmepP4U2PS1wHEcDsNxlhyTBgL+4OJf3wD/THVzQu1k6nezQNg5z5BdRlpaackAtsPsL8Bjh0nQoT0Dd/hClIP5X8EqYgxJgThZ+E4sQS7Dsb8hTgE9wGoAFSvuCaYmW/WmeQ2ikY4Lqpad7AcL8CUHJZVe77GqTLuonMB8HsVahfwvTvmB/89eXtAyaHCIQsJ6E9lEKnhCMCVgY4OA5GXYVoOmm1jY0Z5O672dxllP//V//e2XQn6zE7CaOE7w+Yl8OfMRL/A4vCju8JD2ktrI8GkSpgwbUlzW7uufoZuO4Yev145rVbms1Z4mAz68i/VIK+PJ0Cs2/VjxfDRFSGBZxEP44RWUUUcJMKAU8z8tCIAoyfKSIEuJ+kxPqifo1r37NqbS6I6j0vvHG6PzsJsbwekRGPK9glVOYCs8HjIAEjvF8TmBAPJWVEBsIiP/xIzLw/y7E2HjojiDyOvrc6hmSMTJEt16dA99nTb7PGfxh3B91IW84OINIfGZBXmD09IXZg/jcg1zBOZDMBskxDsdu3XhjRH5XGggRVQJdFeBJYJ/UgQgC7r+XD36xb1sOYMay1zLCu5h4t4S7JXx0S3j+yhI+QEJNGhvT9CSbQe53o5wPOt32frNrf6R6/t2HYVeCRBjhGnbFcexHTq++XKFXtabWcfbbVUdbR7G+XKFYtSFwGdAr0PD3TbO+uKJZL6W5plqCxFUQV1EZT8UqI8oKZhQFcQwbyh5f8X0RS69TLYlQLXlHVOjqWqr1EWyLuyXcLeEHLuF6Ff/4Sm34rVdxn/p56Ut9cMsTvEzz02niQZqfhdSvoA/yjWya+9Dnl3WwgPYuS1uQ8xOPSrwgNMGSzgTiH0ldQKCHMIiUA24sYJGacw2BfwIndXPNuVYF7wIuZp0Jf/+B4stN2bv6sYeXZyvhpa01GuDzO+uCy2rNvY0HECdAw+87tFzX6q8kuQ4snqAqiljxGFlRIYcXQ4HxOCFkFM4LIbCwouwH6wNLZYcT1ufw1G/fhRTeUPV/Ao02bJTrOttN8H5sa/XrKtWdVD9Mqq/eo1SwgcLdPJul3y/X57u21TPvJHu/kr124nWbmposewKreiIji4hlBFypgE9Q4FcQSArrS7zgK7eqqW0ckZoa6nGxL9hD76gaWSONM0fG3BzpgumMWfg90dPduce7J17jsOWSGtuoKxgNfWb2WkNj5CbGwh5ajcPITezIqu/Gfb41cutBDHPM9GgW+c3WqZvEE7crDb1eN7KiluEJrXGXV6deOo7atdYC9YJp/8jm/OSw4x7tceioFZNvBc0W53Zm0UotL7LSSdRli5YTlTVAGDMjc2ZB0569Vd2P12bmyB5bji+ZyeHY7LnD/uiANXt21F+MZ/D7zBy5I7PRnZuLg7kxGojGQhcMJ45MxzizGtrMqGtzM9HmllOFOTTWHWlvrPs9JIEVwlNllVBB8EPpnJ6hOCbn7RUIp+UphT/MIr88VZuc5GhOIuEvlqPbOFxhY8gH2lJymsuMcUwbKAWU5TQ5iOiQweVpWwfHZeHwxU1khTSWr9O1G07IH0FUgzCMShbwiVbtkMr38q4CRf0tAKsUldoiXqxqA/w3qU/nmJytQP+T5bbY6uzb1f5x7btH7FdVyMH6M+ASuPeozfg0pjaXu5qm3oUKPSI7WNXCMPJhJf6c1nW9PEV6yQ7JlYw/TEG78XxpiSLLSrZRKb81ob0488eTF0Rfqo2i1IMBesk8amhKZpgMMT4htglgE9KIzsEQ4CjIgegM5qIJYYB5Z0OckndRjoMX1FOYzQETjaMUT4C9LMXaJd8qDZlkXjnfhB7AxBHw7yKelyQuSuFTpaRRegrmyfISLc9WbUx3mlrV7lyZeouYmnx5PaUh+v10OWKbekwMgNKCFJknm9SDfLlkYpFNahtfq5Kw8U3qUfFyDaUbXef3mA9DajnpBlIrfuxHUk9XSO0upFJrGe3TFUa7CznZ+6ezn13
R2VKGl0USTgwVXvQYkVcqjIh4CFkyChhflnhPlCsIAtqrXJYruay4Iwo3FEl+8L01XlGCioIYgav4jOBJEqMgFTEVzgtkFYUKV2F/7LMpXhchzkIUChKr3hpDrB33RwOp74y5fo/cQRvGEIvmpmPG5M6a6XTZ/sJN3HorMZLuwupB7KobPPTBX3NoNEhE87/ntshPX78tAq9vQ2sPo5Msvbq7VhuW99VIgAL0jPES4impHMN2Ihl3efCd4gSUR/wTbJAQkuvnl+PX3B+ZLT9AA/7IdiTXSOjbXyEBli3DJoBUSjPrVbu/crSuklsk7Wv5bnOXZKutN5qOCUCnbnGj5F2YXIhDVfUUgeEVjyAQdoSq4gpTkcEFqEhBSJVvh8D/uD0Cq3J7rq68KxWoJ8V9oTV004OplxyybcGOcZOwse6pyR8srHo8Mp1DYHZ9wW3onLlojc3RmDd6gDr+gDP5si02F/0zYzEW+qO+0HfgHYLIhiEYiTuynL2x27DHwKTeiL6tMuh9A+FziRgUR6d4GSjLxxzTl646ACQNv5rQQZblv6cHEGKL8pIkBNQsL0omBVqLsoC4coiLUepHXhRHBcTGZ9e3RiAnAz8NMCM++XnpZn3TcrQd2jLbfbpn2d904CftNDUaQEVqWr7eaVfNOrkEQq09pSVVaRsnKUi0DCOY1ubrq9IrbnV71+p0jjv71Z55WXW2NcOsms6xtXcMnz/W+i+rzlwgsYIcYqaCPJ4RWcgBVFn1GKwIKqtwqsCz/ncdKsd+zV86VO6V4sAdhO8g/MeHMM+rIe+TG+OeyDGiDwktYkWf8VSVq4ghL7BicAfhOwj/P4ZwKCOskvvGCgoURgyQyCicX2FYFPisKEqBLEvfgfBTgPCS1yo7vHKH4TsM/+gYZoNAxaIgAIZ5lRErQcioISsyHhIUlvXEkGPDt8HwVXb2s7fKztbWyhig8zrJdqtLbe/QnQTF5DIl5K8X5xiUZROKTtfaeu2bsrR22eXz7Lf/+u/w/PWkPKirTF5cVt6UHimlDOE1Yu/0q4Kk+/64NKRCymZ0LZsUpGggyCwpMCi1LIuDbJaWhQSOyPtL+MC3//JvdGmcyzTJzRIvwnT7sE1zZK7V81C6B0ZeudGOShnAnkRaUi34Hnu+KeF+QuYmdm1q1Ta8cm1XUVBCWUYKw7EepNyiLzFqgDnGD8Mg9Cuij9Aau3LErtIOJ792fkRR/weTySE2OjYAAA==`
When I try to store it in a variable in python, it doesn't work because it contains single and double quotes, so I get an EOL error. How can I fix this? | 2019/10/19 | [
"https://Stackoverflow.com/questions/58464713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11989704/"
] | There are no quotes in your string; it's simply made up of two identical base64 encoded strings, each of which can be decoded fine after a small fix: it appears that what has happened is that the trailing `==` in the first string have become `\u003d\u003d`. Replace `\u003d\u003d` with `==` and use the first string, or just use the second string as is, and you're good.
I.e.,
```py
s1 = 'H4sIAAAAAAAAAO1aT3PbSHaHrPHYkj1j73gn2doku3B2N7PJLjz4T0BVqQpFQiQ4BCCBoCjiomoADRIk/mhBUBT5AXJL5ZbkkqocUqVrDvkE+ijzFXJNJXkNSjLHpjyyx95JuWQfRHSjG6/f+/V7v/e6tylqi9qItimKYu5R96Jg4x82qPu1bJoWG9vUZoEGW9QnOPWHFPm3SW01owDvxWgwgcf/2aa2O+NpHFuzFOcPqXt6QP06CESZD8KAYVUkMUJFDhkk4goTYB7xvuf7LO/BuP08O8F5EeHJFvWwwGfFNMcTIsbGQ+r+IYqnmPpPPG+x7tGQDY5asT/XZXh2Omxs6aOTip4ezr2aLusJ9Dd3F1aknAaNQzFoHs7dI2Pq9eKp2zPn/Z7LthMpDmrq9VzthRm7I501Fi34a8xMx+ANxxDNxgG0aaLb6HP9xB6avdbY6O2NjORA6ieHQ6PuwjP08W7c7x3wbuNANPnuwuTtsTvqRuERp8IKtqkHQTQ5idEcdNfOcvwQGh9Rzy7OFRv/YRrBQukYn+KYlqkH0LibzYg26YvzSmeYzeh5Ns1pHOMBSn1Mz6JiSM+GUYGpn8Ibk5MoR3GUDmiU5zDwOYz8S2ivnpzEc7oYRhM6n6aYLjLag246y+lwOsHUn8M7xSyD9gEuhjinUQFvY9qeppFPPYbefRzgSYFiMuPnF+fo4jzumjXLMCzzIfWJiRJM/dnFefjtP/893SPi0J1SFDIDpnUdlv1EOytyVC2KPPKmBTHnfSLLZJN63Gvqjnbc2dftahvmv0c9JGCjPrG7pkYeP62jBA0wIKxEIcVdofDxEoWvI2+Nju9RXxJ15nhG9KOngxwHEU4LsrwQlvPdxTyCRsubREGE0jXCLwV8aO129LpeNVeFpJZCbvzTlZCbt9kqawTepH55cY7taDAsGD+O/DEx22mEZ3SO/egEl9bdvjiXQHhtX69diQ5Qkmr5nFiL3svRICGLvGkJT2t2v+NU28d7drVhaKazbiknV0t5+hb63livWABPqKUBYGwfozy+UbBHmlnX7ON9rWq318n0Qz1R6Ps+UkSfkYUKeKJACBlP9XxG9LGIKkooCzz60T3RwaLv7A3dup30eYM1EjcyR+T/4MxY6KLrHMzden/RXwwWfV47sxwzMhbGot/rz/vQZ478udXQ5mZPE2E8b9SrvOXob++JxFVPRPxJLccI9jiiBzihcRhiv6A9PIzSoPRPBJWVD+mB1iLri4tznvifBsh0G7ez2dCMW3ibjfwKaX/3ITfyDU71M2juZXFIO1lWDG/cLds9q7137FiW01y3WT6CJfz8agl/9Y5LeLBEbpZOpgmJACUWDTSIfPCTxnQyzLMsoX5B4mw2PaGzdBlqT/LolGA9msQoBb9BkB3HBLoEnuQbAFZQRgjwDgHNsFiefAnBR8ji4P0XdBuj0+U38dWE1HPoWU4KUTyOQZFJdorLV5Y76g0q/Rk0vyo6TcS+UbvPjGpDrx0b3U7TtizjuGN199ep+W9ecasb1KNu6sF2HyMvxhu30vRnZCtW6jjE6QTvEI38jmepP4U2PS1wHEcDsNxlhyTBgL+4OJf3wD/THVzQu1k6nezQNg5z5BdRlpaackAtsPsL8Bjh0nQoT0Dd/hClIP5X8EqYgxJgThZ+E4sQS7Dsb8hTgE9wGoAFSvuCaYmW/WmeQ2ikY4Lqpad7AcL8CUHJZVe77GqTLuonMB8HsVahfwvTvmB/89eXtAyaHCIQsJ6E9lEKnhCMCVgY4OA5GXYVoOmm1jY0Z5O672dxllP//V//e2XQn6zE7CaOE7w+Yl8OfMRL/A4vCju8JD2ktrI8GkSpgwbUlzW7uufoZuO4Yev145rVbms1Z4mAz68i/VIK+PJ0Cs2/VjxfDRFSGBZxEP44RWUUUcJMKAU8z8tCIAoyfKSIEuJ+kxPqifo1r37NqbS6I6j0vvHG6PzsJsbwekRGPK9glVOYCs8HjIAEjvF8TmBAPJWVEBsIiP/xIzLw/y7E2HjojiDyOvrc6hmSMTJEt16dA99nTb7PGfxh3B91IW84OINIfGZBXmD09IXZg/jcg1zBOZDMBskxDsdu3XhjRH5XGggRVQJdFeBJYJ/UgQgC7r+XD36xb1sOYMay1zLCu5h4t4S7JXx0S3j+yhI+QEJNGhvT9CSbQe53o5wPOt32frNrf6R6/t2HYVeCRBjhGnbFcexHTq++XKFXtabWcfbbVUdbR7G+XKFYtSFwGdAr0PD3TbO+uKJZL6W5plqCxFUQV1EZT8UqI8oKZhQFcQwbyh5f8X0RS69TLYlQLXlHVOjqWqr1EWyLuyXcLeEHLuF6Ff/4Sm34rVdxn/p56Ut9cMsTvEzz02niQZqfhdSvoA/yjWya+9Dnl3WwgPYuS1uQ8xOPSrwgNMGSzgTiH0ldQKCHMIiUA24sYJGacw2BfwIndXPNuVYF7wIuZp0Jf/+B4stN2bv6sYeXZyvhpa01GuDzO+uCy2rNvY0HECdAw+87tFzX6q8kuQ4snqAqiljxGFlRIYcXQ4HxOCFkFM4LIbCwouwH6wNLZYcT1ufw1G/fhRTeUPV/Ao02bJTrOttN8H5sa/XrKtWdVD9Mqq/eo1SwgcLdPJul3y/X57u21TPvJHu/kr124nWbmposewKreiIji4hlBFypgE9Q4FcQSArrS7zgK7eqqW0ckZoa6nGxL9hD76gaWSONM0fG3BzpgumMWfg90dPduce7J17jsOWSGtuoKxgNfWb2WkNj5CbGwh5ajcPITezIqu/Gfb41cutBDHPM9GgW+c3WqZvEE7crDb1eN7KiluEJrXGXV6deOo7atdYC9YJp/8jm/OSw4x7tceioFZNvBc0W53Zm0UotL7LSSdRli5YTlTVAGDMjc2ZB0569Vd2P12bmyB5bji+ZyeHY7LnD/uiANXt21F+MZ/D7zBy5I7PRnZuLg7kxGojGQhcMJ45MxzizGtrMqGtzM9HmllOFOTTWHWlvrPs9JIEVwlNllVBB8EPpnJ6hOCbn7RUIp+UphT/MIr88VZuc5GhOIuEvlqPbOFxhY8gH2lJymsuMcUwbKAWU5TQ5iOiQweVpWwfHZeHwxU1khTSWr9O1G07IH0FUgzCMShbwiVbtkMr38q4CRf0tAKsUldoiXqxqA/w3qU/nmJytQP+T5bbY6uzb1f5x7btH7FdVyMH6M+ASuPeozfg0pjaXu5qm3oUKPSI7WNXCMPJhJf6c1nW9PEV6yQ7JlYw/TEG78XxpiSLLSrZRKb81ob0488eTF0Rfqo2i1IMBesk8amhKZpgMMT4htglgE9KIzsEQ4CjIgegM5qIJYYB5Z0OckndRjoMX1FOYzQETjaMUT4C9LMXaJd8qDZlkXjnfhB7AxBHw7yKelyQuSuFTpaRRegrmyfISLc9WbUx3mlrV7lyZeouYmnx5PaUh+v10OWKbekwMgNKCFJknm9SDfLlkYpFNahtfq5Kw8U3qUfFyDaUbXef3mA9DajnpBlIrfuxHUk9XSO0upFJrGe3TFUa7CznZ+
6ezn13R2VKGl0USTgwVXvQYkVcqjIh4CFkyChhflnhPlCsIAtqrXJYruay4Iwo3FEl+8L01XlGCioIYgav4jOBJEqMgFTEVzgtkFYUKV2F/7LMpXhchzkIUChKr3hpDrB33RwOp74y5fo/cQRvGEIvmpmPG5M6a6XTZ/sJN3HorMZLuwupB7KobPPTBX3NoNEhE87/ntshPX78tAq9vQ2sPo5Msvbq7VhuW99VIgAL0jPES4impHMN2Ihl3efCd4gSUR/wTbJAQkuvnl+PX3B+ZLT9AA/7IdiTXSOjbXyEBli3DJoBUSjPrVbu/crSuklsk7Wv5bnOXZKutN5qOCUCnbnGj5F2YXIhDVfUUgeEVjyAQdoSq4gpTkcEFqEhBSJVvh8D/uD0Cq3J7rq68KxWoJ8V9oTV004OplxyybcGOcZOwse6pyR8srHo8Mp1DYHZ9wW3onLlojc3RmDd6gDr+gDP5si02F/0zYzEW+qO+0HfgHYLIhiEYiTuynL2x27DHwKTeiL6tMuh9A+FziRgUR6d4GSjLxxzTl646ACQNv5rQQZblv6cHEGKL8pIkBNQsL0omBVqLsoC4coiLUepHXhRHBcTGZ9e3RiAnAz8NMCM++XnpZn3TcrQd2jLbfbpn2d904CftNDUaQEVqWr7eaVfNOrkEQq09pSVVaRsnKUi0DCOY1ubrq9IrbnV71+p0jjv71Z55WXW2NcOsms6xtXcMnz/W+i+rzlwgsYIcYqaCPJ4RWcgBVFn1GKwIKqtwqsCz/ncdKsd+zV86VO6V4sAdhO8g/MeHMM+rIe+TG+OeyDGiDwktYkWf8VSVq4ghL7BicAfhOwj/P4ZwKCOskvvGCgoURgyQyCicX2FYFPisKEqBLEvfgfBTgPCS1yo7vHKH4TsM/+gYZoNAxaIgAIZ5lRErQcioISsyHhIUlvXEkGPDt8HwVXb2s7fKztbWyhig8zrJdqtLbe/QnQTF5DIl5K8X5xiUZROKTtfaeu2bsrR22eXz7Lf/+u/w/PWkPKirTF5cVt6UHimlDOE1Yu/0q4Kk+/64NKRCymZ0LZsUpGggyCwpMCi1LIuDbJaWhQSOyPtL+MC3//JvdGmcyzTJzRIvwnT7sE1zZK7V81C6B0ZeudGOShnAnkRaUi34Hnu+KeF+QuYmdm1q1Ta8cm1XUVBCWUYKw7EepNyiLzFqgDnGD8Mg9Cuij9Aau3LErtIOJ792fkRR/weTySE2OjYAAA=='
s2 = 'H4sIAAAAAAAAAO1aT3PbSHaHrPHYkj1j73gn2doku3B2N7PJLjz4T0BVqQpFQiQ4BCCBoCjiomoADRIk/mhBUBT5AXJL5ZbkkqocUqVrDvkE+ijzFXJNJXkNSjLHpjyyx95JuWQfRHSjG6/f+/V7v/e6tylqi9qItimKYu5R96Jg4x82qPu1bJoWG9vUZoEGW9QnOPWHFPm3SW01owDvxWgwgcf/2aa2O+NpHFuzFOcPqXt6QP06CESZD8KAYVUkMUJFDhkk4goTYB7xvuf7LO/BuP08O8F5EeHJFvWwwGfFNMcTIsbGQ+r+IYqnmPpPPG+x7tGQDY5asT/XZXh2Omxs6aOTip4ezr2aLusJ9Dd3F1aknAaNQzFoHs7dI2Pq9eKp2zPn/Z7LthMpDmrq9VzthRm7I501Fi34a8xMx+ANxxDNxgG0aaLb6HP9xB6avdbY6O2NjORA6ieHQ6PuwjP08W7c7x3wbuNANPnuwuTtsTvqRuERp8IKtqkHQTQ5idEcdNfOcvwQGh9Rzy7OFRv/YRrBQukYn+KYlqkH0LibzYg26YvzSmeYzeh5Ns1pHOMBSn1Mz6JiSM+GUYGpn8Ibk5MoR3GUDmiU5zDwOYz8S2ivnpzEc7oYRhM6n6aYLjLag246y+lwOsHUn8M7xSyD9gEuhjinUQFvY9qeppFPPYbefRzgSYFiMuPnF+fo4jzumjXLMCzzIfWJiRJM/dnFefjtP/893SPi0J1SFDIDpnUdlv1EOytyVC2KPPKmBTHnfSLLZJN63Gvqjnbc2dftahvmv0c9JGCjPrG7pkYeP62jBA0wIKxEIcVdofDxEoWvI2+Nju9RXxJ15nhG9KOngxwHEU4LsrwQlvPdxTyCRsubREGE0jXCLwV8aO129LpeNVeFpJZCbvzTlZCbt9kqawTepH55cY7taDAsGD+O/DEx22mEZ3SO/egEl9bdvjiXQHhtX69diQ5Qkmr5nFiL3svRICGLvGkJT2t2v+NU28d7drVhaKazbiknV0t5+hb63livWABPqKUBYGwfozy+UbBHmlnX7ON9rWq318n0Qz1R6Ps+UkSfkYUKeKJACBlP9XxG9LGIKkooCzz60T3RwaLv7A3dup30eYM1EjcyR+T/4MxY6KLrHMzden/RXwwWfV47sxwzMhbGot/rz/vQZ478udXQ5mZPE2E8b9SrvOXob++JxFVPRPxJLccI9jiiBzihcRhiv6A9PIzSoPRPBJWVD+mB1iLri4tznvifBsh0G7ez2dCMW3ibjfwKaX/3ITfyDU71M2juZXFIO1lWDG/cLds9q7137FiW01y3WT6CJfz8agl/9Y5LeLBEbpZOpgmJACUWDTSIfPCTxnQyzLMsoX5B4mw2PaGzdBlqT/LolGA9msQoBb9BkB3HBLoEnuQbAFZQRgjwDgHNsFiefAnBR8ji4P0XdBuj0+U38dWE1HPoWU4KUTyOQZFJdorLV5Y76g0q/Rk0vyo6TcS+UbvPjGpDrx0b3U7TtizjuGN199ep+W9ecasb1KNu6sF2HyMvxhu30vRnZCtW6jjE6QTvEI38jmepP4U2PS1wHEcDsNxlhyTBgL+4OJf3wD/THVzQu1k6nezQNg5z5BdRlpaackAtsPsL8Bjh0nQoT0Dd/hClIP5X8EqYgxJgThZ+E4sQS7Dsb8hTgE9wGoAFSvuCaYmW/WmeQ2ikY4Lqpad7AcL8CUHJZVe77GqTLuonMB8HsVahfwvTvmB/89eXtAyaHCIQsJ6E9lEKnhCMCVgY4OA5GXYVoOmm1jY0Z5O672dxllP//V//e2XQn6zE7CaOE7w+Yl8OfMRL/A4vCju8JD2ktrI8GkSpgwbUlzW7uufoZuO4Yev145rVbms1Z4mAz68i/VIK+PJ0Cs2/VjxfDRFSGBZxEP44RWUUUcJMKAU8z8tCIAoyfKSIEuJ+kxPqifo1r37NqbS6I6j0vvHG6PzsJsbwekRGPK9glVOYCs8HjIAEjvF8TmBAPJWVEBsIiP/xIzLw/y7E2HjojiDyOvrc6hmSMTJEt16dA99nTb7PGfxh3B91IW84OINIfGZBXmD09IXZg/jcg1zBOZDMBskxDsdu3XhjRH5XGggRVQJdFeBJYJ/UgQgC7r+XD36xb1sOYMay1zLCu5h4t4S7JXx0S3j+yhI+QEJNGhvT9CSbQe53o5wPOt32frNrf6R6/t2HYVeCRBjhGnbFcexHTq++XKFXtabWcfbbVUdbR7G+XKFYtSFwGdAr0PD3TbO+uKJZL6W5plqCxFUQV1EZT8UqI8oKZhQFcQwbyh5f8X0RS69TLYlQLXlHVOjqWqr1EWyLuyXcLeEHLuF6Ff/4Sm34rVdxn/p56Ut9cMsTvEzz02niQZqfhdSvoA/yjWya+9Dnl3WwgPYuS1uQ8xOPSrwgNMGSzgTiH0ldQKCHMIiUA24sYJGacw2BfwIndXPNuVYF7wIuZp0Jf/+B4stN2bv6sYeXZyvhpa01GuDzO+uCy2rNvY0HECdAw+87tFzX6q8kuQ4snqAqiljxGFlRIYcXQ4HxOCFkFM4LIbCwouwH6wNLZYcT1ufw1G/fhRTeUPV/Ao02bJTrOttN8H5sa/XrKtWdVD9Mqq/eo1SwgcLdPJul3y/X57u21TPvJHu/kr124nWbmposewKreiIji4hlBFypgE9Q4FcQSArrS7zgK7eqqW0ckZoa6nGxL9hD76gaWSONM0fG3BzpgumMWfg90dPduce7J17jsOWSGtuoKxgNfWb2WkNj5CbGwh5ajcPITezIqu/Gfb41cutBDHPM9GgW+c3WqZvEE7crDb1eN7KiluEJrXGXV6deOo7atdYC9YJp/8jm/OSw4x7tceioFZNvBc0W53Zm0UotL7LSSdRli5YTlTVAGDMjc2ZB0569Vd2P12bmyB5bji+ZyeHY7LnD/uiANXt21F+MZ/D7zBy5I7PRnZuLg7kxGojGQhcMJ45MxzizGtrMqGtzM9HmllOFOTTWHWlvrPs9JIEVwlNllVBB8EPpnJ6hOCbn7RUIp+UphT/MIr88VZuc5GhOIuEvlqPbOFxhY8gH2lJymsuMcUwbKAWU5TQ5iOiQweVpWwfHZeHwxU1khTSWr9O1G07IH0FUgzCMShbwiVbtkMr38q4CRf0tAKsUldoiXqxqA/w3qU/nmJytQP+T5bbY6uzb1f5x7btH7FdVyMH6M+ASuPeozfg0pjaXu5qm3oUKPSI7WNXCMPJhJf6c1nW9PEV6yQ7JlYw/TEG78XxpiSLLSrZRKb81ob0488eTF0Rfqo2i1IMBesk8amhKZpgMMT4htglgE9KIzsEQ4CjIgegM5qIJYYB5Z0OckndRjoMX1FOYzQETjaMUT4C9LMXaJd8qDZlkXjnfhB7AxBHw7yKelyQuSuFTpaRRegrmyfISLc9WbUx3mlrV7lyZeouYmnx5PaUh+v10OWKbekwMgNKCFJknm9SDfLlkYpFNahtfq5Kw8U3qUfFyDaUbXef3mA9DajnpBlIrfuxHUk9XSO0upFJrGe3TFUa7CznZ+
6ezn13R2VKGl0USTgwVXvQYkVcqjIh4CFkyChhflnhPlCsIAtqrXJYruay4Iwo3FEl+8L01XlGCioIYgav4jOBJEqMgFTEVzgtkFYUKV2F/7LMpXhchzkIUChKr3hpDrB33RwOp74y5fo/cQRvGEIvmpmPG5M6a6XTZ/sJN3HorMZLuwupB7KobPPTBX3NoNEhE87/ntshPX78tAq9vQ2sPo5Msvbq7VhuW99VIgAL0jPES4impHMN2Ihl3efCd4gSUR/wTbJAQkuvnl+PX3B+ZLT9AA/7IdiTXSOjbXyEBli3DJoBUSjPrVbu/crSuklsk7Wv5bnOXZKutN5qOCUCnbnGj5F2YXIhDVfUUgeEVjyAQdoSq4gpTkcEFqEhBSJVvh8D/uD0Cq3J7rq68KxWoJ8V9oTV004OplxyybcGOcZOwse6pyR8srHo8Mp1DYHZ9wW3onLlojc3RmDd6gDr+gDP5si02F/0zYzEW+qO+0HfgHYLIhiEYiTuynL2x27DHwKTeiL6tMuh9A+FziRgUR6d4GSjLxxzTl646ACQNv5rQQZblv6cHEGKL8pIkBNQsL0omBVqLsoC4coiLUepHXhRHBcTGZ9e3RiAnAz8NMCM++XnpZn3TcrQd2jLbfbpn2d904CftNDUaQEVqWr7eaVfNOrkEQq09pSVVaRsnKUi0DCOY1ubrq9IrbnV71+p0jjv71Z55WXW2NcOsms6xtXcMnz/W+i+rzlwgsYIcYqaCPJ4RWcgBVFn1GKwIKqtwqsCz/ncdKsd+zV86VO6V4sAdhO8g/MeHMM+rIe+TG+OeyDGiDwktYkWf8VSVq4ghL7BicAfhOwj/P4ZwKCOskvvGCgoURgyQyCicX2FYFPisKEqBLEvfgfBTgPCS1yo7vHKH4TsM/+gYZoNAxaIgAIZ5lRErQcioISsyHhIUlvXEkGPDt8HwVXb2s7fKztbWyhig8zrJdqtLbe/QnQTF5DIl5K8X5xiUZROKTtfaeu2bsrR22eXz7Lf/+u/w/PWkPKirTF5cVt6UHimlDOE1Yu/0q4Kk+/64NKRCymZ0LZsUpGggyCwpMCi1LIuDbJaWhQSOyPtL+MC3//JvdGmcyzTJzRIvwnT7sE1zZK7V81C6B0ZeudGOShnAnkRaUi34Hnu+KeF+QuYmdm1q1Ta8cm1XUVBCWUYKw7EepNyiLzFqgDnGD8Mg9Cuij9Aau3LErtIOJ792fkRR/weTySE2OjYAAA=='
print(s1 == s2) # True
print(base64.b64decode(s1))
``` | You can use triple quotes like so:
```
my_var = """My text with quotes ' " is stored in a variable this way"""
```
You could also use ''' instead of """ if you prefer.
```
my_var = '''My text with quotes ' " is also stored in a variable this way'''
```
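As a side note, triple-quoted strings can also span several lines; a minimal sketch (the variable name is just an example):

```
my_var = """Text with quotes ' " that
continues onto a second line"""
print(my_var)
```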
See: <https://docs.python.org/3/tutorial/introduction.html#strings> |
28,654,590 | Our security team asked me not to submit `plain text` passwords on my login page, even though we use HTTPS. So I thought I needed to do client-side encryption before submitting; I searched for a solution and decided to implement [jCryption](http://www.jcryption.org/).
However, the example presented there is PHP/Python; after a bit of googling I found [this](https://github.com/JakeJP/jCryption.NET). I did whatever was explained in the link, but I don't know how to retrieve the form data that the user submitted in the form.

I only see a `key` returned in my login postback action, and the `LoginModel` that should contain the username and password is null.
**Login.cshtml**
```
@model Portal.Model.Membership.LoginModel
@using jCryption
@{
Layout = null;
jCryption.HandleRequest(Request);
}
<html>
<head>
<script src="~/Assets/global/plugins/jquery.min.js" type="text/javascript"></script>
<script src="~/Assets/global/plugins/jquery-migrate.min.js" type="text/javascript"></script>
@jCryption.RenderScriptFor("form", src: @Url.Content("~/Assets/admin/scripts/jquery.jcryption.3.1.0.js"))
</head>
<body>
@using (Html.BeginForm(null, null, FormMethod.Post, new { @autocomplete = "off" }))
{
<div class="form-body">
<div class="form-group">
@Html.LabelFor(x => x.Username, new { @class = "placeholder" })
@Html.TextBoxFor(x => x.Username, new { @class = "form-input", autocomplete = "off" })
<span></span>
</div>
<div class="form-group">
@Html.LabelFor(x => x.Password, new { @class = "placeholder" })
@Html.PasswordFor(x => x.Password, new { @class = "form-input", autocomplete = "off" })
<span></span>
</div>
</div>
<div class="form-group">
<button id="btnLogin">Login</button>
</div>
}
</body>
<!-- END BODY -->
</html>
```
**Update**
I put a breakpoint on the login post action and it is hit twice, once with `key` and once with `jCryption`:
 | 2015/02/22 | [
"https://Stackoverflow.com/questions/28654590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/735839/"
] | Your understanding about the second function is correct.
You may want to store actual nodes in the `edges` slot instead of node numbers. Instead of binding local variables to entries of the node list for the two nodes that you want to connect, you can then bind them to the nodes themselves, which also reads better than the repeated invocations of `nth` inside the `setf` forms. You could then operate on the nodes directly when you access `edges`, instead of having to perform an extra lookup.
To improve the style of the first function, I'd suggest two things:
Use `push` instead of `(setf ... (cons thing ...))`
===================================================
`slot-value` is an accessor, and as such, it can be used as a place. `setf` is one way to change the value of a place, but Common Lisp defines other operations on places. The pattern that you are using here is implemented in the macro [`push`](http://www.lispworks.com/documentation/lw51/CLHS/Body/m_push.htm). By using it, you can simplify your expressions significantly:
```
(push end-node (slot-value (nth begin-node node-list) 'edges))
```
Define an accessor for edges instead of using `slot-value`
==========================================================
`slot-value` should be used rarely, and as a low-level mechanism, because it is verbose and less flexible than using a named accessor. `slot-value` also puts the important part of the access, the name of the slot, to the end of the expression, which often makes the code harder to read. In your case, I would name the accessor `edges` in the class definition:
```
(edges :initform nil :accessor edges)
```
That would make your first version more readable:
```
(push end-node (edges (nth begin-node node-list)))
``` | Instead of:
```
(setf (slot-value (nth begin-node node-list) 'edges)
(cons end-node (slot-value (nth begin-node node-list) 'edges)))
```
You can write:
```
(push end-node (slot-value (nth begin-node node-list) 'edges))
```
Why is the following not working as expected?
```
(let ((begin-node-lst (slot-value (nth begin-node node-list) 'edges))
(end-node-lst (slot-value (nth end-node node-list) 'edges)))
(setf begin-node-lst (cons end-node begin-node-lst))
(setf end-node-lst (cons begin-node end-node-lst)))
```
You write: *attempts to solve the double lookup by binding the location*.
That does not work. You cannot bind locations; you can only bind values. `LET` binds the values of forms to variables.
In Common Lisp there is the idea of a *place*. Many side-effect macros work with places: `SETF` and `PUSH` are examples. A place is only the source form of the accessing code, not a real first-class object.
Examples for places:
* `foo` as a variable
* `(aref foo 10)` as an array access
* `(slot-value object 'foo)` as a slot access
* `(slot-value (find-object *somewhere* 'foo) 'bar)` as a slot access...
Macros like `SETF` determine at macroexpansion time, based on the source of the accessing form, what setting form to generate. They can't look at things like bindings or where the bound value originally came from.
In this case one would usually retrieve the object (typically CLOS object or structure) from the data structure, keep a reference to the object and then change the slot value using `SLOT-VALUE` or `WITH-SLOTS`. Alternatively use an accessor.
```
(setf (slot-value person 'name) "Eva Lu Ator")
(setf (slot-value person 'group) :development)
```
would be
```
(with-slots (name group) person
(setf name "Eva Lu Ator"
group :development))
```
**General Advice**:
Also note the confusion in your function about what a `node` is. Is it an object of type `node` or is it a number? If it is a number, I would name the variable `node-number`.
Avoid `NTH` and lists. If you need random access, use vectors.
Either use node objects directly (and not numbers for those) or use symbols for them: `node-123` and link the node symbol to the node object in some registry. You might want to use numbers only in some cases...
I would write code like this:
```
(defun connect-nodes (node-vector)
"Given a NODE-VECTOR, repeats for +MAX-EDGES+ amount of times to connect
nodes via randomly generated edges."
(loop repeat +max-edges+
for begin-node-number = (random +max-rooms+) and
end-node-number = (random +max-rooms+)
when (/= begin-node-number end-node-number) do
(let ((begin-node (aref node-vector begin-node-number))
                 (end-node   (aref node-vector end-node-number)))
(push end-node (slot-value begin-node 'edges))
(push begin-node (slot-value end-node 'edges))))
node-vector)
``` |
40,712,568 | This python script returns a value of `90.0`:
```
import itertools
a=[12,345,1423,65,234]
b=[234,12,34,1,1,1]
c=[1,2,3,4]
def TestFunction (a, b, c):
result = a + b/c
return result
Params=itertools.product(a, b, c)
x = 2
print(TestFunction(*list(Params)[x]))
```
However, I would like to evaluate my function for a range of x as:
```
for x in range (5):
print(TestFunction(*list(Params)[x]))
```
I expected this to return a sequence of values: `246.0`, `129.0`, `90.0`, `70.5`, `14.0`; however, I get:
>
> "IndexError: list index out of range."
>
>
>
Why does the function evaluate when `x` is defined explicitly but not when it is an iterator? | 2016/11/21 | [
"https://Stackoverflow.com/questions/40712568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7098896/"
] | ```
for x in range (5):
print(TestFunction(*list(Params)[x]))
```
`Params` is an iterator. The first time through the loop, you consume it entirely by converting it to a list. Therefore on the second iteration there's nothing in it, and converting it to a list yields `[]`, the empty list, and trying to get index 1 of that won't work.
Instead, convert the iterator to a list *outside* the loop.
```
params = list(Params)
for x in range(5):
print(TestFunction(*params[x]))
``` | Because calling `list()` on the iterator exhausts the iterator. Thus it can be called once only:
```
>>> Params=itertools.product(a, b, c)
>>> Params
<itertools.product object at 0x7f5ed3da5870>
>>> list(Params)
[(12, 234, 1), (12, 234, 2)..., (234, 1, 4)]
>>> list(Params)
[]
```
You can see that the second call to `list(Params)` evaluates to an empty list, and attempting to subscript that raises an `IndexError`.
You could store the values from the iterator in another variable:
```
l = list(Params)
```
and then access it as you want, accessing arbitrary items by subscript.
If you want to access the items sequentially, as per your for loop example, just call `next()` on the iterator:
```
for i in range(5):
print(TestFunction(*next(Params)))
``` |
40,712,568 | This python script returns a value of `90.0`:
```
import itertools
a=[12,345,1423,65,234]
b=[234,12,34,1,1,1]
c=[1,2,3,4]
def TestFunction (a, b, c):
result = a + b/c
return result
Params=itertools.product(a, b, c)
x = 2
print(TestFunction(*list(Params)[x]))
```
However, I would like to evaluate my function for a range of x as:
```
for x in range (5):
print(TestFunction(*list(Params)[x]))
```
I expected this to return a sequence of values: `246.0`, `129.0`, `90.0`, `70.5`, `14.0`; however, I get:
>
> "IndexError: list index out of range."
>
>
>
Why does the function evaluate when `x` is defined explicitly but not when it is an iterator? | 2016/11/21 | [
"https://Stackoverflow.com/questions/40712568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7098896/"
] | ```
for x in range (5):
print(TestFunction(*list(Params)[x]))
```
`Params` is an iterator. The first time through the loop, you consume it entirely by converting it to a list. Therefore on the second iteration there's nothing in it, and converting it to a list yields `[]`, the empty list, and trying to get index 1 of that won't work.
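To make the exhaustion visible, here is a minimal sketch (the inputs are small placeholders, not the question's data):

```
import itertools

Params = itertools.product([1, 2], ['a', 'b'])
print(list(Params))  # consumes the iterator: [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
print(list(Params))  # nothing is left, so this prints []
```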
Instead, convert the iterator to a list *outside* the loop.
```
params = list(Params)
for x in range(5):
print(TestFunction(*params[x]))
``` | Because `Params` is an iterator and converting it to a list consumes it entirely.
There are 2 methods to handle your case:
1. Convert the iterator to a list outside of the loop:
```
Params = list(itertools.product(a, b, c))
```
2. Use `copy.copy`:
```
from copy import copy
print(TestFunction(*list(copy(Params))[x]))
```
Most of the time Method 1 is appropriate. Occasionally you may want Method 2. |
40,712,568 | This python script returns a value of `90.0`:
```
import itertools
a=[12,345,1423,65,234]
b=[234,12,34,1,1,1]
c=[1,2,3,4]
def TestFunction (a, b, c):
result = a + b/c
return result
Params=itertools.product(a, b, c)
x = 2
print(TestFunction(*list(Params)[x]))
```
However, I would like to evaluate my function for a range of x as:
```
for x in range (5):
print(TestFunction(*list(Params)[x]))
```
I expected this to return a sequence of values: `246.0`, `129.0`, `90.0`, `70.5`, `14.0`; however, I get:
>
> "IndexError: list index out of range."
>
>
>
Why does the function evaluate when `x` is defined explicitly but not when it is an iterator? | 2016/11/21 | [
"https://Stackoverflow.com/questions/40712568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7098896/"
] | Because calling `list()` on the iterator exhausts the iterator. Thus it can be called once only:
```
>>> Params=itertools.product(a, b, c)
>>> Params
<itertools.product object at 0x7f5ed3da5870>
>>> list(Params)
[(12, 234, 1), (12, 234, 2)..., (234, 1, 4)]
>>> list(Params)
[]
```
You can see that the second call to `list(Params)` evaluates to an empty list, and attempting to subscript that raises an `IndexError`.
You could store the values from the iterator in another variable:
```
l = list(Params)
```
and then access it as you want, accessing arbitrary items by subscript.
If you want to access the items sequentially, as per your for loop example, just call `next()` on the iterator:
```
for i in range(5):
print(TestFunction(*next(Params)))
``` | Because `Params` is an iterator and converting it to a list consumes it entirely.
There are 2 methods to handle your case:
1. Convert the iterator to a list outside of the loop:
```
Params = list(itertools.product(a, b, c))
```
2. Use `copy.copy`:
```
from copy import copy
print(TestFunction(*list(copy(Params))[x]))
```
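For completeness, here is a minimal end-to-end sketch of Method 1 applied to the question's loop (names copied from the question):

```
import itertools

a = [12, 345, 1423, 65, 234]
b = [234, 12, 34, 1, 1, 1]
c = [1, 2, 3, 4]

def TestFunction(a, b, c):
    return a + b / c

# Materialize the product once, outside the loop, so it can be indexed repeatedly.
Params = list(itertools.product(a, b, c))
for x in range(5):
    print(TestFunction(*Params[x]))
```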
Most of the time Method 1 is appropriate. Occasionally you may want Method 2. |
68,650,493 | I have some experience starting up Apache Airflow, but I now get an error when I run the `airflow db init` command. The error is below. I am running Airflow in a virtual env with Python 3.8. Any help would be appreciated. I am not sure I understand this error, as I managed to init the db without importing any `_cffi_backend` module in the past.
The error:
```
airflow) airflow@26c8ed88c008:~/airflow$ airflow db init
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 47, in command
func = import_string(import_path)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 24, in <module>
from airflow.utils import cli as cli_utils, db
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/db.py", line 26, in <module>
from airflow.jobs.base_job import BaseJob # noqa: F401
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/__init__.py", line 19, in <module>
import airflow.jobs.backfill_job
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/backfill_job.py", line 29, in <module>
from airflow import models
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/__init__.py", line 20, in <module>
from airflow.models.baseoperator import BaseOperator, BaseOperatorLink
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 59, in <module>
from airflow.models.taskinstance import Context, TaskInstance, clear_task_instances
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 57, in <module>
from airflow.models.variable import Variable
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/variable.py", line 24, in <module>
from cryptography.fernet import InvalidToken as InvalidFernetToken
File "/usr/lib/python3/dist-packages/cryptography/fernet.py", line 17, in <module>
from cryptography.hazmat.primitives import hashes, padding
File "/usr/lib/python3/dist-packages/cryptography/hazmat/primitives/padding.py", line 13, in <module>
from cryptography.hazmat.bindings._padding import lib
ModuleNotFoundError: No module named '_cffi_backend'
``` | 2021/08/04 | [
"https://Stackoverflow.com/questions/68650493",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8867871/"
] | Use `CROSS JOIN` to build all combinations and top up with a `LEFT JOIN`:
```
SELECT p.product_id, s.status, COUNT(t.any_not_null_column)
FROM (SELECT DISTINCT product_id FROM t) AS p
CROSS JOIN (SELECT DISTINCT status FROM t) AS s
LEFT JOIN t ON p.product_id = t.product_id AND s.status = t.status
GROUP BY p.product_id, s.status
``` | The following is a Postgres solution (a database I strongly recommend over MS Access). The idea is to generate all the rows and then use `left join` and `group by` to get the counts you want:
```
select p.product_id, s.status, count(d.product_id)
from (select distinct product_id from details) p cross join
(values ('ok'), ('bad')) s left join
details d
on d.product_id = p.product_id and d.status = s.status
group by p.product_id, s.status;
```
Note: You might have other tables that have the list of products and/or statuses that you want.
An equivalent version in MS Access (which would also work in Postgres) might look like:
```
select p.product_id, s.status, count(d.product_id)
from ((select distinct product_id from details) p,
(select distinct status from details) s
) left join
details d
on d.product_id = p.product_id and
d.status = s.status
group by p.product_id, s.status;
``` |
42,881,650 | I have a list e.g. `l1 = [1,2,3,4]` and another list: `l2 = [1,2,3,4,5,6,7,1,2,3,4]`.
I would like to check if `l1` appears as a contiguous sublist of `l2`, and if it does, I want to delete those elements from `l2` so that `l2` becomes `[5,6,7,1,2,3,4]`, where indexes 0-3 have been removed.
Is there a pythonic way of doing this?
I tried this:
```
l1 = [1,2,3,4]
l2 = [1,2,3,4,5,6,7,1,2,3,4]
l3 = []
for i in l2:
if i in l1:
l3.append(i)
-> prints [5,6,7]
```
However I would like the output to be `[5,6,7,1,2,3,4]`. | 2017/03/19 | [
"https://Stackoverflow.com/questions/42881650",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Well, here is a brute-force way. There are probably more efficient ways. If you expect to encounter a matching sublist early, the performance shouldn't be terrible.
```
>>> l1 = [1,2,3,4]
>>> l2 = [1,2,3,4,5,6,7,1,2,3,4]
>>> for i in range(len(l2) - len(l1) + 1):
...     if l2[i:i+len(l1)] == l1:
...             del l2[i:i+len(l1)]
... break
...
>>> l1
[1, 2, 3, 4]
>>> l2
[5, 6, 7, 1, 2, 3, 4]
>>>
```
Or if you don't want to modify `l2`, you could do the following:
```
>>> l1 = [1,2,3,4]
>>> l2 = [1,2,3,4,5,6,7,1,2,3,4]
>>> for i in range(len(l2) - len(l1) + 1):
...     if l2[i:i+len(l1)] == l1:
... break
...
>>> l2[:i] + l2[i+len(l1):]
[5, 6, 7, 1, 2, 3, 4]
>>>
``` | I'm not proud of this, and it's not pythonic, but I thought it might be a bit of fun to write. I've annotated the code to make it a little more obvious what's happening.
```
>>> import re
>>> from ast import literal_eval
>>> l1 = [1,2,3,4]
>>> l2 = [1,2,3,4,5,6,7,1,2,3,4]
>>> literal_eval( # convert the string into a python collection
... re.sub( # use a regex as a replacement
... str(l1)[1:-1], # string of the list, without surrounding brackets
... '', # replace with empty
... str(l2)[1:-1], # string for replacement, again without brackets
... count=1 # only replace the first match
... ).strip(',') # replace any preceeding or trailing commas
... .strip() # replace any preceeding or trailing whitespace
... )
(5, 6, 7, 1, 2, 3, 4)
```
The output here is a tuple, but you could wrap it in `list()` if that's what you really want. Again, I'm not proud of this, but it works at least for this simple case. Juanpa's solution is much more preferable to this. |
42,890,951 | I have anaconda installed in my Mac. I am trying to install python-igraph.
I tried the following commands to install it:
```
$ brew install igraph
$ pip install python-igraph
```
My python setup:
```
Python 2.7.13 |Anaconda custom (x86_64)| (default, Dec 20 2016, 23:05:08)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin`
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
```
But when I try to import `igraph`, I get the following error:
```
>>> import igraph
```
>
> Traceback (most recent call last):
>
> File "<stdin>", line 1, in <module>
>
> File "/Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_\_init\_\_.py", line 34, in <module>
>
> ```
> from igraph._igraph import *
> ```
>
> ImportError: dlopen(/Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so, 2): Symbol not found: \_iconv
>
> Referenced from: /Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so
>
> Expected in: flat namespace
>
> in /Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so
>
Now, I have tried to follow all the steps that are provided in this [answer](https://stackoverflow.com/questions/30505490/error-importing-igraph).
My hunch is, it might be a problem related to Anaconda because my friend successfully installed and imported python-igraph using the above commands and he doesn't have Anaconda installed.
How can I both solve this problem and keep Anaconda in my Mac? | 2017/03/19 | [
"https://Stackoverflow.com/questions/42890951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2529269/"
] | I found exactly what I was looking for, [SwipeCellKit](https://github.com/jerkoch/SwipeCellKit), by jerkoch. This library performs the same exact actions as the stock iOS Mail app does when swiping to the left. No need to deal with different `UIViews` and `UIButtons`.
To use, simply conform to the `SwipeTableViewCellDelegate`, and use it in `editActionsForRowAt` like so:
```
func tableView(_ tableView: UITableView, editActionsForRowAt indexPath: IndexPath, for orientation: SwipeActionsOrientation) -> [SwipeAction]? {
guard orientation == .right else { return nil }
let deleteAction = SwipeAction(style: .destructive, title: "Delete") { action, indexPath in
// handle action by updating model with deletion
}
// customize the action appearance
deleteAction.image = UIImage(named: "delete")
return [deleteAction]
}
```
Make sure to change the cell's class to `SwipeTableViewCell` and set its delegate like so: `cell.delegate = self`. | I would take a look at the [SWTableViewCell](https://github.com/CEWendel/SWTableViewCell) by CEWendel. It looks like it has exactly what you're looking for. |
42,890,951 | I have anaconda installed in my Mac. I am trying to install python-igraph.
I tried the following commands to install it:
```
$ brew install igraph
$ pip install python-igraph
```
My python setup:
```
Python 2.7.13 |Anaconda custom (x86_64)| (default, Dec 20 2016, 23:05:08)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin`
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
```
But when I try to import `igraph`, I get the following error:
```
>>> import igraph
```
>
> Traceback (most recent call last):
>
> File "<stdin>", line 1, in <module>
>
> File "/Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_\_init\_\_.py", line 34, in <module>
>
> ```
> from igraph._igraph import *
> ```
>
> ImportError: dlopen(/Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so, 2): Symbol not found: \_iconv
>
> Referenced from: /Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so
>
> Expected in: flat namespace
>
> in /Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so
>
Now, I have tried to follow all the steps that are provided in this [answer](https://stackoverflow.com/questions/30505490/error-importing-igraph).
My hunch is, it might be a problem related to Anaconda because my friend successfully installed and imported python-igraph using the above commands and he doesn't have Anaconda installed.
How can I both solve this problem and keep Anaconda in my Mac? | 2017/03/19 | [
"https://Stackoverflow.com/questions/42890951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2529269/"
] | I would take a look at the [SWTableViewCell](https://github.com/CEWendel/SWTableViewCell) by CEWendel. It looks like it has exactly what you're looking for. | May be late to answer this, but in case anyone else is looking - consider this read to answer the question: <https://www.raywenderlich.com/62435/make-swipeable-table-view-cell-actions-without-going-nuts-scroll-views> |
42,890,951 | I have anaconda installed in my Mac. I am trying to install python-igraph.
I tried the following commands to install it:
```
$ brew install igraph
$ pip install python-igraph
```
My python setup:
```
Python 2.7.13 |Anaconda custom (x86_64)| (default, Dec 20 2016, 23:05:08)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin`
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
```
But when I try to import `igraph`, I get the following error:
```
>>> import igraph
```
>
> Traceback (most recent call last):
>
> File "<stdin>", line 1, in <module>
>
> File "/Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_\_init\_\_.py", line 34, in <module>
>
> ```
> from igraph._igraph import *
> ```
>
> ImportError: dlopen(/Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so, 2): Symbol not found: \_iconv
>
> Referenced from: /Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so
>
> Expected in: flat namespace
>
> in /Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so
>
Now, I have tried to follow all the steps that are provided in this [answer](https://stackoverflow.com/questions/30505490/error-importing-igraph).
My hunch is, it might be a problem related to Anaconda because my friend successfully installed and imported python-igraph using the above commands and he doesn't have Anaconda installed.
How can I both solve this problem and keep Anaconda in my Mac? | 2017/03/19 | [
"https://Stackoverflow.com/questions/42890951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2529269/"
] | I found exactly what I was looking for, [SwipeCellKit](https://github.com/jerkoch/SwipeCellKit), by jerkoch. This library performs the same exact actions as the stock iOS Mail app does when swiping to the left. No need to deal with different `UIViews` and `UIButtons`.
To use, simply conform to the `SwipeTableViewCellDelegate`, and use it in `editActionsForRowAt` like so:
```
func tableView(_ tableView: UITableView, editActionsForRowAt indexPath: IndexPath, for orientation: SwipeActionsOrientation) -> [SwipeAction]? {
guard orientation == .right else { return nil }
let deleteAction = SwipeAction(style: .destructive, title: "Delete") { action, indexPath in
// handle action by updating model with deletion
}
// customize the action appearance
deleteAction.image = UIImage(named: "delete")
return [deleteAction]
}
```
Make sure to change the cell's class to `SwipeTableViewCell` and set its delegate like so: `cell.delegate = self`. | May be late to answer this, but in case anyone else is looking - consider this read to answer the question: <https://www.raywenderlich.com/62435/make-swipeable-table-view-cell-actions-without-going-nuts-scroll-views> |
49,132,008 | I have the following method:
```
public void callPython() throws IOException {
Runtime rt = Runtime.getRuntime();
Process pr = rt.exec("python -c \"from test import read_and_show; read_and_show()\" src/main/python");
BufferedReader bfr = new BufferedReader(new InputStreamReader(pr.getInputStream()));
BufferedReader bfre = new BufferedReader(new InputStreamReader(pr.getErrorStream()));
String outputStr = "";
while ((outputStr = bfr.readLine()) != null) {
System.out.println(outputStr);
}
while ((outputStr = bfre.readLine()) != null) {
System.out.println(outputStr);
}
}
```
In the Python file, I have the following code:
```
import os
from stat import *
def read_and_show():
print('worked!')
```
When I call this in the terminal, everything works correctly (I cd to this directory first):
```
MacBook-Pro-Nikita-2:python NG$ python -c "from test import read_and_show; read_and_show()"
worked!
```
When I run this from my Java code, it returns an error:
```
File "<string>", line 1
"from
^
SyntaxError: EOL while scanning string literal
```
What am I doing wrong?
P.S.: I need to run a Python method/class/file to read, parse and show graphical data, but for this I need Java to run a single Python method (def). | 2018/03/06 | [
"https://Stackoverflow.com/questions/49132008",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5417750/"
] | When executing other programs from java, I've found it's easier to keep it as simple as possible in java and instead execute a batch file
```
Runtime.getRuntime().exec("chrome.exe www.google.com");
```
Would instead become
```
Runtime.getRuntime().exec("openChrome.bat");
```
and openChrome.bat:
```
chrome.exe www.google.com
```
This makes it easier to test the command without recompiling, but may get complicated if you need to pass variables as arguments
To use shell built-ins like `echo` and `cd`, the batch file works wonders *(ie `echo test | program`)*
---
The major downside is you will have a floating .bat file next to your code
If packaging to a .jar, you may need to first [copy the .bat file out of the .jar](https://stackoverflow.com/a/44077426/5475891) before executing | You're missing the shebang statement that states where the python interpreter is. It should be line #1
```
#!/usr/bin/python
``` |
49,132,008 | I have the following method:
```
public void callPython() throws IOException {
Runtime rt = Runtime.getRuntime();
Process pr = rt.exec("python -c \"from test import read_and_show; read_and_show()\" src/main/python");
BufferedReader bfr = new BufferedReader(new InputStreamReader(pr.getInputStream()));
BufferedReader bfre = new BufferedReader(new InputStreamReader(pr.getErrorStream()));
String outputStr = "";
while ((outputStr = bfr.readLine()) != null) {
System.out.println(outputStr);
}
while ((outputStr = bfre.readLine()) != null) {
System.out.println(outputStr);
}
}
```
In the Python file, I have the following code:
```
import os
from stat import *
def read_and_show():
print('worked!')
```
When I call this in the terminal, everything works correctly (I cd to this directory first):
```
MacBook-Pro-Nikita-2:python NG$ python -c "from test import read_and_show; read_and_show()"
worked!
```
When I run this from my Java code, it returns an error:
```
File "<string>", line 1
"from
^
SyntaxError: EOL while scanning string literal
```
What am I doing wrong?
P.S.: I need to run a Python method/class/file to read, parse and show graphical data, but for this I need Java to run a single Python method (def). | 2018/03/06 | [
"https://Stackoverflow.com/questions/49132008",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5417750/"
] | When executing other programs from java, I've found it's easier to keep it as simple as possible in java and instead execute a batch file
```
Runtime.getRuntime().exec("chrome.exe www.google.com");
```
Would instead become
```
Runtime.getRuntime().exec("openChrome.bat");
```
and openChrome.bat:
```
chrome.exe www.google.com
```
This makes it easier to test the command without recompiling, but may get complicated if you need to pass variables as arguments
To use shell built-ins like `echo` and `cd`, the batch file works wonders *(ie `echo test | program`)*
---
The major downside is you will have a floating .bat file next to your code
If packaging to a .jar, you may need to first [copy the .bat file out of the .jar](https://stackoverflow.com/a/44077426/5475891) before executing | Runtime.exec is obsolete. It was replaced by the [ProcessBuilder class](https://docs.oracle.com/javase/9/docs/api/java/lang/ProcessBuilder.html) a long time ago:
```
ProcessBuilder builder = new ProcessBuilder(
"python", "-c", "from test import read_and_show; read_and_show()", "src/main/python");
builder.redirectInput(ProcessBuilder.Redirect.INHERIT);
Process pr = builder.start();
```
Notice that `from test import read_and_show; read_and_show()` does *not* have double-quote characters around it. Those quotes are something used by a shell (like `bash`). The python command never actually sees them, and shouldn’t see them. Executing a child process from Java (or any other language, really) does not invoke a shell; it executes the command directly. Which means the quotes wouldn’t be interpreted by any shell, and they’d be passed to the python program as part of the argument. |
33,560,877 | I would like to convert my list (`items`) from string to int so that I can calculate with the numbers in it. However, Python shows the invalid literal for int() with base 10 error, and I've no idea what's wrong with it. (The list is on one line only, separated by commas with no space before or after each comma.)
list:
```
51,2,2,49,15,2,1,14
```
Code:
```
def parseCSV(file):
file_open = open(file)
print (file_open.read())
with open(file) as rd:
items = rd.readlines()
print (items)
items = list(map(int, items))
print (items)
def main():
file = str(input("Please enter the full name of the desired file (with extension) at the prompt below: \n"))
parseCSV(file)
main()
``` | 2015/11/06 | [
"https://Stackoverflow.com/questions/33560877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5532342/"
] | I modified according to your code. Please have try.
```
def main():
file = str(input("Please enter the full name of the desired file (with extension) at the prompt below: \n"))
parseCSV(file)
def parseCSV(file):
file_open = open(file)
print (file_open.read())
with open(file) as rd:
lines = rd.readlines()
for line in lines:
            items = list(map(int, line.split(',')))
print (items)
main()
``` | To answer your question, *'What's wrong with it?'*:
You are reading your whole CSV into a list containing a single item: the entire file as one long string. Even if your CSV only contains integers, parsing the lines this way will not work.
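A minimal sketch of the problem and the fix (assuming a hypothetical `data.csv` that contains the single line from the question):

```
with open("data.csv") as rd:
    items = rd.readlines()
print(items)  # ['51,2,2,49,15,2,1,14'] - one long string, so int() on it raises ValueError
numbers = [int(x) for x in items[0].strip().split(",")]
print(numbers)  # [51, 2, 2, 49, 15, 2, 1, 14]
```
Splitting the line on commas first is what makes the conversion to int possible. |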
33,560,877 | I would like to convert my list (`items`) from string to int so that I can calculate with the numbers in it. However, Python shows the invalid literal for int() with base 10 error, and I've no idea what's wrong with it. (The list is on one line only, separated by commas with no space before or after each comma.)
list:
```
51,2,2,49,15,2,1,14
```
Code:
```
def parseCSV(file):
file_open = open(file)
print (file_open.read())
with open(file) as rd:
items = rd.readlines()
print (items)
items = list(map(int, items))
print (items)
def main():
file = str(input("Please enter the full name of the desired file (with extension) at the prompt below: \n"))
parseCSV(file)
main()
``` | 2015/11/06 | [
"https://Stackoverflow.com/questions/33560877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5532342/"
] | You are trying to convert this to an integer:
```
'51,2,2,49,15,2,1,14'
```
The error message is pretty explicit in showing you what it is you are trying to typecast to an `int`.
The data in your file is most likely comma separated, and you are reading it in as a list with a single string entry that is comma separated like this:
```
['51,2,2,49,15,2,1,14']
```
So, you should be reading your file using read() and then split on ',' like this:
```
with open(file) as f:
fd = f.read().split(',')
d = list(map(int, fd))
```
Output of `d`:
```
[51, 2, 2, 49, 15, 2, 1, 14]
```
Alternatively, because you are dealing with a delimited file, and since you might have several of those rows, it might be worth looking into the [csv](https://docs.python.org/3/library/csv.html) module as well. Here is an example using it while also incorporating your bit of code:
```
import csv
with open(file) as cf:
c = csv.reader(cf, delimiter=',')
for i in c:
print(list(map(int, i)))
```
Finally, this can be further simplified:
```
with open('crma.csv') as cf:
data = [list(map(int, i)) for i in csv.reader(cf, delimiter=',')]
```
Assuming you have three rows in your file, it would then look like this:
```
[[51, 2, 2, 49, 15, 2, 1, 14], [51, 2, 2, 49, 15, 2, 1, 12], [51, 2, 2, 49, 15, 2, 1, 11]]
``` | I modified it according to your code. Please give it a try.
```
def main():
file = str(input("Please enter the full name of the desired file (with extension) at the prompt below: \n"))
parseCSV(file)
def parseCSV(file):
file_open = open(file)
print (file_open.read())
with open(file) as rd:
lines = rd.readlines()
for line in lines:
            items = list(map(int, line.split(',')))
print (items)
main()
``` |
33,560,877 | I would like to convert my list (`items`) from string to int so that I can calculate with the numbers in it. However, Python shows the invalid literal for int() with base 10 error, and I've no idea what's wrong with it. (The list is on one line only, separated by commas with no space before or after each comma.)
list:
```
51,2,2,49,15,2,1,14
```
Code:
```
def parseCSV(file):
file_open = open(file)
print (file_open.read())
with open(file) as rd:
items = rd.readlines()
print (items)
items = list(map(int, items))
print (items)
def main():
file = str(input("Please enter the full name of the desired file (with extension) at the prompt below: \n"))
parseCSV(file)
main()
``` | 2015/11/06 | [
"https://Stackoverflow.com/questions/33560877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5532342/"
] | You are trying to convert this to an integer:
```
'51,2,2,49,15,2,1,14'
```
The error message is pretty explicit in showing you what it is you are trying to typecast to an `int`.
The data in your file is most likely comma separated, and you are reading it in as a list with a single string entry that is comma separated like this:
```
['51,2,2,49,15,2,1,14']
```
So, you should be reading your file using read() and then split on ',' like this:
```
with open(file) as f:
fd = f.read().split(',')
d = list(map(int, fd))
```
Output of `d`:
```
[51, 2, 2, 49, 15, 2, 1, 14]
```
Alternatively, because you are dealing with a delimited file, and since you might have several of those rows, it might be worth looking into the [csv](https://docs.python.org/3/library/csv.html) module as well. Here is an example using it while also incorporating your bit of code:
```
import csv
with open(file) as cf:
c = csv.reader(cf, delimiter=',')
for i in c:
print(list(map(int, i)))
```
Finally, this can be further simplified:
```
with open('crma.csv') as cf:
data = [list(map(int, i)) for i in csv.reader(cf, delimiter=',')]
```
Assuming you have three rows in your file, it would then look like this:
```
[[51, 2, 2, 49, 15, 2, 1, 14], [51, 2, 2, 49, 15, 2, 1, 12], [51, 2, 2, 49, 15, 2, 1, 11]]
``` | To answer your question, *'What's wrong with it?'*:
You are reading your whole CSV into a list containing a single item: the entire file as one long string. Even if your CSV only contains integers, parsing the lines this way will not work. |
22,023,184 | I tried to subclass NSThread in order to run a thread with some data. I want to simulate Python's join(); according to the doc:
>
> join(): Wait until the thread terminates. This blocks the calling thread until
> the thread whose join() method is called terminates
>
>
>
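For reference, a minimal Python sketch of the join() behaviour I am trying to reproduce:

```
import threading
import time

def worker():
    time.sleep(1)
    print("worker finished")

t = threading.Thread(target=worker)
t.start()
t.join()  # blocks the calling thread until worker() returns
print("the main thread continues only after the worker is done")
```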
So I thought using performSelector:onThread:withObject:waitUntilDone:YES would be fine, but it does not work. It just does nothing and never exits; it runs forever.
This is my code:
```
@interface MyClass : NSThread
@property (strong, nonatomic) NSMutableArray *msgQueue;
@property (assign, nonatomic) BOOL stop;
@end
@implementation MyClass
-(id)init
{
self = [super init];
if (self) {
self.msgQueue = [NSMutableArray array];
self.stop = NO;
[self start];
return self;
}
return nil;
}
-(void)myRun
{
while (!self.stop) {
NSLock *arrayLock = [[NSLock alloc] init];
[arrayLock lock];
NSArray *message = [self.msgQueue firstObject];
[self.msgQueue removeObjectAtIndex:0];
[arrayLock unlock];
NSLog(@"%@", message);
if ([message[0] isEqualToString:@"terminate"]) {
self.stop = YES;
}
}
}
-(void)join
{
[self performSelector:@selector(myRun) onThread:self withObject:nil waitUntilDone:YES];
}
@end
int main(int argc, const char * argv[])
{
@autoreleasepool {
MyClass *a = [[MyClass alloc] init];
[a.msgQueue addObject:@[@"terminate",@"hello world"]];
//[a myRun]; // this line works so the myRun method should be good,
[a join]; // but I want this line work, and I have no idea what the problem is.
}
return 0;
}
``` | 2014/02/25 | [
"https://Stackoverflow.com/questions/22023184",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2014948/"
] | From Apple's documentation on `performSelector:onThread:withObject:waitUntilDone:`:
>
> This method queues the message on the run loop of the target thread using the default run loop modes—that is, the modes associated with the NSRunLoopCommonModes constant. As part of its normal run loop processing, the target thread dequeues the message (assuming it is running in one of the default run loop modes) and invokes the desired method.
>
>
>
You probably never started a run loop on the thread, so it will never execute your `myRun` method, since it has no run loop to execute on.
As for Merlevede's answer, `myRun` is not enqueued on the same thread as `join`. `join` was called on your main thread, whereas you're trying to enqueue `myRun` on your secondary thread. So his theory is incorrect. Also from Apple's documentation regarding the wait parameter:
>
> If the current thread and target thread are the same, and you specify YES for this parameter, the selector is performed immediately on the current thread. If you specify NO, this method queues the message on the thread’s run loop and returns, just like it does for other threads. The current thread must then dequeue and process the message when it has an opportunity to do so.
>
>
>
So even if it was on the same thread, it wouldn't be stuck waiting, it would just execute it right away as if you had directly called the method instead of using `performSelector:` in the first place. | You're basically in a deadlock condition.
```
-(void)join
{
[self performSelector:@selector(myRun) onThread:self withObject:nil waitUntilDone:YES];
}
```
`join` is waiting for `myRun` to finish (waitUntilDone flag), but `myRun` is enqueued on the same thread as `join`, so it's also waiting for `join` to finish.
For `performSelector:onThread:withObject:waitUntilDone:` you would never pass the current thread as the thread parameter. |
56,143,264 | I upgraded pip, but after the upgrade I get a syntax error.
I tried installing Python 3.x, but that did not fix it.
Traceback (most recent call last):
```
File "/usr/bin/pip", line 7, in <module>
from pip._internal import main
File "/usr/lib/python2.6/site-packages/pip/_internal/__init__.py", line 19, in <module>
from pip._vendor.urllib3.exceptions import DependencyWarning
File "/usr/lib/python2.6/site-packages/pip/_vendor/urllib3/__init__.py", line 8, in <module>
from .connectionpool import (
File "/usr/lib/python2.6/site-packages/pip/_vendor/urllib3/connectionpool.py", line 92
_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK}
```
I try to install pyopenssl
----------UPDATE----------------
I tried to install python36u and I got some errors:
```
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5(XZ_5.0)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm_compat.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libcrypto.so.10(OPENSSL_1.0.2)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libffi.so.6()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libc.so.6(GLIBC_2.17)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
``` | 2019/05/15 | [
"https://Stackoverflow.com/questions/56143264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7571523/"
] | Python 2.6 is not supported anymore; try to change your SYS PATH to point to the new python and pip.
Check this: [Python ENV](https://www.tutorialspoint.com/python/python_environment.htm)
Alternatively, you can use the following:
```
/path/to/pip3 install ....
/path/to/python3 <NAME_OF_THE_SCRIPT>
``` | ----------UPDATE----------------
I tried to install python36u and I got some errors:
```
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5(XZ_5.0)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm_compat.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libcrypto.so.10(OPENSSL_1.0.2)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libffi.so.6()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libc.so.6(GLIBC_2.17)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
``` |
56,143,264 | I upgraded pip, but after the upgrade I get a syntax error.
I tried installing Python 3.x, but that did not fix it.
Traceback (most recent call last):
```
File "/usr/bin/pip", line 7, in <module>
from pip._internal import main
File "/usr/lib/python2.6/site-packages/pip/_internal/__init__.py", line 19, in <module>
from pip._vendor.urllib3.exceptions import DependencyWarning
File "/usr/lib/python2.6/site-packages/pip/_vendor/urllib3/__init__.py", line 8, in <module>
from .connectionpool import (
File "/usr/lib/python2.6/site-packages/pip/_vendor/urllib3/connectionpool.py", line 92
_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK}
```
I try to install pyopenssl
----------UPDATE----------------
I tried to install python36u and I got some errors:
```
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5(XZ_5.0)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm_compat.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libcrypto.so.10(OPENSSL_1.0.2)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libffi.so.6()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libc.so.6(GLIBC_2.17)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
``` | 2019/05/15 | [
"https://Stackoverflow.com/questions/56143264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7571523/"
] | Change your default Python version using this link: [Change default python version](https://stackoverflow.com/questions/45542690/linux-centos-7-how-to-set-python3-5-2-as-default-python-version)
You have installed pyopenssl for the Python 2 environment, so first uninstall it.
Check the versions: python3.6 -V, pip3 -V, etc.
While installing pyopenssl or any other package, use pip3.
Use **sudo pip3 install pyopenssl** or **pip3 install pyopenssl**.
It will work. | ----------UPDATE----------------
I tried to install python36u and I got some errors:
```
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5(XZ_5.0)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm_compat.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libcrypto.so.10(OPENSSL_1.0.2)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libffi.so.6()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libc.so.6(GLIBC_2.17)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
``` |
43,773,802 | Using python and pandas I can easily construct a sparse DataFrame from a list of dictionary objects. The following code snippet shows how this can be done in pandas:
```
In [1]: import pandas as pd; (pd.DataFrame([{'a':1, 'b':10},
{'d':99, 'c':1},
{'b':1, 'd': 4}])
.fillna(0))
Out[1]:
a b c d
0 1.0 10.0 0.0 0.0
1 0.0 0.0 1.0 99.0
2 0.0 1.0 0.0 4.0
```
What if I want to easily reproduce this behavior in R? Let's suppose that I have the following variable:
```
values <- list(list(a = 1, b = 10),
list(d = 99, c = 1),
list(b = 1, d = 4))
```
Then, how can the same result achieved in Python be obtained using R? | 2017/05/04 | [
"https://Stackoverflow.com/questions/43773802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4862483/"
] | We can use `melt` with `xtabs` in `R`
```
library(reshape2)
xtabs(value~L1 + L2, melt(values))
# L2
#L1 a b c d
# 1 1 10 0 0
# 2 0 0 1 99
# 3 0 1 0 4
``` | Here's a solution with `plyr` package:
```
ldply(values, data.frame)
a b d c
1 1 10 NA NA
2 NA NA 99 1
3 NA 1 4 NA
# mutate each to replace NA with 0
ldply(values, data.frame) %>%
mutate_each(funs(replace(., is.na(.), 0)))
a b d c
1 1 10 0 0
2 0 0 99 1
3 0 1 4 0
``` |
43,773,802 | Using python and pandas I can easily construct a sparse DataFrame from a list of dictionary objects. The following code snippet shows how this can be done in pandas:
```
In [1]: import pandas as pd; (pd.DataFrame([{'a':1, 'b':10},
{'d':99, 'c':1},
{'b':1, 'd': 4}])
.fillna(0))
Out[1]:
a b c d
0 1.0 10.0 0.0 0.0
1 0.0 0.0 1.0 99.0
2 0.0 1.0 0.0 4.0
```
What if I want to easily reproduce this behavior in R? Let's suppose that I have the following variable:
```
values <- list(list(a = 1, b = 10),
list(d = 99, c = 1),
list(b = 1, d = 4))
```
Then, how can the same result achieved in Python be obtained using R? | 2017/05/04 | [
"https://Stackoverflow.com/questions/43773802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4862483/"
] | We can use `melt` with `xtabs` in `R`
```
library(reshape2)
xtabs(value~L1 + L2, melt(values))
# L2
#L1 a b c d
# 1 1 10 0 0
# 2 0 0 1 99
# 3 0 1 0 4
``` | Using base R to construct a matrix, you could do the following.
First, the setup:
```
# flatten list to pull out info for matrix construction
flat <- unlist(values)
# build a 0 matrix with correct dimensions and column names
myMat <- matrix(0, nrow=length(values), ncol=length(unique(names(flat))),
dimnames=list(NULL, sort(unique(names(flat)))))
```
Now, fill in the matrix
```
for(i in seq_along(values)) myMat[i, names(values[[i]])] <- unlist(values[[i]])
```
This results in
```
myMat
a b c d
[1,] 1 10 0 0
[2,] 0 0 1 99
[3,] 0 1 0 4
```
If you actually wanted a data.frame, you can convert the matrix with the `as.data.frame` or `data.frame` functions. |
43,773,802 | Using python and pandas I can easily construct a sparse DataFrame from a list of dictionary objects. The following code snippet shows how this can be done in pandas:
```
In [1]: import pandas as pd; (pd.DataFrame([{'a':1, 'b':10},
{'d':99, 'c':1},
{'b':1, 'd': 4}])
.fillna(0))
Out[1]:
a b c d
0 1.0 10.0 0.0 0.0
1 0.0 0.0 1.0 99.0
2 0.0 1.0 0.0 4.0
```
What if I want to easily reproduce this behavior in R? Let's suppose that I have the following variable:
```
values <- list(list(a = 1, b = 10),
list(d = 99, c = 1),
list(b = 1, d = 4))
```
Then, how the same result achieved in python can be obtained using R? | 2017/05/04 | [
"https://Stackoverflow.com/questions/43773802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4862483/"
] | With `dplyr` you could do it like this:
```
library(dplyr)
values %>% bind_rows() %>% mutate_all(function(x) coalesce(x, 0))
# A tibble: 3 × 4
a b d c
<dbl> <dbl> <dbl> <dbl>
1 1 10 0 0
2 0 0 99 1
3 0 1 4 0
``` | Here's a solution with `plyr` package:
```
ldply(values, data.frame)
a b d c
1 1 10 NA NA
2 NA NA 99 1
3 NA 1 4 NA
# mutate each to replace NA with 0
ldply(values, data.frame) %>%
mutate_each(funs(replace(., is.na(.), 0)))
a b d c
1 1 10 0 0
2 0 0 99 1
3 0 1 4 0
``` |
43,773,802 | Using python and pandas I can easily construct a sparse DataFrame from a list of dictionary objects. The following code snippet shows how this can be done in pandas:
```
In [1]: import pandas as pd; (pd.DataFrame([{'a':1, 'b':10},
{'d':99, 'c':1},
{'b':1, 'd': 4}])
.fillna(0))
Out[1]:
a b c d
0 1.0 10.0 0.0 0.0
1 0.0 0.0 1.0 99.0
2 0.0 1.0 0.0 4.0
```
What if I want to easily reproduce this behavior in R? Let's suppose that I have the following variable:
```
values <- list(list(a = 1, b = 10),
list(d = 99, c = 1),
list(b = 1, d = 4))
```
Then, how the same result achieved in python can be obtained using R? | 2017/05/04 | [
"https://Stackoverflow.com/questions/43773802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4862483/"
] | Here's a solution with `plyr` package:
```
ldply(values, data.frame)
a b d c
1 1 10 NA NA
2 NA NA 99 1
3 NA 1 4 NA
# mutate each to replace NA with 0
ldply(values, data.frame) %>%
mutate_each(funs(replace(., is.na(.), 0)))
a b d c
1 1 10 0 0
2 0 0 99 1
3 0 1 4 0
``` | Using base R to construct a matrix, you could do the following.
first, the set up
```
# flatten list to pull out info for matrix construction
flat <- unlist(values)
# build a 0 matrix with correct dimensions and column names
myMat <- matrix(0, nrow=length(values), ncol=length(unique(names(flat))),
dimnames=list(NULL, sort(unique(names(flat)))))
```
Now, fill in the matrix
```
for(i in seq_along(values)) myMat[i, names(values[[i]])] <- unlist(values[[i]])
```
This results in
```
myMat
a b c d
[1,] 1 10 0 0
[2,] 0 0 1 99
[3,] 0 1 0 4
```
If you actually wanted a data.frame, you can convert the matrix with the `as.data.frame` or `data.frame` functions. |
43,773,802 | Using python and pandas I can easily construct a sparse DataFrame from a list of dictionary objects. The following code snippet shows how this can be done in pandas:
```
In [1]: import pandas as pd; (pd.DataFrame([{'a':1, 'b':10},
{'d':99, 'c':1},
{'b':1, 'd': 4}])
.fillna(0))
Out[1]:
a b c d
0 1.0 10.0 0.0 0.0
1 0.0 0.0 1.0 99.0
2 0.0 1.0 0.0 4.0
```
What if I want to easily reproduce this behavior in R? Let's suppose that I have the following variable:
```
values <- list(list(a = 1, b = 10),
list(d = 99, c = 1),
list(b = 1, d = 4))
```
Then, how the same result achieved in python can be obtained using R? | 2017/05/04 | [
"https://Stackoverflow.com/questions/43773802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4862483/"
] | With `dplyr` you could do it like this:
```
library(dplyr)
values %>% bind_rows() %>% mutate_all(function(x) coalesce(x, 0))
# A tibble: 3 × 4
a b d c
<dbl> <dbl> <dbl> <dbl>
1 1 10 0 0
2 0 0 99 1
3 0 1 4 0
``` | Using base R to construct a matrix, you could do the following.
first, the set up
```
# flatten list to pull out info for matrix construction
flat <- unlist(values)
# build a 0 matrix with correct dimensions and column names
myMat <- matrix(0, nrow=length(values), ncol=length(unique(names(flat))),
dimnames=list(NULL, sort(unique(names(flat)))))
```
Now, fill in the matrix
```
for(i in seq_along(values)) myMat[i, names(values[[i]])] <- unlist(values[[i]])
```
This results in
```
myMat
a b c d
[1,] 1 10 0 0
[2,] 0 0 1 99
[3,] 0 1 0 4
```
If you actually wanted a data.frame, you can convert the matrix with the `as.data.frame` or `data.frame` functions. |
18,942,318 | I try to upload the data into datastore use remote\_api at my dev server, but I got the following error, the SDK version is 1.8.4. Is there anyone has the same error? It looks like the new datastore version 4 cause this?
```
Traceback (most recent call last):
File "D:\python-lib\google_appengine\appcfg.py", line 184, in <module>
run_file(__file__, globals())
File "D:\python-lib\google_appengine\appcfg.py", line 180, in run_file
execfile(script_path, globals_)
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 4
403, in <module>
main(sys.argv)
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 4
394, in main
result = AppCfgApp(argv).Run()
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 2
469, in Run
self.action(self)
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 4
103, in __call__
return method()
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 3
915, in PerformUpload
run_fn(args)
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 3
806, in RunBulkloader
sys.exit(bulkloader.Run(arg_dict))
File "D:\python-lib\google_appengine\google\appengine\tools\bulkloader.py", li
ne 4419, in Run
return _PerformBulkload(arg_dict)
File "D:\python-lib\google_appengine\google\appengine\tools\bulkloader.py", li
ne 4284, in _PerformBulkload
loader.finalize()
File "D:\python-lib\google_appengine\google\appengine\ext\bulkload\bulkloader_
config.py", line 382, in finalize
self.reserve_keys(self.keys_to_reserve)
File "D:\python-lib\google_appengine\google\appengine\tools\bulkloader.py", li
ne 1228, in ReserveKeys
datastore._GetConnection()._reserve_keys(ConvertKeys(keys))
File "D:\python-lib\google_appengine\google\appengine\datastore\datastore_rpc.
py", line 1880, in _reserve_keys
self._async_reserve_keys(None, keys).get_result()
File "D:\python-lib\google_appengine\google\appengine\datastore\datastore_rpc.
py", line 838, in get_result
results = self.__rpcs[0].get_result()
File "D:\python-lib\google_appengine\google\appengine\api\apiproxy_stub_map.py
", line 612, in get_result
return self.__get_result_hook(self)
File "D:\python-lib\google_appengine\google\appengine\datastore\datastore_rpc.
py", line 1921, in __reserve_keys_hook
self.check_rpc_success(rpc)
File "D:\python-lib\google_appengine\google\appengine\datastore\datastore_rpc.
py", line 1234, in check_rpc_success
rpc.check_success()
File "D:\python-lib\google_appengine\google\appengine\api\apiproxy_stub_map.py
", line 578, in check_success
self.__rpc.CheckSuccess()
File "D:\python-lib\google_appengine\google\appengine\api\apiproxy_rpc.py", li
ne 156, in _WaitImpl
self.request, self.response)
File "D:\python-lib\google_appengine\google\appengine\ext\remote_api\remote_ap
i_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "D:\python-lib\google_appengine\google\appengine\ext\remote_api\remote_ap
i_stub.py", line 234, in _MakeRealSyncCall
raise pickle.loads(response_pb.exception())
AssertionError: No api proxy found for service "datastore_v4"
``` | 2013/09/22 | [
"https://Stackoverflow.com/questions/18942318",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2345755/"
] | What about this?
```
MyBase * base = dynamic_cast<MyBase *>(clicked_shape);
base->SetText("too");
```
You might want to check for `base` being null, if the Shape you get isn't actually one of yours.
`MyBase` needs at least one virtual function for this - the destructor would do. | Shape class is a base class, hence it provides an interface that can be overridden. E.g. there could be a `draw()` method which is called to draw a shape. That one would be a good candidate to override in your new class with a text box. For example:
```
class SquareWithText: public Square {
    void draw() {
        Square::draw(); // call base method
        ...             // add text box here
    }
};
```
There is no multiple inheritance here, which gives you a simpler solution.
If single inheritance doesn't work for you, you may go for a more complex solution.
```
class ShapeWithTextBox : public Shape {
    void draw() {
        ... // add text box here
    }
};
```
Now you will write your new classes as:
```
class MySquare : public Square, public ShapeWithTextBox
{
    void draw() {
        Square::draw();
        ShapeWithTextBox::draw();
    }
};
```
It is not necessary to derive `ShapeWithTextBox` from `Shape`. It depends on the interface of `Shape class`. |
28,962,266 | I had drawn up an UI using the QT Designer but found out that there are no parameters for me to set QLineEdit inputs to be uppercase.
After doing some online searching, I have only seen a very few handful of results that cater to my needs, however all are coded in Qt. Example, this [link](http://www.qtforum.org/article/16178/making-all-qlineedit-input-uppercase.html)
And so, are there ways for me to do this in the pythonic way? | 2015/03/10 | [
"https://Stackoverflow.com/questions/28962266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3212246/"
] | Try this,
I believe this serves your purpose. I won't call it much pythonic. More like PyQt override.
#minor code edit
```
from PyQt4 import QtGui
import sys

#===============================================================================
# MyEditableTextBox-
#===============================================================================
class MyEditableTextBox(QtGui.QLineEdit):

    #|-----------------------------------------------------------------------------|
    # Constructor
    #|-----------------------------------------------------------------------------|
    def __init__(self, *args):
        # *args to set parent
        QtGui.QLineEdit.__init__(self, *args)

    #|-----------------------------------------------------------------------------|
    # focusOutEvent :-
    #|-----------------------------------------------------------------------------|
    def focusOutEvent(self, *args, **kwargs):
        text = self.text()
        self.setText(text.__str__().upper())
        return QtGui.QLineEdit.focusOutEvent(self, *args, **kwargs)
    #|--------------------------End of focusOutEvent--------------------------------|

    #|-----------------------------------------------------------------------------|
    # keyPressEvent
    #|-----------------------------------------------------------------------------|
    def keyPressEvent(self, event):
        if not self.hasSelectedText():
            pretext = self.text()
            self.setText(pretext.__str__().upper())
        return QtGui.QLineEdit.keyPressEvent(self, event)
    #|--------------------End of keyPressEvent-------------------------------------|

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    w = QtGui.QWidget()
    lay = QtGui.QHBoxLayout()
    w.setLayout(lay)
    le1 = MyEditableTextBox()
    lay.addWidget(le1)
    le2 = MyEditableTextBox()
    lay.addWidget(le2)
    w.show()
    sys.exit(app.exec_())
``` | Hey i know i am kind of late, but I hope this might help some one else like me who spent some time searching for this
**Mycase:**
I was trying to convert only the first letter to capital and this is what i ended up with and it worked (just a beginner in python so if you can make this more pythonic please let me know)
In the defining function: `line_edit_object.textChanged.connect(lambda:auto_capital(line_edit_object))`
the function auto\_capital:
```
def auto_capital(line_edit_object):
    edit = line_edit_object
    text = edit.text()
    edit.setText(text.title())  # setText() writes the value back; text() only reads it
```
this shall fix every issue. Feel free to make it more pythonic. |
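For completeness, a hedged end-to-end sketch of the same idea (PyQt5 is assumed here, and the widget names are my own illustration, not from the answer):
```
import sys
from PyQt5.QtWidgets import QApplication, QLineEdit

def auto_capital(line_edit):
    # Capitalise the first letter of each word as the user types.
    line_edit.setText(line_edit.text().title())

app = QApplication(sys.argv)
box = QLineEdit()
box.textChanged.connect(lambda: auto_capital(box))
box.show()
sys.exit(app.exec_())
```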
28,962,266 | I had drawn up an UI using the QT Designer but found out that there are no parameters for me to set QLineEdit inputs to be uppercase.
After doing some online searching, I have only seen a very few handful of results that cater to my needs, however all are coded in Qt. Example, this [link](http://www.qtforum.org/article/16178/making-all-qlineedit-input-uppercase.html)
And so, are there ways for me to do this in the pythonic way? | 2015/03/10 | [
"https://Stackoverflow.com/questions/28962266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3212246/"
] | Try this,
I believe this serves your purpose. I won't call it much pythonic. More like PyQt override.
#minor code edit
```
from PyQt4 import QtGui
import sys

#===============================================================================
# MyEditableTextBox-
#===============================================================================
class MyEditableTextBox(QtGui.QLineEdit):

    #|-----------------------------------------------------------------------------|
    # Constructor
    #|-----------------------------------------------------------------------------|
    def __init__(self, *args):
        # *args to set parent
        QtGui.QLineEdit.__init__(self, *args)

    #|-----------------------------------------------------------------------------|
    # focusOutEvent :-
    #|-----------------------------------------------------------------------------|
    def focusOutEvent(self, *args, **kwargs):
        text = self.text()
        self.setText(text.__str__().upper())
        return QtGui.QLineEdit.focusOutEvent(self, *args, **kwargs)
    #|--------------------------End of focusOutEvent--------------------------------|

    #|-----------------------------------------------------------------------------|
    # keyPressEvent
    #|-----------------------------------------------------------------------------|
    def keyPressEvent(self, event):
        if not self.hasSelectedText():
            pretext = self.text()
            self.setText(pretext.__str__().upper())
        return QtGui.QLineEdit.keyPressEvent(self, event)
    #|--------------------End of keyPressEvent-------------------------------------|

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    w = QtGui.QWidget()
    lay = QtGui.QHBoxLayout()
    w.setLayout(lay)
    le1 = MyEditableTextBox()
    lay.addWidget(le1)
    le2 = MyEditableTextBox()
    lay.addWidget(le2)
    w.show()
    sys.exit(app.exec_())
``` | I am also late but after contemplating on this question I think this is some sort of pythonic way of accomplishing it in PyQt5:
```
class CustomInput(QLineEdit):
    def __init__(self):
        super().__init__()
        self.textChanged.connect(self.text_changed)

    def text_changed(self):
        if self.text().isupper():
            return
        self.setText(self.text().upper())
``` |
28,962,266 | I had drawn up an UI using the QT Designer but found out that there are no parameters for me to set QLineEdit inputs to be uppercase.
After doing some online searching, I have only seen a very few handful of results that cater to my needs, however all are coded in Qt. Example, this [link](http://www.qtforum.org/article/16178/making-all-qlineedit-input-uppercase.html)
And so, are there ways for me to do this in the pythonic way? | 2015/03/10 | [
"https://Stackoverflow.com/questions/28962266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3212246/"
] | The simplest way would be to use a [validator](http://qt-project.org/doc/qt-4.8/qvalidator.html).
This will immediately uppercase anything the user types, or pastes, into the line-edit:
```
from PyQt4 import QtCore, QtGui

class Validator(QtGui.QValidator):
    def validate(self, string, pos):
        return QtGui.QValidator.Acceptable, string.upper(), pos
        # for old code still using QString, use this instead
        # string.replace(0, string.count(), string.toUpper())
        # return QtGui.QValidator.Acceptable, pos

class Window(QtGui.QWidget):
    def __init__(self):
        super(Window, self).__init__()
        self.edit = QtGui.QLineEdit(self)
        self.validator = Validator(self)
        self.edit.setValidator(self.validator)
        layout = QtGui.QVBoxLayout(self)
        layout.addWidget(self.edit)

if __name__ == '__main__':
    import sys
    app = QtGui.QApplication(sys.argv)
    window = Window()
    window.setGeometry(500, 300, 300, 100)
    window.show()
    sys.exit(app.exec_())
``` | Hey i know i am kind of late, but I hope this might help some one else like me who spent some time searching for this
**Mycase:**
I was trying to convert only the first letter to capital and this is what i ended up with and it worked (just a beginner in python so if you can make this more pythonic please let me know)
In the defining function: `line_edit_object.textChanged.connect(lambda:auto_capital(line_edit_object))`
the function auto\_capital:
```
def auto_capital(line_edit_object):
    edit = line_edit_object
    text = edit.text()
    edit.setText(text.title())  # setText() writes the value back; text() only reads it
```
this shall fix every issue. Feel free to make it more pythonic. |
28,962,266 | I had drawn up an UI using the QT Designer but found out that there are no parameters for me to set QLineEdit inputs to be uppercase.
After doing some online searching, I have only seen a very few handful of results that cater to my needs, however all are coded in Qt. Example, this [link](http://www.qtforum.org/article/16178/making-all-qlineedit-input-uppercase.html)
And so, are there ways for me to do this in the pythonic way? | 2015/03/10 | [
"https://Stackoverflow.com/questions/28962266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3212246/"
] | The simplest way would be to use a [validator](http://qt-project.org/doc/qt-4.8/qvalidator.html).
This will immediately uppercase anything the user types, or pastes, into the line-edit:
```
from PyQt4 import QtCore, QtGui

class Validator(QtGui.QValidator):
    def validate(self, string, pos):
        return QtGui.QValidator.Acceptable, string.upper(), pos
        # for old code still using QString, use this instead
        # string.replace(0, string.count(), string.toUpper())
        # return QtGui.QValidator.Acceptable, pos

class Window(QtGui.QWidget):
    def __init__(self):
        super(Window, self).__init__()
        self.edit = QtGui.QLineEdit(self)
        self.validator = Validator(self)
        self.edit.setValidator(self.validator)
        layout = QtGui.QVBoxLayout(self)
        layout.addWidget(self.edit)

if __name__ == '__main__':
    import sys
    app = QtGui.QApplication(sys.argv)
    window = Window()
    window.setGeometry(500, 300, 300, 100)
    window.show()
    sys.exit(app.exec_())
``` | I am also late but after contemplating on this question I think this is some sort of pythonic way of accomplishing it in PyQt5:
```
class CustomInput(QLineEdit):
    def __init__(self):
        super().__init__()
        self.textChanged.connect(self.text_changed)

    def text_changed(self):
        if self.text().isupper():
            return
        self.setText(self.text().upper())
``` |
28,962,266 | I had drawn up an UI using the QT Designer but found out that there are no parameters for me to set QLineEdit inputs to be uppercase.
After doing some online searching, I have only seen a very few handful of results that cater to my needs, however all are coded in Qt. Example, this [link](http://www.qtforum.org/article/16178/making-all-qlineedit-input-uppercase.html)
And so, are there ways for me to do this in the pythonic way? | 2015/03/10 | [
"https://Stackoverflow.com/questions/28962266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3212246/"
] | Hey i know i am kind of late, but I hope this might help some one else like me who spent some time searching for this
**Mycase:**
I was trying to convert only the first letter to capital and this is what i ended up with and it worked (just a beginner in python so if you can make this more pythonic please let me know)
In the defining function: `line_edit_object.textChanged.connect(lambda:auto_capital(line_edit_object))`
the function auto\_capital:
```
def auto_capital(line_edit_object):
    edit = line_edit_object
    text = edit.text()
    edit.setText(text.title())  # setText() writes the value back; text() only reads it
```
this shall fix every issue. Feel free to make it more pythonic. | I am also late but after contemplating on this question I think this is some sort of pythonic way of accomplishing it in PyQt5:
```
class CustomInput(QLineEdit):
    def __init__(self):
        super().__init__()
        self.textChanged.connect(self.text_changed)

    def text_changed(self):
        if self.text().isupper():
            return
        self.setText(self.text().upper())
``` |
42,683,602 | I am writing a new Python application that I intend to distribute to several colleagues. Instead of my normal carefree attitude of just having everything self contained and run inside a folder in my home directory, this time I would like to broaden my horizon and actually try to utilize the Linux directory structure as it was intended (at least somewhat). Can you please read my breakdown below and comment and or make recommendations if this is not correct.
Lets call the application "narf"
```
/usr/narf - Install location for the actual python file(s).
/usr/bin/narf - Either a softlink to the main python file above or use this location instead.
/etc/narf - Any configuration files for app narf.
/var/log/narf - Any log files for app narf.
/usr/lib - Any required libraries for app narf.
/run/narf - Any persistent (across reboot), but still temp files for app narf.
/tmp/narf - Very temp files for app narf that go away with reboot
```
I assume I should stick to using /usr/X (for example /usr/bin instead of just /bin) since my application is not system critical and a mere addon.
I currently use Ubuntu 16 LTS, however part of this is intended as a way to try to standardize my app for any popular Linux distro.
Thanks for the help.
**\* UPDATE \***
I think I see the answer to at least part of my question. Looking in /usr, I now see that it is a pretty barebones directory and almost akin to user level root directory (ie has bin, lib, local, sbin, etc. but thats pretty much all). This leads me to believe my application should absolutely NOT live in /usr, and ONLY in /usr/bin. | 2017/03/08 | [
"https://Stackoverflow.com/questions/42683602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2272450/"
] | You could use the `map()` feature of the stream to convert each `User` instance in your list to a `UserWithAge` instance.
```
List<User> userList = ... // your list
List<UserWithAge> usersWithAgeList = userList.stream()
.map(user -> {
// create UserWithAge instance and copy user name
UserWithAge userWithAge = new UserWithAge();
userWithAge.setName(user.getName());
userWithAge.setAge(27);
return userWithAge;
})
.collect(Collectors.toList()); // return the UserWithAge's as a list
``` | While you could do this, you should not do it like this:
```
List<UserWithAge> userWithAgeList = new ArrayList<UserWithAge>();
userList.stream().forEach(user -> {
UserWithAge userWithAge = new UserWithAge();
userWithAge.setName(user.getName());
userWithAge.setAge(27);
userWithAgeList.add(userWithAge);
});
``` |
42,683,602 | I am writing a new Python application that I intend to distribute to several colleagues. Instead of my normal carefree attitude of just having everything self contained and run inside a folder in my home directory, this time I would like to broaden my horizon and actually try to utilize the Linux directory structure as it was intended (at least somewhat). Can you please read my breakdown below and comment and or make recommendations if this is not correct.
Lets call the application "narf"
```
/usr/narf - Install location for the actual python file(s).
/usr/bin/narf - Either a softlink to the main python file above or use this location instead.
/etc/narf - Any configuration files for app narf.
/var/log/narf - Any log files for app narf.
/usr/lib - Any required libraries for app narf.
/run/narf - Any persistent (across reboot), but still temp files for app narf.
/tmp/narf - Very temp files for app narf that go away with reboot
```
I assume I should stick to using /usr/X (for example /usr/bin instead of just /bin) since my application is not system critical and a mere addon.
I currently use Ubuntu 16 LTS, however part of this is intended as a way to try to standardize my app for any popular Linux distro.
Thanks for the help.
**\* UPDATE \***
I think I see the answer to at least part of my question. Looking in /usr, I now see that it is a pretty barebones directory and almost akin to user level root directory (ie has bin, lib, local, sbin, etc. but thats pretty much all). This leads me to believe my application should absolutely NOT live in /usr, and ONLY in /usr/bin. | 2017/03/08 | [
"https://Stackoverflow.com/questions/42683602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2272450/"
] | You could use the `map()` feature of the stream to convert each `User` instance in your list to a `UserWithAge` instance.
```
List<User> userList = ... // your list
List<UserWithAge> usersWithAgeList = userList.stream()
.map(user -> {
// create UserWithAge instance and copy user name
UserWithAge userWithAge = new UserWithAge();
userWithAge.setName(user.getName());
userWithAge.setAge(27);
return userWithAge;
})
.collect(Collectors.toList()); // return the UserWithAge's as a list
``` | ```
public class ListIteratorExp {
public static void main(String[] args) {
List<Person> list = new ArrayList<>();
Person p1 = new Person();
p1.setName("foo");
Person p2 = new Person();
p2.setName("bee");
list.add(p1);
list.add(p2);
list.stream().forEach(p -> {
String name = p.getName();
System.out.println(name);
});
}
}
```
```
class Person{
private String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
```
```
output:-
foo
bee
``` |
42,683,602 | I am writing a new Python application that I intend to distribute to several colleagues. Instead of my normal carefree attitude of just having everything self contained and run inside a folder in my home directory, this time I would like to broaden my horizon and actually try to utilize the Linux directory structure as it was intended (at least somewhat). Can you please read my breakdown below and comment and or make recommendations if this is not correct.
Lets call the application "narf"
```
/usr/narf - Install location for the actual python file(s).
/usr/bin/narf - Either a softlink to the main python file above or use this location instead.
/etc/narf - Any configuration files for app narf.
/var/log/narf - Any log files for app narf.
/usr/lib - Any required libraries for app narf.
/run/narf - Any persistent (across reboot), but still temp files for app narf.
/tmp/narf - Very temp files for app narf that go away with reboot
```
I assume I should stick to using /usr/X (for example /usr/bin instead of just /bin) since my application is not system critical and a mere addon.
I currently use Ubuntu 16 LTS, however part of this is intended as a way to try to standardize my app for any popular Linux distro.
Thanks for the help.
**\* UPDATE \***
I think I see the answer to at least part of my question. Looking in /usr, I now see that it is a pretty barebones directory and almost akin to user level root directory (ie has bin, lib, local, sbin, etc. but thats pretty much all). This leads me to believe my application should absolutely NOT live in /usr, and ONLY in /usr/bin. | 2017/03/08 | [
"https://Stackoverflow.com/questions/42683602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2272450/"
] | While you could do this, you should not do it like this:
```
List<UserWithAge> userWithAgeList = new ArrayList<UserWithAge>();
userList.stream().forEach(user -> {
UserWithAge userWithAge = new UserWithAge();
userWithAge.setName(user.getName());
userWithAge.setAge(27);
userWithAgeList.add(userWithAge);
});
``` | ```
public class ListIteratorExp {
public static void main(String[] args) {
List<Person> list = new ArrayList<>();
Person p1 = new Person();
p1.setName("foo");
Person p2 = new Person();
p2.setName("bee");
list.add(p1);
list.add(p2);
list.stream().forEach(p -> {
String name = p.getName();
System.out.println(name);
});
}
}
```
```
class Person{
private String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
```
```
output:-
foo
bee
``` |
70,600,154 | How can I implement a selection based on selecting the first 3n+1 elements from a tag in it's path? For example, let's say I have the following xpath:
```
//div[@class='ResultsSectionContainer-sc-gdhf14-0 kteggz']/div[@class='Wrapper-sc-11673k2-0 gIBPSk']//div/div/a
```
Taken from this url:
```
https://www.jobsite.co.uk/jobs?page=2&action=paging_next.html
```
Which selects the @href from each `a` tag. This website is dynamic, so I would need something that can generalise the path. However, for every three tags, two I do not need. Hence, I require all those hrefs belonging to 3n+1 selections when n = 0,1, 2, ...
Is there a way to implement this?
For example, the total number of hrefs I can select is 75, therefore I'm looking for 25 elements as 75/3 = 25, as we divide by the leading coefficient.
Can a selection like this be done in python? Otherwise, given a general selection:
```
a*n+c
```
How can I implement a selection like this when selecting for xpaths?
Although, if an easier alternative is available for selecting those hrefs from the url, that would also be appreciated, as I can only think of this way. Though, I presume this method would generalise better over different urls.
If there's a CSS-selector version, that would also be appreciated! | 2022/01/05 | [
"https://Stackoverflow.com/questions/70600154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15675231/"
] | The XPath filter predicate `[position() mod 3 = 1]` selects all elements whose 1-based position is 3n+1 for some integer n. | All you need here is to use a **correct** locator.
I guess you are trying to get all the job links?
If so, instead of this
`//div[@class='ResultsSectionContainer-sc-gdhf14-0 kteggz']/div[@class='Wrapper-sc-11673k2-0 gIBPSk']//div/div/a`
very long, complex and fragile XPath you can use this XPath:
```py
//a[@data-at='job-item-title']
```
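Putting this locator together with the `position() mod 3 = 1` predicate from the previous answer, a minimal hedged sketch (Selenium 4 and a local chromedriver are assumed; the URL comes from the question):
```
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes chromedriver is on PATH
driver.get("https://www.jobsite.co.uk/jobs?page=2&action=paging_next.html")

# Parenthesising the node-set makes position() count over the whole result,
# so this keeps the 1st, 4th, 7th, ... matched link (the 3n+1 positions).
links = driver.find_elements(
    By.XPATH, "(//a[@data-at='job-item-title'])[position() mod 3 = 1]"
)
hrefs = [a.get_attribute("href") for a in links]
print(len(hrefs))  # roughly one third of all matched links

driver.quit()
```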
In case you still need to select some specific elements matching some locator from that list, you can collect all the elements into a list with `driver.find_elements_by_whatever` and then iterate over that list in a loop with the condition you need. |
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | Replace call with check\_output.
```
from subprocess import check_output
a = check_output(["lsb_release", "-si"])
``` | You can also try subprocess.check\_output.
Based on docs: "Run command with arguments and return its output as a byte string." Docs: <https://docs.python.org/2/library/subprocess.html>
Code:
```
a = subprocess.check_output(["lsb_release", "-si"])
```
In my case, output was:
```
'Ubuntu\n'
``` |
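Both answers above return bytes on Python 3 (for example `b'Ubuntu\n'`); a small hedged follow-up to turn that into the plain distribution name the question asks for:
```
import subprocess

raw = subprocess.check_output(["lsb_release", "-si"])  # bytes, e.g. b'Ubuntu\n'
distro_name = raw.decode("utf-8").strip()              # 'Ubuntu'
print(distro_name)
```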
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | The 'a' is just the exit status, try:
```
from subprocess import Popen, PIPE, STDOUT
cmd = "lsb_release -si"
p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = p.stdout.read()
``` | As you can see from the link I linked to in the 2nd answer : <https://github.com/easybuilders/easybuild/wiki/OS_flavor_name_version>
you notice that platform.dist is a way better method of determining the current Linux platform; however, it is deprecated since Python 3.5 and gone in 3.8. It seems like there hardly is a good answer for this.
However, weird OSes will sometimes still not define this, and there is also absolutely no guarantee that lsb_release will be available on a system...
```
dist: ('redhat', '5.8', 'Final')
dist: ('debian', '4.0', '')
dist: ('centos', '6.3', 'Final')
dist: ('fedora', '16', 'Verne')
dist: ('SuSE', '11.3', 'x86_64')
dist: ('Ubuntu', '12.10', 'quantal')
``` |
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | The 'a' is just the exit status, try:
```
from subprocess import Popen, PIPE, STDOUT
cmd = "lsb_release -si"
p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = p.stdout.read()
``` | What about `open("/etc/issue","r").read()`? |
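A hedged follow-up to the `/etc/issue` suggestion above (the file is assumed to have its usual form, e.g. `Ubuntu 16.04.3 LTS \n \l`, so the first token is the bare distribution name):
```
# Keep only the first whitespace-separated token of /etc/issue.
with open("/etc/issue") as fh:
    tokens = fh.readline().split()
print(tokens[0] if tokens else "unknown")
```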
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | Replace call with check\_output.
```
from subprocess import check_output
a = check_output(["lsb_release", "-si"])
``` | As you can see from the link I linked to in the 2nd answer : <https://github.com/easybuilders/easybuild/wiki/OS_flavor_name_version>
you notice that platform.dist is a way better method of determining the current Linux platform; however, it is deprecated since Python 3.5 and gone in 3.8. It seems like there hardly is a good answer for this.
However, weird OSes will sometimes still not define this, and there is also absolutely no guarantee that lsb_release will be available on a system...
```
dist: ('redhat', '5.8', 'Final')
dist: ('debian', '4.0', '')
dist: ('centos', '6.3', 'Final')
dist: ('fedora', '16', 'Verne')
dist: ('SuSE', '11.3', 'x86_64')
dist: ('Ubuntu', '12.10', 'quantal')
``` |
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | What about `open("/etc/issue","r").read()`? | You can also try subprocess.check\_output.
Based on docs: "Run command with arguments and return its output as a byte string." Docs: <https://docs.python.org/2/library/subprocess.html>
Code:
```
a = subprocess.check_output(["lsb_release", "-si"])
```
In my case, output was:
```
'Ubuntu\n'
``` |
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | ```
In [24]: open('/etc/lsb-release').readline().strip().split('=')[-1]
Out[24]: 'LinuxMint'
``` | What about `open("/etc/issue","r").read()`? |
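The `/etc/lsb-release` one-liner above only looks at the first line; a hedged sketch that parses the whole file into a dict instead (the usual `DISTRIB_ID=...` layout is assumed):
```
# Parse every KEY=VALUE pair from /etc/lsb-release instead of relying on line order.
info = {}
with open("/etc/lsb-release") as fh:
    for line in fh:
        key, sep, value = line.strip().partition("=")
        if sep:
            info[key] = value.strip('"')
print(info.get("DISTRIB_ID"))  # e.g. 'Ubuntu' or 'LinuxMint'
```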
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | Replace call with check\_output.
```
from subprocess import check_output
a = check_output(["lsb_release", "-si"])
``` | What about `open("/etc/issue","r").read()`? |
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | ```
In [24]: open('/etc/lsb-release').readline().strip().split('=')[-1]
Out[24]: 'LinuxMint'
``` | You can also try subprocess.check\_output.
Based on docs: "Run command with arguments and return its output as a byte string." Docs: <https://docs.python.org/2/library/subprocess.html>
Code:
```
a = subprocess.check_output(["lsb_release", "-si"])
```
In my case, output was:
```
'Ubuntu\n'
``` |
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | What about `open("/etc/issue","r").read()`? | As you can see from the link I linked to in the 2nd answer : <https://github.com/easybuilders/easybuild/wiki/OS_flavor_name_version>
you notice that platform.dist is a way better method of determining the current Linux platform; however, it is deprecated since Python 3.5 and gone in 3.8. It seems like there hardly is a good answer for this.
However, weird OSes will sometimes still not define this, and there is also absolutely no guarantee that lsb_release will be available on a system...
```
dist: ('redhat', '5.8', 'Final')
dist: ('debian', '4.0', '')
dist: ('centos', '6.3', 'Final')
dist: ('fedora', '16', 'Verne')
dist: ('SuSE', '11.3', 'x86_64')
dist: ('Ubuntu', '12.10', 'quantal')
``` |
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me as, the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question, was `platform.linux_distribution()`, which on Manjaro, returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | ```
In [24]: open('/etc/lsb-release').readline().strip().split('=')[-1]
Out[24]: 'LinuxMint'
``` | As you can see from the link I linked to in the 2nd answer : <https://github.com/easybuilders/easybuild/wiki/OS_flavor_name_version>
you notice that platform.dist is a way better method of determining the current Linux platform; however, it is deprecated since Python 3.5 and gone in 3.8. It seems like there hardly is a good answer for this.
However, weird OSes will sometimes still not define this, and there is also absolutely no guarantee that lsb_release will be available on a system...
```
dist: ('redhat', '5.8', 'Final')
dist: ('debian', '4.0', '')
dist: ('centos', '6.3', 'Final')
dist: ('fedora', '16', 'Verne')
dist: ('SuSE', '11.3', 'x86_64')
dist: ('Ubuntu', '12.10', 'quantal')
``` |
49,105,070 | I'm a python newbie. I created a calculator program that will accept 2 number and a type of operation from user. I already have a working code for this but I want to further simplify the code by exploring and using function.
Here's the portion of the code:
```
def addition(num1,num2):
    sum = num1 + num2
    print('The sum is ', sum)

def subtraction(num1,num2):
    sub = num1 - num2
    print('The difference is ', sub)

def inputNumber():
    num1 = float(input('Enter the first number: '))
    num2 = float(input('Enter the second number: '))
    return num1,num2

print('Enter the corresponding number to perform the operation:\n')
print('1 - addition')
print('2 - subtraction')
print('q - quit')

while True:
    try:
        operation = input('Select operation > ').lower()
        if operation == 'q':
            break
        elif operation == '1':
            addition(inputNumber())
        elif operation == '2':
            subtraction(inputNumber())
        else:
            print('Not valid. Try again.')
    except:
        print('Invalid!')
```
My problem is after entering 2 numbers it doesn't perform the operation. I think the problem is the 2 input values didn't return properly.
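(Side illustration, not part of the original post: `inputNumber()` returns its two values as one tuple, so `addition(inputNumber())` passes a single argument where two are expected. A minimal sketch of that situation:)
```
def addition(num1, num2):
    print('The sum is ', num1 + num2)

pair = (2.0, 3.0)      # what inputNumber() effectively returns
try:
    addition(pair)     # what the loop above does: only one positional argument
except TypeError as err:
    print(err)         # addition() missing 1 required positional argument: 'num2'
addition(*pair)        # unpacking the tuple gives: The sum is  5.0
```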
Thanks | 2018/03/05 | [
"https://Stackoverflow.com/questions/49105070",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9404668/"
] | You should write something like this; the ?? operator means that if the string can't be converted to an Int, 0 is assigned to your variable myInt instead:
```
let myInt:Int = Int("1234") ?? 0
``` | You can do it like this by creating an extension of String:
```
extension String {
var toInt: Int {
return Int(self) ?? 0
}
}
```
and use it like this
```
let preparationTimeInt = preparationTime.toInt
``` |
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say the problem is caused by Anaconda, but I don't even have Anaconda; many say it is a 32-bit vs 64-bit issue, which I tried both ways with no luck; many are about system variables...
what `dll` failed to load? cv2? why?
what I have tried till now:
* changed the Python version: 3.5, 3.7, 2.7 in x86 and x64 (completely uninstalled the previous version, deleted all remaining files in Program Files ... then installed the new one)
* tried other packages like opencv-python and opencv-contrib, and tried installing specific versions from .whl files (in some topic I read about this, so I installed **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl**, but nothing changed)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAYS**
I tried to track down the missing `dll`s with *Dependency Walker* (mentioned in the post below): I downloaded all the missing DLLs and put them in System32. It was a good clue, but it did not solve the problem.
**DAY 4,5**
my next step was to try compiling OpenCV on my machine: I installed Visual Studio 2012 + CMake and, based on the [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html), compiled two versions, x64 and x86. Compiling had a lot of problems of its own (like some missing C libraries and the need to install the *Visual C++ Redistributable for Visual Studio 2012* ...), and I resolved all of them. I ended up with two successfully compiled versions, but both have the same missing-`dll` problem, and the compile process is so slow that it took two days from me. So, having failed on this mission :/ I searched for a new way...
**DAY 6**
my next step was to try to run OpenCV on my machine with Docker (I tried, but Docker for Windows only installs on Windows 10 and Windows Server 2016), so I searched for an alternative. I found Vagrant, which works like Docker but with a VM (VirtualBox). Today my time was spent on these articles; it's amazing, the concept of Docker and Vagrant is so attractive, and I played with it as if I had a new puzzle :)
the first error shows up ... and limitless errors go on :
**Day 7**
Vagrant tests failed too, because my Windows Server 2012 is itself a virtual machine, and running VirtualBox inside another virtual machine is impossible!
so I needed new clues:
**Day 8-9**
I tried to install Vagrant with VMware. I downloaded VMware; unfortunately the Vagrant VMware plugin is not free (79$), so I tried to work with [pythonanywhere.com](http://pythonanywhere.com), which *'gives you access to machines with a full Python environment already installed.'* In the free account you can't open URLs with urllib, and there are more limits...
so I want to try to install Ubuntu in VMware myself... (results added here after that work) it's impossible, for the same reason as VirtualBox | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | I have faced a similar issue on Windows Server 2012 R2. After a lot of digging I found that mfplat.dll was missing, which is related to the Windows Media service.
Hence you have to manually install the feature so that you get the DLLs related to Windows Media:
1. Turn windows features on or off
2. Skip the roles screen and directly go to Feature screen
3. Select "Desktop Experience" under "User Interfaces and Infrastructure"
After this, all DLLs required by OpenCV should be available. | I had the same problem on Windows Server 2012 R2 x64. I was creating an executable file with PyInstaller and got an error at runtime:
```
ImportError: DLL load failed: The specified module could not be found.
```
After installing the "Visual C++ Redistributable" 2015 and enabling the "Media Foundation" feature, my problem was resolved.
There is more information in the documentation: [OpenCV Documentation](https://pypi.org/project/opencv-python/) |
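Once the missing Windows components are in place, a quick hedged sanity check that the import now resolves:
```
import cv2
print(cv2.__version__)  # prints the OpenCV version if the DLLs load correctly
```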
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say the problem is caused by Anaconda, but I don't even have Anaconda; many say it is a 32-bit vs 64-bit issue, which I tried both ways with no luck; many are about system variables...
what `dll` failed to load? cv2? why?
what I have tried till now:
* changed the Python version: 3.5, 3.7, 2.7 in x86 and x64 (completely uninstalled the previous version, deleted all remaining files in Program Files ... then installed the new one)
* tried other packages like opencv-python and opencv-contrib, and tried installing specific versions from .whl files (in some topic I read about this, so I installed **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl**, but nothing changed)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I try to manage missing `dll` base on *dependency walker* that mention in bellow post, download all `dll` and put them in system 32, its a good clue but not solve the problem.
**DAY 4,5**
my next step , trying compile OpenCV in my machine , install visual studio 2012 + cmake and base on [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html) compile 2 version , 64x and 32 , compiling have a lot of problem itself (like missing some lib from c lang and need to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolve all of them ,then I have 2 success compiled version but both have the same problem with missing `dll`, the compile process is so slow and give 2 day from me. so if fail on this mission :/ I search for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too because my windows server 2012 is a vb machine (visual server) and run a visual box in another visual box impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | I had the same error (although I compiled OpenCV myself), in my case there were some DLL dependencies missing. You can check that with the program [Dependency Walker](http://dependencywalker.com/).
Download Dependency Walker and run it, and open the file *cv2.pyd* with Dependency Walker, it should be in `C:\Program Files\Python37\lib\site-packages\cv2\` if the pip package installed correctly.
This program then scans for dependencies (which may take a few minutes) and it then shows missing DLL files in red. | All you need is python 3.6.
I've been looking for solution for last 3 days and my problem was solved when i installed python 3.6.7.
After installing python 3.6 you can simply run `pip install opencv-python`.
Source: <https://www.geeksforgeeks.org/setup-opencv-with-pycharm-environment/> |
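The Dependency Walker suggestion above needs the exact path of cv2's compiled extension; a small helper to print it (an editorial sketch, not from the answer — it looks the site-packages path up rather than hard-coding it):

```
# Locate the compiled cv2 extension (*.pyd) so it can be opened in Dependency Walker.
import glob
import os
import sysconfig

site_packages = sysconfig.get_paths()["purelib"]
pyds = glob.glob(os.path.join(site_packages, "cv2", "*.pyd"))
print(pyds if pyds else "no cv2 .pyd found under " + site_packages)
```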
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say about that problem because anaconda, and I don't have anaconda even. and many say that problem 32 or 64 version, I try both and disappointment. many about system variables...
what `dll` failed to load? cv2? why?
what i try till now:
* change python version : 3.5, 3.7, 2.7 in 86x and 64x (uninstall complete last version and delete all remain files in program files ... then install new one)
* try with another packages like : opencv-python, opencv-contrib, try install specific versions with .whl (in some topic i read about this so install **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl** but anything not change)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I try to manage missing `dll` base on *dependency walker* that mention in bellow post, download all `dll` and put them in system 32, its a good clue but not solve the problem.
**DAY 4,5**
my next step , trying compile OpenCV in my machine , install visual studio 2012 + cmake and base on [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html) compile 2 version , 64x and 32 , compiling have a lot of problem itself (like missing some lib from c lang and need to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolve all of them ,then I have 2 success compiled version but both have the same problem with missing `dll`, the compile process is so slow and give 2 day from me. so if fail on this mission :/ I search for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too because my windows server 2012 is a vb machine (visual server) and run a visual box in another visual box impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | What helped me: `pip install opencv-contrib-python` in the Anaconda Prompt (Python 3.7.1, cv2 4.1.1). | I just ran into this problem of cv2 being importable from my conda environment but not through the "same" environment in Jupyter. The error I was getting was that it couldn't load the DLL.
Additionally, I could not get opencv installed through jupyter, even using:
```
import sys
!conda install --yes --prefix {sys.prefix} numpy
```
I checked my path using:
```
import os
os.path
```
in both my shell and jupyter. They were the same(!).
I was finally able to get things working by running the command:
```
conda install nb_conda
```
in my conda environment, as outlined here: <https://github.com/udacity/P1_Facial_Keypoints/issues/13>
This was eye-opening, as it created a new install of Jupyter notebooks associated with my env instead of Anaconda, with five(!) environments to choose from when creating a page. There were three that seemed to be associated with my conda environment. Interestingly, it now only shows the one named environment as an option when creating a new notebook in Jupyter (the one associated with the env), but it successfully imports cv2.
For a good read on why this is happening with jupyter:
<https://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/> |
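A quick diagnostic in the same spirit (my own sketch, not part of the answer): run these lines both in the shell's Python and in a Jupyter cell; if the printed executables differ, the notebook kernel is not running in the conda environment where cv2 was installed.

```
import sys

print(sys.executable)  # the interpreter actually running this code
print(sys.prefix)      # the environment root it belongs to
```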
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say about that problem because anaconda, and I don't have anaconda even. and many say that problem 32 or 64 version, I try both and disappointment. many about system variables...
what `dll` failed to load? cv2? why?
what i try till now:
* change python version : 3.5, 3.7, 2.7 in 86x and 64x (uninstall complete last version and delete all remain files in program files ... then install new one)
* try with another packages like : opencv-python, opencv-contrib, try install specific versions with .whl (in some topic i read about this so install **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl** but anything not change)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I try to manage missing `dll` base on *dependency walker* that mention in bellow post, download all `dll` and put them in system 32, its a good clue but not solve the problem.
**DAY 4,5**
my next step , trying compile OpenCV in my machine , install visual studio 2012 + cmake and base on [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html) compile 2 version , 64x and 32 , compiling have a lot of problem itself (like missing some lib from c lang and need to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolve all of them ,then I have 2 success compiled version but both have the same problem with missing `dll`, the compile process is so slow and give 2 day from me. so if fail on this mission :/ I search for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too because my windows server 2012 is a vb machine (visual server) and run a visual box in another visual box impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | I had the same error (although I compiled OpenCV myself), in my case there were some DLL dependencies missing. You can check that with the program [Dependency Walker](http://dependencywalker.com/).
Download Dependency Walker and run it, and open the file *cv2.pyd* with Dependency Walker, it should be in `C:\Program Files\Python37\lib\site-packages\cv2\` if the pip package installed correctly.
This program then scans for dependencies (which may take a few minutes) and it then shows missing DLL files in red. | Installing Python version 3.6 and then installing `opencv` with the command:
`pip install opencv-python==3.3.0.9` resolved this issue for me |
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say about that problem because anaconda, and I don't have anaconda even. and many say that problem 32 or 64 version, I try both and disappointment. many about system variables...
what `dll` failed to load? cv2? why?
what i try till now:
* change python version : 3.5, 3.7, 2.7 in 86x and 64x (uninstall complete last version and delete all remain files in program files ... then install new one)
* try with another packages like : opencv-python, opencv-contrib, try install specific versions with .whl (in some topic i read about this so install **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl** but anything not change)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I try to manage missing `dll` base on *dependency walker* that mention in bellow post, download all `dll` and put them in system 32, its a good clue but not solve the problem.
**DAY 4,5**
my next step , trying compile OpenCV in my machine , install visual studio 2012 + cmake and base on [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html) compile 2 version , 64x and 32 , compiling have a lot of problem itself (like missing some lib from c lang and need to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolve all of them ,then I have 2 success compiled version but both have the same problem with missing `dll`, the compile process is so slow and give 2 day from me. so if fail on this mission :/ I search for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too because my windows server 2012 is a vb machine (visual server) and run a visual box in another visual box impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | I was suffering from the same problem ("DLL load failed"); after reading tons of answers and articles I found a solution.
I don't know whether it will work for you or not, but give it a try.
Tools and versions I used: Anaconda 5.3.1, Python 3.7, Windows 10 (64-bit).
Steps I performed:
**step1**: I installed OpenCV 3.4.4 from [here](https://opencv.org/releases.html)
and extracted it to the C drive (you can put it wherever you want).
**step2**: copy the cv2.pyd file from [C:\opencv\build\python\cv2\python-3.7] to [C:\Users\"user-name"\Anaconda3\Lib\site-packages].
**step3**: run *Dependency Walker* to detect which **dll** files are missing;
in my case it showed two DLL files failing to load: (1) OPENCV\_WORLD344.DLL and (2) IESHIMS.DLL.
I don't know about your situation, but you can do the same: find the file names with Dependency Walker, find their paths, and follow step 4.
**step4**: open System Properties -> Advanced -> Environment Variables, edit Path and add C:\opencv\build\x64\vc14\bin (in my case this was the place where OPENCV\_WORLD344.DLL is stored), and after doing this, import cv2 in Python (a sketch of this follows below).
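A minimal sketch of step 4 done from Python itself (the path is the one assumed in the answer above; `os.add_dll_directory` exists only on Python 3.8+ on Windows, so older versions fall back to prepending PATH):

```
import os

dll_dir = r"C:\opencv\build\x64\vc14\bin"   # folder holding opencv_world344.dll
if hasattr(os, "add_dll_directory"):
    os.add_dll_directory(dll_dir)           # Python 3.8+ on Windows
else:
    os.environ["PATH"] = dll_dir + os.pathsep + os.environ["PATH"]

import cv2
print(cv2.__version__)
```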
If my answer does not work then add a comment. | All you need is python 3.6.
I've been looking for solution for last 3 days and my problem was solved when i installed python 3.6.7.
After installing python 3.6 you can simply run `pip install opencv-python`.
Source: <https://www.geeksforgeeks.org/setup-opencv-with-pycharm-environment/> |
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say about that problem because anaconda, and I don't have anaconda even. and many say that problem 32 or 64 version, I try both and disappointment. many about system variables...
what `dll` failed to load? cv2? why?
what i try till now:
* change python version : 3.5, 3.7, 2.7 in 86x and 64x (uninstall complete last version and delete all remain files in program files ... then install new one)
* try with another packages like : opencv-python, opencv-contrib, try install specific versions with .whl (in some topic i read about this so install **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl** but anything not change)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I try to manage missing `dll` base on *dependency walker* that mention in bellow post, download all `dll` and put them in system 32, its a good clue but not solve the problem.
**DAY 4,5**
my next step , trying compile OpenCV in my machine , install visual studio 2012 + cmake and base on [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html) compile 2 version , 64x and 32 , compiling have a lot of problem itself (like missing some lib from c lang and need to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolve all of them ,then I have 2 success compiled version but both have the same problem with missing `dll`, the compile process is so slow and give 2 day from me. so if fail on this mission :/ I search for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too because my windows server 2012 is a vb machine (visual server) and run a visual box in another visual box impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | I was having this problem on Windows. I resolved this error by checking instructions [here](https://pypi.org/project/opencv-python).
Q: Import fails on Windows: ImportError: DLL load failed: The specified module could not be found?
A: If the import fails on Windows, make sure you have
[Visual C++ redistributable 2015](https://www.microsoft.com/en-us/download/details.aspx?id=48145) installed.
If you are using older Windows version than Windows 10 and latest system updates are not installed, Universal C Runtime might be also required.
Windows N and KN editions do not include Media Feature Pack which is required by OpenCV. If you are using Windows N or KN edition, please install also [Windows Media Feature Pack](https://support.microsoft.com/en-us/help/3145500/media-feature-pack-list-for-windows-n-editions) | It helps me `pip install opencv-contrib-python` Anaconda Prompt, python 3.7.1 cv2 4.1.1 |
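Since the Media Feature Pack is the piece most often missing on server and N/KN editions, a direct way to check for it (an editorial sketch, Windows-only, not part of the answer):

```
import ctypes

try:
    ctypes.WinDLL("mfplat.dll")   # Media Foundation platform DLL
    print("Media Foundation DLL loads fine")
except OSError as exc:
    print("mfplat.dll could not be loaded:", exc)
```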
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say about that problem because anaconda, and I don't have anaconda even. and many say that problem 32 or 64 version, I try both and disappointment. many about system variables...
what `dll` failed to load? cv2? why?
what i try till now:
* change python version : 3.5, 3.7, 2.7 in 86x and 64x (uninstall complete last version and delete all remain files in program files ... then install new one)
* try with another packages like : opencv-python, opencv-contrib, try install specific versions with .whl (in some topic i read about this so install **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl** but anything not change)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I try to manage missing `dll` base on *dependency walker* that mention in bellow post, download all `dll` and put them in system 32, its a good clue but not solve the problem.
**DAY 4,5**
my next step , trying compile OpenCV in my machine , install visual studio 2012 + cmake and base on [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html) compile 2 version , 64x and 32 , compiling have a lot of problem itself (like missing some lib from c lang and need to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolve all of them ,then I have 2 success compiled version but both have the same problem with missing `dll`, the compile process is so slow and give 2 day from me. so if fail on this mission :/ I search for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too because my windows server 2012 is a vb machine (visual server) and run a visual box in another visual box impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | I was having the same issue. I resolved this error by downgrading open cv.
`pip install opencv-python==3.3.0.9` | I was having this problem on a fresh Windows Server 2008 R2 install and it took almost a day to resolve; after many hit-and-trial attempts I finally found the solution somewhere on the internet (not Stack Overflow):
* Installed the Windows Media Feature Pack for 2008 R2, then installed, via Server Manager -> Features -> Add Features:
* Desktop Experience
* Ink and Handwriting Services
Besides that, the checklist includes:
* Visual C++ Redistributable 2015
* Universal C Runtime
Hopefully this will help save time |
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say about that problem because anaconda, and I don't have anaconda even. and many say that problem 32 or 64 version, I try both and disappointment. many about system variables...
what `dll` failed to load? cv2? why?
what i try till now:
* change python version : 3.5, 3.7, 2.7 in 86x and 64x (uninstall complete last version and delete all remain files in program files ... then install new one)
* try with another packages like : opencv-python, opencv-contrib, try install specific versions with .whl (in some topic i read about this so install **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl** but anything not change)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I try to manage missing `dll` base on *dependency walker* that mention in bellow post, download all `dll` and put them in system 32, its a good clue but not solve the problem.
**DAY 4,5**
my next step , trying compile OpenCV in my machine , install visual studio 2012 + cmake and base on [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html) compile 2 version , 64x and 32 , compiling have a lot of problem itself (like missing some lib from c lang and need to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolve all of them ,then I have 2 success compiled version but both have the same problem with missing `dll`, the compile process is so slow and give 2 day from me. so if fail on this mission :/ I search for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too because my windows server 2012 is a vb machine (visual server) and run a visual box in another visual box impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | I was having this problem on Windows. I resolved this error by checking instructions [here](https://pypi.org/project/opencv-python).
Q: Import fails on Windows: ImportError: DLL load failed: The specified module could not be found?
A: If the import fails on Windows, make sure you have
[Visual C++ redistributable 2015](https://www.microsoft.com/en-us/download/details.aspx?id=48145) installed.
If you are using older Windows version than Windows 10 and latest system updates are not installed, Universal C Runtime might be also required.
Windows N and KN editions do not include Media Feature Pack which is required by OpenCV. If you are using Windows N or KN edition, please install also [Windows Media Feature Pack](https://support.microsoft.com/en-us/help/3145500/media-feature-pack-list-for-windows-n-editions) | **its worked well for me.\***
The answer is: you need to put the cv2.pyd file into your virtual environment.
It needs to go under two folders of the env:
* **first**, under the DLLs folder, and
* **second**, under Lib/site-packages.
To get cv2.pyd, download from this link <https://sourceforge.net/projects/opencvlibrary/> and then extract the downloaded file.
You will get an opencv folder; go inside it, e.g. Downloads\opencv\build\python\2.7, and one folder deeper depending on your 32- or 64-bit Windows version.
Copy cv2.pyd. Important: *after copying the cv2.pyd file into your env, you need to rename cv2.pyd to \_cv2.pyd.* |
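To find the two destination folders this answer refers to for the *active* environment, a small helper (an editorial sketch; the folder names follow the standard CPython-on-Windows layout):

```
import os
import sys
import sysconfig

print(os.path.join(sys.prefix, "DLLs"))    # the environment's DLLs folder
print(sysconfig.get_paths()["purelib"])    # the environment's Lib\site-packages
```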
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say about that problem because anaconda, and I don't have anaconda even. and many say that problem 32 or 64 version, I try both and disappointment. many about system variables...
what `dll` failed to load? cv2? why?
what i try till now:
* change python version : 3.5, 3.7, 2.7 in 86x and 64x (uninstall complete last version and delete all remain files in program files ... then install new one)
* try with another packages like : opencv-python, opencv-contrib, try install specific versions with .whl (in some topic i read about this so install **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl** but anything not change)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I try to manage missing `dll` base on *dependency walker* that mention in bellow post, download all `dll` and put them in system 32, its a good clue but not solve the problem.
**DAY 4,5**
my next step , trying compile OpenCV in my machine , install visual studio 2012 + cmake and base on [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html) compile 2 version , 64x and 32 , compiling have a lot of problem itself (like missing some lib from c lang and need to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolve all of them ,then I have 2 success compiled version but both have the same problem with missing `dll`, the compile process is so slow and give 2 day from me. so if fail on this mission :/ I search for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too because my windows server 2012 is a vb machine (visual server) and run a visual box in another visual box impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | After 15 days of brainstorming, this solution worked for me, and I am sure it will work for you too. I installed Anaconda to use OpenCV 3.1.0. I followed these steps:
1) I have installed anaconda-5.3 64-bit installer (614.3 MB) which uses python 3.7. You can download anaconda from link: <https://www.anaconda.com/download/#windows>
2) After installation of anaconda, open anaconda prompt by typing "anaconda prompt" on windows start button. Open with "Run as administrator".
3) You have to create a new environment to install and use the OpenCV module. Write the following commands to create and activate the new environment:
`>conda create --name myNewEnv python=3.5.0`
`>activate myNewEnv`
"myNewEnv" is the name of the new environment.
4) Now you need to install the prerequisite for OpenCV, which is numpy, and then install opencv3. Make sure you are connected to the internet before executing the following commands:
`>conda install numpy`
`>conda install anaconda-client`
`>conda install --channel https://conda.anaconda.org/menpo opencv3`
5)OpenCV3 has been installed. Now, verify installation by executing following commands:
`(myNewEnv) C:\Users\Nilesh> python`
`>>>import cv2`
`>>>cv2.__version__`
Follow the steps given below when you want to use OpenCV3 again, from the second time onwards:
1) Open the "Anaconda command prompt" from the Start menu with "Run as administrator" rights
2) Type the command >activate your\_new\_environment\_name
3) Assume you want to run a file located on the Desktop; for that, write the following command:
`> cd C:\Users\Nilesh\Desktop`
4) Now your working directory is Desktop.
5) Type following command to run any code (for example test.py).
`> python test.py`
Note: Here, python means version-3.
I hope this solution will work for you. | I was having the same issue. I resolved this error by downgrading open cv.
`pip install opencv-python==3.3.0.9` |
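After pinning the older build, a one-line sanity check that the pinned version is the one actually being imported (my sketch; the version string is taken from the answer above):

```
import cv2

# Expect the pinned 3.3.0.x build; anything else means another install shadows it.
print("cv2 version:", cv2.__version__)
assert cv2.__version__.startswith("3.3.0"), cv2.__version__
```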
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
some of them say about that problem because anaconda, and I don't have anaconda even. and many say that problem 32 or 64 version, I try both and disappointment. many about system variables...
what `dll` failed to load? cv2? why?
what i try till now:
* change python version : 3.5, 3.7, 2.7 in 86x and 64x (uninstall complete last version and delete all remain files in program files ... then install new one)
* try with another packages like : opencv-python, opencv-contrib, try install specific versions with .whl (in some topic i read about this so install **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl** but anything not change)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I try to manage missing `dll` base on *dependency walker* that mention in bellow post, download all `dll` and put them in system 32, its a good clue but not solve the problem.
**DAY 4,5**
my next step , trying compile OpenCV in my machine , install visual studio 2012 + cmake and base on [document](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html) compile 2 version , 64x and 32 , compiling have a lot of problem itself (like missing some lib from c lang and need to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolve all of them ,then I have 2 success compiled version but both have the same problem with missing `dll`, the compile process is so slow and give 2 day from me. so if fail on this mission :/ I search for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too because my windows server 2012 is a vb machine (visual server) and run a visual box in another visual box impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | Can you try uninstalling opencv and installing it using a wheel file instead? [Here](https://www.lfd.uci.edu/~gohlke/pythonlibs/) is a website that has many versions of OpenCV compiled for windows, search for the one you need and simply install it with `pip` command.
So if you have Python 3.6 (64 Bit) and wish to install OpenCV 3.4.3 then download the wheel file:
`opencv_python‑3.4.3‑cp36‑cp36m‑win_amd64.whl` | It helps me `pip install opencv-contrib-python` Anaconda Prompt, python 3.7.1 cv2 4.1.1 |
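When picking a wheel from that site, the cpXX and win32/win_amd64 parts of the filename have to match the interpreter; a small check (a sketch I am adding, not from the answer):

```
import platform
import struct
import sys

print("python:", ".".join(map(str, sys.version_info[:3])))  # 3.6.x -> cp36 wheels
print("bits:", struct.calcsize("P") * 8)                     # 32 vs 64-bit build
print("machine:", platform.machine())                        # AMD64 -> win_amd64
```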
24,944,627 | I'm using the Canopy distribution and when I try to install pymatbridge using 'pip install pymatbridge' I get an error saying that pymatbridge does not work on win32. I've got the 64-bit version of Canopy so I don't understand what that means.
<http://arokem.github.io/python-matlab-bridge/>
```
Downloading/unpacking pymatbridge
Running setup.py (path:C:\Users\user\AppData\Local\Enthought\Canopy\User\build\pymatbridge\setup.py) egg_info for package pymatbridge
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "C:\Users\user\AppData\Local\Enthought\Canopy\User\build\pymatbridge\setup.py", line 25, in <module>
raise ValueError("pymatbridge does not work on win32")
ValueError: pymatbridge does not work on win32
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "C:\Users\user\AppData\Local\Enthought\Canopy\User\build\pymatbridge\setup.py", line 25, in <module>
raise ValueError("pymatbridge does not work on win32")
ValueError: pymatbridge does not work on win32
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in C:\Users\user\AppData\Local\Enthought\Canopy\User\build\pymatbridge
Storing debug log for failure in C:\Users\user\pip\pip.log
``` | 2014/07/24 | [
"https://Stackoverflow.com/questions/24944627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334059/"
] | I am the developer of this software. This should work now (since version 0.4), but I don't have a Windows machine to test this one. I have had help from Windows users in developing the patches to make this Windows-functional. Though, I am not always able to solve issues, I am happy to receive suggestions/complaints/praise through our github page:
<https://github.com/arokem/python-matlab-bridge/issues> | "Win32" in this context means Windows 32- or 64-bit, as distinct from Cygwin.
The developer of pymatbridge introduced this explicit restriction in May 2014:
<https://github.com/arokem/python-matlab-bridge/commit/a6fd3cc3adf5ef2b5e3d9b83a8050d783c76d48f>
I don't know why. Perhaps, like many small developers, he found the idiosyncrasies of Windows frustrating, and did not have the time to handle them, at least for now. ***[EDIT: See package developer comment below; this restriction was not intentional and has been removed.]***
Is it an option for you to run Matlab in a Linux VM on your Windows machine, or directly on Linux or Mac, in order to use this package? |
60,119,580 | I am building an HR app using Python with the Django framework, and I am having an issue calculating the retirement year of an employee: for example, when an employee enters his/her date of birth, the system should calculate his/her retirement year, or how many years remain until retirement. Staff retire at 60 years.
I am getting this error:
```
TypeError at /staffprofile/profile_entry/
unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
Request Method: POST
Request URL: http://0.0.0.0:8080/staffprofile/profile_entry/
Django Version: 1.8
Exception Type: TypeError
Exception Value:
unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
Exception Location: /home/bakceesay/djangoapps/NAO/venv/src/staffprofile/views.py in profile_entry, line 57
```
this is my code in views.py
```
from __future__ import unicode_literals
from django.shortcuts import get_object_or_404, render, redirect
from django.contrib.auth.models import User
from django.http import HttpResponse, HttpResponseRedirect # HttpResponse allows the get_absolute_url to work ## and HttpresponseRedirect redirects page after a process
from .models import *
from .forms import *
from django.contrib import messages
from datetime import datetime, timedelta
def profile_entry(request):
title = 'ADD YOUR PROFILE INFORMATION'
form = ProfileForm(request.POST or None, request.FILES or None)
if form.is_valid():
instance = form.save(commit=False)
date_of_birth = instance.date_of_birth
age_days = (datetime.now().date() - date_of_birth)
age = (age_days/365)
rem_yrs = (60 - int(age))
instance.date_of_retirement = rem_yrs
instance.save()
messages.success(request, 'Successfully Saved')
return redirect('/profile/profile_detail')
context = {
"title": title,
"form": form,
}
return render(request, "profile_entry.html",context)
```
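(Editorial note: the traceback above comes from subtracting a `datetime.datetime` from a `datetime.date`; a minimal sketch of a date-only version of the calculation, assuming the stored value can be coerced to a plain date:)

```
from datetime import date, datetime

def years_to_retirement(date_of_birth, retirement_age=60):
    """Rough sketch: whole years remaining until the retirement age."""
    if isinstance(date_of_birth, datetime):   # a DateTimeField yields a datetime
        date_of_birth = date_of_birth.date()  # drop the time part before subtracting
    age_years = (date.today() - date_of_birth).days // 365
    return retirement_age - age_years

print(years_to_retirement(date(1980, 5, 17)))
```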
models.py
```
[address = models.CharField(max_length=30, blank=True, null=True)
date_of_birth = models.DateTimeField(blank=True, null=True)
email_address = models.CharField(max_length=30, blank=True, null=True)
phone_number = models.CharField(max_length=30, blank=True, null=True)
date_of_hire = models.DateTimeField(blank=True, null=True)
date_of_termination = models.DateField(blank=True, null=True)
date_of_retirement = models.CharField(max_length=30, blank=True, null=True)][1]
``` | 2020/02/07 | [
"https://Stackoverflow.com/questions/60119580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12860292/"
] | Here's a full working example of what you want to achieve:
```
import pandas as pd
import matplotlib.pyplot as plt
import pandas as pd
df_1 = pd.DataFrame({'2010':[10,11,12,13],'2011':[14,18,14,15],'2012':[12,13,14,13]})
df_2 = pd.DataFrame({'2010':[10,11,12,13],'2011':[14,18,14,15],'2012':[12,13,14,13]})
df_3 = pd.DataFrame({'2010':[10,11,12,13],'2011':[14,18,14,15],'2012':[12,13,14,13]})
list_df = [df_1,df_2,df_3]
for i in range(len(list_df)):
ax = plt.subplot(len(list_df[i]),len(list(list_df[i])),i+1)
for j in list_df[i]:
ax.plot(list_df[i].index,list_df[i][j])
```
### Edit:
Given your answer which seems to fully address the issue, I would change a few lines to make it a bit more efficient. Below the code, I will upload some information regarding performance:
Replace `for i in range(df1.shape[1]):` with `for i in range(len(df1.columns)):`
```
leg_text = df1.columns.tolist() #What's the use of this line?
```
Speed comparison between `shape`, `len` and some other methods:
```
ns = np.power(10, np.arange(5))
results = pd.DataFrame(
columns=ns,
index=pd.MultiIndex.from_product(
[['len', 'len(tolist)', 'len(values)', 'shape'],
ns]))
dfs = {(n, m): pd.DataFrame(np.zeros((n, m))) for n in ns for m in ns}
for n, m in dfs.keys():
df = dfs[(n, m)]
results.loc[('len',n),m] = timeit('len(df.columns)', 'from __main__ import df', number=10000)
results.loc[('len(tolist)', n), m] = timeit('len(df.columns.tolist())', 'from __main__ import df', number=10000)
results.loc[('len(values)', n), m] = timeit('len(df.columns.values)', 'from __main__ import df', number=10000)
results.loc[('shape', n), m] = timeit('df.values.shape[1]', 'from __main__ import df', number=10000)
fig, axes = plt.subplots(2, 3, figsize=(9, 6), sharex=True, sharey=True)
for i, (m, col) in enumerate(results.iteritems()):
r, c = i // 3, i % 3
col.unstack(0).plot.bar(ax=axes[r, c], title=m)
```
Output:
[](https://i.stack.imgur.com/mWlvN.png)
```
1 10 100 1000 10000
len 1 0.0038 0.0046 0.0032 0.0037 0.0035
len 10 0.0032 0.0032 0.0032 0.0034 0.0035
len 100 0.0032 0.0052 0.0052 0.0053 0.0035
len 1000 0.0037 0.0036 0.0041 0.0039 0.0043
len 10000 0.0040 0.0038 0.0045 0.0043 0.0123
len(tolist) 1 0.0051 0.0075 0.0175 0.1629 1.6579
len(tolist) 10 0.0051 0.0059 0.0175 0.1588 1.9253
len(tolist) 100 0.0049 0.0097 0.0196 0.1635 1.7422
len(tolist) 1000 0.0053 0.0065 0.0198 0.1831 1.9897
len(tolist) 10000 0.0057 0.0069 0.0218 0.1995 2.2426
len(values) 1 0.0083 0.0097 0.0073 0.0074 0.0074
len(values) 10 0.0073 0.0072 0.0073 0.0107 0.0087
len(values) 100 0.0075 0.0094 0.0109 0.0072 0.0081
len(values) 1000 0.0081 0.0082 0.0081 0.0085 0.0088
len(values) 10000 0.0087 0.0084 0.0103 0.0101 0.0327
shape 1 0.1108 0.0838 0.0789 0.0779 0.0780
shape 10 0.0764 0.0770 0.0771 0.1118 0.0806
shape 100 0.0952 0.0826 0.1013 0.0800 0.0889
shape 1000 0.0881 0.0863 0.0867 0.0938 0.1063
shape 10000 0.0905 0.0999 0.1043 0.1013 0.2384
``` | I figured out the solution for this; I hope it will be helpful to others. Since the input is a list of dataframes, it is easier to do it as follows:
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import cm
from itertools import cycle
df1, df2 = list_of_df[0], list_of_df[1]
colors=cm.tab10(np.linspace(0, 1,len(df1.columns)))
lines = ["-","--","-.",":"]
linecycler = cycle(lines)
leg_text = df1.columns.tolist()
marker = cycle(('+', 'o', '*', 'v','^','<','>'))
fig,(ax1,ax2) = plt.subplots(nrows=2,ncols=1,sharex=True,squeeze=True,figsize=(10,8))
for i in range(df1.shape[1]):
ax1.plot(df1.index, df1.columns[i], next(linecycler), marker = next(marker), data=df1, color=colors[i], linewidth=3)
ax2.plot(df2.index, df2.columns[i], next(linecycler), marker = next(marker),data=df2, color=colors[i], linewidth=3)
plt.tight_layout(rect=[0, 0, 0.85, 1])
plt.gcf().autofmt_xdate()
plt.style.use('ggplot')
plt.xticks(rotation=0)
plt.show()
```
I got my expected output. I may still need to come up with more efficient code, so if anyone has a better idea, please let me know. Thanks |
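A slightly leaner, self-contained variant of the loop above (an editorial sketch built on the same idea: shared x-axis, one dataframe per axis; the data here is made up for illustration):

```
from itertools import cycle

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'A': [1, 2, 3], 'B': [3, 2, 1]})
df2 = pd.DataFrame({'A': [2, 2, 2], 'B': [1, 3, 1]})
colors = plt.cm.tab10(np.linspace(0, 1, len(df1.columns)))
markers = cycle(('+', 'o', '*', 'v'))

fig, axes = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(10, 8))
for ax, df in zip(axes, (df1, df2)):
    for color, col in zip(colors, df.columns):
        ax.plot(df.index, df[col], marker=next(markers), color=color, label=col)
axes[0].legend()
plt.show()
```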
60,119,580 | I am building HR app using python with Django framework, I am having issue to calculation retirement year of an employee, for example if an employee enters his/her date of birth let the system calculate his/her retirement year or how many years remaining to retire. staff retire at 60 years
Am getting this error:
```
TypeError at /staffprofile/profile_entry/
unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
Request Method: POST
Request URL: http://0.0.0.0:8080/staffprofile/profile_entry/
Django Version: 1.8
Exception Type: TypeError
Exception Value:
unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
Exception Location: /home/bakceesay/djangoapps/NAO/venv/src/staffprofile/views.py in profile_entry, line 57
```
this is my code in views.py
```
from __future__ import unicode_literals
from django.shortcuts import get_object_or_404, render, redirect
from django.contrib.auth.models import User
from django.http import HttpResponse, HttpResponseRedirect # HttpResponse allows the get_absolute_url to work ## and HttpresponseRedirect redirects page after a process
from .models import *
from .forms import *
from django.contrib import messages
from datetime import datetime, timedelta
def profile_entry(request):
title = 'ADD YOUR PROFILE INFORMATION'
form = ProfileForm(request.POST or None, request.FILES or None)
if form.is_valid():
instance = form.save(commit=False)
date_of_birth = instance.date_of_birth
age_days = (datetime.now().date() - date_of_birth)
age = (age_days/365)
rem_yrs = (60 - int(age))
instance.date_of_retirement = rem_yrs
instance.save()
messages.success(request, 'Successfully Saved')
return redirect('/profile/profile_detail')
context = {
"title": title,
"form": form,
}
return render(request, "profile_entry.html",context)
```
models.py
```
[address = models.CharField(max_length=30, blank=True, null=True)
date_of_birth = models.DateTimeField(blank=True, null=True)
email_address = models.CharField(max_length=30, blank=True, null=True)
phone_number = models.CharField(max_length=30, blank=True, null=True)
date_of_hire = models.DateTimeField(blank=True, null=True)
date_of_termination = models.DateField(blank=True, null=True)
date_of_retirement = models.CharField(max_length=30, blank=True, null=True)][1]
``` | 2020/02/07 | [
"https://Stackoverflow.com/questions/60119580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12860292/"
] | Here's a full working example of what you want to achieve:
```
import pandas as pd
import matplotlib.pyplot as plt
import pandas as pd
df_1 = pd.DataFrame({'2010':[10,11,12,13],'2011':[14,18,14,15],'2012':[12,13,14,13]})
df_2 = pd.DataFrame({'2010':[10,11,12,13],'2011':[14,18,14,15],'2012':[12,13,14,13]})
df_3 = pd.DataFrame({'2010':[10,11,12,13],'2011':[14,18,14,15],'2012':[12,13,14,13]})
list_df = [df_1,df_2,df_3]
for i in range(len(list_df)):
ax = plt.subplot(len(list_df[i]),len(list(list_df[i])),i+1)
for j in list_df[i]:
ax.plot(list_df[i].index,list_df[i][j])
```
### Edit:
Given your answer which seems to fully address the issue, I would change a few lines to make it a bit more efficient. Below the code, I will upload some information regarding performance:
Replace `for i in range(df1.shape[1]):` with `for i in range(len(df1.columns)):`
```
leg_text = df1.columns.tolist() #What's the use of this line?
```
Speed comparison between `shape`, `len` and some other methods:
```
ns = np.power(10, np.arange(5))
results = pd.DataFrame(
columns=ns,
index=pd.MultiIndex.from_product(
[['len', 'len(tolist)', 'len(values)', 'shape'],
ns]))
dfs = {(n, m): pd.DataFrame(np.zeros((n, m))) for n in ns for m in ns}
for n, m in dfs.keys():
df = dfs[(n, m)]
results.loc[('len',n),m] = timeit('len(df.columns)', 'from __main__ import df', number=10000)
results.loc[('len(tolist)', n), m] = timeit('len(df.columns.tolist())', 'from __main__ import df', number=10000)
results.loc[('len(values)', n), m] = timeit('len(df.columns.values)', 'from __main__ import df', number=10000)
results.loc[('shape', n), m] = timeit('df.values.shape[1]', 'from __main__ import df', number=10000)
fig, axes = plt.subplots(2, 3, figsize=(9, 6), sharex=True, sharey=True)
for i, (m, col) in enumerate(results.iteritems()):
r, c = i // 3, i % 3
col.unstack(0).plot.bar(ax=axes[r, c], title=m)
```
Output:
[](https://i.stack.imgur.com/mWlvN.png)
```
1 10 100 1000 10000
len 1 0.0038 0.0046 0.0032 0.0037 0.0035
len 10 0.0032 0.0032 0.0032 0.0034 0.0035
len 100 0.0032 0.0052 0.0052 0.0053 0.0035
len 1000 0.0037 0.0036 0.0041 0.0039 0.0043
len 10000 0.0040 0.0038 0.0045 0.0043 0.0123
len(tolist) 1 0.0051 0.0075 0.0175 0.1629 1.6579
len(tolist) 10 0.0051 0.0059 0.0175 0.1588 1.9253
len(tolist) 100 0.0049 0.0097 0.0196 0.1635 1.7422
len(tolist) 1000 0.0053 0.0065 0.0198 0.1831 1.9897
len(tolist) 10000 0.0057 0.0069 0.0218 0.1995 2.2426
len(values) 1 0.0083 0.0097 0.0073 0.0074 0.0074
len(values) 10 0.0073 0.0072 0.0073 0.0107 0.0087
len(values) 100 0.0075 0.0094 0.0109 0.0072 0.0081
len(values) 1000 0.0081 0.0082 0.0081 0.0085 0.0088
len(values) 10000 0.0087 0.0084 0.0103 0.0101 0.0327
shape 1 0.1108 0.0838 0.0789 0.0779 0.0780
shape 10 0.0764 0.0770 0.0771 0.1118 0.0806
shape 100 0.0952 0.0826 0.1013 0.0800 0.0889
shape 1000 0.0881 0.0863 0.0867 0.0938 0.1063
shape 10000 0.0905 0.0999 0.1043 0.1013 0.2384
``` | ```
import pandas as pd
import matplotlib.pyplot as plt
df_1 = pd.DataFrame({'A':[15,16,17,20],'B':[21,22,23,24],'C':[25,26,27,28]})
df_2 = pd.DataFrame({'A':[15,16,17,20],'B':[21,22,23,24],'C':[25,26,27,28]})
df_3 = pd.DataFrame({'A':[15,16,17,20],'B':[21,22,23,24],'C':[25,26,27,28]})
list_df = [df_1,df_2,df_3]
for i,j in enumerate(list_df):
    plt.figure(i)
    j.plot(kind = 'line')
```
I find this to be quite an easy way to understand when making subplots. As you can see, it can easily be adapted to match your requirements. |
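If the goal is to draw the frames as subplots of a single figure rather than as separate figures, here is a minimal sketch (an addition to the answer above, assuming the same `list_df`):
```
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=1, ncols=len(list_df), figsize=(12, 4))
for ax, df in zip(axes, list_df):
    df.plot(kind='line', ax=ax)  # each dataframe is drawn on its own axes
plt.tight_layout()
plt.show()
```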
60,119,580 | I am building an HR app using Python with the Django framework. I am having an issue calculating the retirement year of an employee: for example, if an employee enters his/her date of birth, the system should calculate his/her retirement year or how many years remain until retirement. Staff retire at 60 years.
I am getting this error:
```
TypeError at /staffprofile/profile_entry/
unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
Request Method: POST
Request URL: http://0.0.0.0:8080/staffprofile/profile_entry/
Django Version: 1.8
Exception Type: TypeError
Exception Value:
unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
Exception Location: /home/bakceesay/djangoapps/NAO/venv/src/staffprofile/views.py in profile_entry, line 57
```
this is my code in views.py
```
from __future__ import unicode_literals
from django.shortcuts import get_object_or_404, render, redirect
from django.contrib.auth.models import User
from django.http import HttpResponse, HttpResponseRedirect # HttpResponse allows the get_absolute_url to work ## and HttpresponseRedirect redirects page after a process
from .models import *
from .forms import *
from django.contrib import messages
from datetime import datetime, timedelta
def profile_entry(request):
    title = 'ADD YOUR PROFILE INFORMATION'
    form = ProfileForm(request.POST or None, request.FILES or None)
    if form.is_valid():
        instance = form.save(commit=False)
        date_of_birth = instance.date_of_birth
        age_days = (datetime.now().date() - date_of_birth)
        age = (age_days/365)
        rem_yrs = (60 - int(age))
        instance.date_of_retirement = rem_yrs
        instance.save()
        messages.success(request, 'Successfully Saved')
        return redirect('/profile/profile_detail')
    context = {
        "title": title,
        "form": form,
    }
    return render(request, "profile_entry.html",context)
```
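For reference, a minimal sketch of the age arithmetic with matching types (an illustrative fragment, not the code above; it assumes `date_of_birth` comes back as a `datetime` and retirement at 60):
```
from datetime import datetime

def years_to_retirement(date_of_birth, retirement_age=60):
    # normalise to a plain date so both operands have the same type
    if isinstance(date_of_birth, datetime):
        date_of_birth = date_of_birth.date()
    today = datetime.now().date()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day))
    return retirement_age - age
```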
models.py
```
address = models.CharField(max_length=30, blank=True, null=True)
date_of_birth = models.DateTimeField(blank=True, null=True)
email_address = models.CharField(max_length=30, blank=True, null=True)
phone_number = models.CharField(max_length=30, blank=True, null=True)
date_of_hire = models.DateTimeField(blank=True, null=True)
date_of_termination = models.DateField(blank=True, null=True)
date_of_retirement = models.CharField(max_length=30, blank=True, null=True)
``` | 2020/02/07 | [
"https://Stackoverflow.com/questions/60119580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12860292/"
] | I figured out the solution for this; I hope it will be helpful to others. Since the input is a list of dataframes, it is easier to do as follows:
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import cm
from itertools import cycle
df1, df2 = list_of_df[0], list_of_df[1]
colors=cm.tab10(np.linspace(0, 1,len(df1.columns)))
lines = ["-","--","-.",":"]
linecycler = cycle(lines)
leg_text = df1.columns.tolist()
marker = cycle(('+', 'o', '*', 'v','^','<','>'))
fig,(ax1,ax2) = plt.subplots(nrows=2,ncols=1,sharex=True,squeeze=True,figsize=(10,8))
for i in range(df1.shape[1]):
    ax1.plot(df1.index, df1.columns[i], next(linecycler), marker = next(marker), data=df1, color=colors[i], linewidth=3)
    ax2.plot(df2.index, df2.columns[i], next(linecycler), marker = next(marker),data=df2, color=colors[i], linewidth=3)
plt.tight_layout(rect=[0, 0, 0.85, 1])
plt.gcf().autofmt_xdate()
plt.style.use('ggplot')
plt.xticks(rotation=0)
plt.show()
```
I get my expected output. There may be a more efficient way to write this code, so if anyone has a better idea, please let me know. Thanks | ```
import pandas as pd
import matplotlib.pyplot as plt
df_1 = pd.DataFrame({'A':[15,16,17,20],'B':[21,22,23,24],'C':[25,26,27,28]})
df_2 = pd.DataFrame({'A':[15,16,17,20],'B':[21,22,23,24],'C':[25,26,27,28]})
df_3 = pd.DataFrame({'A':[15,16,17,20],'B':[21,22,23,24],'C':[25,26,27,28]})
list_df = [df_1,df_2,df_3]
for i,j in enumerate(list_df):
    plt.figure(i)
    j.plot(kind = 'line')
```
I find this to be quite an easy way to understand when making subplots. As you can see, it can easily be adapted to match your requirements. |
59,207,859 | I've found related question, mine about GitLab, those about GitHub: [How to remove extra line space on GitHub markdown bullets/lists?](https://stackoverflow.com/questions/45113083/how-to-remove-extra-line-space-on-github-markdown-bullets-lists) and one answer is to use 4 spaces and another:
>
> You cannot do this ... This is due to
> GitHub's styling of the markdown, not an inherent issue with the
> markdown itself.
>
>
>
I've made a list and in GitLab GUI I see it like that:
```
o ABC
o ABC
o ABC
o ABC
o ABC
```
Extra empty line before secondary list but no line after. I prefer to add space after secondary too, however deleting all empty lines in list would look good too. Below is my markdown, it has 4 spaces for secondary list and I've tried to add `<br/>` as per [Create two blank lines in Markdown](https://stackoverflow.com/questions/20543454/create-two-blank-lines-in-markdown) but it breaks list structure after that (which also results in disappearing empty lines between primary list entries btw).
```
- datapool
- specific
- jmx
- config
<br/> (that is temp try, see above text)
- python
```
Could it be done in GitLab? | 2019/12/06 | [
"https://Stackoverflow.com/questions/59207859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5499118/"
] | I found that alternating between the `+` and `-` unordered list indicators produces the desired result.
```
- Item 1
- SubItem 1
- SubItem 2
+ Item 2
+ SubItem 1
+ SubItem 2
- Item 3
- SubItem 1
- SubItem 2
``` | Small mistake: the `<br/>` should have been put at the same level as the secondary list (indented with 4 spaces) - then the extra line is seen in the GUI. |
8,673,035 | I know feature hashing (hashing-trick) is used to reduce the dimensionality and handle sparsity of bit vectors but I don't understand how it really works. Can anyone explain this to me? Is there any Python library available to do feature hashing?
Thank you. | 2011/12/29 | [
"https://Stackoverflow.com/questions/8673035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/585329/"
] | On Pandas, you could use something like this:
```
import pandas as pd
import numpy as np
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
data = pd.DataFrame(data)
def hash_col(df, col, N):
cols = [col + "_" + str(i) for i in range(N)]
def xform(x): tmp = [0 for i in range(N)]; tmp[hash(x) % N] = 1; return pd.Series(tmp,index=cols)
df[cols] = df[col].apply(xform)
return df.drop(col,axis=1)
print hash_col(data, 'state',4)
```
The output would be
```
pop year state_0 state_1 state_2 state_3
0 1.5 2000 0 1 0 0
1 1.7 2001 0 1 0 0
2 3.6 2002 0 1 0 0
3 2.4 2001 0 0 0 1
4 2.9 2002 0 0 0 1
```
Also, at the Series level, you could do:
```
import numpy as np, os
import sys, pandas as pd
def hash_col(df, col, N):
    df = df.replace('',np.nan)
    cols = [col + "_" + str(i) for i in range(N)]
    tmp = [0 for i in range(N)]
    tmp[hash(df.ix[col]) % N] = 1
    res = df.append(pd.Series(tmp,index=cols))
    return res.drop(col)
a = pd.Series(['new york',30,''],index=['city','age','test'])
b = pd.Series(['boston',30,''],index=['city','age','test'])
print hash_col(a,'city',10)
print hash_col(b,'city',10)
```
This will work per single Series, column name will be assumed to be a Pandas index. It also replaces blank strings with nan, and floats everything.
```
age 30
test NaN
city_0 0
city_1 0
city_2 0
city_3 0
city_4 0
city_5 0
city_6 0
city_7 1
city_8 0
city_9 0
dtype: object
age 30
test NaN
city_0 0
city_1 0
city_2 0
city_3 0
city_4 0
city_5 1
city_6 0
city_7 0
city_8 0
city_9 0
dtype: object
```
If, however, there is a vocabulary, and you simply want to one-hot-encode, you could use
```
import numpy as np
import pandas as pd, os
import scipy.sparse as sps
def hash_col(df, col, vocab):
cols = [col + "=" + str(v) for v in vocab]
def xform(x): tmp = [0 for i in range(len(vocab))]; tmp[vocab.index(x)] = 1; return pd.Series(tmp,index=cols)
df[cols] = df[col].apply(xform)
return df.drop(col,axis=1)
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
df = pd.DataFrame(data)
df2 = hash_col(df, 'state', ['Ohio','Nevada'])
print sps.csr_matrix(df2)
```
which will give
```
pop year state=Ohio state=Nevada
0 1.5 2000 1 0
1 1.7 2001 1 0
2 3.6 2002 1 0
3 2.4 2001 0 1
4 2.9 2002 0 1
```
I also added sparsification of the final dataframe. In an incremental setting where we might not have encountered all values beforehand (but somehow obtained the list of all possible values), the approach above can be used. Incremental ML methods would need the same number of features at each increment, hence one-hot encoding must produce the same number of columns at each batch. | [Here](http://metaoptimize.com/qa/questions/6943/what-is-the-hashing-trick#6945) (sorry, I cannot add this as a comment for some reason.) Also, the first page of [Feature Hashing for Large Scale Multitask Learning](http://arxiv.org/pdf/0902.2206) explains it nicely. |
8,673,035 | I know feature hashing (hashing-trick) is used to reduce the dimensionality and handle sparsity of bit vectors but I don't understand how it really works. Can anyone explain this to me? Is there any Python library available to do feature hashing?
Thank you. | 2011/12/29 | [
"https://Stackoverflow.com/questions/8673035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/585329/"
] | [Here](http://metaoptimize.com/qa/questions/6943/what-is-the-hashing-trick#6945) (sorry I cannot add this as a comment for some reason.) Also, the first page of [Feature Hashing for Large Scale Multitask Learning](http://arxiv.org/pdf/0902.2206) explains it nicely. | Large sparse feature can be derivate from interaction, U as user and X as email, so the dimension of U x X is memory intensive. Usually, task like spam filtering has time limitation as well.
Hash trick like other hash function store binary bits (index) which make large scale training feasible. In theory, more hashed length more performance gain, as illustrated in the original paper.
It allocate origin feature into different bucket (finite length of feature space) so that their semantic get kept. Even when spammer use typo to miss on the radar. Although there is distortion error, heir hashed form remain close.
For example,
"the quick brown fox" transform to:
```
h(the) mod 5 = 0
h(quick) mod 5 = 1
h(brown) mod 5 = 1
h(fox) mod 5 = 3
```
Using the index rather than the text value saves space.
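A minimal sketch of this idea in Python (an illustration, not code from the paper; the built-in `hash` stands in for a proper hash function and `N` is the fixed feature-vector length):
```
def hash_features(tokens, N=5):
    vec = [0] * N
    for tok in tokens:
        vec[hash(tok) % N] += 1  # bucket index instead of the raw token
    return vec

print(hash_features("the quick brown fox".split()))
```
For the library part of the question, scikit-learn ships this trick as `sklearn.feature_extraction.FeatureHasher`.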
*To summarize some of the applications:*
* dimensionality reduction for high dimension feature vector
+ text in email classification task, collaborate filtering on spam
* sparsification
* bag-of-words on the fly
* cross-product features
* multi-task learning
Reference:
* Origin paper:
1. **Feature Hashing** for Large Scale Multitask Learning
2. Shi, Q., Petterson, J., Dror, G., Langford, J., Smola, A., Strehl, A., & Vishwanathan, V. (2009). **Hash kernels**
* [What is the hashing trick](http://webcache.googleusercontent.com/search?q=cache:PRrGZpsAUJAJ:metaoptimize.com/qa/questions/6943/what-is-the-hashing-trick%20&cd=1&hl=en&ct=clnk&gl=au)
* [Quora](https://www.quora.com/Can-you-explain-feature-hashing-in-an-easily-understandable-way)
* Gionis, A., Indyk, P., & Motwani, R. (1999). Similarity search in high dimensions via hashing
**Implementation:**
* Langford, J., Li, L., & Strehl, A. (2007). Vowpal Wabbit online learning project (Technical Report). <http://hunch.net/?p=309>. |
8,673,035 | I know feature hashing (hashing-trick) is used to reduce the dimensionality and handle sparsity of bit vectors but I don't understand how it really works. Can anyone explain this to me? Is there any Python library available to do feature hashing?
Thank you. | 2011/12/29 | [
"https://Stackoverflow.com/questions/8673035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/585329/"
] | On Pandas, you could use something like this:
```
import pandas as pd
import numpy as np
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
data = pd.DataFrame(data)
def hash_col(df, col, N):
cols = [col + "_" + str(i) for i in range(N)]
def xform(x): tmp = [0 for i in range(N)]; tmp[hash(x) % N] = 1; return pd.Series(tmp,index=cols)
df[cols] = df[col].apply(xform)
return df.drop(col,axis=1)
print hash_col(data, 'state',4)
```
The output would be
```
pop year state_0 state_1 state_2 state_3
0 1.5 2000 0 1 0 0
1 1.7 2001 0 1 0 0
2 3.6 2002 0 1 0 0
3 2.4 2001 0 0 0 1
4 2.9 2002 0 0 0 1
```
Also, at the Series level, you could do:
```
import numpy as np, os
import sys, pandas as pd
def hash_col(df, col, N):
    df = df.replace('',np.nan)
    cols = [col + "_" + str(i) for i in range(N)]
    tmp = [0 for i in range(N)]
    tmp[hash(df.ix[col]) % N] = 1
    res = df.append(pd.Series(tmp,index=cols))
    return res.drop(col)
a = pd.Series(['new york',30,''],index=['city','age','test'])
b = pd.Series(['boston',30,''],index=['city','age','test'])
print hash_col(a,'city',10)
print hash_col(b,'city',10)
```
This will work per single Series, column name will be assumed to be a Pandas index. It also replaces blank strings with nan, and floats everything.
```
age 30
test NaN
city_0 0
city_1 0
city_2 0
city_3 0
city_4 0
city_5 0
city_6 0
city_7 1
city_8 0
city_9 0
dtype: object
age 30
test NaN
city_0 0
city_1 0
city_2 0
city_3 0
city_4 0
city_5 1
city_6 0
city_7 0
city_8 0
city_9 0
dtype: object
```
If, however, there is a vocabulary, and you simply want to one-hot-encode, you could use
```
import numpy as np
import pandas as pd, os
import scipy.sparse as sps
def hash_col(df, col, vocab):
cols = [col + "=" + str(v) for v in vocab]
def xform(x): tmp = [0 for i in range(len(vocab))]; tmp[vocab.index(x)] = 1; return pd.Series(tmp,index=cols)
df[cols] = df[col].apply(xform)
return df.drop(col,axis=1)
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
df = pd.DataFrame(data)
df2 = hash_col(df, 'state', ['Ohio','Nevada'])
print sps.csr_matrix(df2)
```
which will give
```
pop year state=Ohio state=Nevada
0 1.5 2000 1 0
1 1.7 2001 1 0
2 3.6 2002 1 0
3 2.4 2001 0 1
4 2.9 2002 0 1
```
I also added sparsification of the final dataframe. In an incremental setting where we might not have encountered all values beforehand (but somehow obtained the list of all possible values), the approach above can be used. Incremental ML methods would need the same number of features at each increment, hence one-hot encoding must produce the same number of columns at each batch. | A large sparse feature can be derived from an interaction - e.g. U as user and X as email - so the dimension of U x X is memory intensive. Usually, a task like spam filtering has a time limitation as well.
The hashing trick, like other hash functions, stores binary bits (indices), which makes large-scale training feasible. In theory, a longer hash length gives more performance gain, as illustrated in the original paper.
It allocates the original features into different buckets (a finite-length feature space) so that their semantics are kept, even when spammers use typos to stay off the radar. Although there is distortion error, their hashed forms remain close.
For example,
"the quick brown fox" transform to:
```
h(the) mod 5 = 0
h(quick) mod 5 = 1
h(brown) mod 5 = 1
h(fox) mod 5 = 3
```
Using the index rather than the text value saves space.
*To summarize some of the applications:*
* dimensionality reduction for high dimension feature vector
+ text in email classification task, collaborate filtering on spam
* sparsification
* bag-of-words on the fly
* cross-product features
* multi-task learning
Reference:
* Origin paper:
1. **Feature Hashing** for Large Scale Multitask Learning
2. Shi, Q., Petterson, J., Dror, G., Langford, J., Smola, A., Strehl, A., & Vishwanathan, V. (2009). **Hash kernels**
* [What is the hashing trick](http://webcache.googleusercontent.com/search?q=cache:PRrGZpsAUJAJ:metaoptimize.com/qa/questions/6943/what-is-the-hashing-trick%20&cd=1&hl=en&ct=clnk&gl=au)
* [Quora](https://www.quora.com/Can-you-explain-feature-hashing-in-an-easily-understandable-way)
* Gionis, A., Indyk, P., & Motwani, R. (1999). Similarity search in high dimensions via hashing
**Implementation:**
* Langford, J., Li, L., & Strehl, A. (2007). Vowpal Wabbit online learning project (Technical Report). <http://hunch.net/?p=309>. |
11,511,080 | I am a beginner at Python (one week). Here I am trying to print the list of all the prime factors of 60, but for line 19 I am getting the following error:
*TypeError: unsupported operand type(s) for %: 'float' and 'list'*
The code:
```
whylist = []
factor = []
boom = []
primefactor = []
n = 60
j = (list(range(1, n, 1)))
for numbers in j:
    if n%numbers == 0:
        whylist.append(numbers)
for everynumber in whylist:
    factor.append(everynumber)
for things in factor:
    u = (list(range(1, things, 1)))
    d = float(things)
    if d%u == 0:
        boom.append(things)
    if len(boom) == 1:
        for every in boom:
            primefactor.append(every)
print(primefactor)
```
What am I doing wrong? | 2012/07/16 | [
"https://Stackoverflow.com/questions/11511080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1526409/"
] | To apply a math operation to every element in a list you can use a list-comprehension:
```
new_list = [ x%num for x in old_list]
```
There are other ways to do it as well. Sometimes people will use `map`
```
new_list = map(lambda x: x%num, old_list)
```
but most people prefer the first form which is generally more efficient and clear than using `lambda` (which can be a little confusing when you're just starting to learn python).
**EDIT**
Here's a recursive implementation of what you were trying:
```
def factorize(n):
    out=[]
    for i in range(2,n):
        if(n%i == 0): #first thing to hit this is always prime
            out.append(i) #add it to the list
            out+=factorize(n/i) #get the list of primes from the other factor and append to this list.
            return out
    else:
        return [n] # n%i was never 0, must be prime.
print factorize(2000)
``` | Another option is to use numpy arrays instead of lists.
```
import numpy as np
j = np.arange(1,n,1)
rem = np.mod(j,num)
```
and numpy will take care of broadcasting operations for you. It should also be faster than list comprehensions or map. |
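Building on that, a short sketch (an addition to the answer; `n` is the 60 from the question) of using a boolean mask to pull out the divisors of `n` in one vectorised step:
```
import numpy as np

n = 60
j = np.arange(1, n)
divisors = j[np.mod(n, j) == 0]  # elements of j that divide n exactly
print(divisors)
```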
55,235,230 | I get this warning most of the time when I define a model using Keras. It seems to somehow come from tensorflow though:
```
WARNING:tensorflow:From C:\Users\lenik\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
```
Is this warning something to worry about?
If yes, how do I solve this problem?
"https://Stackoverflow.com/questions/55235230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8104036/"
] | This deprecation warning is due to the Dropout layer in `tf.keras.layers.Dropout`.
To avoid this warning, you need to explicitly specify `rate=` in Dropout, as in `Dropout(rate=0.2)`.
Earlier the argument was `keep_prob`; it is now deprecated in favor of `rate`, i.e. rate = 1 - keep_prob.
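For illustration, a minimal sketch of a model written against the new argument (an example, not the asker's model):
```
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Sequential

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    Dropout(rate=0.2),  # old API: keep_prob=0.8
    Dense(1, activation='sigmoid'),
])
```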
For more, you can check out this tensorflow [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout). | Tensorflow is telling you that the argument `keep_prob` is deprecated and that it has been replaced by the argument `rate`.
Now, to achieve the same behavior you have now and remove the warning, you need to replace every occurrence of the `keep_prob` argument with `rate` argument, and pass the value `1-keep_prob`. |
4,341,206 | When trying to authenticate via OAuth in Django Piston, the following exception is thrown:
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/oauth/request_token/?oauth_nonce=32921052&oauth_timestamp=1291331173&oauth_consumer_key=ghof7av2vu8hal2hek&oauth_signature_method=HMAC-SHA1&oauth_version=1.0&oauth_signature=
Python Version:
Traceback:
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
100. response = callback(request, *callback_args, **callback_kwargs)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/authentication.py" in oauth_request_token
130. token = oauth_server.fetch_request_token(oauth_request)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/oauth.py" in fetch_request_token
302. self._check_signature(oauth_request, consumer, None)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/oauth.py" in _check_signature
393. valid_sig = signature_method.check_signature(oauth_request, consumer, token, signature)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/oauth.py" in check_signature
482. built = self.build_signature(oauth_request, consumer, token)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/oauth.py" in build_signature
513. hashed = hmac.new(key, raw, sha)
File "/usr/local/Cellar/python/2.7/lib/python2.7/hmac.py" in new
133. return HMAC(key, msg, digestmod)
File "/usr/local/Cellar/python/2.7/lib/python2.7/hmac.py" in __init__
72. self.outer.update(key.translate(trans_5C))
Exception Type: TypeError at /api/oauth/request_token/?oauth_nonce=32921052&oauth_timestamp=1291331173&oauth_consumer_key=ghof7av2vu8hal2hek&oauth_signature_method=HMAC-SHA1&oauth_version=1.0&oauth_signature=
Exception Value: character mapping must return integer, None or unicode
```
Can't tell if it's a bug in Piston or if I can't use the oauth2 lib.
Consumer Code:
```
import os
import cgi
import oauth2 as oauth
# settings for the local test consumer
CONSUMER_SERVER = os.environ.get("CONSUMER_SERVER") or 'localhost'
CONSUMER_PORT = os.environ.get("CONSUMER_PORT") or '8000'
print CONSUMER_SERVER , CONSUMER_PORT
# fake urls for the test server (matches ones in server.py)
REQUEST_TOKEN_URL = 'http://%s:%s/api/oauth/request_token/' % (CONSUMER_SERVER, CONSUMER_PORT)
ACCESS_TOKEN_URL = 'http://%s:%s/api/oauth/access_token/' % (CONSUMER_SERVER, CONSUMER_PORT)
AUTHORIZE_URL = 'http://%s:%s/api/oauth/authorize/' % (CONSUMER_SERVER, CONSUMER_PORT)
# key and secret granted by the service provider for this consumer application - same as the MockOAuthDataStore
CONSUMER_KEY = 'ghof7av2vu8hal2hek'
CONSUMER_SECRET = 'ohhey'
consumer = oauth.Consumer(CONSUMER_KEY, CONSUMER_SECRET)
client = oauth.Client(consumer)
# Step 1: Get a request token. This is a temporary token that is used for
# having the user authorize an access token and to sign the request to obtain
# said access token.
resp, content = client.request(REQUEST_TOKEN_URL, "GET")
if resp['status'] != '200':
raise Exception("Invalid response %s." % resp['status'])
```
Referencing <https://github.com/clemesha/django-piston-oauth-example> for the consumer code. | 2010/12/02 | [
"https://Stackoverflow.com/questions/4341206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/186101/"
] | This is a piston problem that comes from an encoding problem of the key/secret of the consumer.
The solution is to force the encoding of the key/secret returned from the database to ASCII.
In the `store.py` file of Piston, modify `lookup_consumer` so it looks like this:
```
def lookup_consumer(self, key):
    try:
        self.consumer = Consumer.objects.get(key=key)
        self.consumer.key = self.consumer.key.encode('ascii')
        self.consumer.secret = self.consumer.secret.encode('ascii')
        return self.consumer
    except Consumer.DoesNotExist:
        return None
```
[Here](https://github.com/nbarraille/django-piston-nbarraille) is my fork of django-piston fixing this problem. | This problem also occurs inside Piston's "oauth.py" module's "build\_signature()" method if a unicode key value is passed in. I discovered this issue while using the clemesha/django-piston-oauth-example client code mentioned above because it kept failing after the prompt for the "PIN Code".
The underlying problem is documented as issue #169 here:
<https://bitbucket.org/jespern/django-piston/issue/169/oauth-request-token-error-with-hmac>
The re-encoding solution described above is also applicable in this case:
```
def build_signature(self, oauth_request, consumer, token):
"""Builds the base signature string."""
key, raw = self.build_signature_base_string(oauth_request, consumer,
token)
#BUG: character mapping must return integer, None or unicode
#FIX:
key = key.encode('ascii')
# HMAC object.
try:
import hashlib # 2.5
hashed = hmac.new(key, raw, hashlib.sha1)
except:
import sha # Deprecated
hashed = hmac.new(key, raw, sha)
# Calculate the digest base 64.
return binascii.b2a_base64(hashed.digest())[:-1]
``` |
62,555,213 | I am having two dicts, one in list:
```
var_a = [{'name':"John",'number':21},{'name':"Kevin",'number':23}]
var_b = {'21':"yes"},{'24':"yes"}
```
I need to compare var\_a and var\_b with the key from var\_b with the number value in var\_a.
I have tried this and got the output:
```
for key, value in var_b.iteritems():
    for l in var_a:
        if l['number'] == key:
            print l
```
This needs two loops to get the output. So is there any way to finish this in a single loop with Python 2.7? | 2020/06/24 | [
"https://Stackoverflow.com/questions/62555213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8741562/"
] | You can use `map` to create a keys set from `var_b` keys and then loop only over `var_a` to check if the number value exists in the `var_b` keys set
```
var_a = [{'name':"John",'number':21},{'name':"Kevin",'number':23}]
var_b = [{'21':"yes"},{'23':"no"}]
keys_set = set(map(lambda x: int(list(x.keys())[0]), var_b))
for i in var_a:
    if i['number'] in keys_set:
        print(i)
```
Output
```
{'name': 'John', 'number': 21}
{'name': 'Kevin', 'number': 23}
``` | I think you need to use the lambda function with one for-loop:
```
for key, value in var_b.iteritems():
    result = filter(lambda d: d['id'] == key, var_a)
```
The result will give you the output for sure. |
35,823,709 | I have read the article "Ubuntu Installation --Guide for Ubuntu 14.04 with a 64 bit processor." from Github website (<https://github.com/tiangolo/caffe/blob/ubuntu-tutorial-b/docs/install_apt2.md>).
And now, I open IPython to test that PyCaffe is working. I input "ipython" command, and enter to the ipython page.
Then, I input the command "import caffe", but I got below warnings:
>
> /root/code/caffe-master/python/caffe/pycaffe.py:13:RuntimeWarning: to-Python converter for boost::shared\_ptr > already registered; second conversion method ignored. from .\_caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
>
>
> /root/code/caffe-master/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared\_ptr > already registered; second conversion method ignored. from .\_caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
>
>
> /root/code/caffe-master/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared\_ptr > already registered; second conversion method ignored. from .\_caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
>
>
>
Would you please help to give me some suggestions to resolve it? | 2016/03/06 | [
"https://Stackoverflow.com/questions/35823709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4159177/"
] | I found this:
<https://groups.google.com/forum/#!topic/caffe-users/C_air48cISU>
It claims that this is a non-error, caused by mismatched versions of Boost. You can safely ignore it. They've promised to clean up the warning (at some point not yet specified). | You can edit /caffe/python/caffe/\_caffe.cpp. There are four places that need to change, like this:
```
bp::register_ptr_to_python<shared_ptr<Layer<Dtype> > >();
```
to
```
const boost::python::type_info cinfo = boost::python::type_id<shared_ptr<Blob<Dtype> > >();
const boost::python::converter::registration* creg = boost::python::converter::registry::query(cinfo);
if (creg == NULL)
{
bp::register_ptr_to_python<shared_ptr<Layer<Dtype> > >();
}
else if ((*creg).m_to_python == NULL) {
bp::register_ptr_to_python<shared_ptr<Layer<Dtype> > >();
}
```
then compile with 'make pycaffe', and it will be OK. |
49,963,862 | I have a dictionary that has tuple keys and numpy array values. I tried saving it using h5 and pickle but I get error messages. what is the best way to save this object to file?
```
import numpy as np
from collections import defaultdict
Q =defaultdict(lambda: np.zeros(2))
Q[(1,2,False)] = np.array([1,2])
Q[(1,3,True)] = np.array([3,4])
>>> Q
defaultdict(<function <lambda> at 0x10c51ce18>, {(1, 2, False): array([1, 2]), (1, 3, True): array([3, 4])})
```
np.save traceback:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-99-a071e1561501> in <module>()
----> 1 np.save('Q.npy', Q)
~/anaconda3_420/lib/python3.5/site-packages/numpy/lib/npyio.py in save(file, arr, allow_pickle, fix_imports)
509 arr = np.asanyarray(arr)
510 format.write_array(fid, arr, allow_pickle=allow_pickle,
--> 511 pickle_kwargs=pickle_kwargs)
512 finally:
513 if own_fid:
~/anaconda3_420/lib/python3.5/site-packages/numpy/lib/format.py in write_array(fp, array, version, allow_pickle, pickle_kwargs)
584 if pickle_kwargs is None:
585 pickle_kwargs = {}
--> 586 pickle.dump(array, fp, protocol=2, **pickle_kwargs)
587 elif array.flags.f_contiguous and not array.flags.c_contiguous:
588 if isfileobj(fp):
AttributeError: Can't pickle local object 'mc_control_epsilon_greedy.<locals>.<lambda>'
``` | 2018/04/22 | [
"https://Stackoverflow.com/questions/49963862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7091646/"
] | How about saving it as a plain dictionary? You don't need the `defaultdict` behavior during saving.
```
In [126]: from collections import defaultdict
In [127]: Q =defaultdict(lambda: np.zeros(2))
...: Q[(1,2,False)] = np.array([1,2])
...: Q[(1,3,True)] = np.array([3,4])
...: Q[(3,4,False)]
...:
Out[127]: array([0., 0.])
In [128]: Q
Out[128]:
defaultdict(<function __main__.<lambda>>,
{(1, 2, False): array([1, 2]),
(1, 3, True): array([3, 4]),
(3, 4, False): array([0., 0.])})
```
We can pull it out of the `defaultdict` wrapping with:
```
In [130]: dict(Q)
Out[130]:
{(1, 2, False): array([1, 2]),
(1, 3, True): array([3, 4]),
(3, 4, False): array([0., 0.])}
```
Then we can pickle it (I'm using `np.save` as a pickle shortcut)
```
In [131]: np.save('stack49963862', np.array(dict(Q)))
```
`load` gives an object array containing this dictionary:
```
In [132]: P = np.load('stack49963862.npy')
In [133]: P
Out[133]:
array({(1, 2, False): array([1, 2]), (1, 3, True): array([3, 4]), (3, 4, False): array([0., 0.])},
dtype=object)
In [138]: P.item()
Out[138]:
{(1, 2, False): array([1, 2]),
(1, 3, True): array([3, 4]),
(3, 4, False): array([0., 0.])}
```
We can easily recreate the defaultdict with an update:
```
In [134]: Q1 =defaultdict(lambda: np.zeros(2))
In [139]: Q1.update(P.item())
In [140]: Q1
Out[140]:
defaultdict(<function __main__.<lambda>>,
{(1, 2, False): array([1, 2]),
(1, 3, True): array([3, 4]),
(3, 4, False): array([0., 0.])})
``` | I don't see any problems using `pickle`
```
import pickle
import numpy as np
x = {(1,2,False): np.array([1,4]), (1,3,False): np.array([4,5])}
with open('filename.pickle', 'wb') as handle:
pickle.dump(x, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('filename.pickle', 'rb') as handle:
y = pickle.load(handle)
print x
print y
```
---
**After the edit:**
What you actually have is a `lambda`, and that cannot be pickled by default. You need to install `dill` and import it for it to work (see [this answer](https://stackoverflow.com/a/25353243/3512538))
This is how it should look like:
```
import pickle
import numpy as np
from collections import defaultdict
import dill # doesn't come with default anaconda. Install with "conda install dill"
x = defaultdict(lambda: np.zeros(2))
with open('filename.pickle', 'wb') as handle:
    pickle.dump(x, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('filename.pickle', 'rb') as handle:
    y = pickle.load(handle)
print x
print y
```
**Output:**
```
# no errors :-)
defaultdict(<function <lambda> at 0x000000000CD0C898>, {})
defaultdict(<function <lambda> at 0x0000000002614C88>, {})
```
---
***OP's solution:***
your edited solution still generated the same error for me but this works fine:
```
import pickle
import dill
dill_file = open("Q.pickle", "wb")
dill_file.write(dill.dumps(Q))
dill_file.close()
```
*On my machine (Win 8.1 64 bit, using Spyder), I had no errors when using simple `dill`.* |
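For completeness, a sketch of reading the object back (an addition, assuming the same `Q.pickle` file as above):
```
import dill

with open("Q.pickle", "rb") as dill_file:
    Q_loaded = dill.loads(dill_file.read())
```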
55,647,936 | I am porting the application from python 2 to python 3 and encountered the following problem: `random.randint` returns different result according to used Python version. So
```
import random
random.seed(1)
result = random.randint(1, 100)
```
On Python 2.x result will be 14 and on Python 3.x: 18
Unfortunately, I need to have the same output on python3 to have backward compatibility of service.
For now, the only working idea I have is to use the `subprocess` module from Python 3.x to execute Python 2.x code
```
result = subprocess.check_output(
'''python2 -c "import random; random.seed('%s'); print(random.randint(1, 100))"''' % seed,
shell=True
)
```
But such an approach is approximately 1000 times slower than just executing `random.randint(1, 100)`.
Maybe there are other approaches to do this? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55647936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542977/"
] | The difference is caused by two things:
1. You should use `random.seed(42, version=1)`
2. In python 3.2 there was a change to `random.randrange`, which is called by `random.randint` and probably add to above [issue](https://docs.python.org/3/library/random.html#random.randrange).
So use something like:
```
try: random.seed(42, version=1) # Python 3
except TypeError: random.seed(42) # Python 2
```
and `int(1+random.random()*99)`.
More detail
-----------
Backward compatibility was on purpose dropped with the change of `randrange`, see the original [issue](https://bugs.python.org/issue9025).
See this [reddit](https://www.reddit.com/r/pythontips/comments/5pp5mz/the_random_module_with_the_same_seed_produces_a/) post.
If possible, use `numpy.random` as proposed in the reddit post.
Use of `random.seed(42, version=1)` as described in the [documentation](https://docs.python.org/3/library/random.html#bookkeeping-functions) will cause `random.random()` to deliver the same result but give a different result for `random.randint(1,100)` (because in python 3.2 some problem with the old implementation was fixed). You may opt to only rely on something like `int(1+random.random()*99)`.
(Python 2 will run out of support very [soon](https://www.python.org/dev/peps/pep-0373/), [soon2](https://github.com/python/devguide/pull/344) or [here](https://pythonclock.org). If possible check, if backward compatibility is really needed.)
My current tests:
```
import random
try: random.seed(42, version=1) # Python 3
except TypeError: random.seed(42) # Python 2
print(random.random())
print(int(1+99*random.random()))
print(random.randint(1,99))
```
Results on Python 2
```
0.639426798458
3
28
```
and Python 3
```
0.6394267984578837
3
36
``` | You can specify which version to use for the seed: `random.seed(1, version=1)`. However, as stated by Sparky05, you are probably better off using `numpy.random` instead. |
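A sketch of the `numpy.random` route (an addition to the answer; the legacy `np.random.seed`/`randint` stream does not depend on the Python version):
```
import numpy as np

np.random.seed(1)
result = np.random.randint(1, 101)  # high is exclusive, so 101 mirrors randint(1, 100)
```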
55,647,936 | I am porting the application from python 2 to python 3 and encountered the following problem: `random.randint` returns different result according to used Python version. So
```
import random
random.seed(1)
result = random.randint(1, 100)
```
On Python 2.x result will be 14 and on Python 3.x: 18
Unfortunately, I need to have the same output on python3 to have backward compatibility of service.
For now, the only working idea I have is to use the `subprocess` module from Python 3.x to execute Python 2.x code
```
result = subprocess.check_output(
'''python2 -c "import random; random.seed('%s'); print(random.randint(1, 100))"''' % seed,
shell=True
)
```
But such an approach is approximately 1000 times slower than just executing `random.randint(1, 100)`.
Maybe there are other approaches to do this? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55647936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542977/"
] | The difference is caused by two things:
1. You should use `random.seed(42, version=1)`
2. In python 3.2 there was a change to `random.randrange`, which is called by `random.randint` and probably add to above [issue](https://docs.python.org/3/library/random.html#random.randrange).
So use something like:
```
try: random.seed(42, version=1) # Python 3
except TypeError: random.seed(42) # Python 2
```
and `int(1+random.random()*99)`.
More detail
-----------
Backward compatibility was on purpose dropped with the change of `randrange`, see the original [issue](https://bugs.python.org/issue9025).
See this [reddit](https://www.reddit.com/r/pythontips/comments/5pp5mz/the_random_module_with_the_same_seed_produces_a/) post.
If possible, use `numpy.random` as proposed in the reddit post.
Use of `random.seed(42, version=1)` as described in the [documentation](https://docs.python.org/3/library/random.html#bookkeeping-functions) will cause `random.random()` to deliver the same result but give a different result for `random.randint(1,100)` (because in python 3.2 some problem with the old implementation was fixed). You may opt to only rely on something like `int(1+random.random()*99)`.
(Python 2 will run out of support very [soon](https://www.python.org/dev/peps/pep-0373/), [soon2](https://github.com/python/devguide/pull/344) or [here](https://pythonclock.org). If possible check, if backward compatibility is really needed.)
My current tests:
```
import random
try: random.seed(42, version=1) # Python 3
except TypeError: random.seed(42) # Python 2
print(random.random())
print(int(1+99*random.random()))
print(random.randint(1,99))
```
Results on Python 2
```
0.639426798458
3
28
```
and Python 3
```
0.6394267984578837
3
36
``` | Functions which emulate `random.seed(a=None)` and `random.randint(a, b)` for `python 3` and `python 2`:
```python
import random
def seed(a=None):
    try: # Python3
        random.seed(a, version=1)
    except TypeError: # Python2
        random.seed(a)

def randint(a, b):
    return int(random.random() * (b - a + 1)) + a
```
Use:
```python
seed(1)
result = randint(1, 100)
``` |
55,647,936 | I am porting the application from python 2 to python 3 and encountered the following problem: `random.randint` returns different result according to used Python version. So
```
import random
random.seed(1)
result = random.randint(1, 100)
```
On Python 2.x result will be 14 and on Python 3.x: 18
Unfortunately, I need to have the same output on python3 to have backward compatibility of service.
For now, the only working idea I have is to use the `subprocess` module from Python 3.x to execute Python 2.x code
```
result = subprocess.check_output(
'''python2 -c "import random; random.seed('%s'); print(random.randint(1, 100))"''' % seed,
shell=True
)
```
But such an approach is approximately 1000 times slower than just executing `random.randint(1, 100)`.
Maybe there are other approaches to do this? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55647936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542977/"
] | Finally found the answer!
Sparky05 gave an interesting idea and was close with `int(1+99*random.random())`.
But the right answer is
```
random.seed(seed, version=1)
int(random.random() * 100) + 1
```
in Python 3.x
Works in the same way like
```
random.seed(seed)
random.randint(1, 100)
```
in Python 2.x | You can specify which version to use for the seed: `random.seed(1, version=1)`. However, as stated by Sparky05, you are probably better off using `numpy.random` instead. |
55,647,936 | I am porting the application from python 2 to python 3 and encountered the following problem: `random.randint` returns different result according to used Python version. So
```
import random
random.seed(1)
result = random.randint(1, 100)
```
On Python 2.x result will be 14 and on Python 3.x: 18
Unfortunately, I need to have the same output on python3 to have backward compatibility of service.
For now, the only working idea I have is to use the `subprocess` module from Python 3.x to execute Python 2.x code
```
result = subprocess.check_output(
'''python2 -c "import random; random.seed('%s'); print(random.randint(1, 100))"''' % seed,
shell=True
)
```
But such an approach is approximately 1000 times slower than just executing `random.randint(1, 100)`.
Maybe there are other approaches to do this? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55647936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542977/"
] | Finally found the answer!
Sparky05 gave an interesting idea and was close with `int(1+99*random.random())`.
But the right answer is
```
random.seed(seed, version=1)
int(random.random() * 100) + 1
```
in Python 3.x
Works in the same way like
```
random.seed(seed)
random.randint(1, 100)
```
in Python 2.x | Functions which emulate `random.seed(a=None)` and `random.randint(a, b)` for `python 3` and `python 2`:
```python
import random
def seed(a=None):
try: # Python3
random.seed(a, version=1)
except TypeError: # Python2
random.seed(a)
def randint(a, b):
return int(random.random() * (b - a + 1)) + a
```
Use:
```python
seed(1)
result = randint(1, 100)
``` |
30,772,068 | I have the following string object (its json) in Java (its pretty printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
// Iterates through each person
for person in persons:
// Iterates through each key/value pair k,v
for k, v in person.items():
// check if the value is an object {}
if isinstance(v, dict) and 'years' in v:
if the value is an object, and there is 'years' as a key nested within, then set the key k to have the value of the nested 'years' field
person[k] = v['years']
```
In the end, I want to be able to write out that "persons" object back out into a string form (of json).
Is there a simple yet efficient way to do this in java? I would prefer not to create a model object that represents every single field within my object if possible as I like how I dont have to do that in python (unless I have to because its Java).
I like how you can iterate, check if an object has a nested field 'years', if it does, then uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | It is "nil coalescing operator" (also called "default operator"). `a ?? b` is value of `a` (i.e. `a!`), unless `a` is `nil`, in which case it yields `b`. I.e. if `favouriteSnacks[person]` is missing, return assign `"Candy Bar"` in its stead. | This:
```
let snackName = favoriteSnacks[person] ?? "Candy Bar"
```
It is equal to this:
```
if favoriteSnacks[person] != nil {
let snackName = favoriteSnacks[person]
} else {
let snackName = "Candy Bar"
}
```
Explaining in words: if the `let` statement fails to grab `person` from `favoriteSnacks`, it will assign Candy Bar to `snackName`. |
30,772,068 | I have the following string object (its json) in Java (its pretty printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
// Iterates through each person
for person in persons:
// Iterates through each key/value pair k,v
for k, v in person.items():
// check if the value is an object {}
if isinstance(v, dict) and 'years' in v:
if the value is an object, and there is 'years' as a key nested within, then set the key k to have the value of the nested 'years' field
person[k] = v['years']
```
In the end, I want to be able to write out that "persons" object back out into a string form (of json).
Is there a simple yet efficient way to do this in java? I would prefer not to create a model object that represents every single field within my object if possible as I like how I dont have to do that in python (unless I have to because its Java).
I like how you can iterate, check if an object has a nested field 'years', if it does, then uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | It is "nil coalescing operator" (also called "default operator"). `a ?? b` is value of `a` (i.e. `a!`), unless `a` is `nil`, in which case it yields `b`. I.e. if `favouriteSnacks[person]` is missing, return assign `"Candy Bar"` in its stead. | One addition to @Icaro's answer you can declare values without initialize them. In my opinion this is better:
```
func buyFavoriteSnack(person:String) throws {
// let snackName = favoriteSnacks[person] ?? "Candy Bar"
let snackName: String
if let favoriteSnackName = favoriteSnacks[person] {
snackName = favoriteSnackName
} else {
snackName = "Candy Bar"
}
try vend(itemName:snackName)
}
``` |
30,772,068 | I have the following string object (its json) in Java (its pretty printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
// Iterates through each person
for person in persons:
// Iterates through each key/value pair k,v
for k, v in person.items():
// check if the value is an object {}
if isinstance(v, dict) and 'years' in v:
if the value is an object, and there is 'years' as a key nested within, then set the key k to have the value of the nested 'years' field
person[k] = v['years']
```
In the end, I want to be able to write out that "persons" object back out into a string form (of json).
Is there a simple yet efficient way to do this in java? I would prefer not to create a model object that represents every single field within my object if possible as I like how I dont have to do that in python (unless I have to because its Java).
I like how you can iterate, check if an object has a nested field 'years', if it does, then uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | It is "nil coalescing operator" (also called "default operator"). `a ?? b` is value of `a` (i.e. `a!`), unless `a` is `nil`, in which case it yields `b`. I.e. if `favouriteSnacks[person]` is missing, return assign `"Candy Bar"` in its stead. | ```
let something = a ?? b
```
means
```
let something = a != nil ? a! : b
``` |
30,772,068 | I have the following string object (its json) in Java (its pretty printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
// Iterates through each person
for person in persons:
// Iterates through each key/value pair k,v
for k, v in person.items():
// check if the value is an object {}
if isinstance(v, dict) and 'years' in v:
if the value is an object, and there is 'years' as a key nested within, then set the key k to have the value of the nested 'years' field
person[k] = v['years']
```
In the end, I want to be able to write out that "persons" object back out into a string form (of json).
Is there a simple yet efficient way to do this in java? I would prefer not to create a model object that represents every single field within my object if possible as I like how I dont have to do that in python (unless I have to because its Java).
I like how you can iterate, check if an object has a nested field 'years', if it does, then uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | It is "nil coalescing operator" (also called "default operator"). `a ?? b` is value of `a` (i.e. `a!`), unless `a` is `nil`, in which case it yields `b`. I.e. if `favouriteSnacks[person]` is missing, return assign `"Candy Bar"` in its stead. | The nil-coalescing operator `a ?? b` is a shortcut for `a != nil ? a! : b` |
30,772,068 | I have the following string object (its json) in Java (its pretty printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
// Iterates through each person
for person in persons:
// Iterates through each key/value pair k,v
for k, v in person.items():
// check if the value is an object {}
if isinstance(v, dict) and 'years' in v:
if the value is an object, and there is 'years' as a key nested within, then set the key k to have the value of the nested 'years' field
person[k] = v['years']
```
In the end, I want to be able to write out that "persons" object back out into a string form (of json).
Is there a simple yet efficient way to do this in java? I would prefer not to create a model object that represents every single field within my object if possible as I like how I dont have to do that in python (unless I have to because its Java).
I like how you can iterate, check if an object has a nested field 'years', if it does, then uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | This:
```
let snackName = favoriteSnacks[person] ?? "Candy Bar"
```
It is equal to this:
```
if favoriteSnacks[person] != nil {
let snackName = favoriteSnacks[person]
} else {
let snackName = "Candy Bar"
}
```
Explaining in words: if the `let` statement fails to grab `person` from `favoriteSnacks`, it will assign Candy Bar to `snackName`. | One addition to @Icaro's answer: you can declare values without initializing them. In my opinion this is better:
```
func buyFavoriteSnack(person:String) throws {
// let snackName = favoriteSnacks[person] ?? "Candy Bar"
let snackName: String
if let favoriteSnackName = favoriteSnacks[person] {
snackName = favoriteSnackName
} else {
snackName = "Candy Bar"
}
try vend(itemName:snackName)
}
``` |
30,772,068 | I have the following string object (its json) in Java (its pretty printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
// Iterates through each person
for person in persons:
// Iterates through each key/value pair k,v
for k, v in person.items():
// check if the value is an object {}
if isinstance(v, dict) and 'years' in v:
if the value is an object, and there is 'years' as a key nested within, then set the key k to have the value of the nested 'years' field
person[k] = v['years']
```
In the end, I want to be able to write out that "persons" object back out into a string form (of json).
Is there a simple yet efficient way to do this in java? I would prefer not to create a model object that represents every single field within my object if possible as I like how I dont have to do that in python (unless I have to because its Java).
I like how you can iterate, check if an object has a nested field 'years', if it does, then uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | This:
```
let snackName = favoriteSnacks[person] ?? "Candy Bar"
```
It is equal to this:
```
if favoriteSnacks[person] != nil {
let snackName = favoriteSnacks[person]
} else {
let snackName = "Candy Bar"
}
```
Explaining in words: if the `let` statement fails to grab `person` from `favoriteSnacks`, it will assign Candy Bar to `snackName`. | The nil-coalescing operator `a ?? b` is a shortcut for `a != nil ? a! : b` |
30,772,068 | I have the following string object (its json) in Java (its pretty printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
// Iterates through each person
for person in persons:
// Iterates through each key/value pair k,v
for k, v in person.items():
// check if the value is an object {}
if isinstance(v, dict) and 'years' in v:
if the value is an object, and there is 'years' as a key nested within, then set the key k to have the value of the nested 'years' field
person[k] = v['years']
```
In the end, I want to be able to write out that "persons" object back out into a string form (of json).
Is there a simple yet efficient way to do this in java? I would prefer not to create a model object that represents every single field within my object if possible as I like how I dont have to do that in python (unless I have to because its Java).
I like how you can iterate, check if an object has a nested field 'years', if it does, then uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | ```
let something = a ?? b
```
means
```
let something = a != nil ? a! : b
``` | One addition to @Icaro's answer: you can declare values without initializing them. In my opinion this is better:
```
func buyFavoriteSnack(person:String) throws {
// let snackName = favoriteSnacks[person] ?? "Candy Bar"
let snackName: String
if let favoriteSnackName = favoriteSnacks[person] {
snackName = favoriteSnackName
} else {
snackName = "Candy Bar"
}
try vend(itemName:snackName)
}
``` |
30,772,068 | I have the following string object (its json) in Java (its pretty printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
# Iterates through each person
for person in persons:
    # Iterates through each key/value pair k, v
    for k, v in person.items():
        # check if the value is a nested object {}
        if isinstance(v, dict) and 'years' in v:
            # if so, set the key k to the value of the nested 'years' field
            person[k] = v['years']
```
In the end, I want to be able to write that "persons" object back out in string form (as json).
Is there a simple yet efficient way to do this in Java? If possible, I would prefer not to create a model object that represents every single field within my object, as I like how I don't have to do that in Python (unless I have to because it's Java).
I like how you can iterate, check if an object has a nested field 'years', and if it does, uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | The nil-coalescing operator `a ?? b` is a shortcut for `a != nil ? a! : b` | One addition to @Icaro's answer: you can declare values without initializing them. In my opinion this is better:
```
func buyFavoriteSnack(person:String) throws {
    // let snackName = favoriteSnacks[person] ?? "Candy Bar"
    let snackName: String
    if let favoriteSnackName = favoriteSnacks[person] {
        snackName = favoriteSnackName
    } else {
        snackName = "Candy Bar"
    }
    try vend(itemName:snackName)
}
``` |
30,772,068 | I have the following string object (it's json) in Java (it's pretty-printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
# Iterates through each person
for person in persons:
    # Iterates through each key/value pair k, v
    for k, v in person.items():
        # check if the value is a nested object {}
        if isinstance(v, dict) and 'years' in v:
            # if so, set the key k to the value of the nested 'years' field
            person[k] = v['years']
```
In the end, I want to be able to write that "persons" object back out in string form (as json).
Is there a simple yet efficient way to do this in Java? If possible, I would prefer not to create a model object that represents every single field within my object, as I like how I don't have to do that in Python (unless I have to because it's Java).
I like how you can iterate, check if an object has a nested field 'years', and if it does, uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | ```
let something = a ?? b
```
means
```
let something = a != nil ? a! : b
``` | The nil-coalescing operator `a ?? b` is a shortcut for `a != nil ? a! : b` |
54,064,946 | I am working in Jupyter with Python in order to clean a set of data that I have retrieved from analysis software, and I would like to have an equal number of samples that pass and fail. Basically, my dataframe in pandas looks like this:
```
grade section area_steel Nx Myy utilisation Accceptable
0 C16/20 STD R 700 350 4534 -310000 240000 0.313 0
1 C90/105 STD R 400 600 4248 -490000 270000 0.618 0
3 C35/45 STD R 550 400 1282 580000 810000 7.049 1
4 C12/15 STD R 350 750 2386 960000 610000 5.180 1
```
However, the results which are not acceptable (1) are double the results which are acceptable (0).
no_pass = 8589, no_fail = 16999
ratio = 1.979159389917336
I would like a new dataframe with an equal number of passes and fails.
I tried the following but it doesn't seem to work:
```
import random
new_data = data[data.Accceptable <= random.random()*1/ratio]
```
It would seem that only one random value is computed and all the rows are checked against it, whereas I would like one value per row.
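What I think I actually need is either one random draw per row or a direct down-sample of the failing rows -- something along these lines (just a sketch, assuming `data` and `ratio` are the dataframe and ratio above):
```
import numpy as np
import pandas as pd

fails = data[data.Accceptable == 1]   # majority class (not acceptable)
passes = data[data.Accceptable == 0]  # minority class (acceptable)

# Option 1: one random draw per failing row, keeping each with probability 1/ratio
keep_mask = np.random.random(len(fails)) <= 1 / ratio
balanced_approx = pd.concat([passes, fails[keep_mask]])

# Option 2: sample exactly as many failing rows as there are passing rows
balanced_exact = pd.concat([passes, fails.sample(n=len(passes), random_state=0)])
```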
Any suggestion? | 2019/01/06 | [
"https://Stackoverflow.com/questions/54064946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10876004/"
] | Using a format string and assuming that `optimal_system` is your dictionary:
```
with open('output.txt', 'w') as f:
    for k in optimal_system.keys():
        f.write("{}: {}\n".format(k, optimal_system[k]))
```
**EDIT**
As pointed out by @wwii, the code above can also be written as:
```
with open('output.txt', 'w') as f:
    for k, v in optimal_system.items():
        f.write("{}: {}\n".format(k, v))
```
And the string can be formatted using [formatted string literals](https://docs.python.org/3.7/reference/lexical_analysis.html#f-strings), available since python 3.6, hence `f'{k}: {v}\n'` instead of `"{}: {}\n".format(k, v)`. | You can use json.dumps() to do this with the indent parameter. For example:
```
import json
dictionary_variable = {'employee_01': {'fname': 'John', 'lname': 'Doe'},
                       'employee_02': {'fname': 'Jane', 'lname': 'Doe'}}
with open('output.txt', 'w') as f:
    f.write(json.dumps(dictionary_variable, indent=4))
``` |
54,064,946 | I am working in Jupyter with Python in order to clean a set of data that I have retrieved from analysis software, and I would like to have an equal number of samples that pass and fail. Basically, my dataframe in pandas looks like this:
```
grade section area_steel Nx Myy utilisation Accceptable
0 C16/20 STD R 700 350 4534 -310000 240000 0.313 0
1 C90/105 STD R 400 600 4248 -490000 270000 0.618 0
3 C35/45 STD R 550 400 1282 580000 810000 7.049 1
4 C12/15 STD R 350 750 2386 960000 610000 5.180 1
```
However, the results which are not acceptable (1) are double the results which are acceptable (0).
no_pass = 8589, no_fail = 16999
ratio = 1.979159389917336
I would like a new dataframe with an equal number of passes and fails.
I tried the following but it doesn't seem to work:
```
import random
new_data = data[data.Accceptable <= random.random()*1/ratio]
```
It would seem that only one random value is computed and all the rows are checked against it, whereas I would like one value per row.
Any suggestion? | 2019/01/06 | [
"https://Stackoverflow.com/questions/54064946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10876004/"
] | You can use the [`pprint` module](https://docs.python.org/3/library/pprint.html) -- it also works for all other data structures.
To force every entry on a new line, set the `width` argument to something low. The `stream` argument lets you directly write to the file.
```
import pprint
mydata = {'Optimal Temperature (K)': 425,
          'Optimal Pressure (kPa)': 100,
          'other stuff': [1, 2, ...]}
with open('output.txt', 'w') as f:
    pprint.pprint(mydata, stream=f, width=1)
```
will produce:
```
{'Optimal Pressure (kPa)': 100,
 'Optimal Temperature (K)': 425,
 'other stuff': [1,
                 2,
                 Ellipsis]}
``` | You can use json.dumps() to do this with the indent parameter. For example:
```
import json
dictionary_variable = {'employee_01': {'fname': 'John', 'lname': 'Doe'},
                       'employee_02': {'fname': 'Jane', 'lname': 'Doe'}}
with open('output.txt', 'w') as f:
    f.write(json.dumps(dictionary_variable, indent=4))
``` |