qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---|
21,869,675 | ```
list_ = [(1, 'a'), (2, 'b'), (3, 'c')]
item1 = 1
item2 = 'c'
#hypothetical:
assert list_.index_by_first_value(item1) == 0
assert list_.index_by_second_value(item2) == 2
```
What would be the fastest way to emulate the `index_by_first/second_value` method in python?
If you don't understand what's going on: if you have a list of tuples (as is contained in `list_`), how would you go about finding the index of a tuple whose first/second value is the element you want to index?
---
My best guess would be this:
```
[i[0] for i in list_].index(item1)
[i[1] for i in list_].index(item2)
```
But I'm interested in seeing what you guys will come up with. Any ideas? | 2014/02/19 | [
"https://Stackoverflow.com/questions/21869675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3002473/"
] | At first, I thought along [the same lines as Nick T](https://stackoverflow.com/a/21869852/418413). Your method is fine if the number of tuples (N) is small, but of course a linear search is O(N): as the number of tuples increases, the lookup time increases directly with it. You can get O(1) lookup time with a dict mapping the zeroth element of each tuple to its index:
```
{el[0]:idx for idx,el in enumerate(list_)}
```
But the cost of converting the list to a dict may be too high! Here are my results:
```
>>> from timeit import timeit as t
>>> t('[i[0] for i in list_].index(1)', "import random;list_=[(i,'a') for i in range(10)]; random.shuffle(list_)")
1.557116985321045
>>> t('[i[0] for i in list_].index(1)', "import random;list_=[(i,'a') for i in range(100)]; random.shuffle(list_)")
7.415766954421997
>>> t('{el[0]:idx for idx,el in enumerate(list_)}[1]', "import random;list_=[(i,'a') for i in range(10)]; random.shuffle(list_)")
2.1753010749816895
>>> t('{el[0]:idx for idx,el in enumerate(list_)}[1]', "import random;list_=[(i,'a') for i in range(100)]; random.shuffle(list_)")
15.062835216522217
```
So the list-to-dict conversion is killing any benefit we get from having the O(1) lookups. But just to prove that the dict is really fast if we can avoid doing the conversion more than once:
```
>>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(10)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}")
0.050583839416503906
>>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(100)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}")
0.05001211166381836
>>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(1000)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}")
0.050894975662231445
``` | **What is fastest?** *It depends on how many times you need to use it, and if you are able to create an index dictionary from the very beginning.*
As the others have mentioned, a dictionary is much faster once you have it, but converting the list into a dictionary is costly. I'm going to show what I get on my computer so that we have concrete numbers to compare. Here's what I got:
```
>>> import timeit
>>> timeit.timeit('mydict = {val[0]:(ind, val[1]) for ind, val in enumerate(mylist)}', 'mylist = [(i, "a") for i in range(1000)]')
200.36049539601527
```
Surprisingly, this is significantly slower than even creating the list in the first place:
```
>>> timeit.timeit('mylist = [(i, "a") for i in range(1000)]')
70.15259253453814
```
So how does this compare to creating a dictionary in the first place?
```
>>> timeit.timeit('mydict = {i:("a", i) for i in range(1000)}')
90.78464277950229
```
Obviously, this is not always possible because you are not always the one creating the list, but I wanted to include this for comparisons.
Summary of initializations:
* Creating a list - 70.15
* Creating a dictionary - 90.78
* Indexing an existing list - 70.15 + 200.36 = 270.51
So now, supposing you have a list or dictionary already set up, how long does it take?
```
>>> timeit.timeit('[i[0] for i in mylist].index(random.randint(0,999))', 'import random; mylist = [(i, "a") for i in range(1000)]')
68.15473008213394
```
However, this creates a new temporary list each time, so let's look at the breakdown
```
>>> timeit.timeit('indexed = [i[0] for i in mylist]', 'import random; mylist = [(i, "a") for i in range(1000)];')
55.86422327528999
>>> timeit.timeit('indexed.index(random.randint(0,999))', 'import random; mylist = [(i, "a") for i in range(1000)]; indexed = [i[0] for i in mylist]')
12.302146224677017
```
55.86 + 12.30 = 68.16, which is consistent with the 68.15 the previous result gave us. Now the dictionary:
```
>>> timeit.timeit('mydict[random.randint(0,999)]', 'import random; mylist = [(i, "a") for i in range(1000)]; mydict = {val[0]:(ind, val[1]) for ind, val in enumerate(mylist)}')
1.5201382921450204
```
Of course, in each of these cases I'm using `random.randint` so let's time that to factor it out:
```
>>> timeit.timeit('random.randint(0,999)', 'import random')
1.4206546251180043
```
So now a summary of using the index:
* Using a list - (68.16-1.42) = 66.74 first time, (12.30-1.42) = 10.88 after that
* Using a dictionary - (1.52-1.42) = 0.10 each time
Now let's figure out how many accesses it takes for the dictionary to become more useful. First, a formula for time as a function of number of accesses:
* List - 55.86 + 10.88x
* Dictionary - 200.36 + 0.10x
* Initial dictionary - 20.63 + 0.10x
Based on these formulas, a dictionary becomes faster if you need to access it at least 14 times. If you can create a dictionary from the get-go instead of a list, then the extra overhead to create a dictionary instead of a list is more than offset by the overhead to create a list of just the first values in the tuples.
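To make the 14-access figure concrete, here is the arithmetic as a quick sketch (plain Python, plugging in the timings above):
```
# Solve: list_setup + list_per_access*x == dict_setup + dict_per_access*x
list_setup, list_per_access = 55.86, 10.88
dict_setup, dict_per_access = 200.36, 0.10

x = (dict_setup - list_setup) / (list_per_access - dict_per_access)
print(x)  # ~13.4, so from the 14th access onward the dictionary wins
```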
**So which is fastest?** *It depends on how many times you need to use it, and if you are able to create an index dictionary from the very beginning.*
Note: I'm using Python 2.7.5. Timings in Python 3.x could be very different, and also will probably be different on different machines. I'd be curious to see what someone else would come up with on their machine.
All times are in seconds, but timed for one million runs. So individual runs are about the same number in microseconds. |
21,869,675 | ```
list_ = [(1, 'a'), (2, 'b'), (3, 'c')]
item1 = 1
item2 = 'c'
#hypothetical:
assert list_.index_by_first_value(item1) == 0
assert list_.index_by_second_value(item2) == 2
```
What would be the fastest way to emulate the `index_by_first/second_value` method in python?
If you don't understand what's going on: if you have a list of tuples (as is contained in `list_`), how would you go about finding the index of a tuple whose first/second value is the element you want to index?
---
My best guess would be this:
```
[i[0] for i in list_].index(item1)
[i[1] for i in list_].index(item2)
```
But I'm interested in seeing what you guys will come up with. Any ideas? | 2014/02/19 | [
"https://Stackoverflow.com/questions/21869675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3002473/"
] | EDIT: Just kidding. As the lists grow longer it looks like the manual `for` loop takes less time. Updated to generate random lists via kojiro's method:
Just some timing tests for your information while maintaining lists. The good thing about preserving list form versus a dictionary is that it's extensible to tuples of any length.
```
import timeit
from operator import itemgetter
import random

list_ = [('a', i) for i in range(10)]
random.shuffle(list_)

def a():
    return [i[1] for i in list_].index(1)

def b():
    return zip(*list_)[1].index(1)

def c():
    return map(itemgetter(1), list_).index(1)

def d():
    for index, value in enumerate(list_):
        if 1 == value[1]:
            return index
```
With `timeit`:
```
C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.a()"
1000000 loops, best of 3: 1.21 usec per loop
C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.b()"
1000000 loops, best of 3: 1.2 usec per loop
C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.c()"
1000000 loops, best of 3: 1.45 usec per loop
C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.d()"
1000000 loops, best of 3: 0.922 usec per loop
``` | @Nick T
I think some time is wasted enumerating the list and then converting it to a dictionary, so even if it is an O(1) lookup for a dict, creating the dict in the first place is too costly to consider it a viable option for large lists.
This is the test I used to determine it:
```
import time
l = [(i, chr(i)) for i in range(1000000)]
def test1():
    t1 = time.time()
    ([i[0] for i in l].index(10872))
    t2 = time.time()
    return t2 - t1

def test2():
    t1 = time.time()
    (dict((kv[0], (i, kv[1])) for i, kv in enumerate(l))[10872][0])
    t2 = time.time()
    return t2 - t1

def test3():
    sum1 = []
    sum2 = []
    for i in range(1000):
        sum1.append(test1())
        sum2.append(test2())
    print(sum(sum1)/1000)
    print(sum(sum2)/1000)
test3()
```
EDIT: Haha Kojiro, you beat me to it! |
21,869,675 | ```
list_ = [(1, 'a'), (2, 'b'), (3, 'c')]
item1 = 1
item2 = 'c'
#hypothetical:
assert list_.index_by_first_value(item1) == 0
assert list_.index_by_second_value(item2) == 2
```
What would be the fastest way to emulate the `index_by_first/second_value` method in python?
If you don't understand what's going on: if you have a list of tuples (as is contained in `list_`), how would you go about finding the index of a tuple whose first/second value is the element you want to index?
---
My best guess would be this:
```
[i[0] for i in list_].index(item1)
[i[1] for i in list_].index(item2)
```
But I'm interested in seeing what you guys will come up with. Any ideas? | 2014/02/19 | [
"https://Stackoverflow.com/questions/21869675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3002473/"
] | **What is fastest?** *It depends on how many times you need to use it, and if you are able to create an index dictionary from the very beginning.*
As the others have mentioned, a dictionary is much faster once you have it, but converting the list into a dictionary is costly. I'm going to show what I get on my computer so that we have concrete numbers to compare. Here's what I got:
```
>>> import timeit
>>> timeit.timeit('mydict = {val[0]:(ind, val[1]) for ind, val in enumerate(mylist)}', 'mylist = [(i, "a") for i in range(1000)]')
200.36049539601527
```
Surprisingly, this is significantly slower than even creating the list in the first place:
```
>>> timeit.timeit('mylist = [(i, "a") for i in range(1000)]')
70.15259253453814
```
So how does this compare to creating a dictionary in the first place?
```
>>> timeit.timeit('mydict = {i:("a", i) for i in range(1000)}')
90.78464277950229
```
Obviously, this is not always possible because you are not always the one creating the list, but I wanted to include this for comparisons.
Summary of initializations:
* Creating a list - 70.15
* Creating a dictionary - 90.78
* Indexing an existing list - 70.15 + 200.36 = 270.51
So now, supposing you have a list or dictionary already set up, how long does it take?
```
>>> timeit.timeit('[i[0] for i in mylist].index(random.randint(0,999))', 'import random; mylist = [(i, "a") for i in range(1000)]')
68.15473008213394
```
However, this creates a new temporary list each time, so let's look at the breakdown
```
>>> timeit.timeit('indexed = [i[0] for i in mylist]', 'import random; mylist = [(i, "a") for i in range(1000)];')
55.86422327528999
>>> timeit.timeit('indexed.index(random.randint(0,999))', 'import random; mylist = [(i, "a") for i in range(1000)]; indexed = [i[0] for i in mylist]')
12.302146224677017
```
55.86 + 12.30 = 68.16, which is consistent with the 68.15 the previous result gave us. Now the dictionary:
```
>>> timeit.timeit('mydict[random.randint(0,999)]', 'import random; mylist = [(i, "a") for i in range(1000)]; mydict = {val[0]:(ind, val[1]) for ind, val in enumerate(mylist)}')
1.5201382921450204
```
Of course, in each of these cases I'm using `random.randint` so let's time that to factor it out:
```
>>> timeit.timeit('random.randint(0,999)', 'import random')
1.4206546251180043
```
So now a summary of using the index:
* Using a list - (68.16-1.42) = 66.74 first time, (12.30-1.42) = 10.88 after that
* Using a dictionary - (1.52-1.42) = 0.10 each time
Now let's figure out how many accesses it takes for the dictionary to become more useful. First, a formula for time as a function of number of accesses:
* List - 55.86 + 10.88x
* Dictionary - 200.36 + 0.10x
* Initial dictionary - 20.63 + 0.10x
Based on these formulas, a dictionary becomes faster if you need to access it at least 14 times. If you can create a dictionary from the get-go instead of a list, then the extra overhead to create a dictionary instead of a list is more than offset by the overhead to create a list of just the first values in the tuples.
**So which is fastest?** *It depends on how many times you need to use it, and if you are able to create an index dictionary from the very beginning.*
Note: I'm using Python 2.7.5. Timings in Python 3.x could be very different, and also will probably be different on different machines. I'd be curious to see what someone else would come up with on their machine.
All times are in seconds, but timed for one million runs. So individual runs are about the same number in microseconds. | @Nick T
I think some time is wasted enumerating the list and then converting it to a dictionary, so even if it is an O(1) lookup for a dict, creating the dict in the first place is too costly to consider it a viable option for large lists.
This is the test I used to determine it:
```
import time
l = [(i, chr(i)) for i in range(1000000)]
def test1():
    t1 = time.time()
    ([i[0] for i in l].index(10872))
    t2 = time.time()
    return t2 - t1

def test2():
    t1 = time.time()
    (dict((kv[0], (i, kv[1])) for i, kv in enumerate(l))[10872][0])
    t2 = time.time()
    return t2 - t1

def test3():
    sum1 = []
    sum2 = []
    for i in range(1000):
        sum1.append(test1())
        sum2.append(test2())
    print(sum(sum1)/1000)
    print(sum(sum2)/1000)
test3()
```
EDIT: Haha Kojiro, you beat me to it! |
21,869,675 | ```
list_ = [(1, 'a'), (2, 'b'), (3, 'c')]
item1 = 1
item2 = 'c'
#hypothetical:
assert list_.index_by_first_value(item1) == 0
assert list_.index_by_second_value(item2) == 2
```
What would be the fastest way to emulate the `index_by_first/second_value` method in python?
If you don't understand what's going on: if you have a list of tuples (as is contained in `list_`), how would you go about finding the index of a tuple whose first/second value is the element you want to index?
---
My best guess would be this:
```
[i[0] for i in list_].index(item1)
[i[1] for i in list_].index(item2)
```
But I'm interested in seeing what you guys will come up with. Any ideas? | 2014/02/19 | [
"https://Stackoverflow.com/questions/21869675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3002473/"
] | EDIT: Just kidding. As the lists grow longer it looks like the manual `for` loop takes less time. Updated to generate random lists via kojiro's method:
Just some timing tests for your information while maintaining lists. The good thing about preserving list form versus a dictionary is that it's extensible to tuples of any length.
```
import timeit
from operator import itemgetter
import random

list_ = [('a', i) for i in range(10)]
random.shuffle(list_)

def a():
    return [i[1] for i in list_].index(1)

def b():
    return zip(*list_)[1].index(1)

def c():
    return map(itemgetter(1), list_).index(1)

def d():
    for index, value in enumerate(list_):
        if 1 == value[1]:
            return index
```
With `timeit`:
```
C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.a()"
1000000 loops, best of 3: 1.21 usec per loop
C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.b()"
1000000 loops, best of 3: 1.2 usec per loop
C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.c()"
1000000 loops, best of 3: 1.45 usec per loop
C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.d()"
1000000 loops, best of 3: 0.922 usec per loop
``` | **What is fastest?** *It depends on how many times you need to use it, and if you are able to create an index dictionary from the very beginning.*
As the others have mentioned, a dictionary is much faster once you have it, but converting the list into a dictionary is costly. I'm going to show what I get on my computer so that we have concrete numbers to compare. Here's what I got:
```
>>> import timeit
>>> timeit.timeit('mydict = {val[0]:(ind, val[1]) for ind, val in enumerate(mylist)}', 'mylist = [(i, "a") for i in range(1000)]')
200.36049539601527
```
Surprisingly, this is significantly slower than even creating the list in the first place:
```
>>> timeit.timeit('mylist = [(i, "a") for i in range(1000)]')
70.15259253453814
```
So how does this compare to creating a dictionary in the first place?
```
>>> timeit.timeit('mydict = {i:("a", i) for i in range(1000)}')
90.78464277950229
```
Obviously, this is not always possible because you are not always the one creating the list, but I wanted to include this for comparisons.
Summary of initializations:
* Creating a list - 70.15
* Creating a dictionary - 90.78
* Indexing an existing list - 70.15 + 200.36 = 270.51
So now, supposing you have a list or dictionary already set up, how long does it take?
```
>>> timeit.timeit('[i[0] for i in mylist].index(random.randint(0,999))', 'import random; mylist = [(i, "a") for i in range(1000)]')
68.15473008213394
```
However, this creates a new temporary list each time, so let's look at the breakdown
```
>>> timeit.timeit('indexed = [i[0] for i in mylist]', 'import random; mylist = [(i, "a") for i in range(1000)];')
55.86422327528999
>>> timeit.timeit('indexed.index(random.randint(0,999))', 'import random; mylist = [(i, "a") for i in range(1000)]; indexed = [i[0] for i in mylist]')
12.302146224677017
```
55.86 + 12.30 = 68.16, which is consistent with the 68.15 the previous result gave us. Now the dictionary:
```
>>> timeit.timeit('mydict[random.randint(0,999)]', 'import random; mylist = [(i, "a") for i in range(1000)]; mydict = {val[0]:(ind, val[1]) for ind, val in enumerate(mylist)}')
1.5201382921450204
```
Of course, in each of these cases I'm using `random.randint` so let's time that to factor it out:
```
>>> timeit.timeit('random.randint(0,999)', 'import random')
1.4206546251180043
```
So now a summary of using the index:
* Using a list - (68.16-1.42) = 66.74 first time, (12.30-1.42) = 10.88 after that
* Using a dictionary - (1.52-1.42) = 0.10 each time
Now let's figure out how many accesses it takes for the dictionary to become more useful. First, a formula for time as a function of number of accesses:
* List - 55.86 + 10.88x
* Dictionary - 200.36 + 0.10x
* Initial dictionary - 20.63 + 0.10x
Based on these formulas, a dictionary becomes faster if you need to access it at least 14 times. If you can create a dictionary from the get-go instead of a list, then the extra overhead to create a dictionary instead of a list is more than offset by the overhead to create a list of just the first values in the tuples.
**So which is fastest?** *It depends on how many times you need to use it, and if you are able to create an index dictionary from the very beginning.*
Note: I'm using Python 2.7.5. Timings in Python 3.x could be very different, and also will probably be different on different machines. I'd be curious to see what someone else would come up with on their machine.
All times are in seconds, but timed for one million runs. So individual runs are about the same number in microseconds. |
36,584,975 | I've a little problem with my code.
I tried to rewrite code from python to java.
In Python it's:
```
data = bytearray(filesize)
f.readinto(data)
```
Then I tried to write it in java like this:
```
try {
    data = Files.readAllBytes(file.toPath());
} catch (IOException ex) {
    Logger.getLogger(Encrypter.class.getName()).log(Level.SEVERE, null, ex);
}

for (int index : data) {
    data[index] = (byte) ((byte) Math.pow(data[index], genfun((fileSize), index)) & 0xFF);
}
```
Everything seems fine to me, but when I run it there is a java.lang.ArrayIndexOutOfBoundsException: -77.
Does anyone have a clue, or can someone rewrite it better? | 2016/04/12 | [
"https://Stackoverflow.com/questions/36584975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6195753/"
] | Since `@metrics` is an array, it doesn't look like you're calling any code on your model at all, so your model code isn't actually doing anything.
This code in your controller will generate the output you're looking for:
```
CSV.generate do |csv|
  @metrics.each { |item| csv << [item] }
end
``` | This is just a guess, but try formatting `@metrics` as an array of arrays: so each element of `@metrics` is its own array. It seems likely that `to_csv` treats an array like a row, so you need an array of arrays to generate new lines.
```
[["Group Name,1"], ["25"], ["44,2,5"]]
```
**UPDATE**
Looking at your code again, `@model` is not an instance of any model. It is simply an array. When you call `to_csv` on it, it is not reading any methods referenced in your model. I'm guessing that Ruby's built-in `Array` object has a `to_csv` method baked in, which is being called and explains why you aren't getting any errors. @Anthony E has correctly said this in his answer (though I suspect that my answer will also work). |
31,767,709 | What's a good command from term to render all images in a dir into one browser window?
Looking for something like this:
`python -m SimpleHTTPServer 8080`
But instead of a list ...
... Would like to see **all the images rendered in a single browser window**, just flowed naturally, at natural dimensions, just scroll down for how many images there are to see them all in their natural rendered state. | 2015/08/02 | [
"https://Stackoverflow.com/questions/31767709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1618304/"
] | I found a perl CGI script to do this:
```
#!/usr/bin/perl -wT
# myscript.pl
use strict;
use CGI;
use Image::Size;
my $q = new CGI;
my $imageDir = "./";
my @images;
opendir DIR, "$imageDir" or die "Can't open $imageDir $!";
@images = grep { /\.(?:png|gif|jpg)$/i } readdir DIR;
# @images = grep { /\.(?:png|gif|jpg|webm|web|mp4|svg)$/i } readdir DIR;
closedir DIR;
print $q->header("text/html"),
      $q->start_html("Images in the directory you specified."),
      $q->h1("Images in the directory you specified.");

foreach my $image (@images) {
    my ($width, $height) = imgsize("$image");
    print $q->a({-href=>$image},
                $q->img({-src=>$image,
                         -width=>$width,
                         -height=>$height})
    );
}
print $q->end_html;
```
To run it on macOS you'll need to install these modules like this:
`cpan CGI`
`cpan Image::Size`
Put the script in the directory that contains the images you want to preview.
…then say `perl -wT myscript.pl > output.html`
Open the generated `output.html` to see all the images in a single browser window at their natural dimensions.
Related to this question and answer: [How to run this simple Perl CGI script on Mac from terminal?](https://stackoverflow.com/questions/61927403/how-to-run-this-simple-perl-cgi-script-on-mac-from-terminal) | This is quite easy; you can program something like this in a couple of minutes.
Just create an array of all the images in `./`, create a var `s = ''`, and append an `<img>` tag to `s` for each image in `./`, then send the result to the web browser. For the server part, Google is your friend.
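A minimal sketch of that idea in Python (hypothetical file names; it writes an `output.html` you could then serve with `python -m SimpleHTTPServer`):
```
import os

# collect the image files in the current directory
images = [f for f in os.listdir('.') if f.lower().endswith(('.png', '.gif', '.jpg'))]

# build one <img> tag per image and write a single page
s = ''.join('<img src="%s">\n' % name for name in images)
with open('output.html', 'w') as out:
    out.write('<html><body>\n%s</body></html>' % s)
``` |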
41,708,458 | I have many bash scripts to help set my current session environment variables. I need the env variables set so I can use the subprocess module to run commands in my python scripts. This is how I execute the bash scripts:
```
. ./file1.sh
```
Below is the beginning of the bash script:
```
echo "Setting Environment Variable..."
export HORCMINST=99
echo $HORCMINST
...
```
Is there a way to call these bash scripts from a python script or do something similar within a python script? | 2017/01/17 | [
"https://Stackoverflow.com/questions/41708458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7259469/"
] | ### Using `shell=True` With Your Existing Script
First, in terms of the *very simplest thing* -- if you're using `shell=True`, you can tell the shell that starts to run the contents of your preexisting script unmodified.
That is to say -- if you were initially doing this:
```
subprocess.Popen(['your-command', 'arg1', 'arg2'])
```
...then you can do the following to execute that same command, with almost the same security guarantees (the only additional vulnerabilities, so long as the contents of `file1.sh` are trusted, are to out-of-band issues such as shellshock):
```
# this has the security of passing explicit out-of-band args
# but sources your script before the out-of-process command
subprocess.Popen(['. "$1"; shift; exec "$@"', "_", "./file1.sh",
"your-command", "arg1", "arg2"], shell=True)
```
---
### Using `/proc/self/environ` to export environment variables in a NUL-delimited stream
The ideal thing to do is to export your environment variables in an unambiguous form -- a NUL-delimited stream is ideal -- and then parse that stream (which is in a very unambiguous format) in Python.
Assuming Linux, you can export the complete set of environment variables as follows:
```
# copy all our environment variables, in a NUL-delimited stream, to myvars.environ
cat </proc/self/environ >myvars.environ
```
...or you can export a specific set of variables by hand:
```
for varname in HORCMINST PATH; do
    printf '%s=%s\0' "$varname" "${!varname}"
done >myvars.environ
```
---
### Reading and parsing a NUL-delimited stream in Python
Then you just need to read and parse them:
```
#!/usr/bin/env python
env = {}
for var_def in open('myvars.environ', 'r').read().split('\0'):
    if not var_def:
        continue  # the stream ends with a NUL, so skip the trailing empty entry
    (key, value) = var_def.split('=', 1)
    env[key] = value

import subprocess
subprocess.Popen(['your-command', 'arg1', 'arg2'], env=env)
```
You could also immediately apply those variables by running `os.environ[key]=value`.
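For instance, a minimal sketch of applying them to the current process instead of passing `env=`:
```
import os

for key, value in env.items():
    os.environ[key] = value  # in CPython this also calls putenv() under the hood
```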
---
### Reading and parsing a NUL-delimited stream in bash
Incidentally, that same format is also easy to parse in bash:
```
while IFS= read -r -d '' var_def; do
    key=${var_def%%=*}
    value=${var_def#*=}
    printf -v "$key" '%s' "$value"
    export "$key"
done <myvars.environ
# ...put the rest of your bash script here
```
---
Now, *why* a NUL-delimited stream? Because environment variables are C strings -- unlike Python strings, they can't contain NUL. As such, NUL is the one and only character that can be safely used to delimit them.
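A quick illustration of that constraint (Python 3):
```
import os

os.environ['X'] = 'a\nb'   # a newline inside a value is perfectly legal...
os.environ['Y'] = 'a\0b'   # ...but this raises ValueError: embedded null byte
```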
For instance, someone who tried to use newlines could be stymied by an environment variable that *contained* a literal newline -- and if someone is, say, embedding a short Python script inside an environment variable, that's a very plausible event! | You should consider the Python builtin `os` [module](https://docs.python.org/2/library/os.html). The attribute
`os.environ` is a dictionary of environment variables that you can *read*, e.g.
```
import os
os.environ["USER"]
```
You cannot, however, *write* bash environment variables from the child process (see e.g., [How to use export with Python on Linux](https://stackoverflow.com/questions/1506010/how-to-use-export-with-python-on-linux)). |
41,708,458 | I have many bash scripts to help set my current session environment variables. I need the env variables set so I can use the subprocess module to run commands in my python scripts. This is how I execute the bash scripts:
```
. ./file1.sh
```
Below is the beginning of the bash script:
```
echo "Setting Environment Variable..."
export HORCMINST=99
echo $HORCMINST
...
```
Is there a way to call these bash scripts from a python script or do something similar within a python script? | 2017/01/17 | [
"https://Stackoverflow.com/questions/41708458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7259469/"
] | ### Using `shell=True` With Your Existing Script
First, in terms of the *very simplest thing* -- if you're using `shell=True`, you can tell the shell that starts to run the contents of your preexisting script unmodified.
That is to say -- if you were initially doing this:
```
subprocess.Popen(['your-command', 'arg1', 'arg2'])
```
...then you can do the following to execute that same command, with almost the same security guarantees (the only additional vulnerabilities, so long as the contents of `file1.sh` are trusted, are to out-of-band issues such as shellshock):
```
# this has the security of passing explicit out-of-band args
# but sources your script before the out-of-process command
subprocess.Popen(['. "$1"; shift; exec "$@"', "_", "./file1.sh",
"your-command", "arg1", "arg2"], shell=True)
```
---
### Using `/proc/self/environ` to export environment variables in a NUL-delimited stream
The ideal thing to do is to export your environment variables in an unambiguous form -- a NUL-delimited stream is ideal -- and then parse that stream (which is in a very unambiguous format) in Python.
Assuming Linux, you can export the complete set of environment variables as follows:
```
# copy all our environment variables, in a NUL-delimited stream, to myvars.environ
cat </proc/self/environ >myvars.environ
```
...or you can export a specific set of variables by hand:
```
for varname in HORCMINST PATH; do
    printf '%s=%s\0' "$varname" "${!varname}"
done >myvars.environ
```
---
### Reading and parsing a NUL-delimited stream in Python
Then you just need to read and parse them:
```
#!/usr/bin/env python
env = {}
for var_def in open('myvars.environ', 'r').read().split('\0'):
    if not var_def:
        continue  # the stream ends with a NUL, so skip the trailing empty entry
    (key, value) = var_def.split('=', 1)
    env[key] = value

import subprocess
subprocess.Popen(['your-command', 'arg1', 'arg2'], env=env)
```
You could also immediately apply those variables by running `os.environ[key]=value`.
---
### Reading and parsing a NUL-delimited stream in bash
Incidentally, that same format is also easy to parse in bash:
```
while IFS= read -r -d '' var_def; do
    key=${var_def%%=*}
    value=${var_def#*=}
    printf -v "$key" '%s' "$value"
    export "$key"
done <myvars.environ
# ...put the rest of your bash script here
```
---
Now, *why* a NUL-delimited stream? Because environment variables are C strings -- unlike Python strings, they can't contain NUL. As such, NUL is the one and only character that can be safely used to delimit them.
For instance, someone who tried to use newlines could be stymied by an environment variable that *contained* a literal newline -- and if someone is, say, embedding a short Python script inside an environment variable, that's a very plausible event! | Both of your questions have been answered before.
You can execute bash scripts from Python with something like:
```
import subprocess
subprocess.Popen("cwm --rdf test.rdf --ntriples > test.nt", shell=True)  # shell=True so the single command string and the > redirection work
```
See this question [running bash commands in python](https://stackoverflow.com/q/4256107/1916158)
It is better to set the environment variables directly in Python; see this question [How to set environment variables in Python](https://stackoverflow.com/q/5971312/1916158) |
58,225,904 | I have a multiline string in python that looks like this
```
"""1234 dog list some words
1432 cat line 2
1789 cat line3
1348 dog line 4
1678 dog line 5
1733 fish line 6
1093 cat more words"""
```
I want to be able to group specific lines by the animals in python. So my output would look like
```
dog
1234 dog list some words
1348 dog line 4
1678 dog line 5
cat
1432 cat line 2
1789 cat line3
1093 cat more words
fish
1733 fish line 6
```
So far I know that I need to split the text by each line
```
def parser(txt):
    for line in txt.splitlines():
        print(line)
```
But I'm not sure how to continue. How would I group each line with an animal? | 2019/10/03 | [
"https://Stackoverflow.com/questions/58225904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9476376/"
] | >
> Or maybe there is a simpler way of achieving this?
>
>
>
Consider this option: *a function for each type* that is called by *the same dispatching function*.
```
void testVariableInput_int(const int *a, const int *b, int *out, int m) {
    while (m > 0) {
        m--;
        out[m] = a[m] + b[m];
    }
}

// Like-wise for the other 2
void testVariableInput_float(const float *a, const float *b, float *out, int m) {...}
void testVariableInput_double(const double *a, const double *b, double *out, int m) {...}

void testVariableInput(void *a, void *b, void *out, int m, int type) {
    switch (type) {
        case 1: testVariableInput_int(a, b, out, m); break;
        case 2: testVariableInput_float(a, b, out, m); break;
        case 3: testVariableInput_double(a, b, out, m); break;
    }
}
```
Sample use
```
float a[] = {1, 2, 3};
float b[] = {4, 5, 6};
float c[] = {0, 0, 0};
#define N (sizeof c/sizeof c[0])
#define TYPE_FLOAT 2
testVariableInput(a, b, c, N, TYPE_FLOAT);
```
In C, drop the unneeded casts by taking advantage of the fact that a `void *` converts to any object pointer type without a cast, and any object pointer converts to a `void *` without a cast as well.
>
> Advanced
>
>
>
Research `_Generic` to avoid the need for `int type`.
Untested sample code:
```
#define testVariableInput(a, b, c) _Generic(*(a), \
    double: testVariableInput_double, \
    float: testVariableInput_float, \
    int: testVariableInput_int, \
    default: testVariableInput_TBD \
)((a), (b), (c), sizeof (a)/sizeof *(a))
float a[] = {1, 2, 3};
float b[] = {4, 5, 6};
float c[] = {0, 0, 0};
testVariableInput(a, b, c);
```
`_Generic` is a bit tricky to use. For OP I recommend sticking with the non-`_Generic` approach. | >
> Or maybe there is a simpler way of achieving this?
>
>
>
I like function pointers. Here we can pass a function pointer that adds two elements. That way we can separate the logic of the function from the abstraction that handles the types.
```
#include <stdlib.h>
#include <stdio.h>
void add_floats(const void *a, const void *b, void *res) {
    *(float*)res = *(const float*)a + *(const float*)b;
}

void add_ints(const void *a, const void *b, void *res) {
    *(int*)res = *(const int*)a + *(const int*)b;
}

void add_doubles(const void *a, const void *b, void *res) {
    *(double*)res = *(const double*)a + *(const double*)b;
}

void testVariableInput(const void *a, const void *b, void *out,
                       // arguments like for qsort
                       size_t nmemb, size_t size,
                       // the function that adds two elements
                       void (*add)(const void *a, const void *b, void *res)) {
    // we use pointers to char so that we can increment them properly
    const char *ca = a;
    const char *cb = b;
    char *cout = out;
    for (size_t i = 0; i < nmemb; ++i) {
        add(ca, cb, cout);
        ca += size;
        cb += size;
        cout += size;
    }
}

#define testVariableInput_g(a, b, out, nmemb) \
    testVariableInput((a), (b), (out), (nmemb), sizeof(*(out)), \
        _Generic((out), float *: add_floats, int *: add_ints, double *: add_doubles));

int main() {
    float a[] = {1, 2, 3};
    float b[] = {4, 5, 6};
    float c[] = {0, 0, 0};
    testVariableInput(a, b, c, 3, sizeof(float), add_floats);
    testVariableInput_g(a, b, c, 3);
}
```
With the help of \_Generic, we can also automagically infer which callback to pass to the function for a limited set of types. It is also easy to hand new, custom types to the function without changing its logic. |
64,771,870 | I am using a Colab Pro TPU instance for the purpose of patch image classification.
I'm using TensorFlow version 2.3.0.
When calling model.fit I get the following error: `InvalidArgumentError: Unable to find the relevant tensor remote_handle: Op ID: 14738, Output num: 0` with the following trace:
```
--------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-20-5fd2ec1ce2f9> in <module>()
15 steps_per_epoch=STEPS_PER_EPOCH,
16 validation_data=dev_ds,
---> 17 validation_steps=VALIDATION_STEPS
18 )
6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside `run_distribute_coordinator` already.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1084 data_handler._initial_epoch = ( # pylint: disable=protected-access
1085 self._maybe_load_initial_epoch_from_ckpt(initial_epoch))
-> 1086 for epoch, iterator in data_handler.enumerate_epochs():
1087 self.reset_metrics()
1088 callbacks.on_epoch_begin(epoch)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in enumerate_epochs(self)
1140 if self._insufficient_data: # Set by `catch_stop_iteration`.
1141 break
-> 1142 if self._adapter.should_recreate_iterator():
1143 data_iterator = iter(self._dataset)
1144 yield epoch, data_iterator
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in should_recreate_iterator(self)
725 # each epoch.
726 return (self._user_steps is None or
--> 727 cardinality.cardinality(self._dataset).numpy() == self._user_steps)
728
729 def _validate_args(self, y, sample_weights, steps):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in numpy(self)
1061 """
1062 # TODO(slebedev): Consider avoiding a copy for non-CPU or remote tensors.
-> 1063 maybe_arr = self._numpy() # pylint: disable=protected-access
1064 return maybe_arr.copy() if isinstance(maybe_arr, np.ndarray) else maybe_arr
1065
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _numpy(self)
1029 return self._numpy_internal()
1030 except core._NotOkStatusException as e: # pylint: disable=protected-access
-> 1031 six.raise_from(core._status_to_exception(e.code, e.message), None) # pylint: disable=protected-access
1032
1033 @property
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: Unable to find the relevant tensor remote_handle: Op ID: 14738, Output num: 0
```
I have two dataset zip files containing more than 300,000 training examples and fewer than 100,000 validation examples, which I download from my Google Drive using !gdown and unzip on the Colab VM. For the data pipeline I use the tf.data.Dataset API: I feed it a list of file paths and then use the .map method to fetch the images. **Please keep in mind that my training dataset can't fit into memory.**
Here is the code for creating the datasets:
```
train_dir = '/content/content/Data/train'
dev_dir = '/content/content/Data/dev'
def create_dataset(dir, label_dic, is_training=True):
    filepaths = list(tf.data.Dataset.list_files(dir + '/*.jpg'))
    labels = []
    for f in filepaths:
        ind = f.numpy().decode().split('/')[-1].split('.')[0]
        labels.append(label_dic[ind])
    ds = tf.data.Dataset.from_tensor_slices((filepaths, labels))
    ds = ds.map(load_images, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    ds = ds.cache()
    if is_training:
        ds = ds.shuffle(len(filepaths), reshuffle_each_iteration=True)
        ds = ds.repeat(EPOCHS)
    ds = ds.batch(BATCH_SIZE)
    ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
    return ds
train_ds = create_dataset(train_dir, train_label)
dev_ds = create_dataset(dev_dir, dev_label, False)
```
And here is the code for creating and compiling my model and fitting the datasets; I use a custom Keras model with a VGG16 backbone:
```
def create_model(input_shape, batch_size):
    VGG16 = keras.applications.VGG16(include_top=False, input_shape=input_shape, weights='imagenet')
    for layer in VGG16.layers:
        layer.trainable = False
    input_layer = keras.Input(shape=input_shape, batch_size=batch_size)
    VGG_out = VGG16(input_layer)
    x = Flatten(name='flatten', input_shape=(512, 8, 8))(VGG_out)
    x = Dense(256, activation='relu', name='fc1')(x)
    x = Dropout(0.5)(x)
    x = Dense(1, activation='sigmoid', name='fc2')(x)
    model = Model(input_layer, x)
    model.summary()
    return model

with strategy.scope():
    model = create_model(INPUT_SHAPE, BATCH_SIZE)
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])

model.fit(train_ds,
          epochs=5,
          steps_per_epoch=STEPS_PER_EPOCH,
          validation_data=dev_ds,
          validation_steps=VALIDATION_STEPS
          )
```
**For TPU initialization and strategy** I use `strategy = tf.distribute.TPUStrategy(resolver)`.
The initialization code is shown below:
```
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
```
A copy of the whole notebook with outputs is available at: [Colab Ipython Notebook](https://github.com/Pooya448/Tumor_Segmentation/blob/main/Patch_Classification.ipynb) | 2020/11/10 | [
"https://Stackoverflow.com/questions/64771870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8777119/"
] | @Pooya448
I know this is quite late, but this may be useful for anyone stuck here.
Following is the function I use to connect to TPUs.
```py
def connect_to_tpu(tpu_address: str = None):
    if tpu_address is not None:  # When using GCP
        cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
            tpu=tpu_address)
        if tpu_address not in ("", "local"):
            tf.config.experimental_connect_to_cluster(cluster_resolver)
            tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
        strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)
        print("Running on TPU ", cluster_resolver.master())
        print("REPLICAS: ", strategy.num_replicas_in_sync)
        return cluster_resolver, strategy
    else:  # When using Colab or Kaggle
        try:
            cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
            strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)
            print("Running on TPU ", cluster_resolver.master())
            print("REPLICAS: ", strategy.num_replicas_in_sync)
            return cluster_resolver, strategy
        except:
            print("WARNING: No TPU detected.")
            mirrored_strategy = tf.distribute.MirroredStrategy()
            return None, mirrored_strategy
``` | I actually tried all the methods suggested on GitHub and Stack Overflow, and none of them worked for me. What did work was creating a new notebook, connecting it to the TPU, and training the model there. It worked fine, so maybe the issue was related to the notebook itself at the time it was created. |
32,017,621 | I would like to connect to a specific web link and receive the HTTP response.
Here is my Python code:
```
import urllib.request
import os,sys,re,datetime
fp = urllib.request.urlopen("http://www.python.org")
mybytes = fp.read()
mystr = mybytes.decode(encoding=sys.stdout.encoding)
fp.close()
```
When I pass the response as a parameter to:
`BeautifulSoup(str(mystr), 'html.parser')`
to get the cleaned HTML text, I get the following error:
```
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u25bc' in position 1139: character maps to <undefined>.
```
The question: how can I solve this problem?
**Complete code:**
```
import urllib.request
import os,sys,re,datetime
fp = urllib.request.urlopen("http://www.python.org")
mybytes = fp.read()
mystr = mybytes.decode(encoding=sys.stdout.encoding)
fp.close()
from bs4 import BeautifulSoup
soup = BeautifulSoup(str(mystr), 'html.parser')
mystr = soup;
print(mystr.get_text())
``` | 2015/08/14 | [
"https://Stackoverflow.com/questions/32017621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5228214/"
] | First of all: <https://docs.python.org/2/tutorial/classes.html#inheritance>
At any rate...
```
GParent.testmethod(self)  # <-- calling a method before it is defined

class GParent():  # <-- always inherit object in your base class to ensure you are using new-style classes
    def testmethod(self):
        print "This is test method"

class Parent():  # <-- not inheriting anything
    def testmethod(self):  # <-- if you were inheriting GParent you would be overriding the method that is defined in GParent here.
        print "This is test method"

class Child(Parent):
    def __init__(self):
        print "This is init method"
        GParent.testmethod(self)  # <-- if you want to call the method you are inheriting you would use self.testmethod()

c = Child()
```
Take a look at this code and run it, maybe it will help you out.
```
from __future__ import print_function  # so we can use the python 3 print function

class GParent(object):
    def gparent_testmethod(self):
        print("Grandparent test method")

class Parent(GParent):
    def parent_testmethod(self):
        print("Parent test method")

class Child(Parent):
    def child_testmethod(self):
        print("This is the child test method")

c = Child()
c.gparent_testmethod()
c.parent_testmethod()
c.child_testmethod()
``` | You cannot call GParent's `testmethod` without an instance of `GParent` as its first argument.
**Inheritance**
```
class GParent(object):
    def testmethod(self):
        print "I'm a grandpa"

class Parent(GParent):
    # implicitly inherit __init__()
    # inherit and override testmethod()
    def testmethod(self):
        print "I'm a papa"

class Child(Parent):
    def __init__(self):
        super(Child, self).__init__()
        # You can only call testmethod with an instance of Child
        # though technically it is calling the parent's up the chain
        self.testmethod()
    # inherit parent's testmethod implicitly

c = Child()  # prints "I'm a papa"
```
However, two ways of calling a parent's method explicitly are through composition or a class method.
**Composition**
```
class Parent(object):
    def testmethod(self):
        print "I'm a papa"

class Child(object):
    def __init__(self):
        self.parent = Parent()
        # call own testmethod
        self.testmethod()
        # call parent's method
        self.parentmethod()

    def parentmethod(self):
        self.parent.testmethod()

    def testmethod(self):
        print "I'm a son"

c = Child()
```
**Class method**
```
class Parent(object):
    @classmethod
    def testmethod(cls):
        print "I'm a papa"

class Child(object):
    def __init__(self):
        # call own testmethod
        self.testmethod()
        # call parent's method
        Parent.testmethod()

    def testmethod(self):
        print "I'm a son"

c = Child()
```
It has become advisable to use composition when dealing with multiple inheritance, since inheritance creates a dependency on the parent class. |
56,711,890 | If I had a function that had three or four optional keyword arguments is it best to use \*\*kwargs or to specify them in the function definition?
I feel that
`def foo(required, option1=False, option2=False, option3=True)`
looks much clumsier than
`def foo(required, **kwargs)`.
However, if I need to use these keywords as conditionals and they don't exist, KeyErrors will be thrown, and checking for the keys before each conditional feels a bit messy.
```py
def foo(required, **kwargs):
    print(required)
    if 'true' in kwargs and kwargs['true']:
        print(kwargs['true'])

foo('test', true='True')
foo('test2')
```
vs
```py
def foo(required, true=None):
    print(required)
    if true:
        print(true)

foo('test', true='True')
foo('test2')
```
I am wondering what the most pythonic way is. I've got a function that I am working on that depending on the parameters passed will return different values so I am wondering the best way to handle it. It works now, but I wonder if there is a better and more pythonic way of handling it. | 2019/06/22 | [
"https://Stackoverflow.com/questions/56711890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10206378/"
] | If the function is only using the parameters in its own operation, you should list them all explicitly. This will allow Python to detect if an invalid argument was provided in a call to your function.
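As a quick illustration of that detection (a hypothetical call):
```py
def foo(required, option1=False):
    return required

foo('test', option2=True)  # TypeError: foo() got an unexpected keyword argument 'option2'
```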
You use `**kwargs` when you need to accept dynamic parameters, often because you're passing them along to other functions and you want your function to accept any arguments that the other function needs, e.g. `other_func(**kwargs)` | One easy way to pass in several optional parameters while keeping your function definition clean is to use a dictionary that contains all the parameters. That way your function becomes
```py
def foo(required, params):
    print(required)
    if 'true' in params and params['true']:
        print(params['true'])
```
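For context, calls under this pattern would look something like this (a hypothetical usage sketch):
```py
foo('test', {'true': 'True'})  # the dict supplies the optional parameters
foo('test2', {})               # no optional parameters
```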
You really want to use `**kwargs` if your parameters can be anything and you don't really care, such as for a decorator function. If you're actually going to use the parameters in the function, you should specify them explicitly. |
26,625,845 | I work on a project in which I need a python web server. This project is hosted on Amazon EC2 (ubuntu).
I have made two unsuccessful attempts so far:
1. run `python -m SimpleHTTPServer 8080`. It works if I launch a browser on the EC2 instance and head to localhost:8080 or <*ec2-public-IP*>:8080. However I can't access the server from a browser on a remote machine (using <*ec2-public-IP*>:8080).
2. create a python class which allows me to specify both the IP address and port to serve files. Same problem as 1.
There are several questions on SO concerning Python web server on EC2, but none seems to answer my question: what should I do in order to access the python web server remotely ?
One more point: I don't want to use a Python web framework (Django or anything else): I'll use the web server to build a kind of REST API, not for serving HTML content. | 2014/10/29 | [
"https://Stackoverflow.com/questions/26625845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3592547/"
] | You should open port 8080 and remove the IP restriction in your security groups, with a rule such as:
`All TCP | TCP | 0 - 65535 | 0.0.0.0/0`
The last item means this server will accept requests from any IP and any port. | You possibly need to look at `IAM` in `AWS`.
`AWS` sets security permissions that require you to open up the port; otherwise only `localhost` can reach your web service.
[aws link](http://aws.amazon.com/) |
51,733,698 | I have a program that right now grabs data like temperatures and loads using a PowerShell script and the WMI. It outputs the data as a JSON file. Now let me preface this by saying this is my first time ever working with JSON and I'm not very familiar with the JSON Python library. Here is the code for my program:
```
import subprocess
import json
p = subprocess.Popen(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", ". \"./TestScript\";", "&NSV"], stdout=subprocess.PIPE)
(output, err) = p.communicate()
data = json.loads(output)
for mNull in data:
    del mNull['Scope']
    del mNull['Path']
    del mNull['Options']
    del mNull['ClassPath']
    del mNull['Properties']
    del mNull['SystemProperties']
    del mNull['Qualifiers']
    del mNull['Site']
    del mNull['Container']
    del mNull['PSComputerName']
    del mNull['__GENUS']
    del mNull['__CLASS']
    del mNull['__SUPERCLASS']
    del mNull['__DYNASTY']
    del mNull['__RELPATH']
    del mNull['__PROPERTY_COUNT']
    del mNull['__DERIVATION']
    del mNull['__SERVER']
    del mNull['__NAMESPACE']
    del mNull['__PATH']
fdata = json.dumps(data,indent=2)
print(fdata)
```
Now here is the resulting JSON:
```
[
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
```
As you can see every dictionary in the list has the keys `Name`, `SensorType` and `Value`.
What I want to do is make it so that each list has a "label" equal to the `Name` in each one, so I can call for data from specific entries, one at a time. Once again, I'm kind of a newbie with JSON and its library so I'm not even sure if this sort of thing is possible. Any help would be greatly appreciated! Have a good day! :)
Edit 1:
Here is an example, using the first 2, of what I would like the program to be able to output.
```
[
"Memory":{
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2":{
"SensorType": "Temperature",
"Value": 69
}
]
```
Once again, I don't even know if this is valid JSON, but I want it to just do something at least similar to that so I can call, for example, `print(data["Memory"]["Value"])` and get back `53.3276978`.
Edit 2:
It did just occur to me that there are some names with multiple sensor types; for example, `"CPU Core #1"` and `"CPU Core #2"` both have `"Temperature"`, `"Load"`, and `"Clock"`. Using the above example could cause some conflicts, so is there a way we could account for that? | 2018/08/07 | [
"https://Stackoverflow.com/questions/51733698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9801535/"
] | You can build a new dictionary in the shape you want like this:
```
...
data = {
    element["Name"]: {
        key: value for key, value in element.items() if key != "Name"
    }
    for element in json.loads(output)
}
fdata = json.dumps(data, indent=4)
...
```
Result:
```
{
"Memory": {
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2": {
"SensorType": "Clock",
"Value": 2700.00073
},
(and so on)
}
``` | ```
x="""[
{
"Name": "Memory 1",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 53.3276978
}]"""
json_obj = json.loads(x)
new_list = []
for item in json_obj:
    name = item.pop('Name')
    new_list.append({name: item})
print(json.dumps(new_list, indent=4))
```
Output
```
[
{
"Memory 1": {
"SensorType": "Load",
"Value": 53.3276978
}
},
{
"CPU Core #2": {
"SensorType": "Load",
"Value": 53.3276978
}
}
]
``` |
51,733,698 | I have a program that right now grabs data like temperatures and loads using a PowerShell script and the WMI. It outputs the data as a JSON file. Now let me preface this by saying this is my first time ever working with JSON and I'm not very familiar with the JSON Python library. Here is the code for my program:
```
import subprocess
import json
p = subprocess.Popen(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", ". \"./TestScript\";", "&NSV"], stdout=subprocess.PIPE)
(output, err) = p.communicate()
data = json.loads(output)
for mNull in data:
    del mNull['Scope']
    del mNull['Path']
    del mNull['Options']
    del mNull['ClassPath']
    del mNull['Properties']
    del mNull['SystemProperties']
    del mNull['Qualifiers']
    del mNull['Site']
    del mNull['Container']
    del mNull['PSComputerName']
    del mNull['__GENUS']
    del mNull['__CLASS']
    del mNull['__SUPERCLASS']
    del mNull['__DYNASTY']
    del mNull['__RELPATH']
    del mNull['__PROPERTY_COUNT']
    del mNull['__DERIVATION']
    del mNull['__SERVER']
    del mNull['__NAMESPACE']
    del mNull['__PATH']
fdata = json.dumps(data,indent=2)
print(fdata)
```
Now here is the resulting JSON:
```
[
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
```
As you can see, every dictionary in the list has the keys `Name`, `SensorType`, and `Value`.
What I want to do is make it so that each entry has a "label" equal to its `Name`, so I can call for data from specific entries, one at a time. Once again, I'm kind of a newbie with JSON and its library, so I'm not even sure if this sort of thing is possible. Any help would be greatly appreciated! Have a good day! :)
Edit 1:
Here is an example, using the first 2, of what I would like the program to be able to output.
```
[
"Memory":{
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2":{
"SensorType": "Temperature",
"Value": 69
}
]
```
Once again, I don't even know if this is valid JSON, but I want it to just do something at least similar to that so I can call, for example, `print(data["Memory"]["Value"])` and have it return `53.3276978`.
Edit 2:
It did just occur to me that there are some names with multiple sensor types; for example, `"CPU Core #1"` and `"CPU Core #2"` both have `"Temperature"`, `"Load"`, and `"Clock"`. Using the above example could cause some conflicts, so is there a way we could account for that? | 2018/08/07 | [
"https://Stackoverflow.com/questions/51733698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9801535/"
] | You can build a new dictionary in the shape you want like this:
```
...
data = {
element["Name"]: {
key: value for key, value in element.items() if key != "Name"
}
for element in json.loads(output)
}
fdata = json.dumps(data, indent=4)
...
```
Result:
```
{
"Memory": {
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2": {
"SensorType": "Clock",
"Value": 2700.00073
},
(and so on)
}
``` | How about this?
```
import json
with open(<filename>) as f:       # json.load reads from an open file object
    orig_list = json.load(f)
new_dict = { l['Name']:{k:v for k,v in l.items() if k!='Name'} for l in orig_list}
with open(<filename>, 'w') as f:
    json.dump(new_dict, f)        # note: json.dump, not json.dumps, writes to a file
```
This way, you won't have to `del` items from the `dict`s that you load from the file. |
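For completeness, reading a value back out of `new_dict` would look like this (a hypothetical lookup, assuming unique names):
```
print(new_dict["Memory"]["Value"])   # -> 53.3276978
```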
51,733,698 | I have a program that right now grabs data like temperature and loads using a PowerShell script and the WMI. It outputs the data as a JSON file. Now let me preface this by saying this is my first time ever working with JSONs, and I'm not very familiar with the JSON Python library. Here is the code to my program:
```
import subprocess
import json
p = subprocess.Popen(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", ". \"./TestScript\";", "&NSV"], stdout=subprocess.PIPE)
(output, err) = p.communicate()
data = json.loads(output)
for mNull in data:
del mNull['Scope']
del mNull['Path']
del mNull['Options']
del mNull['ClassPath']
del mNull['Properties']
del mNull['SystemProperties']
del mNull['Qualifiers']
del mNull['Site']
del mNull['Container']
del mNull['PSComputerName']
del mNull['__GENUS']
del mNull['__CLASS']
del mNull['__SUPERCLASS']
del mNull['__DYNASTY']
del mNull['__RELPATH']
del mNull['__PROPERTY_COUNT']
del mNull['__DERIVATION']
del mNull['__SERVER']
del mNull['__NAMESPACE']
del mNull['__PATH']
fdata = json.dumps(data,indent=2)
print(fdata)
```
Now here is the resulting JSON:
```
[
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
```
As you can see, every dictionary in the list has the keys `Name`, `SensorType`, and `Value`.
What I want to do is make it so that each entry has a "label" equal to its `Name`, so I can call for data from specific entries, one at a time. Once again, I'm kind of a newbie with JSON and its library, so I'm not even sure if this sort of thing is possible. Any help would be greatly appreciated! Have a good day! :)
Edit 1:
Here is an example, using the first 2, of what I would like the program to be able to output.
```
[
"Memory":{
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2":{
"SensorType": "Temperature",
"Value": 69
}
]
```
Once again, I don't even know if this is valid JSON, but I want it to just do something at least similar to that so I can call, for example, `print(data["Memory"]["Value"])` and have it return `53.3276978`.
Edit 2:
It did just occur to me that there are some names with multiple sensor types; for example, `"CPU Core #1"` and `"CPU Core #2"` both have `"Temperature"`, `"Load"`, and `"Clock"`. Using the above example could cause some conflicts, so is there a way we could account for that? | 2018/08/07 | [
"https://Stackoverflow.com/questions/51733698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9801535/"
] | Assuming you need to keep all the values when a key already exists:
```
import json
data = [
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
final_data = {}
for d in data:
    if d['Name'] not in final_data:
        final_data[d['Name']] = list()
    key = d.pop('Name')
    final_data[key].append(d)  # append the remaining SensorType/Value entry
print(json.dumps(final_data, indent=4))
```
will give you
```
{
"CPU Package": [
{
"SensorType": "Power",
"Value": 15.2162886
},
{
"SensorType": "Temperature",
"Value": 69
}
],
"Temperature": [
{
"SensorType": "Temperature",
"Value": 41
}
],
"CPU Core #2": [
{
"SensorType": "Temperature",
"Value": 69
},
{
"SensorType": "Load",
"Value": 60.15625
},
{
"SensorType": "Clock",
"Value": 2700.00073
}
],
"CPU Core #1": [
{
"SensorType": "Temperature",
"Value": 66
},
{
"SensorType": "Clock",
"Value": 3100.001
},
{
"SensorType": "Load",
"Value": 54.6875
}
],
"CPU Cores": [
{
"SensorType": "Power",
"Value": 13.3746643
}
],
"Available Memory": [
{
"SensorType": "Data",
"Value": 3.68930435
}
],
"Used Space": [
{
"SensorType": "Load",
"Value": 93.12801
}
],
"Bus Speed": [
{
"SensorType": "Clock",
"Value": 100.000031
}
],
"Memory": [
{
"SensorType": "Load",
"Value": 53.3276978
}
],
"Used Memory": [
{
"SensorType": "Data",
"Value": 4.215393
}
],
"CPU Total": [
{
"SensorType": "Load",
"Value": 57.421875
}
],
"CPU DRAM": [
{
"SensorType": "Power",
"Value": 1.05141532
}
],
"CPU Graphics": [
{
"SensorType": "Power",
"Value": 0.119861834
}
]
}
```
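Since each name now maps to a list of readings, pulling a single value means scanning that list; a small sketch of mine, not part of the answer above:
```
# get the Clock reading for CPU Core #2 from the grouped structure
clock = next(r["Value"] for r in final_data["CPU Core #2"] if r["SensorType"] == "Clock")
print(clock)   # -> 2700.00073
```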
Hope this helps | You can build a new dictionary in the shape you want like this:
```
...
data = {
element["Name"]: {
key: value for key, value in element.items() if key != "Name"
}
for element in json.loads(output)
}
fdata = json.dumps(data, indent=4)
...
```
Result:
```
{
"Memory": {
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2": {
"SensorType": "Clock",
"Value": 2700.00073
},
(and so on)
}
``` |
51,733,698 | I have a program that right now grabs data like temperature and loads using a PowerShell script and the WMI. It outputs the data as a JSON file. Now let me preface this by saying this is my first time ever working with JSONs, and I'm not very familiar with the JSON Python library. Here is the code to my program:
```
import subprocess
import json
p = subprocess.Popen(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", ". \"./TestScript\";", "&NSV"], stdout=subprocess.PIPE)
(output, err) = p.communicate()
data = json.loads(output)
for mNull in data:
del mNull['Scope']
del mNull['Path']
del mNull['Options']
del mNull['ClassPath']
del mNull['Properties']
del mNull['SystemProperties']
del mNull['Qualifiers']
del mNull['Site']
del mNull['Container']
del mNull['PSComputerName']
del mNull['__GENUS']
del mNull['__CLASS']
del mNull['__SUPERCLASS']
del mNull['__DYNASTY']
del mNull['__RELPATH']
del mNull['__PROPERTY_COUNT']
del mNull['__DERIVATION']
del mNull['__SERVER']
del mNull['__NAMESPACE']
del mNull['__PATH']
fdata = json.dumps(data,indent=2)
print(fdata)
```
Now here is the resulting JSON:
```
[
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
```
As you can see, every dictionary in the list has the keys `Name`, `SensorType`, and `Value`.
What I want to do is make it so that each entry has a "label" equal to its `Name`, so I can call for data from specific entries, one at a time. Once again, I'm kind of a newbie with JSON and its library, so I'm not even sure if this sort of thing is possible. Any help would be greatly appreciated! Have a good day! :)
Edit 1:
Here is an example, using the first 2, of what I would like the program to be able to output.
```
[
"Memory":{
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2":{
"SensorType": "Temperature",
"Value": 69
}
]
```
Once again, I don't even know if this is valid JSON, but I want it to just do something at least similar to that so I can call, for example, `print(data["Memory"]["Value"])` and have it return `53.3276978`.
Edit 2:
It did just occur to me that there are some names with multiple sensor types; for example, `"CPU Core #1"` and `"CPU Core #2"` both have `"Temperature"`, `"Load"`, and `"Clock"`. Using the above example could cause some conflicts, so is there a way we could account for that? | 2018/08/07 | [
"https://Stackoverflow.com/questions/51733698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9801535/"
] | You can build a new dictionary in the shape you want like this:
```
...
data = {
element["Name"]: {
key: value for key, value in element.items() if key != "Name"
}
for element in json.loads(output)
}
fdata = json.dumps(data, indent=4)
...
```
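Per Edit 2, duplicate names overwrite each other in this comprehension: only the last entry for a given `Name` survives. A hedged workaround, assuming `output` holds the JSON text as in the question, nests by `SensorType` instead:
```
# sketch: {Name: {SensorType: Value}} so duplicated names keep all readings
nested = {}
for element in json.loads(output):
    nested.setdefault(element["Name"], {})[element["SensorType"]] = element["Value"]
# nested["CPU Core #2"] -> {"Temperature": 69, "Load": 60.15625, "Clock": 2700.00073}
```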
Result:
```
{
"Memory": {
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2": {
"SensorType": "Clock",
"Value": 2700.00073
},
(and so on)
}
``` | You can just replace
```
for mNull in data:
del mNull['Scope']
del mNull['Path']
del mNull['Options']
del mNull['ClassPath']
del mNull['Properties']
del mNull['SystemProperties']
del mNull['Qualifiers']
del mNull['Site']
del mNull['Container']
del mNull['PSComputerName']
del mNull['__GENUS']
del mNull['__CLASS']
del mNull['__SUPERCLASS']
del mNull['__DYNASTY']
del mNull['__RELPATH']
del mNull['__PROPERTY_COUNT']
del mNull['__DERIVATION']
del mNull['__SERVER']
del mNull['__NAMESPACE']
del mNull['__PATH']
fdata = json.dumps(data,indent=2)
```
with
```
dontwant=set(['Name', 'PSComputerName', '__RELPATH', '__DYNASTY', '__CLASS', '__PROPERTY_COUNT', 'Site', 'ClassPath', 'SystemProperties', 'Scope', 'Qualifiers', 'Options', '__NAMESPACE', 'Path', '__SUPERCLASS', '__DERIVATION', '__GENUS', '__PATH', 'Container', 'Properties', '__SERVER']) # set of keys to drop
out={} # empty dict
for mNull in data:
name=mNull['Name']
out[name]={key:value for key,value in mNull.items() if key not in dontwant} # only copy over items you want
fdata = json.dumps(out,indent=2)
``` |
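Once reshaped, iterating over the result is straightforward; a quick hedged example using `out` from the loop above:
```
# list every sensor name with its type and reading
for name, reading in out.items():
    print(name, reading["SensorType"], reading["Value"])
```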
51,733,698 | I have a program that right now grabs data like temperature and loads using a PowerShell script and the WMI. It outputs the data as a JSON file. Now let me preface this by saying this is my first time ever working with JSONs, and I'm not very familiar with the JSON Python library. Here is the code to my program:
```
import subprocess
import json
p = subprocess.Popen(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", ". \"./TestScript\";", "&NSV"], stdout=subprocess.PIPE)
(output, err) = p.communicate()
data = json.loads(output)
for mNull in data:
del mNull['Scope']
del mNull['Path']
del mNull['Options']
del mNull['ClassPath']
del mNull['Properties']
del mNull['SystemProperties']
del mNull['Qualifiers']
del mNull['Site']
del mNull['Container']
del mNull['PSComputerName']
del mNull['__GENUS']
del mNull['__CLASS']
del mNull['__SUPERCLASS']
del mNull['__DYNASTY']
del mNull['__RELPATH']
del mNull['__PROPERTY_COUNT']
del mNull['__DERIVATION']
del mNull['__SERVER']
del mNull['__NAMESPACE']
del mNull['__PATH']
fdata = json.dumps(data,indent=2)
print(fdata)
```
Now here is the resulting JSON:
```
[
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
```
As you can see, every dictionary in the list has the keys `Name`, `SensorType`, and `Value`.
What I want to do is make it so that each entry has a "label" equal to its `Name`, so I can call for data from specific entries, one at a time. Once again, I'm kind of a newbie with JSON and its library, so I'm not even sure if this sort of thing is possible. Any help would be greatly appreciated! Have a good day! :)
Edit 1:
Here is an example, using the first 2, of what I would like the program to be able to output.
```
[
"Memory":{
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2":{
"SensorType": "Temperature",
"Value": 69
}
]
```
Once again, I don't even know if this is valid JSON, but I want it to just do something at least similar to that so I can call, for example, `print(data["Memory"]["Value"])` and have it return `53.3276978`.
Edit 2:
It did just occur to me that there are some names with multiple sensor types; for example, `"CPU Core #1"` and `"CPU Core #2"` both have `"Temperature"`, `"Load"`, and `"Clock"`. Using the above example could cause some conflicts, so is there a way we could account for that? | 2018/08/07 | [
"https://Stackoverflow.com/questions/51733698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9801535/"
] | Assuming you need to keep all the values when a key already exists:
```
import json
data = [
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
final_data = {}
for d in data:
    if d['Name'] not in final_data:
        final_data[d['Name']] = list()
    key = d.pop('Name')
    final_data[key].append(d)  # append the remaining SensorType/Value entry
print(json.dumps(final_data, indent=4))
```
will give you
```
{
"CPU Package": [
{
"SensorType": "Power",
"Value": 15.2162886
},
{
"SensorType": "Temperature",
"Value": 69
}
],
"Temperature": [
{
"SensorType": "Temperature",
"Value": 41
}
],
"CPU Core #2": [
{
"SensorType": "Temperature",
"Value": 69
},
{
"SensorType": "Load",
"Value": 60.15625
},
{
"SensorType": "Clock",
"Value": 2700.00073
}
],
"CPU Core #1": [
{
"SensorType": "Temperature",
"Value": 66
},
{
"SensorType": "Clock",
"Value": 3100.001
},
{
"SensorType": "Load",
"Value": 54.6875
}
],
"CPU Cores": [
{
"SensorType": "Power",
"Value": 13.3746643
}
],
"Available Memory": [
{
"SensorType": "Data",
"Value": 3.68930435
}
],
"Used Space": [
{
"SensorType": "Load",
"Value": 93.12801
}
],
"Bus Speed": [
{
"SensorType": "Clock",
"Value": 100.000031
}
],
"Memory": [
{
"SensorType": "Load",
"Value": 53.3276978
}
],
"Used Memory": [
{
"SensorType": "Data",
"Value": 4.215393
}
],
"CPU Total": [
{
"SensorType": "Load",
"Value": 57.421875
}
],
"CPU DRAM": [
{
"SensorType": "Power",
"Value": 1.05141532
}
],
"CPU Graphics": [
{
"SensorType": "Power",
"Value": 0.119861834
}
]
}
```
Hope this helps | ```
x="""[
{
"Name": "Memory 1",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 53.3276978
}]"""
json_obj=json.loads(x)
new_list=[]
for item in json_obj:
name=item.pop('Name')
new_list.append({name:item})
print(json.dumps(new_list,indent=4))
```
Output
```
[
{
"Memory 1": {
"SensorType": "Load",
"Value": 53.3276978
}
},
{
"CPU Core #2": {
"SensorType": "Load",
"Value": 53.3276978
}
}
]
``` |
51,733,698 | I have a program that right now grabs data like temperature and loads using a PowerShell script and the WMI. It outputs the data as a JSON file. Now let me preface this by saying this is my first time ever working with JSONs, and I'm not very familiar with the JSON Python library. Here is the code to my program:
```
import subprocess
import json
p = subprocess.Popen(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", ". \"./TestScript\";", "&NSV"], stdout=subprocess.PIPE)
(output, err) = p.communicate()
data = json.loads(output)
for mNull in data:
del mNull['Scope']
del mNull['Path']
del mNull['Options']
del mNull['ClassPath']
del mNull['Properties']
del mNull['SystemProperties']
del mNull['Qualifiers']
del mNull['Site']
del mNull['Container']
del mNull['PSComputerName']
del mNull['__GENUS']
del mNull['__CLASS']
del mNull['__SUPERCLASS']
del mNull['__DYNASTY']
del mNull['__RELPATH']
del mNull['__PROPERTY_COUNT']
del mNull['__DERIVATION']
del mNull['__SERVER']
del mNull['__NAMESPACE']
del mNull['__PATH']
fdata = json.dumps(data,indent=2)
print(fdata)
```
Now here is the resulting JSON:
```
[
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
```
As you can see, every dictionary in the list has the keys `Name`, `SensorType`, and `Value`.
What I want to do is make it so that each entry has a "label" equal to its `Name`, so I can call for data from specific entries, one at a time. Once again, I'm kind of a newbie with JSON and its library, so I'm not even sure if this sort of thing is possible. Any help would be greatly appreciated! Have a good day! :)
Edit 1:
Here is an example, using the first 2, of what I would like the program to be able to output.
```
[
"Memory":{
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2":{
"SensorType": "Temperature",
"Value": 69
}
]
```
Once again, I don't even know if this is valid JSON, but I want it to just do something at least similar to that so I can call, for example, `print(data["Memory"]["Value"])` and have it return `53.3276978`.
Edit 2:
It did just occur to me that there are some names with multiple sensor types; for example, `"CPU Core #1"` and `"CPU Core #2"` both have `"Temperature"`, `"Load"`, and `"Clock"`. Using the above example could cause some conflicts, so is there a way we could account for that? | 2018/08/07 | [
"https://Stackoverflow.com/questions/51733698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9801535/"
] | Assuming you need to keep all the values when a key already exists:
```
import json
data = [
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
final_data = {}
for d in data:
    if d['Name'] not in final_data:
        final_data[d['Name']] = list()
    key = d.pop('Name')
    final_data[key].append(d)  # append the remaining SensorType/Value entry
print(json.dumps(final_data, indent=4))
```
will give you
```
{
"CPU Package": [
{
"SensorType": "Power",
"Value": 15.2162886
},
{
"SensorType": "Temperature",
"Value": 69
}
],
"Temperature": [
{
"SensorType": "Temperature",
"Value": 41
}
],
"CPU Core #2": [
{
"SensorType": "Temperature",
"Value": 69
},
{
"SensorType": "Load",
"Value": 60.15625
},
{
"SensorType": "Clock",
"Value": 2700.00073
}
],
"CPU Core #1": [
{
"SensorType": "Temperature",
"Value": 66
},
{
"SensorType": "Clock",
"Value": 3100.001
},
{
"SensorType": "Load",
"Value": 54.6875
}
],
"CPU Cores": [
{
"SensorType": "Power",
"Value": 13.3746643
}
],
"Available Memory": [
{
"SensorType": "Data",
"Value": 3.68930435
}
],
"Used Space": [
{
"SensorType": "Load",
"Value": 93.12801
}
],
"Bus Speed": [
{
"SensorType": "Clock",
"Value": 100.000031
}
],
"Memory": [
{
"SensorType": "Load",
"Value": 53.3276978
}
],
"Used Memory": [
{
"SensorType": "Data",
"Value": 4.215393
}
],
"CPU Total": [
{
"SensorType": "Load",
"Value": 57.421875
}
],
"CPU DRAM": [
{
"SensorType": "Power",
"Value": 1.05141532
}
],
"CPU Graphics": [
{
"SensorType": "Power",
"Value": 0.119861834
}
]
}
```
Hope this helps | How about this?
```
import json
with open(<filename>) as f:       # json.load reads from an open file object
    orig_list = json.load(f)
new_dict = { l['Name']:{k:v for k,v in l.items() if k!='Name'} for l in orig_list}
with open(<filename>, 'w') as f:
    json.dump(new_dict, f)        # note: json.dump, not json.dumps, writes to a file
```
This way, you won't have to `del` items from the `dict`s that you load from the file. |
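A hedged round-trip check, where `'sensors.json'` is a hypothetical path standing in for the `<filename>` placeholder above:
```
with open('sensors.json') as f:
    check = json.load(f)
print(check["Memory"]["Value"])   # -> 53.3276978
```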
51,733,698 | I have a program that right now grabs data like temperature and loads using a PowerShell script and the WMI. It outputs the data as a JSON file. Now let me preface this by saying this is my first time ever working with JSONs, and I'm not very familiar with the JSON Python library. Here is the code to my program:
```
import subprocess
import json
p = subprocess.Popen(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", ". \"./TestScript\";", "&NSV"], stdout=subprocess.PIPE)
(output, err) = p.communicate()
data = json.loads(output)
for mNull in data:
del mNull['Scope']
del mNull['Path']
del mNull['Options']
del mNull['ClassPath']
del mNull['Properties']
del mNull['SystemProperties']
del mNull['Qualifiers']
del mNull['Site']
del mNull['Container']
del mNull['PSComputerName']
del mNull['__GENUS']
del mNull['__CLASS']
del mNull['__SUPERCLASS']
del mNull['__DYNASTY']
del mNull['__RELPATH']
del mNull['__PROPERTY_COUNT']
del mNull['__DERIVATION']
del mNull['__SERVER']
del mNull['__NAMESPACE']
del mNull['__PATH']
fdata = json.dumps(data,indent=2)
print(fdata)
```
Now here is the resulting JSON:
```
[
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
```
As you can see, every dictionary in the list has the keys `Name`, `SensorType`, and `Value`.
What I want to do is make it so that each entry has a "label" equal to its `Name`, so I can call for data from specific entries, one at a time. Once again, I'm kind of a newbie with JSON and its library, so I'm not even sure if this sort of thing is possible. Any help would be greatly appreciated! Have a good day! :)
Edit 1:
Here is an example, using the first 2, of what I would like the program to be able to output.
```
[
"Memory":{
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2":{
"SensorType": "Temperature",
"Value": 69
}
]
```
Once again, I don't even know if this is valid JSON, but I want it to just do something at least similar to that so I can call, for example, `print(data["Memory"]["Value"])` and have it return `53.3276978`.
Edit 2:
It did just occur to me that there are some names with multiple sensor types; for example, `"CPU Core #1"` and `"CPU Core #2"` both have `"Temperature"`, `"Load"`, and `"Clock"`. Using the above example could cause some conflicts, so is there a way we could account for that? | 2018/08/07 | [
"https://Stackoverflow.com/questions/51733698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9801535/"
] | Assuming you need to keep all the values when a key already exists:
```
import json
data = [
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
final_data = {}
for d in data:
    if d['Name'] not in final_data:
        final_data[d['Name']] = list()
    key = d.pop('Name')
    final_data[key].append(d)  # append the remaining SensorType/Value entry
print(json.dumps(final_data, indent=4))
```
will give you
```
{
"CPU Package": [
{
"SensorType": "Power",
"Value": 15.2162886
},
{
"SensorType": "Temperature",
"Value": 69
}
],
"Temperature": [
{
"SensorType": "Temperature",
"Value": 41
}
],
"CPU Core #2": [
{
"SensorType": "Temperature",
"Value": 69
},
{
"SensorType": "Load",
"Value": 60.15625
},
{
"SensorType": "Clock",
"Value": 2700.00073
}
],
"CPU Core #1": [
{
"SensorType": "Temperature",
"Value": 66
},
{
"SensorType": "Clock",
"Value": 3100.001
},
{
"SensorType": "Load",
"Value": 54.6875
}
],
"CPU Cores": [
{
"SensorType": "Power",
"Value": 13.3746643
}
],
"Available Memory": [
{
"SensorType": "Data",
"Value": 3.68930435
}
],
"Used Space": [
{
"SensorType": "Load",
"Value": 93.12801
}
],
"Bus Speed": [
{
"SensorType": "Clock",
"Value": 100.000031
}
],
"Memory": [
{
"SensorType": "Load",
"Value": 53.3276978
}
],
"Used Memory": [
{
"SensorType": "Data",
"Value": 4.215393
}
],
"CPU Total": [
{
"SensorType": "Load",
"Value": 57.421875
}
],
"CPU DRAM": [
{
"SensorType": "Power",
"Value": 1.05141532
}
],
"CPU Graphics": [
{
"SensorType": "Power",
"Value": 0.119861834
}
]
}
```
Hope this helps | You can just replace
```
for mNull in data:
del mNull['Scope']
del mNull['Path']
del mNull['Options']
del mNull['ClassPath']
del mNull['Properties']
del mNull['SystemProperties']
del mNull['Qualifiers']
del mNull['Site']
del mNull['Container']
del mNull['PSComputerName']
del mNull['__GENUS']
del mNull['__CLASS']
del mNull['__SUPERCLASS']
del mNull['__DYNASTY']
del mNull['__RELPATH']
del mNull['__PROPERTY_COUNT']
del mNull['__DERIVATION']
del mNull['__SERVER']
del mNull['__NAMESPACE']
del mNull['__PATH']
fdata = json.dumps(data,indent=2)
```
with
```
dontwant=set(['Name', 'PSComputerName', '__RELPATH', '__DYNASTY', '__CLASS', '__PROPERTY_COUNT', 'Site', 'ClassPath', 'SystemProperties', 'Scope', 'Qualifiers', 'Options', '__NAMESPACE', 'Path', '__SUPERCLASS', '__DERIVATION', '__GENUS', '__PATH', 'Container', 'Properties', '__SERVER']) # set of keys to drop
out={} # empty dict
for mNull in data:
name=mNull['Name']
out[name]={key:value for key,value in mNull.items() if key not in dontwant} # only copy over items you want
fdata = json.dumps(out,indent=2)
``` |
24,888,691 | Well, I finally got cocos2d-x into the IDE, and now I can make minor changes like change the label text.
But when trying to add a sprite, the app crashes on my phone (Galaxy Ace 2), and I can't make sense of the debug output.
I followed [THIS](http://youtu.be/2LI1IrRp_0w) video to set up my IDE, and I've literally just gone to add a sprite in the template project...
Could someone help me fix this please:
```
07-22 13:22:32.310: D/PhoneWindow(22070): couldn't save which view has focus because the focused view org.cocos2dx.lib.Cocos2dxGLSurfaceView@405240c8 has no id.
07-22 13:22:32.930: V/SurfaceView(22070): org.cocos2dx.lib.Cocos2dxGLSurfaceView@405240c8 got app visibiltiy is changed: false
07-22 13:22:32.930: I/GLThread(22070): noticed surfaceView surface lost tid=12
07-22 13:22:32.930: W/EglHelper(22070): destroySurface() tid=12
07-22 13:22:32.960: D/CLIPBOARD(22070): Hide Clipboard dialog at Starting input: finished by someone else... !
07-22 13:23:05.190: W/dalvikvm(22133): threadid=1: thread exiting with uncaught exception (group=0x4001e578)
07-22 13:23:05.190: E/AndroidRuntime(22133): FATAL EXCEPTION: main
07-22 13:23:05.190: E/AndroidRuntime(22133): java.lang.UnsatisfiedLinkError: Couldn't load cocos2dcpp: findLibrary returned null
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.Runtime.loadLibrary(Runtime.java:429)
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.System.loadLibrary(System.java:554)
07-22 13:23:05.190: E/AndroidRuntime(22133): at org.cocos2dx.lib.Cocos2dxActivity.onLoadNativeLibraries(Cocos2dxActivity.java:66)
07-22 13:23:05.190: E/AndroidRuntime(22133): at org.cocos2dx.lib.Cocos2dxActivity.onCreate(Cocos2dxActivity.java:80)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1050)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1615)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1667)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.access$1500(ActivityThread.java:117)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:935)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.os.Handler.dispatchMessage(Handler.java:99)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.os.Looper.loop(Looper.java:130)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.main(ActivityThread.java:3691)
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.reflect.Method.invokeNative(Native Method)
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.reflect.Method.invoke(Method.java:507)
07-22 13:23:05.190: E/AndroidRuntime(22133): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:912)
07-22 13:23:05.190: E/AndroidRuntime(22133): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:670)
07-22 13:23:05.190: E/AndroidRuntime(22133): at dalvik.system.NativeStart.main(Native Method)
07-22 13:23:07.200: I/dalvikvm(22133): threadid=4: reacting to signal 3
07-22 13:23:07.200: I/dalvikvm(22133): Wrote stack traces to '/data/anr/traces.txt'
```
Thanks
---
P.S. `Cocos2dxActivity.java` has errors on line 66 & 80.
Line 66 is `System.loadLibrary(libName);` and line 80 is `onLoadNativeLibraries();`. On line 65 it declares the lib name as `String libName = bundle.getString("android.app.lib_name");`
Also, I can see in the Manifest that the key information is:
```
<!-- Tell Cocos2dxActivity the name of our .so -->
<meta-data android:name="android.app.lib_name"
android:value="cocos2dcpp" />
```
I do have the NDK, and I hooked it up in my ./bash\_profile. But I did just notice that the console says:
```
python /Users/damianwilliams/Desktop/KittyKatch/proj.android/build_native.py -b release all
NDK_ROOT not defined. Please define NDK_ROOT in your environment
```
But I know I have it in my bash since my bash profile says:
```
# Add environment variable COCOS_CONSOLE_ROOT for cocos2d-x
export COCOS_CONSOLE_ROOT=/Users/damianwilliams/Desktop/Android-Development-Root/cocos2d-x-3.2rc0/tools/cocos2d-console/bin
export PATH=$COCOS_CONSOLE_ROOT:$PATH
# Add environment variable NDK_ROOT for cocos2d-x
export NDK_ROOT=/Users/damianwilliams/Desktop/Android-Development-Root/android-ndk-r10
export PATH=$NDK_ROOT:$PATH
# Add environment variable ANT_ROOT for cocos2d-x
export ANT_ROOT=/Users/damianwilliams/Desktop/Android-Development-Root/apache-ant-1.9.4/bin
export PATH=$ANT_ROOT:$PATH
```
But I've no idea what to do with that information or if I've built it correctly. | 2014/07/22 | [
"https://Stackoverflow.com/questions/24888691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3863962/"
] | ```
var csv = "\"" + string.Join("\",\"", keys) + "\"";   // `new` is a reserved keyword, so use another name
```
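For example, with a made-up array (hypothetical values):
```
object[] sampleKeys = { "a", "b", "c" };
var joined = "\"" + string.Join("\",\"", sampleKeys) + "\"";
Console.WriteLine(joined);   // prints: "a","b","c"
```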
To include a double quote in a string, you escape it with a backslash character; thus "\"" is a string consisting of a single double-quote character, and "\",\"" is a string containing a double quote, a comma, and another double quote. | If performance is the key, you can always use a `StringBuilder` to concatenate everything.
[Here's a fiddle](https://dotnetfiddle.net/nptVEH) to see it in action, but the main part can be summarized as:
```
// these look like snails, but they are actually pretty fast
using @_____ = System.Collections.Generic.IEnumerable<object>;
using @______ = System.Func<object, object>;
using @_______ = System.Text.StringBuilder;
public static string GetCsv(object[] input)
{
// use a string builder to make things faster
var @__ = new StringBuilder();
// the rest should be self-explanatory
Func<@_____, @______, @_____>
@____ = (_6,
_2) => _6.Select(_2);
Func<@_____, object> @_3 = _6
=> _6.FirstOrDefault();
Func<@_____, @_____> @_4 = _8
=> _8.Skip(input.Length - 1);
Action<@_______, object> @_ = (_9,
_2) => _9.Append(_2);
Action<@_______>
@___ = _7 =>
{ if (_7.Length > 0) @_(
@__, ",");
}; var @snail =
@____(input, (@_0 =>
{ @___(@__); @_(@__, @"""");
@_(@__, @_0); @_(@__, @"""");
return @__; }));
var @linq = @_4(@snail);
var @void = @_3(@linq);
// get the result
return @__.ToString();
}
``` |
24,888,691 | Well, I finally got cocos2d-x into the IDE, and now I can make minor changes like change the label text.
But when trying to add a sprite, the app crashes on my phone (Galaxy Ace 2), and I can't make sense of the debug output.
I followed [THIS](http://youtu.be/2LI1IrRp_0w) video to set up my IDE, and I've literally just gone to add a sprite in the template project...
Could someone help me fix this please:
```
07-22 13:22:32.310: D/PhoneWindow(22070): couldn't save which view has focus because the focused view org.cocos2dx.lib.Cocos2dxGLSurfaceView@405240c8 has no id.
07-22 13:22:32.930: V/SurfaceView(22070): org.cocos2dx.lib.Cocos2dxGLSurfaceView@405240c8 got app visibiltiy is changed: false
07-22 13:22:32.930: I/GLThread(22070): noticed surfaceView surface lost tid=12
07-22 13:22:32.930: W/EglHelper(22070): destroySurface() tid=12
07-22 13:22:32.960: D/CLIPBOARD(22070): Hide Clipboard dialog at Starting input: finished by someone else... !
07-22 13:23:05.190: W/dalvikvm(22133): threadid=1: thread exiting with uncaught exception (group=0x4001e578)
07-22 13:23:05.190: E/AndroidRuntime(22133): FATAL EXCEPTION: main
07-22 13:23:05.190: E/AndroidRuntime(22133): java.lang.UnsatisfiedLinkError: Couldn't load cocos2dcpp: findLibrary returned null
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.Runtime.loadLibrary(Runtime.java:429)
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.System.loadLibrary(System.java:554)
07-22 13:23:05.190: E/AndroidRuntime(22133): at org.cocos2dx.lib.Cocos2dxActivity.onLoadNativeLibraries(Cocos2dxActivity.java:66)
07-22 13:23:05.190: E/AndroidRuntime(22133): at org.cocos2dx.lib.Cocos2dxActivity.onCreate(Cocos2dxActivity.java:80)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1050)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1615)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1667)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.access$1500(ActivityThread.java:117)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:935)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.os.Handler.dispatchMessage(Handler.java:99)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.os.Looper.loop(Looper.java:130)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.main(ActivityThread.java:3691)
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.reflect.Method.invokeNative(Native Method)
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.reflect.Method.invoke(Method.java:507)
07-22 13:23:05.190: E/AndroidRuntime(22133): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:912)
07-22 13:23:05.190: E/AndroidRuntime(22133): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:670)
07-22 13:23:05.190: E/AndroidRuntime(22133): at dalvik.system.NativeStart.main(Native Method)
07-22 13:23:07.200: I/dalvikvm(22133): threadid=4: reacting to signal 3
07-22 13:23:07.200: I/dalvikvm(22133): Wrote stack traces to '/data/anr/traces.txt'
```
Thanks
---
P.S. `Cocos2dxActivity.java` has errors on line 66 & 80.
Line 66 is `System.loadLibrary(libName);` and line 80 is `onLoadNativeLibraries();`. On line 65 it declares the lib name as `String libName = bundle.getString("android.app.lib_name");`
Also, I can see in the Manifest that the key information is:
```
<!-- Tell Cocos2dxActivity the name of our .so -->
<meta-data android:name="android.app.lib_name"
android:value="cocos2dcpp" />
```
I do have the NDK, and I hooked it up in my ./bash\_profile. But I did just notice that the console says:
```
python /Users/damianwilliams/Desktop/KittyKatch/proj.android/build_native.py -b release all
NDK_ROOT not defined. Please define NDK_ROOT in your environment
```
But I know I have it in my bash since my bash profile says:
```
# Add environment variable COCOS_CONSOLE_ROOT for cocos2d-x
export COCOS_CONSOLE_ROOT=/Users/damianwilliams/Desktop/Android-Development-Root/cocos2d-x-3.2rc0/tools/cocos2d-console/bin
export PATH=$COCOS_CONSOLE_ROOT:$PATH
# Add environment variable NDK_ROOT for cocos2d-x
export NDK_ROOT=/Users/damianwilliams/Desktop/Android-Development-Root/android-ndk-r10
export PATH=$NDK_ROOT:$PATH
# Add environment variable ANT_ROOT for cocos2d-x
export ANT_ROOT=/Users/damianwilliams/Desktop/Android-Development-Root/apache-ant-1.9.4/bin
export PATH=$ANT_ROOT:$PATH
```
But I've no idea what to do with that information or if I've built it correctly. | 2014/07/22 | [
"https://Stackoverflow.com/questions/24888691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3863962/"
] | Please give this a try.
```
// requires: using System.Linq; (for .Select)
var keys = new object[] { "test1", "hello", "world", null, "", "oops"};
var csv = string.Join(",", keys.Select(k => string.Format("\"{0}\"", k)));
```
Because you have an `object[]` array, `string.Format` can deal with null as well as types other than strings. This solution also works in .NET 3.5, although there you need `.ToArray()` on the `Select` result, since `string.Join` only accepts an array before .NET 4.
When the object[] array is empty, an empty string is returned. | If performance is the key, you can always use a `StringBuilder` to concatenate everything.
[Here's a fiddle](https://dotnetfiddle.net/nptVEH) to see it in action, but the main part can be summarized as:
```
// these look like snails, but they are actually pretty fast
using @_____ = System.Collections.Generic.IEnumerable<object>;
using @______ = System.Func<object, object>;
using @_______ = System.Text.StringBuilder;
public static string GetCsv(object[] input)
{
// use a string builder to make things faster
var @__ = new StringBuilder();
// the rest should be self-explanatory
Func<@_____, @______, @_____>
@____ = (_6,
_2) => _6.Select(_2);
Func<@_____, object> @_3 = _6
=> _6.FirstOrDefault();
Func<@_____, @_____> @_4 = _8
=> _8.Skip(input.Length - 1);
Action<@_______, object> @_ = (_9,
_2) => _9.Append(_2);
Action<@_______>
@___ = _7 =>
{ if (_7.Length > 0) @_(
@__, ",");
}; var @snail =
@____(input, (@_0 =>
{ @___(@__); @_(@__, @"""");
@_(@__, @_0); @_(@__, @"""");
return @__; }));
var @linq = @_4(@snail);
var @void = @_3(@linq);
// get the result
return @__.ToString();
}
``` |
6,990,760 | I wrapped OpenCV today with the SimpleCV Python interface. After going through the official [SimpleCV Cookbook](http://simplecv.org/doc/cookbook.html) I was able to successfully [Load, Save](http://simplecv.org/doc/cookbook.html#loading-and-saving-images), and [Manipulate](http://simplecv.org/doc/cookbook.html#image-manipulation) images. Thus, I know the library is being loaded properly.
However, under the [Using a Camera, Kinect, or Virtual Camera](http://simplecv.org/doc/cookbook.html#using-a-camera-kinect-or-virtualcamera) heading I was unsuccessful in running some commands. In particular, `mycam = Camera()` worked, but `img = mycam.getImage()` produced the following error:
```
In [35]: img = mycam.getImage().save()
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file /home/jordan/OpenCV-2.2.0/modules/core/src/array.cpp, line 1237
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/simplecv/<ipython console> in <module>()
/usr/local/lib/python2.7/dist-packages/SimpleCV-1.1-py2.7.egg/SimpleCV/Camera.pyc in getImage(self)
332
333 frame = cv.RetrieveFrame(self.capture)
--> 334 newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
335 cv.Copy(frame, newimg)
336 return Image(newimg, self)
error: Array should be CvMat or IplImage
```
I'm running Ubuntu Natty on an HP TX2500 tablet. It has a built-in webcam (CyberLink YouCam?). Has anybody seen this error before? I've been all over the web today looking for a solution, but nothing seems to be doing the trick.
**Update 1**: I tested cv.QueryFrame(capture) using the code found here [in a separate Stack Overflow question](https://stackoverflow.com/questions/4929721/opencv-python-grab-frames-from-a-video-file) and it worked; so I've pretty much nailed this down to a webcam issue.
**Update 2**: In fact, I get the exact same errors on a machine that doesn't even have a webcam! It's looking like the TX2500 is not compatible... | 2011/08/09 | [
"https://Stackoverflow.com/questions/6990760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/568884/"
] | Since the error is raised from Camera.py of SimpleCV, you need to debug the getImage() method. If you can edit it:
```
def getImage(self):
if (not self.threaded):
cv.GrabFrame(self.capture)
frame = cv.RetrieveFrame(self.capture)
import pdb # <-- add this line
pdb.set_trace() # <-- add this line
newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
cv.Copy(frame, newimg)
return Image(newimg, self)
```
Then run your program; it will pause at pdb.set\_trace(), where you can inspect the type of `frame` and try to figure out how to get its size.
Or you can do the capture in your code, and inspect the frame object:
```
mycam = Camera()
cv.GrabFrame(mycam.capture)
frame = cv.RetrieveFrame(mycam.capture)
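# from here you can inspect `frame` interactively, e.g. (hypothetical probes):
#   print type(frame)
#   print cv.GetSize(frame)   # the call that fails when frame is not CvMat/IplImage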
``` | I'm getting the camera with OpenCV
```
from opencv import cv
from opencv import highgui
from opencv import adaptors
def get_image():
cam = highgui.cvCreateCameraCapture(0)
im = highgui.cvQueryFrame(cam)
# Add the line below if you need it (Ubuntu 8.04+)
#im = opencv.cvGetMat(im)
return im
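
# hypothetical usage of the helper above (not part of the original answer):
#   im = get_image()
#   highgui.cvSaveImage("frame.png", im)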
``` |
6,990,760 | I wrapped OpenCV today with the SimpleCV Python interface. After going through the official [SimpleCV Cookbook](http://simplecv.org/doc/cookbook.html) I was able to successfully [Load, Save](http://simplecv.org/doc/cookbook.html#loading-and-saving-images), and [Manipulate](http://simplecv.org/doc/cookbook.html#image-manipulation) images. Thus, I know the library is being loaded properly.
However, under the [Using a Camera, Kinect, or Virtual Camera](http://simplecv.org/doc/cookbook.html#using-a-camera-kinect-or-virtualcamera) heading I was unsuccessful in running some commands. In particular, `mycam = Camera()` worked, but `img = mycam.getImage()` produced the following error:
```
In [35]: img = mycam.getImage().save()
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file /home/jordan/OpenCV-2.2.0/modules/core/src/array.cpp, line 1237
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/simplecv/<ipython console> in <module>()
/usr/local/lib/python2.7/dist-packages/SimpleCV-1.1-py2.7.egg/SimpleCV/Camera.pyc in getImage(self)
332
333 frame = cv.RetrieveFrame(self.capture)
--> 334 newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
335 cv.Copy(frame, newimg)
336 return Image(newimg, self)
error: Array should be CvMat or IplImage
```
I'm running Ubuntu Natty on an HP TX2500 tablet. It has a built-in webcam (CyberLink YouCam?). Has anybody seen this error before? I've been all over the web today looking for a solution, but nothing seems to be doing the trick.
**Update 1**: I tested cv.QueryFrame(capture) using the code found here [in a separate Stack Overflow question](https://stackoverflow.com/questions/4929721/opencv-python-grab-frames-from-a-video-file) and it worked; so I've pretty much nailed this down to a webcam issue.
**Update 2**: In fact, I get the exact same errors on a machine that doesn't even have a webcam! It's looking like the TX2500 is not compatible... | 2011/08/09 | [
"https://Stackoverflow.com/questions/6990760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/568884/"
] | Since the error is raised from Camera.py of SimpleCV, you need to debug the getImage() method. If you can edit it:
```
def getImage(self):
if (not self.threaded):
cv.GrabFrame(self.capture)
frame = cv.RetrieveFrame(self.capture)
import pdb # <-- add this line
pdb.set_trace() # <-- add this line
newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
cv.Copy(frame, newimg)
return Image(newimg, self)
```
Then run your program; it will pause at pdb.set\_trace(), where you can inspect the type of `frame` and try to figure out how to get its size.
Or you can do the capture in your code, and inspect the frame object:
```
mycam = Camera()
cv.GrabFrame(mycam.capture)
frame = cv.RetrieveFrame(mycam.capture)
``` | Anthony, one of the SimpleCV developers here.
Also, instead of using image.save(), which writes the file/video to disk, you probably want to use image.show(). You can save if you want, but you need to specify a file path, like image.save("/tmp/blah.png").
So you want to do:
```
img = mycam.getImage()
img.show()
```
As for that model of camera, I'm not sure if it works or not. I should note that we also wrap different camera classes, not just OpenCV; this is because OpenCV has a problem with webcams over 640x480, and we can now do high-resolution cameras.
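For the high-resolution case he mentions, SimpleCV's `Camera` constructor also accepts a property dict; a hedged sketch (the `"width"`/`"height"` keys are my assumption and may vary by version):
```
from SimpleCV import Camera

# ask the wrapper for a larger capture size than OpenCV's 640x480 limit
cam = Camera(0, {"width": 1280, "height": 720})
img = cam.getImage()
img.show()
```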
6,990,760 | I wrapped OpenCV today with the SimpleCV Python interface. After going through the official [SimpleCV Cookbook](http://simplecv.org/doc/cookbook.html) I was able to successfully [Load, Save](http://simplecv.org/doc/cookbook.html#loading-and-saving-images), and [Manipulate](http://simplecv.org/doc/cookbook.html#image-manipulation) images. Thus, I know the library is being loaded properly.
However, under the [Using a Camera, Kinect, or Virtual Camera](http://simplecv.org/doc/cookbook.html#using-a-camera-kinect-or-virtualcamera) heading I was unsuccessful in running some commands. In particular, `mycam = Camera()` worked, but `img = mycam.getImage()` produced the following error:
```
In [35]: img = mycam.getImage().save()
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file /home/jordan/OpenCV-2.2.0/modules/core/src/array.cpp, line 1237
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/simplecv/<ipython console> in <module>()
/usr/local/lib/python2.7/dist-packages/SimpleCV-1.1-py2.7.egg/SimpleCV/Camera.pyc in getImage(self)
332
333 frame = cv.RetrieveFrame(self.capture)
--> 334 newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
335 cv.Copy(frame, newimg)
336 return Image(newimg, self)
error: Array should be CvMat or IplImage
```
I'm running Ubuntu Natty on an HP TX2500 tablet. It has a built-in webcam (CyberLink YouCam?). Has anybody seen this error before? I've been all over the web today looking for a solution, but nothing seems to be doing the trick.
**Update 1**: I tested cv.QueryFrame(capture) using the code found here [in a separate Stack Overflow question](https://stackoverflow.com/questions/4929721/opencv-python-grab-frames-from-a-video-file) and it worked; so I've pretty much nailed this down to a webcam issue.
**Update 2**: In fact, I get the exact same errors on a machine that doesn't even have a webcam! It's looking like the TX2500 is not compatible... | 2011/08/09 | [
"https://Stackoverflow.com/questions/6990760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/568884/"
] | Since the error is raised from Camera.py of SimpleCV, you need to debug the getImage() method. If you can edit it:
```
def getImage(self):
if (not self.threaded):
cv.GrabFrame(self.capture)
frame = cv.RetrieveFrame(self.capture)
import pdb # <-- add this line
pdb.set_trace() # <-- add this line
newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
cv.Copy(frame, newimg)
return Image(newimg, self)
```
Then run your program; it will pause at pdb.set\_trace(), where you can inspect the type of `frame` and try to figure out how to get its size.
Or you can do the capture in your code, and inspect the frame object:
```
mycam = Camera()
cv.GrabFrame(mycam.capture)
frame = cv.RetrieveFrame(mycam.capture)
``` | Also, I should mention something I hadn't realized: OpenCV versions below 2.3 are broken with webcams on Ubuntu 11.04 and up. I didn't notice this as I was running Ubuntu 10.10 before; by the looks of your output you are using Python 2.7, which makes me think you are on Ubuntu 11.04 or higher. Anyway, we have a fix for this problem. It is now pushed up into master; it basically checks whether OpenCV is working and, if not, falls back to pygame.
This fix will also be in the 1.2 release of SimpleCV (It's in the master branch now) |
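The check-and-fallback idea is roughly this (my sketch, not SimpleCV's actual code):
```
# probe OpenCV first; if it cannot deliver frames, switch to pygame
try:
    import cv
    capture = cv.CaptureFromCAM(0)
    if cv.QueryFrame(capture) is None:
        raise RuntimeError("OpenCV capture is not returning frames")
except Exception:
    import pygame.camera   # hypothetical fallback path
```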
6,990,760 | I wrapped opencv today with simplecv python interface. After going through the official [SimpleCV Cookbook](http://simplecv.org/doc/cookbook.html) I was able to successfully [Load, Save](http://simplecv.org/doc/cookbook.html#loading-and-saving-images), and [Manipulate](http://simplecv.org/doc/cookbook.html#image-manipulation) images. Thus, I know the library is being loaded properly.
However, under the [Using a Camera, Kinect, or Virtual Camera](http://simplecv.org/doc/cookbook.html#using-a-camera-kinect-or-virtualcamera) heading I was unsuccessful in running some commands. In particular, `mycam = Camera()` worked, but `img = mycam.getImage()` produced the following error:
```
In [35]: img = mycam.getImage().save()
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file /home/jordan/OpenCV-2.2.0/modules/core/src/array.cpp, line 1237
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/simplecv/<ipython console> in <module>()
/usr/local/lib/python2.7/dist-packages/SimpleCV-1.1-py2.7.egg/SimpleCV/Camera.pyc in getImage(self)
332
333 frame = cv.RetrieveFrame(self.capture)
--> 334 newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
335 cv.Copy(frame, newimg)
336 return Image(newimg, self)
error: Array should be CvMat or IplImage
```
I'm running Ubuntu Natty on a HP TX2500 tablet. It has a built in webcam, (CyberLink Youcam?) Has anybody seen this error before? I've been all over the web today looking for a solution, but nothing seems to be doing the trick.
**Update 1**: I tested cv.QueryFrame(capture) using the code found here [in a separate Stack Overflow question](https://stackoverflow.com/questions/4929721/opencv-python-grab-frames-from-a-video-file) and it worked; so I've pretty much nailed this down to a webcam issue.
**Update 2**: In fact, I get the exact same errors on a machine that doesn't even have a webcam! It's looking like the TX2500 is not compatible... | 2011/08/09 | [
"https://Stackoverflow.com/questions/6990760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/568884/"
] | To answer my own question...
I bought a Logitech C210 today and the problem disappeared.
I'm now getting warnings:
`Corrupt JPEG data: X extraneous bytes before marker 0xYY`.
However, I am able to successfully push a video stream to my web-browser via `JpegStreamer()`. If I cannot solve this error, I'll open a new thread.
Thus, for now, I'll blame the TX2500.
If anybody finds a fix in the future, please post.
Props to @HYRY for the investigation. Thanks. | I'm getting the camera with OpenCV:
```
from opencv import cv
from opencv import highgui
from opencv import adaptors
def get_image():
    cam = highgui.cvCreateCameraCapture(0)
    im = highgui.cvQueryFrame(cam)
    # Add the line below if you need it (Ubuntu 8.04+)
    #im = opencv.cvGetMat(im)
    return im
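
# Usage sketch (assumption): grab a single frame with the function above.
frame = get_image()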
``` |
6,990,760 | I wrapped opencv today with simplecv python interface. After going through the official [SimpleCV Cookbook](http://simplecv.org/doc/cookbook.html) I was able to successfully [Load, Save](http://simplecv.org/doc/cookbook.html#loading-and-saving-images), and [Manipulate](http://simplecv.org/doc/cookbook.html#image-manipulation) images. Thus, I know the library is being loaded properly.
However, under the [Using a Camera, Kinect, or Virtual Camera](http://simplecv.org/doc/cookbook.html#using-a-camera-kinect-or-virtualcamera) heading I was unsuccessful in running some commands. In particular, `mycam = Camera()` worked, but `img = mycam.getImage()` produced the following error:
```
In [35]: img = mycam.getImage().save()
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file /home/jordan/OpenCV-2.2.0/modules/core/src/array.cpp, line 1237
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/simplecv/<ipython console> in <module>()
/usr/local/lib/python2.7/dist-packages/SimpleCV-1.1-py2.7.egg/SimpleCV/Camera.pyc in getImage(self)
332
333 frame = cv.RetrieveFrame(self.capture)
--> 334 newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
335 cv.Copy(frame, newimg)
336 return Image(newimg, self)
error: Array should be CvMat or IplImage
```
I'm running Ubuntu Natty on a HP TX2500 tablet. It has a built in webcam, (CyberLink Youcam?) Has anybody seen this error before? I've been all over the web today looking for a solution, but nothing seems to be doing the trick.
**Update 1**: I tested cv.QueryFrame(capture) using the code found here [in a separate Stack Overflow question](https://stackoverflow.com/questions/4929721/opencv-python-grab-frames-from-a-video-file) and it worked; so I've pretty much nailed this down to a webcam issue.
**Update 2**: In fact, I get the exact same errors on a machine that doesn't even have a webcam! It's looking like the TX2500 is not compatible... | 2011/08/09 | [
"https://Stackoverflow.com/questions/6990760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/568884/"
] | To answer my own question...
I bought a Logitech C210 today and the problem disappeared.
I'm now getting warnings:
`Corrupt JPEG data: X extraneous bytes before marker 0xYY`.
However, I am able to successfully push a video stream to my web-browser via `JpegStreamer()`. If I cannot solve this error, I'll open a new thread.
Thus, for now, I'll blame the TX2500.
If anybody finds a fix in the future, please post.
Props to @HYRY for the investigation. Thanks. | Anthony, one of the SimpleCV developers here.
Also, instead of using image.save(), which writes the file/video to disk, you probably want to use image.show(). You can save if you want, but you need to specify a file path, like image.save("/tmp/blah.png").
So you want to do:
```
img = mycam.getImage()
img.show()
```
As for that model of camera, I'm not sure if it works or not. I should note that we also wrap different camera classes, not just OpenCV; this is because OpenCV has a problem with webcams over 640x480, and we can now do high-resolution cameras. |
6,990,760 | I wrapped opencv today with simplecv python interface. After going through the official [SimpleCV Cookbook](http://simplecv.org/doc/cookbook.html) I was able to successfully [Load, Save](http://simplecv.org/doc/cookbook.html#loading-and-saving-images), and [Manipulate](http://simplecv.org/doc/cookbook.html#image-manipulation) images. Thus, I know the library is being loaded properly.
However, under the [Using a Camera, Kinect, or Virtual Camera](http://simplecv.org/doc/cookbook.html#using-a-camera-kinect-or-virtualcamera) heading I was unsuccessful in running some commands. In particular, `mycam = Camera()` worked, but `img = mycam.getImage()` produced the following error:
```
In [35]: img = mycam.getImage().save()
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file /home/jordan/OpenCV-2.2.0/modules/core/src/array.cpp, line 1237
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/simplecv/<ipython console> in <module>()
/usr/local/lib/python2.7/dist-packages/SimpleCV-1.1-py2.7.egg/SimpleCV/Camera.pyc in getImage(self)
332
333 frame = cv.RetrieveFrame(self.capture)
--> 334 newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
335 cv.Copy(frame, newimg)
336 return Image(newimg, self)
error: Array should be CvMat or IplImage
```
I'm running Ubuntu Natty on a HP TX2500 tablet. It has a built in webcam, (CyberLink Youcam?) Has anybody seen this error before? I've been all over the web today looking for a solution, but nothing seems to be doing the trick.
**Update 1**: I tested cv.QueryFrame(capture) using the code found here [in a separate Stack Overflow question](https://stackoverflow.com/questions/4929721/opencv-python-grab-frames-from-a-video-file) and it worked; so I've pretty much nailed this down to a webcam issue.
**Update 2**: In fact, I get the exact same errors on a machine that doesn't even have a webcam! It's looking like the TX2500 is not compatible... | 2011/08/09 | [
"https://Stackoverflow.com/questions/6990760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/568884/"
] | To answer my own question...
I bought a Logitech C210 today and the problem disappeared.
I'm now getting warnings:
`Corrupt JPEG data: X extraneous bytes before marker 0xYY`.
However, I am able to successfully push a video stream to my web-browser via `JpegStreamer()`. If I cannot solve this error, I'll open a new thread.
Thus, for now, I'll blame the TX2500.
If anybody finds a fix in the future, please post.
Props to @HYRY for the investigation. Thanks. | I should also mention something I didn't realize: OpenCV versions below 2.3 are broken with webcams on Ubuntu 11.04 and up. I hadn't noticed because I was running Ubuntu 10.10 before; by the looks of your output you are using Python 2.7, which makes me think you are on Ubuntu 11.04 or higher. Anyway, we have a fix for this problem. It is now pushed up into master; it basically does a check to see if OpenCV is working and, if not, falls back to pygame.
This fix will also be in the 1.2 release of SimpleCV (It's in the master branch now) |
69,383,255 | I am trying to calculate the distance between 2 points in python using this code :
```
import math
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self):
        return "Point({0}, {1})".format(self.x, self.y)
    def __sub__(self, other):
        return Point(self.x - other.x, self.y - other.y) #<-- MODIFIED THIS
    def distance(self, other):
        p1 = __sub__(Point(self.x , other.x))**2
        p2 = __sub__(Point(self.y,other.y))**2
        p = math.sqrt(p1,p2)
        return p
def dist_result(points):
    points = [Point(*point) for point in points]
    return [points[0].distance(point) for point in points]
```
but it is returning:
```
NameError: name '__sub__' is not defined
```
can you please show me how to correctly write that function ?
so I am expecting an input of:
```
p1 = (1, 1) and p2 = (2, 2)
```
and I would like to calculate the distance using:
```
d = |p2 - p1| = sqrt((x1 - x2)^2 + (y1 - y2)^2)
``` | 2021/09/29 | [
"https://Stackoverflow.com/questions/69383255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16743649/"
] | ```
let newFavorites = favorites;
```
This makes newFavorites point to the same array as favorites.
```
newFavorites.push(newFav);
```
Because newFavorites points to favorites, which is an array in `state`, you can't push anything onto it and have that change trigger a re-render.
What you need to do is populate a new array `newFavorites` with the contents of favorites.
Try
```
const newFavorites = [...favorites];
```
That should work | I would make some changes in your addFavourite function:
```
function addFavorite(name, id) {
  let newFav = { name, id };
  setFavorites([...favorites, newFav]);
}
```
This way, every time you click favourite, you ensure a new array is created with the spread operator |
69,383,255 | I am trying to calculate the distance between 2 points in python using this code :
```
import math
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self):
        return "Point({0}, {1})".format(self.x, self.y)
    def __sub__(self, other):
        return Point(self.x - other.x, self.y - other.y) #<-- MODIFIED THIS
    def distance(self, other):
        p1 = __sub__(Point(self.x , other.x))**2
        p2 = __sub__(Point(self.y,other.y))**2
        p = math.sqrt(p1,p2)
        return p
def dist_result(points):
    points = [Point(*point) for point in points]
    return [points[0].distance(point) for point in points]
```
but it is returning:
```
NameError: name '__sub__' is not defined
```
can you please show me how to correctly write that function ?
so I am expecting an input of:
```
p1 = (1, 1) and p2 = (2, 2)
```
and I would like to calculate the distance using:
```
d = |p2 - p1| = sqrt((x1 - x2)^2 + (y1 - y2)^2)
``` | 2021/09/29 | [
"https://Stackoverflow.com/questions/69383255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16743649/"
] | ```
let newFavorites = favorites;
```
This makes newFavorites point to the same array as favorites.
```
newFavorites.push(newFav);
```
Because newFavorites points to favorites, which is an array in `state`, you can't push anything onto it and have that change trigger a re-render.
What you need to do is populate a new array `newFavorites` with the contents of favorites.
Try
```
const newFavorites = [...favorites];
```
That should work | It's not working because you are mutating the existing state.
The list is updating, but it won't re-render: useState only triggers a render when the value passed to it differs from the previous one, and although you are changing the list items, the reference is not changing.
To make it work you can use the spread operator for lists, or even Array.concat(), which returns a new updated array.
```js
function addFavorite(name, id) {
let newFav = {name: name, id: id};
setFavorites(prev=>[...prev, newFav]);
}
``` |
69,383,255 | I am trying to calculate the distance between 2 points in python using this code :
```
import math
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self):
        return "Point({0}, {1})".format(self.x, self.y)
    def __sub__(self, other):
        return Point(self.x - other.x, self.y - other.y) #<-- MODIFIED THIS
    def distance(self, other):
        p1 = __sub__(Point(self.x , other.x))**2
        p2 = __sub__(Point(self.y,other.y))**2
        p = math.sqrt(p1,p2)
        return p
def dist_result(points):
    points = [Point(*point) for point in points]
    return [points[0].distance(point) for point in points]
```
but it is returning:
```
NameError: name '__sub__' is not defined
```
can you please show me how to correctly write that function ?
so I am expecting an input of:
```
p1 = (1, 1) and p2 = (2, 2)
```
and I would like to calculate the distance using:
```
d = |p2 - p1| = sqrt((x1 - x2)^2 + (y1 - y2)^2)
``` | 2021/09/29 | [
"https://Stackoverflow.com/questions/69383255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16743649/"
] | ```
let newFavorites = favorites;
```
This makes newFavorites point to the same array as favorites.
```
newFavorites.push(newFav);
```
Because newFavorites points to favorites, which is an array in `state`, you can't push anything onto it and have that change trigger a re-render.
What you need to do is populate a new array `newFavorites` with the contents of favorites.
Try
```
const newFavorites = [...favorites];
```
That should work | For changing array state, you should use:
```
function addFavorite(name, id) {
let newFav = { name: name, id: id };
setFavorites((favorites) => [...favorites, newFav]);
}
``` |
69,383,255 | I am trying to calculate the distance between 2 points in python using this code :
```
import math
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self):
        return "Point({0}, {1})".format(self.x, self.y)
    def __sub__(self, other):
        return Point(self.x - other.x, self.y - other.y) #<-- MODIFIED THIS
    def distance(self, other):
        p1 = __sub__(Point(self.x , other.x))**2
        p2 = __sub__(Point(self.y,other.y))**2
        p = math.sqrt(p1,p2)
        return p
def dist_result(points):
    points = [Point(*point) for point in points]
    return [points[0].distance(point) for point in points]
```
but it is returning:
```
NameError: name '__sub__' is not defined
```
can you please show me how to correctly write that function ?
so I am expecting an input of:
```
p1 = (1, 1) and p2 = (2, 2)
```
and I would like to calculate the distance using:
```
d = |p2 - p1| = sqrt((x1 - x2)^2 + (y1 - y2)^2)
``` | 2021/09/29 | [
"https://Stackoverflow.com/questions/69383255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16743649/"
] | I would make some changes in your addFavourite function:
```
function addFavorite(name, id) {
  let newFav = { name, id };
  setFavorites([...favorites, newFav]);
}
```
This way, every time you click favourite, you ensure a new array is created with the spread operator | It's not working because you are mutating the existing state.
The list is updating, but it won't re-render: useState only triggers a render when the value passed to it differs from the previous one, and although you are changing the list items, the reference is not changing.
To make it work you can use the spread operator for lists, or even Array.concat(), which returns a new updated array.
```js
function addFavorite(name, id) {
let newFav = {name: name, id: id};
setFavorites(prev=>[...prev, newFav]);
}
``` |
69,383,255 | I am trying to calculate the distance between 2 points in python using this code :
```
import math
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self):
        return "Point({0}, {1})".format(self.x, self.y)
    def __sub__(self, other):
        return Point(self.x - other.x, self.y - other.y) #<-- MODIFIED THIS
    def distance(self, other):
        p1 = __sub__(Point(self.x , other.x))**2
        p2 = __sub__(Point(self.y,other.y))**2
        p = math.sqrt(p1,p2)
        return p
def dist_result(points):
    points = [Point(*point) for point in points]
    return [points[0].distance(point) for point in points]
```
but it is returning:
```
NameError: name '__sub__' is not defined
```
can you please show me how to correctly write that function ?
so I am expecting an input of:
```
p1 = (1, 1) and p2 = (2, 2)
```
and I would like to calculate the distance using:
```
d = |p2 - p1| = sqrt((x1 - x2)^2 + (y1 - y2)^2)
``` | 2021/09/29 | [
"https://Stackoverflow.com/questions/69383255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16743649/"
] | I would make some changes in your addFavourite function:
```
function addFavorite(name, id) {
  let newFav = { name, id };
  setFavorites([...favorites, newFav]);
}
```
This way, every time you click favourite, you ensure a new array is created with the spread operator | For changing array state, you should use:
```
function addFavorite(name, id) {
let newFav = { name: name, id: id };
setFavorites((favorites) => [...favorites, newFav]);
}
``` |
69,383,255 | I am trying to calculate the distance between 2 points in python using this code :
```
import math
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self):
        return "Point({0}, {1})".format(self.x, self.y)
    def __sub__(self, other):
        return Point(self.x - other.x, self.y - other.y) #<-- MODIFIED THIS
    def distance(self, other):
        p1 = __sub__(Point(self.x , other.x))**2
        p2 = __sub__(Point(self.y,other.y))**2
        p = math.sqrt(p1,p2)
        return p
def dist_result(points):
    points = [Point(*point) for point in points]
    return [points[0].distance(point) for point in points]
```
but it is returning:
```
NameError: name '__sub__' is not defined
```
can you please show me how to correctly write that function ?
so I am expecting an input of:
```
p1 = (1, 1) and p2 = (2, 2)
```
and I would like to calculate the distance using:
```
d = |p2 - p1| = sqrt((x1 - x2)^2 + (y1 - y2)^2)
``` | 2021/09/29 | [
"https://Stackoverflow.com/questions/69383255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16743649/"
] | It's not working because you are mutating the existing state.
The list is updating, but it won't re-render: useState only triggers a render when the value passed to it differs from the previous one, and although you are changing the list items, the reference is not changing.
To make it work you can use the spread operator for lists, or even Array.concat(), which returns a new updated array.
```js
function addFavorite(name, id) {
let newFav = {name: name, id: id};
setFavorites(prev=>[...prev, newFav]);
}
``` | For changing array state, you should use:
```
function addFavorite(name, id) {
let newFav = { name: name, id: id };
setFavorites((favorites) => [...favorites, newFav]);
}
``` |
40,634,826 | I'm using Swig 3.0.7 to create python 2.7-callable versions of C functions that define constants in this manner:
```c
#define MYCONST 5.0
```
In previous versions of swig these would be available to python transparently:
```py
import mymodule
x = 3. * mymodule.MYCONST
```
But now this generates a message
```none
AttributeError: 'module' object has no attribute 'MYCONST'
```
Functions in 'mymodule' that use the constant internally work as expected.
Interestingly, if I include this line in the Swig directive file mymodule.i,
```c
#define MYCONST 5.0
```
then doing dir(mymodule) returns a list that includes
```
['MYCONST_swigconstant', 'SWIG_PyInstanceMethodNew', (etc.) .... ]
```
typing to the python interpreter
```
mymodule.MYCONST_swigconstant
```
gives
```
<built-in function MYCONST_swigconstant>
```
which offers no obvious way to get at the value.
So my question is: can one make the previous syntax work so that `mymodule.MYCONST` evaluates correctly?
If not, is there a workaround? | 2016/11/16 | [
"https://Stackoverflow.com/questions/40634826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3263972/"
] | You can use [`split`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html), then cast column `year` to `int` and if necessary add `Q` to column `q`:
```
df = pd.DataFrame({'date':['2015Q1','2015Q2']})
print (df)
date
0 2015Q1
1 2015Q2
df[['year','q']] = df.date.str.split('Q', expand=True)
df.year = df.year.astype(int)
df.q = 'Q' + df.q
print (df)
date year q
0 2015Q1 2015 Q1
1 2015Q2 2015 Q2
```
Also you can use [`Period`](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#period):
```
df['date'] = pd.to_datetime(df.date).dt.to_period('Q')
df['year'] = df['date'].dt.year
df['quarter'] = df['date'].dt.quarter
print (df)
date year quarter
0 2015Q1 2015 1
1 2015Q2 2015 2
``` | You could also construct a datetimeIndex and call year and quarter on it.
```
df.index = pd.to_datetime(df.date)
df['year'] = df.index.year
df['quarter'] = df.index.quarter
date year quarter
date
2015-01-01 2015Q1 2015 1
2015-04-01 2015Q2 2015 2
```
Note that you don't even need a dedicated column for year and quarter if you have a datetimeIndex, you could do a groupby like this for example: `df.groupby(df.index.quarter)` |
51,567,959 | I am sort of new to Python. I can open files in Windows but am having trouble on Mac. I can open web browsers, but I am unsure how to open other programs or Word documents.
Thanks | 2018/07/28 | [
"https://Stackoverflow.com/questions/51567959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10147138/"
] | Use the class `col-md-auto` to make the width automatic and `d-inline-block` to display the column as inline-block (Bootstrap 4):
```
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" rel="stylesheet"/>
<div class="row">
<div class="col-md-auto col-lg-auto d-inline-block">
<label for="name">Company Name</label>
<input id="name" type="text" value="" name="name" style="width:200px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block">
<label for="email">GST Number</label>
<input id="email" type="text" value="" name="email">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">Branch Address</label>
<input id="email" type="text" value="" name="email" style="width:300px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">Tin Number</label>
<input id="email" type="text" value="" name="email" style="width:200px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">pin code</label>
<input id="email" type="text" value="" name="email" style="width:100px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">Date</label>
<input id="email" type="text" value="" name="email" style="width:100px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">code</label>
<input id="email" type="text" value="" name="email" style="width:100px">
</div>
</div>
``` | I think you can see the example below; it may satisfy your need. Also, you can set the col-x-x property to place more than 3 inputs in one row.
[row-col example](https://v3.bootcss.com/components/#input-groups-buttons) |
63,627,160 | 1. I am trying to get a student attendance record set up in Python. I have most of it figured out. I am stuck on one section, and it is the attendance section. I am trying to use a table format (tksheets) to keep a record of students' names and their attendance. The issue I am having is working with tksheets. I can't seem to get the information from my DB (SQLite3) to populate the columns. I've also tried tktables and pandastables, but again I run into the same issue.
I have considered using the Treeview Widget to populate the columns with the students names, and then use entry boxes to add the attendance. The issue is I have to create each entry box and place it individually. I didn't like this plan. Below is the current code I am using.
If anyone could show me how to get the data from the DB and populate the spreadsheet I am using, that would be great. Thanks.
```
def rows(self):
    self.grid_columnconfigure(1, weight=1)
    self.grid_rowconfigure(1, weight=1)
    self.sheet = Sheet(self.aug_tab,
                       data=[[f'Row{r} Column{c}' for c in range(36)] for r in range(24)],
                       height=300,
                       width=900)
    self.sheet.enable_bindings(("single",
                                "drag_select",
                                "column_drag_and_drop",
                                "row_drag_and_drop",
                                "column_select",
                                "row_select",
                                "column_width_resize",
                                "double_click_column_resize",
                                "row_width_resize",
                                "column_height_resize",
                                "arrowkeys",
                                "row_height_resize",
                                "double_click_row_resize",
                                "right_click_popup_menu",
                                "rc_insert_column",
                                "rc_delete_column",
                                "rc_insert_row",
                                "rc_delete_row",
                                "copy",
                                "cut",
                                "paste",
                                "delete",
                                "undo",
                                "edit_cell"))
    self.headers_list = ("Student ID", "Ch. First Name", "Ch. Last Name", "Eng. Name")
    self.headers = [f'{c}' for c in self.headers_list]
    self.sheet.headers(self.headers)
    self.sheet.pack()
    print(self.sheet.get_column_data(0, 0))

#############DEFINE FUNCTIONS###############################
rows(self)
```
[enter image description here](https://i.stack.imgur.com/5fAag.jpg) | 2020/08/28 | [
"https://Stackoverflow.com/questions/63627160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4049491/"
] | Try something like this:
```
';SELECT text
FROM notes
WHERE username = 'alice
``` | SQL Injection can be implemented by concatenating the SQL statement with the input parameters. For example, the following statement is vulnerable to SQL Injection:
```
String statement = "SELECT ID FROM USERS WHERE USERNAME = '" + inputUsername + "' AND PASSWORD = '" + hashedPassword + "'";
```
An attacker would enter a username like this:
```
' OR 1=1 Limit 1; --
```
Thus, the executed statement will be:
```
SELECT ID FROM USERS WHERE USERNAME = '' OR 1=1 Limit 1; --' AND PASSWORD = 'Blob'
```
Hence, the password part is commented out, and the database engine returns an arbitrary result which will be accepted by the application.
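As a hedged sketch of the usual prevention, here is parameter binding with Python's sqlite3 module; the `users` table and the sample values are assumptions for illustration, not taken from the course:
```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT, password TEXT)")

# The ? placeholders bind the inputs as data, so a crafted username
# cannot change the structure of the SQL statement itself.
row = conn.execute(
    "SELECT ID FROM USERS WHERE USERNAME = ? AND PASSWORD = ?",
    ("' OR 1=1 Limit 1; --", "Blob"),
).fetchone()
print(row)  # None: the injection string matched no real user
```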
I found this nice explanation on the free preview of "Introduction to Cybersecurity for Software Developers" course.
<https://www.udemy.com/course/cybersecurity-for-developers-1/>
It also explains how to prevent SQL Injection. |
63,283,368 | I've got a problem setting up deployment using Cloud Build and a Dockerfile.
My `Dockerfile`:
```
FROM python:3.8
ARG ENV
ARG NUM_WORKERS
ENV PORT=8080
ENV NUM_WORKERS=$NUM_WORKERS
RUN pip install poetry
COPY pyproject.toml poetry.lock ./
RUN poetry config virtualenvs.create false && \
    poetry install --no-dev
COPY ./.env.$ENV /workspace/.env
COPY ./app-$ENV.yaml /workspace/app.yaml
COPY . /workspace
ENTRYPOINT ["./entrypoint.sh"]
```
My `cloudbuild.yaml`:
```
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      docker pull gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME || exit 0
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t',
    'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME',
    '--cache-from',
    'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME',
    '--build-arg', 'ENV=develop',
    '--build-arg', 'NUM_WORKERS=2',
    '.'
  ]
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME']
- name: 'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME'
  id: RUN-LINTERS
  entrypoint: sh
  args: ['scripts/linters.sh']
- name: gcr.io/cloud-builders/docker
  id: START-REDIS
  args: ['run', '-d', '--network=cloudbuild', '--name=redisdb', 'redis']
- name: 'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME'
  id: RUN-TESTS
  entrypoint: sh
  args: ['scripts/run_tests.sh']
  env:
    - 'REDIS_HOST=redis://redisdb'
    - 'DATASTORE_EMULATOR_HOST=datastore:8081'
  waitFor:
    - START-REDIS
    - START-DATASTORE-EMULATOR
- name: gcr.io/cloud-builders/docker
  id: SHUTDOWN-REDIS
  args: ['rm', '--force', 'redisdb']
- name: gcr.io/cloud-builders/docker
  id: SHUTDOWN-DATASTORE_EMULATOR
  args: ['rm', '--force', 'datastore']
- name: 'gcr.io/cloud-builders/gcloud'
  id: DEPLOY
  args:
    - "app"
    - "deploy"
    - "--image-url"
    - 'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME'
    - "--verbosity=debug"
images: ['gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME']
timeout: "1000s"
```
The problem is that the copied files `.env` and `app.yaml` are not present in `workspace`.
I don't know why Cloud Build ignores these files from the image: I've printed `ls -a` and seen that the files are copied properly during the build, but they disappear during the run-tests stage, and I also can't deploy without app.yaml.
Any help please | 2020/08/06 | [
"https://Stackoverflow.com/questions/63283368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11993534/"
] | Here is a working example of how you would attach the value of a configuration trait to another pallet's storage item.
Pallet 1
--------
Here is `pallet_1` which has the storage item we want to use.
>
> NOTE: This storage is marked `pub` so it is accessible outside the pallet.
>
>
>
```rust
use frame_support::{decl_module, decl_storage};
use frame_system::ensure_signed;
pub trait Trait: frame_system::Trait {}
decl_storage! {
    trait Store for Module<T: Trait> as TemplateModule {
        pub MyStorage: u32;
    }
}

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        #[weight = 0]
        pub fn set_storage(origin, value: u32) {
            let _ = ensure_signed(origin)?;
            MyStorage::put(value);
        }
    }
}
```
Pallet 2
--------
Here is `pallet_2` which has a configuration trait that we want to populate with the storage item from `pallet_1`:
```rust
use frame_support::{decl_module, dispatch, traits::Get};
use frame_system::ensure_signed;
pub trait Trait: frame_system::Trait {
    type MyConfig: Get<u32>;
}

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        #[weight = 0]
        pub fn do_something(origin) -> dispatch::DispatchResult {
            let _ = ensure_signed(origin)?;
            let _my_config = T::MyConfig::get();
            Ok(())
        }
    }
}
```
Runtime Configuration
---------------------
These two pallets are very straightforward and work separately. But if we want to connect them, we need to configure our runtime:
```rust
use frame_support::traits::Get;
impl pallet_1::Trait for Runtime {}
pub struct StorageToConfig;
impl Get<u32> for StorageToConfig {
    fn get() -> u32 {
        return pallet_1::MyStorage::get();
    }
}

impl pallet_2::Trait for Runtime {
    type MyConfig = StorageToConfig;
}
// We also update the `construct_runtime!`, but that is omitted for this example.
```
Here we have defined a struct `StorageToConfig` which implements the `Get<u32>` trait that is expected by `pallet_2`. This struct tells the runtime that when `MyConfig::get()` is called, it should call `pallet_1::MyStorage::get()`, which reads runtime storage and returns that value.
So now, every call to `T::MyConfig::get()` in `pallet_2` will be a storage read, and will get whatever value is set in `pallet_1`.
Let me know if this helps! | It is actually done by creating a trait, implementing it for a struct, and then in the runtime passing the struct to the receiver (by using the trait). What I did to learn this was to look at all of the pallets that are already there and see how they pass information.
for instance this trait in authorship
<https://github.com/paritytech/substrate/blob/640dd1a0a44b6f28af1189f0293ab272ebc9d2eb/frame/authorship/src/lib.rs#L39>
is implemented here
<https://github.com/paritytech/substrate/blob/77819ad119f23a68b7478f3ac88e6c93a1677fc1/frame/aura/src/lib.rs#L148>
and here it is composed (not with the aura impl, but with session)
<https://github.com/paritytech/substrate/blob/549050b7f1740c90855e777daf3f9700750ad7ff/bin/node/runtime/src/lib.rs#L363>
you should also read this [https://doc.rust-lang.org/book/ch10-02-traits.html#:~:text=A%20trait%20tells%20the%20Rust,type%20that%20has%20certain%20behavior](https://doc.rust-lang.org/book/ch10-02-traits.html#:%7E:text=A%20trait%20tells%20the%20Rust,type%20that%20has%20certain%20behavior) |
30,902,443 | I'm using vincent, a data visualization package. One of the inputs it takes is a path to data.
(from the documentation)
```
`geo_data` needs to be passed as a list of dicts with the following
| format:
| {
| name: data name
| url: path_to_data,
| feature: TopoJSON object set (ex: 'countries')
| }
|
```
I have a topo.json file on my computer, but when I run that, IPython says loading failed.
```
map=r'C:\Users\chungkim271\Desktop\DC housing\dc.json'
geo_data = [{'name': 'DC',
             'url': map,
             'feature': "collection"}]
vis = vincent.Map(geo_data=geo_data, scale=1000)
vis
```
Do you know if vincent only takes URL addresses, and if so, what is the quickest way I can get a URL for this file?
Thanks in advance | 2015/06/17 | [
"https://Stackoverflow.com/questions/30902443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4682755/"
] | It seems that you're using it in a Jupyter Notebook. If not, my reply is irrelevant to your case.
AFAIK, vincent needs this topojson file to be available through a web server (so the JavaScript in your browser can download it to build the map). If the topojson file is somewhere under the Jupyter root dir then it's available (and you can provide a relative path to it); otherwise it's not.
To determine the relative path you can use something like this:
```
import os
relpath = os.path.relpath('abs-path-to-geodata', os.path.abspath(os.path.curdir))
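# A sketch (assumption): feed the relative path into the geo_data format
# that the question shows for vincent.Map.
geo_data = [{'name': 'DC', 'url': relpath, 'feature': 'collection'}]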
``` | I know that this post is old, hopefully this helps someone. I am not sure what map you are looking for, but here is the URL for the world map
```
world_topo="https://raw.githubusercontent.com/wrobstory/vincent_map_data/master/world-countries.topo.json"
```
and the USA state maps
```
state_topo = "https://raw.githubusercontent.com/wrobstory/vincent_map_data/master/us_states.topo.json"
```
I got this working beautifully, hope this is helpful for someone! |
17,975,795 | I'm sure this must be simple, but I'm a python noob, so I need some help.
I have a list that looks the following way:
```
foo = [['0.125', '0', 'able'], ['', '0.75', 'unable'], ['0', '0', 'dorsal'], ['0', '0', 'ventral'], ['0', '0', 'acroscopic']]
```
Notice that every word has 1 or 2 numbers attached to it. I want to subtract number 2 from number 1 and then come up with a **dictionary** that is: word, number.
Foo would then look something like this:
```
foo = {'able','0.125'},{'unable', '-0.75'}...
```
I tried doing:
```
bar=[]
for a,b,c in foo:
d=float(a)-float(b)
bar.append((c,d))
```
But I got the error:
```
ValueError: could not convert string to float:
``` | 2013/07/31 | [
"https://Stackoverflow.com/questions/17975795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2001008/"
] | `''` cannot be converted to a float.
```
bar = []
for a,b,c in foo:
    d = float(a or 0) - float(b or 0)
    bar.append((c,d))
```
However, that will not make a dictionary. For that you want:
```
bar = {}
for a,b,c in foo:
    d = float(a or 0)-float(b or 0)
    bar[c] = d
```
Or a shorter way using dictionary comprehensions:
```
bar = {sublist[2]: float(sublist[0] or 0) - float(sublist[1] or 0) for sublist in foo}
``` | Add a condition to verify whether the string is empty (i.e. '') and, if so, convert it to 0. |
17,975,795 | I'm sure this must be simple, but I'm a python noob, so I need some help.
I have a list that looks the following way:
```
foo = [['0.125', '0', 'able'], ['', '0.75', 'unable'], ['0', '0', 'dorsal'], ['0', '0', 'ventral'], ['0', '0', 'acroscopic']]
```
Notice that every word has 1 or 2 numbers attached to it. I want to subtract number 2 from number 1 and then come up with a **dictionary** that is: word, number.
Foo would then look something like this:
```
foo = {'able','0.125'},{'unable', '-0.75'}...
```
I tried doing:
```
bar=[]
for a,b,c in foo:
d=float(a)-float(b)
bar.append((c,d))
```
But I got the error:
```
ValueError: could not convert string to float:
``` | 2013/07/31 | [
"https://Stackoverflow.com/questions/17975795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2001008/"
] | `''` cannot be converted to a float.
```
bar = []
for a,b,c in foo:
    d = float(a or 0) - float(b or 0)
    bar.append((c,d))
```
However, that will not make a dictionary. For that you want:
```
bar = {}
for a,b,c in foo:
    d = float(a or 0)-float(b or 0)
    bar[c] = d
```
Or a shorter way using dictionary comprehensions:
```
bar = {sublist[2]: float(sublist[0] or 0) - float(sublist[1] or 0) for sublist in foo}
``` | `float('')` doesn't work. Assuming you want `0` in that case, I recommend a helper function:
```
def safefloat(s):
    if not s:
        return 0.0
    return float(s)

res = {}
for a, b, c in foo:
    res[c] = safefloat(a) - safefloat(b)
```
Note you can make the dictionary in one line with a comprehension:
```
res = dict((c, safefloat(a) - safefloat(b)) for a, b, c in foo)
```
or a dict comprehension in Python 2.7+:
```
res = {c: safefloat(a) - safefloat(b) for a, b, c in foo}
``` |
17,975,795 | I'm sure this must be simple, but I'm a python noob, so I need some help.
I have a list that looks the following way:
```
foo = [['0.125', '0', 'able'], ['', '0.75', 'unable'], ['0', '0', 'dorsal'], ['0', '0', 'ventral'], ['0', '0', 'acroscopic']]
```
Notice that every word has 1 or 2 numbers attached to it. I want to subtract number 2 from number 1 and then come up with a **dictionary** that is: word, number.
Foo would then look something like this:
```
foo = {'able','0.125'},{'unable', '-0.75'}...
```
I tried doing:
```
bar=[]
for a,b,c in foo:
d=float(a)-float(b)
bar.append((c,d))
```
But I got the error:
```
ValueError: could not convert string to float:
``` | 2013/07/31 | [
"https://Stackoverflow.com/questions/17975795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2001008/"
] | `''` cannot be converted to a float.
```
bar = []
for a,b,c in foo:
    d = float(a or 0) - float(b or 0)
    bar.append((c,d))
```
However, that will not make a dictionary. For that you want:
```
bar = {}
for a,b,c in foo:
    d = float(a or 0)-float(b or 0)
    bar[c] = d
```
Or a shorter way using dictionary comprehensions:
```
bar = {sublist[2]: float(sublist[0] or 0) - float(sublist[1] or 0) for sublist in foo}
``` | That happens because in some cases there is an empty string.
You could write:
```
d = float(a or '0') - float(b or '0')
``` |
17,975,795 | I'm sure this must be simple, but I'm a python noob, so I need some help.
I have a list that looks the following way:
```
foo = [['0.125', '0', 'able'], ['', '0.75', 'unable'], ['0', '0', 'dorsal'], ['0', '0', 'ventral'], ['0', '0', 'acroscopic']]
```
Notice that every word has 1 or 2 numbers attached to it. I want to subtract number 2 from number 1 and then come up with a **dictionary** that is: word, number.
Foo would then look something like this:
```
foo = {'able','0.125'},{'unable', '-0.75'}...
```
I tried doing:
```
bar=[]
for a,b,c in foo:
d=float(a)-float(b)
bar.append((c,d))
```
But I got the error:
```
ValueError: could not convert string to float:
``` | 2013/07/31 | [
"https://Stackoverflow.com/questions/17975795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2001008/"
] | `''` cannot be converted to a float.
```
bar = []
for a,b,c in foo:
    d = float(a or 0) - float(b or 0)
    bar.append((c,d))
```
However, that will not make a dictionary. For that you want:
```
bar = {}
for a,b,c in foo:
    d = float(a or 0)-float(b or 0)
    bar[c] = d
```
Or a shorter way using dictionary comprehensions:
```
bar = {sublist[2]: float(sublist[0] or 0) - float(sublist[1] or 0) for sublist in foo}
``` | ```
>>> foo = [['0.125', '0', 'able'], ['', '0.75', 'unable'],
['0', '0', 'dorsal'], ['0', '0', 'ventral'],
['0', '0', 'acroscopic']]
>>> dict((i[2], float(i[0] or 0) - float(i[1])) for i in foo)
{'acroscopic': 0.0, 'ventral': 0.0, 'unable': -0.75, 'able': 0.125,
'dorsal': 0.0}
``` |
21,188,579 | I'm stuck on an exercise in Python where I need to convert a DNA sequence into its corresponding amino acids. So far, I have:
```
seq1 = "AATAGGCATAACTTCCTGTTCTGAACAGTTTGA"
for i in range(0, len(seq), 3):
    print seq[i:i+3]
```
I need to do this without using dictionaries, and I was going for replace, but it seems it's not advisable either. How can I achieve this?
And it's supposed to give something like this, for example:
```
>seq1_1_+
TQSLIVHLIY
>seq1_2_+
LNRSFTDSST
>seq1_3_+
SIADRSLTHLL
```
Update 2: OK, so I had to resort to functions, and as suggested, I have gotten the output I wanted. Now, I have a series of functions which return a series of amino acid sequences, and I want to get an output file that looks like this, for example:
```
>seq1_1_+
iyyslrs-las-smrlssiv-m
>seq1_2_+
fiirydrs-ladrcgshrssk
>seq1_3_+
llfativas-lidaalidrl
>seq1_1_-
frrsmraasis-lativannkm
>seq1_2_-
lddr-ephrsas-lrs-riin
>seq1_3_-
-tidesridqlasydrse--m
```
For that, I'm using this:
```
for x in f1:
    x = x.strip()
    if x.count("seq"):
        f2.write((x)+("_1_+\n"))
        f2.write((x)+("_2_+\n"))
        f2.write((x)+("_3_+\n"))
        f2.write((x)+("_1_-\n"))
        f2.write((x)+("_2_-\n"))
        f2.write((x)+("_3_-\n"))
    else:
        f2.write((translate1(x))+("\n"))
        f2.write((translate2(x))+("\n"))
        f2.write((translate3(x))+("\n"))
        f2.write((translate1neg(x))+("\n"))
        f2.write((translate2neg(x))+("\n"))
        f2.write((translate3neg(x))+("\n"))
```
But unlike the expected output file suggested, I get this:
```
>seq1_1_+
>seq1_2_+
>seq1_3_+
>seq1_1_-
>seq1_2_-
>seq1_3_-
iyyslrs-las-smrlssiv-m
fiirydrs-ladrcgshrssk
llfativas-lidaalidrl
frrsmraasis-lativannkm
lddr-ephrsas-lrs-riin
-tidesridqlasydrse--m
```
So it's pretty much doing all the seqs first and all the functions afterwards, so I need to interleave them; the problem is how. | 2014/01/17 | [
"https://Stackoverflow.com/questions/21188579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2884400/"
] | To translate you need a table of [codons](http://en.wikipedia.org/wiki/DNA_codon_table), so doing this without a dictionary or another data structure seems strange.
Maybe you can look into [biopython](http://biopython.org/DIST/docs/tutorial/Tutorial.html#sec26) and see how they manage it.
You can also translate directly from the coding strand DNA sequence:
```
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG", IUPAC.unambiguous_dna)
>>> coding_dna
Seq('ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG', IUPACUnambiguousDNA())
>>> coding_dna.translate()
Seq('MAIVMGR*KGAR*', HasStopCodon(IUPACProtein(), '*'))
```
You may also take a look [at Biopython's tests](https://github.com/biopython/biopython/blob/f0658115607dacb602de58e2438021d46d3c433b/Tests/test_Seq_objs.py) | You cannot practically do this without either a function or a dictionary. Part 1, converting the sequence into three-character codons, is easy enough, as you have already done it.
But Part 2, to convert these into amino acids, you will need to define a mapping, either:
```
mapping = {"NNN": "X", ...}
```
or
```
def mapping(codon):
    if codon in ("AGA", "AGG", "CGA", "CGC", "CGG", "CGT"):
        return "R"
    ...
```
or
```
for codon, acid in [("CAA", "Q"), ("CAG", "Q"), ...]:
```
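For concreteness, here is a hedged fleshing-out of the second option; only a few codon groups are shown, and the rest of the standard table would continue in the same pattern:
```
def mapping(codon):
    # Partial codon table, for illustration only.
    if codon in ("AGA", "AGG", "CGA", "CGC", "CGG", "CGT"):
        return "R"
    if codon in ("GCT", "GCC", "GCA", "GCG"):
        return "A"
    if codon in ("TAA", "TAG", "TGA"):
        return "-"  # stop codon, using the question's output style
    return "X"      # placeholder for codons not covered here

protein = "".join(mapping(seq1[i:i+3]) for i in range(0, len(seq1), 3))
```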
I would favour the second of these as it has the least duplication (and therefore potential for error). |
21,188,579 | I'm stuck on an exercise in Python where I need to convert a DNA sequence into its corresponding amino acids. So far, I have:
```
seq1 = "AATAGGCATAACTTCCTGTTCTGAACAGTTTGA"
for i in range(0, len(seq), 3):
    print seq[i:i+3]
```
I need to do this without using dictionaries, and I was going for replace, but it seems it's not advisable either. How can I achieve this?
And it's supposed to give something like this, for example:
```
>seq1_1_+
TQSLIVHLIY
>seq1_2_+
LNRSFTDSST
>seq1_3_+
SIADRSLTHLL
```
Update 2: OK, so I had to resort to functions, and as suggested, I have gotten the output I wanted. Now, I have a series of functions which return a series of amino acid sequences, and I want to get an output file that looks like this, for example:
```
>seq1_1_+
iyyslrs-las-smrlssiv-m
>seq1_2_+
fiirydrs-ladrcgshrssk
>seq1_3_+
llfativas-lidaalidrl
>seq1_1_-
frrsmraasis-lativannkm
>seq1_2_-
lddr-ephrsas-lrs-riin
>seq1_3_-
-tidesridqlasydrse--m
```
For that, I'm using this:
```
for x in f1:
    x = x.strip()
    if x.count("seq"):
        f2.write((x)+("_1_+\n"))
        f2.write((x)+("_2_+\n"))
        f2.write((x)+("_3_+\n"))
        f2.write((x)+("_1_-\n"))
        f2.write((x)+("_2_-\n"))
        f2.write((x)+("_3_-\n"))
    else:
        f2.write((translate1(x))+("\n"))
        f2.write((translate2(x))+("\n"))
        f2.write((translate3(x))+("\n"))
        f2.write((translate1neg(x))+("\n"))
        f2.write((translate2neg(x))+("\n"))
        f2.write((translate3neg(x))+("\n"))
```
But unlike the expected output file suggested, I get this:
```
>seq1_1_+
>seq1_2_+
>seq1_3_+
>seq1_1_-
>seq1_2_-
>seq1_3_-
iyyslrs-las-smrlssiv-m
fiirydrs-ladrcgshrssk
llfativas-lidaalidrl
frrsmraasis-lativannkm
lddr-ephrsas-lrs-riin
-tidesridqlasydrse--m
```
So it's pretty much doing all the seqs first and all the functions afterwards, so I need to interleave them; the problem is how. | 2014/01/17 | [
"https://Stackoverflow.com/questions/21188579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2884400/"
] | You cannot practically do this without either a function or a dictionary. Part 1, converting the sequence into three-character codons, is easy enough as you have already done it.
But Part 2, to convert these into amino acids, you will need to define a mapping, either:
```
mapping = {"NNN": "X", ...}
```
or
```
def mapping(codon):
    if codon in ("AGA", "AGG", "CGA", "CGC", "CGG", "CGT"):
        return "R"
    ...
```
or
```
for codon, acid in [("CAA", "Q"), ("CAG", "Q"), ...]:
```
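For concreteness, here is a hedged fleshing-out of the second option; only a few codon groups are shown, and the rest of the standard table would continue in the same pattern:
```
def mapping(codon):
    # Partial codon table, for illustration only.
    if codon in ("AGA", "AGG", "CGA", "CGC", "CGG", "CGT"):
        return "R"
    if codon in ("GCT", "GCC", "GCA", "GCG"):
        return "A"
    if codon in ("TAA", "TAG", "TGA"):
        return "-"  # stop codon, using the question's output style
    return "X"      # placeholder for codons not covered here

protein = "".join(mapping(seq1[i:i+3]) for i in range(0, len(seq1), 3))
```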
I would favour the second of these as it has the least duplication (and therefore potential for error). | You can convert the nucleotide bases into numbers (base 4) and then translate them using the amino acids laid out in order in a string:
```
def translate(seq,frame):
    BASES = 'ACGT'
    # standard code
    AA = 'KNKNTTTTRSRSIIMIQHQHPPPPRRRRLLLLEDEDAAAAGGGGVVVV*Y*YSSSS*CWCLFLF'
    # convert DNA sequence in numbers: A=0; C=1; G=2; T=3
    seqn = [str(BASES.find(i)) for i in seq.upper()]
    # list of all codons in all forward frames (i.e. 3 digit numbers in base 4)
    allframes = [''.join(seqn[x:x+3]) for x in range(len(seqn)) if x <= (len(seqn)-3)]
    # translate codons in frame taking aa in AA string using indexes (turned in base 10 from base 4) in allframes
    return ''.join([AA[int(i,4)] for i in allframes[(frame-1)::3]])
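
# Usage sketch (illustrative): frame 1 of the question's sequence;
# the first two codons, AAT and AGG, should map to N and R.
print(translate("AATAGGCATAACTTCCTGTTCTGAACAGTTTGA", 1))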
``` |
21,188,579 | I'm stuck on an exercise in Python where I need to convert a DNA sequence into its corresponding amino acids. So far, I have:
```
seq1 = "AATAGGCATAACTTCCTGTTCTGAACAGTTTGA"
for i in range(0, len(seq), 3):
    print seq[i:i+3]
```
I need to do this without using dictionaries, and I was going for replace, but it seems it's not advisable either. How can I achieve this?
And it's supposed to give something like this, for example:
```
>seq1_1_+
TQSLIVHLIY
>seq1_2_+
LNRSFTDSST
>seq1_3_+
SIADRSLTHLL
```
Update 2: OK, so I had to resort to functions, and as suggested, I have gotten the output I wanted. Now, I have a series of functions which return a series of amino acid sequences, and I want to get an output file that looks like this, for example:
```
>seq1_1_+
iyyslrs-las-smrlssiv-m
>seq1_2_+
fiirydrs-ladrcgshrssk
>seq1_3_+
llfativas-lidaalidrl
>seq1_1_-
frrsmraasis-lativannkm
>seq1_2_-
lddr-ephrsas-lrs-riin
>seq1_3_-
-tidesridqlasydrse--m
```
For that, I'm using this:
```
for x in f1:
    x = x.strip()
    if x.count("seq"):
        f2.write((x)+("_1_+\n"))
        f2.write((x)+("_2_+\n"))
        f2.write((x)+("_3_+\n"))
        f2.write((x)+("_1_-\n"))
        f2.write((x)+("_2_-\n"))
        f2.write((x)+("_3_-\n"))
    else:
        f2.write((translate1(x))+("\n"))
        f2.write((translate2(x))+("\n"))
        f2.write((translate3(x))+("\n"))
        f2.write((translate1neg(x))+("\n"))
        f2.write((translate2neg(x))+("\n"))
        f2.write((translate3neg(x))+("\n"))
```
But unlike the expected output file suggested, I get this:
```
>seq1_1_+
>seq1_2_+
>seq1_3_+
>seq1_1_-
>seq1_2_-
>seq1_3_-
iyyslrs-las-smrlssiv-m
fiirydrs-ladrcgshrssk
llfativas-lidaalidrl
frrsmraasis-lativannkm
lddr-ephrsas-lrs-riin
-tidesridqlasydrse--m
```
So it's pretty much doing all the seqs first and all the functions afterwards, so I need to interleave them; the problem is how. | 2014/01/17 | [
"https://Stackoverflow.com/questions/21188579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2884400/"
] | To translate you need a table of [codons](http://en.wikipedia.org/wiki/DNA_codon_table), so doing this without a dictionary or another data structure seems strange.
Maybe you can look into [biopython](http://biopython.org/DIST/docs/tutorial/Tutorial.html#sec26) and see how they manage it.
You can also translate directly from the coding strand DNA sequence:
```
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG", IUPAC.unambiguous_dna)
>>> coding_dna
Seq('ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG', IUPACUnambiguousDNA())
>>> coding_dna.translate()
Seq('MAIVMGR*KGAR*', HasStopCodon(IUPACProtein(), '*'))
```
You may also take a look [at Biopython's tests](https://github.com/biopython/biopython/blob/f0658115607dacb602de58e2438021d46d3c433b/Tests/test_Seq_objs.py) | You got the amino acid output for the first codon only because you used 'return' inside the 'for loop'. Once the first amino acid is returned, the loop terminates, hence the second codon won't be tested at all.
You can create an empty list to keep the results for the translation of each codon, e.g.
```
aa = []
```
then, instead of using return, append the output to the list:
```
for x in range(0,len(seq1),3):
    nuc2 = seq1[x:x+3]
    if nuc2 in ('GCT', 'GCC', 'GCA', 'GCG'):
        aa.append("a")
    elif nuc2 in ('TGT', 'TGC'):
        aa.append("c")
    ....
```
and finally, join the letters in the list and return the string from the function:
```
return "".join(aa)
```
or simply print it:
```
print("".join(aa))
``` |
21,188,579 | I'm stuck on an exercise in Python where I need to convert a DNA sequence into its corresponding amino acids. So far, I have:
```
seq1 = "AATAGGCATAACTTCCTGTTCTGAACAGTTTGA"
for i in range(0, len(seq), 3):
    print seq[i:i+3]
```
I need to do this without using dictionaries, and I was going for replace, but it seems it's not advisable either. How can I achieve this?
And it's supposed to give something like this, for example:
```
>seq1_1_+
TQSLIVHLIY
>seq1_2_+
LNRSFTDSST
>seq1_3_+
SIADRSLTHLL
```
Update 2: OK, so I had to resort to functions, and as suggested, I have gotten the output I wanted. Now, I have a series of functions which return a series of amino acid sequences, and I want to get an output file that looks like this, for example:
```
>seq1_1_+
iyyslrs-las-smrlssiv-m
>seq1_2_+
fiirydrs-ladrcgshrssk
>seq1_3_+
llfativas-lidaalidrl
>seq1_1_-
frrsmraasis-lativannkm
>seq1_2_-
lddr-ephrsas-lrs-riin
>seq1_3_-
-tidesridqlasydrse--m
```
For that, I'm using this:
```
for x in f1:
    x = x.strip()
    if x.count("seq"):
        f2.write((x)+("_1_+\n"))
        f2.write((x)+("_2_+\n"))
        f2.write((x)+("_3_+\n"))
        f2.write((x)+("_1_-\n"))
        f2.write((x)+("_2_-\n"))
        f2.write((x)+("_3_-\n"))
    else:
        f2.write((translate1(x))+("\n"))
        f2.write((translate2(x))+("\n"))
        f2.write((translate3(x))+("\n"))
        f2.write((translate1neg(x))+("\n"))
        f2.write((translate2neg(x))+("\n"))
        f2.write((translate3neg(x))+("\n"))
```
But unlike the expected output file suggested, I get this:
```
>seq1_1_+
>seq1_2_+
>seq1_3_+
>seq1_1_-
>seq1_2_-
>seq1_3_-
iyyslrs-las-smrlssiv-m
fiirydrs-ladrcgshrssk
llfativas-lidaalidrl
frrsmraasis-lativannkm
lddr-ephrsas-lrs-riin
-tidesridqlasydrse--m
```
So it's pretty much doing all the seqs first and all the functions afterwards, so I need to interleave them; the problem is how. | 2014/01/17 | [
"https://Stackoverflow.com/questions/21188579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2884400/"
] | To translate you need a table of [codons](http://en.wikipedia.org/wiki/DNA_codon_table), so doing this without a dictionary or another data structure seems strange.
Maybe you can look into [biopython](http://biopython.org/DIST/docs/tutorial/Tutorial.html#sec26) and see how they manage it.
You can also translate directly from the coding strand DNA sequence:
```
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG", IUPAC.unambiguous_dna)
>>> coding_dna
Seq('ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG', IUPACUnambiguousDNA())
>>> coding_dna.translate()
Seq('MAIVMGR*KGAR*', HasStopCodon(IUPACProtein(), '*'))
```
You may also take a look [at Biopython's tests](https://github.com/biopython/biopython/blob/f0658115607dacb602de58e2438021d46d3c433b/Tests/test_Seq_objs.py) | You can convert the nucleotide bases into numbers (base 4) and then translate them using the amino acids laid out in order in a string:
```
def translate(seq,frame):
    BASES = 'ACGT'
    # standard code
    AA = 'KNKNTTTTRSRSIIMIQHQHPPPPRRRRLLLLEDEDAAAAGGGGVVVV*Y*YSSSS*CWCLFLF'
    # convert DNA sequence in numbers: A=0; C=1; G=2; T=3
    seqn = [str(BASES.find(i)) for i in seq.upper()]
    # list of all codons in all forward frames (i.e. 3 digit numbers in base 4)
    allframes = [''.join(seqn[x:x+3]) for x in range(len(seqn)) if x <= (len(seqn)-3)]
    # translate codons in frame taking aa in AA string using indexes (turned in base 10 from base 4) in allframes
    return ''.join([AA[int(i,4)] for i in allframes[(frame-1)::3]])
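
# Usage sketch (illustrative): frame 1 of the question's sequence;
# the first two codons, AAT and AGG, should map to N and R.
print(translate("AATAGGCATAACTTCCTGTTCTGAACAGTTTGA", 1))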
``` |
21,188,579 | I'm stuck on an exercise in Python where I need to convert a DNA sequence into its corresponding amino acids. So far, I have:
```
seq1 = "AATAGGCATAACTTCCTGTTCTGAACAGTTTGA"
for i in range(0, len(seq), 3):
    print seq[i:i+3]
```
I need to do this without using dictionaries, and I was going for replace, but it seems it's not advisable either. How can I achieve this?
And it's supposed to give something like this, for example:
```
>seq1_1_+
TQSLIVHLIY
>seq1_2_+
LNRSFTDSST
>seq1_3_+
SIADRSLTHLL
```
Update 2: OK, so I had to resort to functions, and as suggested, I have gotten the output I wanted. Now, I have a series of functions which return a series of amino acid sequences, and I want to get an output file that looks like this, for example:
```
>seq1_1_+
iyyslrs-las-smrlssiv-m
>seq1_2_+
fiirydrs-ladrcgshrssk
>seq1_3_+
llfativas-lidaalidrl
>seq1_1_-
frrsmraasis-lativannkm
>seq1_2_-
lddr-ephrsas-lrs-riin
>seq1_3_-
-tidesridqlasydrse--m
```
For that, I'm using this:
```
for x in f1:
    x = x.strip()
    if x.count("seq"):
        f2.write((x)+("_1_+\n"))
        f2.write((x)+("_2_+\n"))
        f2.write((x)+("_3_+\n"))
        f2.write((x)+("_1_-\n"))
        f2.write((x)+("_2_-\n"))
        f2.write((x)+("_3_-\n"))
    else:
        f2.write((translate1(x))+("\n"))
        f2.write((translate2(x))+("\n"))
        f2.write((translate3(x))+("\n"))
        f2.write((translate1neg(x))+("\n"))
        f2.write((translate2neg(x))+("\n"))
        f2.write((translate3neg(x))+("\n"))
```
But unlike the expected output file suggested, I get this:
```
>seq1_1_+
>seq1_2_+
>seq1_3_+
>seq1_1_-
>seq1_2_-
>seq1_3_-
iyyslrs-las-smrlssiv-m
fiirydrs-ladrcgshrssk
llfativas-lidaalidrl
frrsmraasis-lativannkm
lddr-ephrsas-lrs-riin
-tidesridqlasydrse--m
```
So it's pretty much doing all the seqs first and all the functions afterwards, so I need to interleave them; the problem is how. | 2014/01/17 | [
"https://Stackoverflow.com/questions/21188579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2884400/"
] | You got the amino acid output for the first codon only because you used 'return' inside the 'for loop'. Once the first amino acid is returned, the loop terminates, hence the second codon won't be tested at all.
You can create an empty list to keep the results for the translation of each codon, e.g.
```
aa = []
```
then, instead of using return, append the output to the list:
```
for x in range(0,len(seq1),3):
    nuc2 = seq1[x:x+3]
    if nuc2 in ('GCT', 'GCC', 'GCA', 'GCG'):
        aa.append("a")
    elif nuc2 in ('TGT', 'TGC'):
        aa.append("c")
    ....
```
and finally, join the letters in the list and return the string from the function:
```
return "".join(aa)
```
or simply print it:
```
print("".join(aa))
``` | You can convert the nucleotide bases into numbers (base 4) and then translate them using the amino acids laid out in order in a string:
```
def translate(seq,frame):
    BASES = 'ACGT'
    # standard code
    AA = 'KNKNTTTTRSRSIIMIQHQHPPPPRRRRLLLLEDEDAAAAGGGGVVVV*Y*YSSSS*CWCLFLF'
    # convert DNA sequence in numbers: A=0; C=1; G=2; T=3
    seqn = [str(BASES.find(i)) for i in seq.upper()]
    # list of all codons in all forward frames (i.e. 3 digit numbers in base 4)
    allframes = [''.join(seqn[x:x+3]) for x in range(len(seqn)) if x <= (len(seqn)-3)]
    # translate codons in frame taking aa in AA string using indexes (turned in base 10 from base 4) in allframes
    return ''.join([AA[int(i,4)] for i in allframes[(frame-1)::3]])
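
# Usage sketch (illustrative): frame 1 of the question's sequence;
# the first two codons, AAT and AGG, should map to N and R.
print(translate("AATAGGCATAACTTCCTGTTCTGAACAGTTTGA", 1))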
``` |
37,986,367 | How can I overcome an issue with conditionals in Python? The issue is that it should show certain text according to a certain conditional, but if the input is No, it still shows the output of the Yes branch.
```
def main(y_b,c_y):
ans=input('R u Phil?')
if ans=='Yes' or 'yes':
years=y_b-c_y
print('U r',abs(years),'jahre alt')
elif ans=='No' or 'no':
print("How old r u?")
else:
print('Sorry')
main(2012,2016)
``` | 2016/06/23 | [
"https://Stackoverflow.com/questions/37986367",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6492505/"
] | The condition is parsed as `(ans == 'Yes') or ('yes')`, so the `yes` test will always pass: even when `ans != 'Yes'`, the non-empty string `'yes'` is truthy.
```
>>> bool('yes')
True
```
You should instead test with:
```
if ans in ('Yes', 'yeah', 'yes'):
# code
elif ans in ('No', 'Nah', 'no'):
# code
else:
# more code
``` | When you write if statements with multiple conditions, you have to write each comparison out in full. This is wrong:
```
if ans == 'Yes' or 'yes':
```
and this is ok:
```
if ans == 'Yes' or ans == 'yes':
``` |
37,986,367 | How can I overcome an issue with conditionals in Python? The code should show certain text depending on the condition, but even if the input is No, it still shows the output of the Yes branch.
```
def main(y_b,c_y):
ans=input('R u Phil?')
if ans=='Yes' or 'yes':
years=y_b-c_y
print('U r',abs(years),'jahre alt')
elif ans=='No' or 'no':
print("How old r u?")
else:
print('Sorry')
main(2012,2016)
``` | 2016/06/23 | [
"https://Stackoverflow.com/questions/37986367",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6492505/"
] | When you write if statements with multiple conditions, you have to write each comparison out in full. This is wrong:
```
if ans == 'Yes' or 'yes':
```
and this is ok:
```
if ans == 'Yes' or ans == 'yes':
``` | It's not that different from other languages:
```
def main(y_b,c_y):
ans = input('R u Phil?')
if ans == 'Yes' or ans == 'yes':
years = y_b-c_y
print('U r', abs(years), 'jahre alt')
elif ans == 'No' or ans == 'no':
print("How old r u?")
else:
print('Sorry')
main(2012,2016)
```
However you might wish to use a simpler test:
```
if ans.lower() == 'yes':
``` |
37,986,367 | How can I overcome an issue with conditionals in Python? The code should show certain text depending on the condition, but even if the input is No, it still shows the output of the Yes branch.
```
def main(y_b,c_y):
ans=input('R u Phil?')
if ans=='Yes' or 'yes':
years=y_b-c_y
print('U r',abs(years),'jahre alt')
elif ans=='No' or 'no':
print("How old r u?")
else:
print('Sorry')
main(2012,2016)
``` | 2016/06/23 | [
"https://Stackoverflow.com/questions/37986367",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6492505/"
] | The condition is parsed as `(ans == 'Yes') or ('yes')`, so the `yes` test will always pass: even when `ans != 'Yes'`, the non-empty string `'yes'` is truthy.
```
>>> bool('yes')
True
```
You should instead test with:
```
if ans in ('Yes', 'yeah', 'yes'):
# code
elif ans in ('No', 'Nah', 'no'):
# code
else:
# more code
``` | It's not that different from other languages:
```
def main(y_b,c_y):
ans = input('R u Phil?')
if ans == 'Yes' or ans == 'yes':
years = y_b-c_y
print('U r', abs(years), 'jahre alt')
elif ans == 'No' or ans == 'no':
print("How old r u?")
else:
print('Sorry')
main(2012,2016)
```
However you might wish to use a simpler test:
```
if ans.lower() == 'yes':
``` |
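Combining the two corrections above (explicit comparisons plus `.lower()` normalization), a minimal hedged sketch of the fixed function could be:
```
def main(y_b, c_y):
    ans = input('R u Phil?').strip().lower()  # normalize case and whitespace once
    if ans == 'yes':
        print('U r', abs(y_b - c_y), 'jahre alt')
    elif ans == 'no':
        print('How old r u?')
    else:
        print('Sorry')

main(2012, 2016)
```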
49,021,968 | I have a list of filenames in a directory and I'd like to keep only the latest versions. The list looks like:
`['file1-v1.csv', 'file1-v2.csv', 'file2-v1.txt', ...]`.
I'd like to only keep the newest csv files as per the version (part after `-` in the filename) and the txt files.
The output would be `['file1-v2.csv', 'file2-v1.txt', ...]`
I have a solution that requires the use of sets, but I'm looking for an easy, Pythonic way to do this, potentially using `itertools` and `groupby`.
**Update: Solution so far**
I've been able to do some preliminary work to get a list like
```
lst = [('file1', 'csv', 'v1','<some data>'), ('file2', 'csv', 'v2','<some data>'), ...]
```
I'd like to group by the elements at index `0` and `1` but keep only the tuple with the maximum value at index `2`.
It may be something like the below:
```
files = list(item for key, group in itertools.groupby(files, lambda x: x[0:2]) for item in group)
# Maximum over 3rd index element in each tuple does not work
files = max(files, key=operator.itemgetter(2))
```
Also, I feel like the below should work but it does not select the maximum properly
```
[max(items, key=operator.itemgetter(2)) for key, items in itertools.groupby(files, key=operator.itemgetter(0, 1))]
``` | 2018/02/28 | [
"https://Stackoverflow.com/questions/49021968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2771315/"
] | Do this:
```
SELECT * FROM yourTable
WHERE DATE(punch_in_utc_time)=current_date;
```
For testing:
```
SELECT DATE("2018-02-28 09:32:00")=current_date;
```
See [DEMO on SQL Fiddle](http://sqlfiddle.com/#!9/9eecb/17666). | Should be able to do that using Date function, TRUNC timestamp to date then compare with the date field.
```
SELECT DATE("2018-02-28 09:32:00") = "2018-02-28";
```
The above dml will return 1 since the date part is equal. |
49,021,968 | I have a list of filenames in a directory and I'd like to keep only the latest versions. The list looks like:
`['file1-v1.csv', 'file1-v2.csv', 'file2-v1.txt', ...]`.
I'd like to only keep the newest csv files as per the version (part after `-` in the filename) and the txt files.
The output would be `['file1-v2.csv', 'file2-v1.txt', ...]`
I have a solution that requires the use of sets, but I'm looking for an easy, Pythonic way to do this, potentially using `itertools` and `groupby`.
**Update: Solution so far**
I've been able to do some preliminary work to get a list like
```
lst = [('file1', 'csv', 'v1','<some data>'), ('file2', 'csv', 'v2','<some data>'), ...]
```
I'd like to group by the elements at index `0` and `1` but keep only the tuple with the maximum value at index `2`.
It may be something like the below:
```
files = list(item for key, group in itertools.groupby(files, lambda x: x[0:2]) for item in group)
# Maximum over 3rd index element in each tuple does not work
files = max(files, key=operator.itemgetter(2))
```
Also, I feel like the below should work but it does not select the maximum properly
```
[max(items, key=operator.itemgetter(2)) for key, items in itertools.groupby(files, key=operator.itemgetter(0, 1))]
``` | 2018/02/28 | [
"https://Stackoverflow.com/questions/49021968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2771315/"
] | Should be able to do that using Date function, TRUNC timestamp to date then compare with the date field.
```
SELECT DATE("2018-02-28 09:32:00") = "2018-02-28";
```
The above dml will return 1 since the date part is equal. | I am not sure if this is what you wanted:
If you are using an earlier version of SQL Server, use
```
CONVERT
```
(otherwise you might have to use the `DATEPART` function instead), do this:
`SELECT CONVERT(Date, punch_in_utc_time)`, which will give you a date-only value.
So in your comparison:
```
SELECT 'TRUE' WHERE Convert(Date, @utc_datetime_x) = '2018-02-28'
```
The above will give you a true result.
NOTE: there is no `==` in SQL; an IF statement uses a single `=`, and BEGIN/END act like `()`:
```
IF Convert(Date, @utc_datetime_x) = '2018-02-28'
BEGIN
Select 'True'
END
ELSE
BEGIN
Select 'False'
END
``` |
49,021,968 | I have a list of filenames in a directory and I'd like to keep only the latest versions. The list looks like:
`['file1-v1.csv', 'file1-v2.csv', 'file2-v1.txt', ...]`.
I'd like to only keep the newest csv files as per the version (part after `-` in the filename) and the txt files.
The output would be `['file1-v2.csv', 'file2-v1.txt', ...]`
I have a solution that requires the use of sets, but I'm looking for an easy, Pythonic way to do this, potentially using `itertools` and `groupby`.
**Update: Solution so far**
I've been able to do some preliminary work to get a list like
```
lst = [('file1', 'csv', 'v1','<some data>'), ('file2', 'csv', 'v2','<some data>'), ...]
```
I'd like to group by the elements at index `0` and `1` but keep only the tuple with the maximum value at index `2`.
It may be something like the below:
```
files = list(item for key, group in itertools.groupby(files, lambda x: x[0:2]) for item in group)
# Maximum over 3rd index element in each tuple does not work
files = max(files, key=operator.itemgetter(2))
```
Also, I feel like the below should work but it does not select the maximum properly
```
[max(items, key=operator.itemgetter(2)) for key, items in itertools.groupby(files, key=operator.itemgetter(0, 1))]
``` | 2018/02/28 | [
"https://Stackoverflow.com/questions/49021968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2771315/"
] | Do this:
```
SELECT * FROM yourTable
WHERE DATE(punch_in_utc_time)=current_date;
```
For testing:
```
SELECT DATE("2018-02-28 09:32:00")=current_date;
```
See [DEMO on SQL Fiddle](http://sqlfiddle.com/#!9/9eecb/17666). | I am not sure if this is what you wanted:
If you are using an earlier version of SQL Server, use
```
CONVERT
```
(otherwise you might have to use the `DATEPART` function instead), do this:
`SELECT CONVERT(Date, punch_in_utc_time)`, which will give you a date-only value.
So in your comparison:
```
SELECT 'TRUE' WHERE Convert(Date, @utc_datetime_x) = '2018-02-28'
```
The above will give you a true result.
NOTE: there is no `==` in SQL; an IF statement uses a single `=`, and BEGIN/END act like `()`:
```
IF Convert(Date, @utc_datetime_x) = '2018-02-28'
BEGIN
Select 'True'
END
ELSE
BEGIN
Select 'False'
END
``` |
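Returning to the Python question itself: the update's last attempt fails because `itertools.groupby` only groups *consecutive* items, so the list has to be sorted by the same key first. A hedged sketch (it assumes single-character version strings, so plain string comparison orders them correctly):
```
import itertools
import operator

lst = [('file1', 'csv', 'v1', '<data>'), ('file1', 'csv', 'v2', '<data>'),
       ('file2', 'txt', 'v1', '<data>')]

key = operator.itemgetter(0, 1)
lst.sort(key=key)  # groupby only merges adjacent items with equal keys
latest = [max(group, key=operator.itemgetter(2))
          for _, group in itertools.groupby(lst, key=key)]
print(latest)
# [('file1', 'csv', 'v2', '<data>'), ('file2', 'txt', 'v1', '<data>')]
```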
56,227,936 | I am getting the following error when I try to see if my object is valid using `full_clean()`.
```sh
django.core.exceptions.ValidationError: {'schedule_date': ["'%(value)s' value has an invalid format. It must be in YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ] format."]}
```
I have tried all the formats recommended here, but none of them work for me:
[Whats the correct format for django dateTime?](https://stackoverflow.com/questions/12255157/whats-the-correct-format-for-django-datetime)
I don't get an error when I create the object with `Object.objects.create(...)`.
Here is my `models.py`:
```py
from datetime import datetime, timedelta, date
from django.db import models
from django import forms
from django.utils import timezone
from django.core.exceptions import ValidationError
from userstweetsmanager.constants import LANGUAGE_CHOICES
def password_validator(value):
if len(value) < 6:
raise ValidationError(
str('is too short (minimum 6 characters)'),
code='invalid'
)
class User(models.Model):
name = models.TextField(max_length=30, unique=True)
password = models.TextField(validators=[password_validator])
twitter_api_key = models.TextField(null=True, blank=True)
twitter_api_secret_key = models.TextField(null=True, blank=True)
twitter_access_token = models.TextField(null=True, blank=True)
twitter_access_token_secret = models.TextField(null=True, blank=True)
expire_date = models.DateField(default=date.today() + timedelta(days=14))
language = models.TextField(choices=LANGUAGE_CHOICES, default='1')
def schedule_date_validator(value):
if value < timezone.now() or timezone.now() + timedelta(days=14) < value:
raise ValidationError(
str('is not within the range (within 14 days from today)'),
code='invalid'
)
def content_validator(value):
if len(value) > 140:
raise ValidationError(
str('is too long (maximum 140 characters)'),
code='invalid'
)
class Tweet(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
content = models.TextField(validators=[content_validator])
schedule_date = models.DateTimeField(validators=[schedule_date_validator])
```
Here is my test code where the error occurs:
```py
def test_valid_tweet(self):
owner = User.objects.get(name="Hello")
tweet = Tweet(user=owner, content="Hello world!", schedule_date=timezone.now())
try:
tweet.full_clean() # error occurs here
pass
except ValidationError as e:
raise AssertionError("ValidationError should not have been thrown")
tweet.save()
self.assertEqual(len(Tweet.objects.all()), 1)
```
When I tested creating the object in the `python manage.py shell`, it did not cause an error, but calling `full_clean()` did. | 2019/05/20 | [
"https://Stackoverflow.com/questions/56227936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9633315/"
] | The issue was with the logic of the code: I specified a time range that doesn't allow even a microsecond of difference between `schedule_date` and `timezone.now()`.
After taking a look at the source code of `DateTimeField`, it seems that if my validator raises with code="invalid", Django just shows the generic format error message above, which made it confusing to see where my code was wrong. | I solved this problem with this:
```
datetime.strptime(request.POST['date'], "%Y-%m-%dT%H:%M")
``` |
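Building on the accepted explanation, one hedged way to repair the validator is to allow a small grace window so that a value assigned a moment before `full_clean()` runs is not rejected; the five-second tolerance below is an assumption to adjust:
```
from datetime import timedelta
from django.core.exceptions import ValidationError
from django.utils import timezone

def schedule_date_validator(value):
    now = timezone.now()
    grace = timedelta(seconds=5)  # tolerate the gap between assignment and validation
    if value < now - grace or value > now + timedelta(days=14):
        raise ValidationError(
            'is not within the range (within 14 days from today)',
            code='invalid'
        )
```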
64,799,578 | I am working on a python script, where I will be passing a directory, and I need to get all log-files from it. Currently, I have a small script which watches for any changes to these files and then processes that information.
It's working well, but it's just for a single file with a hardcoded path. How can I pass a directory to it and still watch all the files? My confusion is that since I am working on these files in a while loop that should always stay running, how can I do that for n files inside a directory?
Current code :
```
import time
f = open('/var/log/nginx/access.log', 'r')
while True:
line = ''
while len(line) == 0 or line[-1] != '\n':
tail = f.readline()
if tail == '':
time.sleep(0.1) # avoid busy waiting
continue
line += tail
print(line)
_process_line(line)
```
The question was already tagged as a duplicate, but the requirement is to get changes line by line from all files inside a directory; the other questions cover a single file, which is already working. | 2020/11/12 | [
"https://Stackoverflow.com/questions/64799578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1510701/"
] | You could use a generic / reusable approach based on the two-queries approach.
One SQL query to retrieve the entities' `IDs` and a second query with an `IN` predicate including the `IDs` from the second query.
Implementing a custom Spring Data JPA Executor:
```
@NoRepositoryBean
public interface AsimioJpaSpecificationExecutor<E, ID extends Serializable> extends JpaSpecificationExecutor<E> {
Page<ID> findEntityIds(Pageable pageable);
}
public class AsimioSimpleJpaRepository<E, ID extends Serializable> extends SimpleJpaRepository<E, ID>
implements AsimioJpaSpecificationExecutor<E, ID> {
private final EntityManager entityManager;
private final JpaEntityInformation<E, ID> entityInformation;
public AsimioSimpleJpaRepository(JpaEntityInformation<E, ID> entityInformation, EntityManager entityManager) {
super(entityInformation, entityManager);
this.entityManager = entityManager;
this.entityInformation = entityInformation;
}
@Override
public Page<ID> findEntityIds(Pageable pageable) {
CriteriaBuilder criteriaBuilder = this.entityManager.getCriteriaBuilder();
CriteriaQuery<ID> criteriaQuery = criteriaBuilder.createQuery(this.entityInformation.getIdType());
Root<E> root = criteriaQuery.from(this.getDomainClass());
// Get the entities ID only
criteriaQuery.select((Path<ID>) root.get(this.entityInformation.getIdAttribute()));
// Update Sorting
Sort sort = pageable.isPaged() ? pageable.getSort() : Sort.unsorted();
if (sort.isSorted()) {
criteriaQuery.orderBy(toOrders(sort, root, criteriaBuilder));
}
TypedQuery<ID> typedQuery = this.entityManager.createQuery(criteriaQuery);
// Update Pagination attributes
if (pageable.isPaged()) {
typedQuery.setFirstResult((int) pageable.getOffset());
typedQuery.setMaxResults(pageable.getPageSize());
}
return PageableExecutionUtils.getPage(typedQuery.getResultList(), pageable,
() -> executeCountQuery(this.getCountQuery(null, this.getDomainClass())));
}
protected static long executeCountQuery(TypedQuery<Long> query) {
Assert.notNull(query, "TypedQuery must not be null!");
List<Long> totals = query.getResultList();
long total = 0L;
for (Long element : totals) {
total += element == null ? 0 : element;
}
return total;
}
}
```
You can read more at <https://tech.asimio.net/2021/05/19/Fixing-Hibernate-HHH000104-firstResult-maxResults-warning-using-Spring-Data-JPA.html> | I found a workaround myself. Based upon this:
[How can I avoid the Warning "firstResult/maxResults specified with collection fetch; applying in memory!" when using Hibernate?](https://stackoverflow.com/questions/11431670/how-can-i-avoid-the-warning-firstresult-maxresults-specified-with-collection-fe/46195656#46195656)
**First: Get the Ids by pagination:**
```
@Query(value = "select distinct r.id from Reference r " +
"inner join r.persons " +
"left outer join r.categories " +
"left outer join r.keywords " +
"left outer join r.parentReferences " +
"order by r.id",
countQuery = "select count(distinct r.id) from Reference r " +
"inner join r.persons " +
"left outer join r.categories " +
"left outer join r.keywords " +
"left outer join r.parentReferences " +
"order by r.id")
Page<UUID> findsAllRelevantEntriesIds(Pageable pageable);
```
**Second: Use the Ids to do an `in` query**
```
@Query(value = "select distinct r from Reference r " +
"inner join fetch r.persons " +
"left outer join fetch r.categories " +
"left outer join fetch r.keywords " +
"left outer join fetch r.parentReferences " +
"where r.id in ?1 " +
"order by r.id",
countQuery = "select count(distinct r.id) from Reference r " +
"inner join r.persons " +
"left outer join r.categories " +
"left outer join r.keywords " +
"left outer join r.parentReferences ")
@QueryHints(value = {@QueryHint(name = "hibernate.query.passDistinctThrough", value = "false")},
forCounting = false)
List<Reference> findsAllRelevantEntriesByIds(UUID[] ids);
```
**Note:**
I get a `List<Reference>`, not a `Page`, so you have to build the `Page` on your own like so:
```
private Page<Reference> processResults(Pageable pageable, Page<UUID> result) {
List<Reference> references = referenceRepository.findsAllRelevantEntriesByIds(result.toList().toArray(new UUID[0]));
return new PageImpl<>(references, pageable, references.size());
}
```
This doesn't look nice and needs two statements, but it queries with `limit`, so only the needed records are fetched. |
64,799,578 | I am working on a python script, where I will be passing a directory, and I need to get all log-files from it. Currently, I have a small script which watches for any changes to these files and then processes that information.
It's working good, but it's just for a single file, and hardcoded file value. How can I pass a directory to it, and still watch all the files. My confusion is since I am working on these files in a while loop which should always stay running, how can I do that for n number of files inside a directory?
Current code :
```
import time
f = open('/var/log/nginx/access.log', 'r')
while True:
line = ''
while len(line) == 0 or line[-1] != '\n':
tail = f.readline()
if tail == '':
time.sleep(0.1) # avoid busy waiting
continue
line += tail
print(line)
_process_line(line)
```
Question was already tagged for duplicate, but the requirement is to get changes line by line from all files inside directory. Other questions cover single file, which is already working. | 2020/11/12 | [
"https://Stackoverflow.com/questions/64799578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1510701/"
] | You could use a generic / reusable approach based on the two-queries approach.
One SQL query to retrieve the entities' `IDs` and a second query with an `IN` predicate including the `IDs` from the second query.
Implementing a custom Spring Data JPA Executor:
```
@NoRepositoryBean
public interface AsimioJpaSpecificationExecutor<E, ID extends Serializable> extends JpaSpecificationExecutor<E> {
Page<ID> findEntityIds(Pageable pageable);
}
public class AsimioSimpleJpaRepository<E, ID extends Serializable> extends SimpleJpaRepository<E, ID>
implements AsimioJpaSpecificationExecutor<E, ID> {
private final EntityManager entityManager;
private final JpaEntityInformation<E, ID> entityInformation;
public AsimioSimpleJpaRepository(JpaEntityInformation<E, ID> entityInformation, EntityManager entityManager) {
super(entityInformation, entityManager);
this.entityManager = entityManager;
this.entityInformation = entityInformation;
}
@Override
public Page<ID> findEntityIds(Pageable pageable) {
CriteriaBuilder criteriaBuilder = this.entityManager.getCriteriaBuilder();
CriteriaQuery<ID> criteriaQuery = criteriaBuilder.createQuery(this.entityInformation.getIdType());
Root<E> root = criteriaQuery.from(this.getDomainClass());
// Get the entities ID only
criteriaQuery.select((Path<ID>) root.get(this.entityInformation.getIdAttribute()));
// Update Sorting
Sort sort = pageable.isPaged() ? pageable.getSort() : Sort.unsorted();
if (sort.isSorted()) {
criteriaQuery.orderBy(toOrders(sort, root, criteriaBuilder));
}
TypedQuery<ID> typedQuery = this.entityManager.createQuery(criteriaQuery);
// Update Pagination attributes
if (pageable.isPaged()) {
typedQuery.setFirstResult((int) pageable.getOffset());
typedQuery.setMaxResults(pageable.getPageSize());
}
return PageableExecutionUtils.getPage(typedQuery.getResultList(), pageable,
() -> executeCountQuery(this.getCountQuery(null, this.getDomainClass())));
}
protected static long executeCountQuery(TypedQuery<Long> query) {
Assert.notNull(query, "TypedQuery must not be null!");
List<Long> totals = query.getResultList();
long total = 0L;
for (Long element : totals) {
total += element == null ? 0 : element;
}
return total;
}
}
```
You can read more at <https://tech.asimio.net/2021/05/19/Fixing-Hibernate-HHH000104-firstResult-maxResults-warning-using-Spring-Data-JPA.html> | The approach to fetch ids first and then do the main query works but is not very efficient. I think this is a perfect use case for [Blaze-Persistence](https://persistence.blazebit.com/documentation/core/manual/en_US/index.html#anchor-offset-pagination).
Blaze-Persistence is a query builder on top of JPA which supports many of the advanced DBMS features on top of the JPA model. The pagination support it comes with handles all of the issues you might encounter.
It also has a Spring Data integration, so you can use the same code like you do now, you only have to add the dependency and do the setup: <https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-setup>
Blaze-Persistence has many different strategies for pagination which you can configure. The default strategy is to inline the query for ids into the main query. Something like this:
```
select r
from Reference r
inner join r.persons
left join fetch r.categories
left join fetch r.keywords
left join fetch r.parentReferences
where r.id IN (
select r2.id
from Reference r2
inner join r2.persons
order by ...
limit ...
)
order by ...
``` |
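Returning to the original Python question about tailing every log file in a directory, here is a minimal hedged sketch; the directory path, the `*.log` pattern, and `_process_line` are assumptions, and new files are picked up on each polling pass:
```
import glob
import os
import time

def _process_line(path, line):
    print(path, line.rstrip())  # stand-in for the real processing

directory = '/var/log/nginx'   # assumed; could also be taken from sys.argv
files = {}     # path -> open file handle
pending = {}   # path -> partial line carried over between polls

while True:
    # pick up files that appeared since the last pass
    for path in glob.glob(os.path.join(directory, '*.log')):
        if path not in files:
            files[path] = open(path)
    busy = False
    for path, f in files.items():
        chunk = f.readline()
        while chunk:
            busy = True
            buf = pending.pop(path, '') + chunk
            if buf.endswith('\n'):
                _process_line(path, buf)   # complete line
            else:
                pending[path] = buf        # writer is mid-line; wait for the rest
            chunk = f.readline()
    if not busy:
        time.sleep(0.1)  # avoid busy waiting, as in the original snippet
```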
28,814,455 | I am appending to a file via Python based on input that has been entered by the user.
```
with open ("markbook.txt", "a") as g:
g.write(sn+","+sna+","+sg1+","+sg2+","+sg3+","+sg4)
```
`sn`, `sna`, `sg1`, `sg2`, `sg3`, `sg4` have all been entered by the user and when the program is finished a line will be added to the `'markbook.txt'` file in the format of:
```
00,SmithJE,a,b,b,b
01,JonesFJ,e,d,c,d
02,BlairJA,c,c,b,a
03,BirchFA,a,a,b,c
```
The issue is when the program is used again and the file is appended further, the new line is simply put on the end of the previous line. How do I place the appended text below the previous line? | 2015/03/02 | [
"https://Stackoverflow.com/questions/28814455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4624147/"
] | Add a "\n" to the end of the write line.
So:
```
g.write(sn+","+sna+","+sg1+","+sg2+","+sg3+","+sg4+"\n")
``` | You're missing the new line character at the end of your string. Also, though string concatenation is completely fine in this case, you should be aware that Python has alternative options for formatting strings.
```
with open('markbook.txt', 'a') as g:
g.write('{},{},{},{},{},{}\n'
.format(sn, sna, sg1, sg2, sg3, sg4))
``` |
28,814,455 | I am appending to a file via Python based on input that has been entered by the user.
```
with open ("markbook.txt", "a") as g:
g.write(sn+","+sna+","+sg1+","+sg2+","+sg3+","+sg4)
```
`sn`, `sna`, `sg1`, `sg2`, `sg3`, `sg4` have all been entered by the user and when the program is finished a line will be added to the `'markbook.txt'` file in the format of:
```
00,SmithJE,a,b,b,b
01,JonesFJ,e,d,c,d
02,BlairJA,c,c,b,a
03,BirchFA,a,a,b,c
```
The issue is when the program is used again and the file is appended further, the new line is simply put on the end of the previous line. How do I place the appended text below the previous line? | 2015/03/02 | [
"https://Stackoverflow.com/questions/28814455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4624147/"
] | You need to write line separators between your lines:
```
g.write(sn + "," + sna + "," + sg1 + "," + sg2 + "," + sg3 + "," + sg4 + '\n')
```
You appear to be reinventing the CSV wheel, however. Leave separators (line or column) to the [`csv` library](https://docs.python.org/2/library/csv.html) instead:
```
import csv
with open ("markbook.txt", "ab") as g:
writer = csv.writer(g)
writer.writerow([sn, sna, sg1, sg2, sg3, sg4])
``` | You're missing the new line character at the end of your string. Also, though string concatenation is completely fine in this case, you should be aware that Python has alternative options for formatting strings.
```
with open('markbook.txt', 'a') as g:
g.write('{},{},{},{},{},{}\n'
.format(sn, sna, sg1, sg2, sg3, sg4))
``` |
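As a brief usage note for the `csv` approach above, reading the marks back is symmetric; a hedged sketch (binary mode matches the Python 2 style used in that answer; on Python 3 use `open('markbook.txt', newline='')`):
```
import csv

with open('markbook.txt', 'rb') as g:
    for sn, sna, sg1, sg2, sg3, sg4 in csv.reader(g):
        print(sn, sna, sg1, sg2, sg3, sg4)
```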
37,518,997 | My question is related to this earlier question - [Python subprocess usage](https://stackoverflow.com/questions/17242828/python-subprocess-and-running-a-bash-script-with-multiple-arguments)
I am trying to run this command using python
**nccopy -k 4 "<http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]>" foo.nc**
When I run the above command I should be able to see a file called foo.nc on my disk or a network error stating unable to access that URL or remote URL not found.
Currently the ESRL NOAA server is down - so when I run the above command I get
syntax error, unexpected $end, expecting SCAN_ATTR or SCAN_DATASET or SCAN_ERROR
context: ^
NetCDF: Access failure
Location: file nccopy.c; line 1348
I should get the same error when I run the python script
This is the code I have and I am unable to figure out exactly how to proceed further -
I tried splitting up "-k 4" into two arguments and removing the quotes, and I still get this error: `nccopy : invalid format : 4`
Results of `print(sys.argv)` in `data.py`:
['data.py', '-k', '4', '<http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[480:603][20:34][26:40]>', 'foo.nc']
```
import numpy as np
import subprocess
import sys
url = '"http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]"'
outputFile = 'foo.nc'
arg1 = "-k 4"
arg3 = url
arg4 = outputFile
print (input)
subprocess.check_call(["nccopy",arg1,arg3,arg4])
``` | 2016/05/30 | [
"https://Stackoverflow.com/questions/37518997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4033876/"
] | There are two dilemmas here.
One is that `subprocess` passes `"-k 4"` through as one single argument instead of the two separate ones `nccopy` expects.
The other is that system calls still go through normal shell rules, meaning that parameters and commands will be parsed for [metacharacters](http://www.tutorialspoint.com/unix/unix-quoting-mechanisms.htm), a.k.a. special characters. In this case you're wrapping `[` and `]`.
Therefore you need to separate each parameter and its value into separate objects in the parameter list, for instance `-k 4` should be `['-k', '4']`, and you need to wrap parameters/values in `'...'` instead of `"..."`.
Try this; `shlex.split()` does the grunt work for you, and I swapped the encapsulation characters around the URL:
```
import numpy as np
import subprocess
import sys
import shlex
url = "'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]'"
outputFile = 'foo.nc'
command_list = shlex.split('nccopy -k 4 ' + url + ' ' + outputFile)
print(command_list)
subprocess.check_call(command_list)
``` | Instead of arg1 = "-k 4", use two arguments instead.
```
import subprocess
url = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]'
outputFile = 'foo.nc'
arg1 = "-k"
arg2 = "4"
arg3 = url
arg4 = outputFile
subprocess.check_call(["nccopy", arg1, arg2, arg3, arg4])
```
See also here [Python subprocess arguments](https://stackoverflow.com/questions/11679936/python-subprocess-arguments) |
37,518,997 | My question is related to this earlier question - [Python subprocess usage](https://stackoverflow.com/questions/17242828/python-subprocess-and-running-a-bash-script-with-multiple-arguments)
I am trying to run this command using python
**nccopy -k 4 "<http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]>" foo.nc**
When I run the above command I should be able to see a file called foo.nc on my disk or a network error stating unable to access that URL or remote URL not found.
Currently the ESRL NOAA server is down - so when I run the above command I get
syntax error, unexpected $end, expecting SCAN\_ATTR or SCAN\_DATASET or SCAN\_ERROR
context: ^
NetCDF: Access failure
Location: file nccopy.c; line 1348
I should get the same error when I run the python script
This is the code I have and I am unable to figure out exactly how to proceed further -
I tried splitting up "-k 4" into two arguments and removing the quotes and I still get this error nccopy : invalid format : 4
Results of print(sys.argv) data.py
['data.py', '-k', '4', '<http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[480:603][20:34][26:40]>', 'foo.nc']
```
import numpy as np
import subprocess
import sys
url = '"http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]"'
outputFile = 'foo.nc'
arg1 = "-k 4"
arg3 = url
arg4 = outputFile
print (input)
subprocess.check_call(["nccopy",arg1,arg3,arg4])
``` | 2016/05/30 | [
"https://Stackoverflow.com/questions/37518997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4033876/"
] | Instead of arg1 = "-k 4", use two arguments instead.
```
import subprocess
url = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]'
outputFile = 'foo.nc'
arg1 = "-k"
arg2 = "4"
arg3 = url
arg4 = outputFile
subprocess.check_call(["nccopy", arg1, arg2, arg3, arg4])
```
See also here [Python subprocess arguments](https://stackoverflow.com/questions/11679936/python-subprocess-arguments) | If you have a working shell command that runs a single program with multiple arguments and you want to parameterized it e.g., to use a variable filename instead of the hardcoded value then you could use `shlex.split()` to create a list of command-line arguments that you could pass to `subprocess` module and replace the desired argument with a variable e.g.:
```
>>> shell_command = "python -c 'import sys; print(sys.argv)' 1 't w o'"
>>> import shlex
>>> shlex.split(shell_command)
['python', '-c', 'import sys; print(sys.argv)', '1', 't w o']
```
To run the command using the same Python interpreter as the parent script, `sys.executable` could be used and we can pass a `variable` instead of `'1'`:
```
#!/usr/bin/env python
import random
import sys
import subprocess
variable = random.choice('ab')
subprocess.check_call([sys.executable, '-c', 'import sys; print(sys.argv)',
variable, 't w o'])
```
Note:
* one command-line argument per list item
* no `shlex.split()` in the final code
* there are no quotes inside `'t w o'` i.e., `'t w o'` is used instead of `'"t w o"'` or `"'t w o'"`
`subprocess` module does not run the shell by default and therefore you don't need to escape shell meta-characters such as a space inside the command-line arguments. And in reverse, if your command uses some shell functionality (e.g., file patterns) then either reimplement the corresponding features in Python (e.g., using `glob` module) or use `shell=True` and pass the command as a string as is. You might need `pipes.quote()`, to escape variable arguments in this case. [Wildcard not working in subprocess call using shlex](https://stackoverflow.com/q/7156892/4279) |
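If `shell=True` really were needed, a hedged sketch of escaping the variable parts of the `nccopy` command with `pipes.quote()` (renamed `shlex.quote()` in Python 3) could look like this:
```
import pipes       # on Python 3: from shlex import quote
import subprocess

url = ('http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/'
       'ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]')
output_file = 'foo.nc'

command = 'nccopy -k 4 %s %s' % (pipes.quote(url), pipes.quote(output_file))
subprocess.check_call(command, shell=True)
```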
37,518,997 | My question is related to this earlier question - [Python subprocess usage](https://stackoverflow.com/questions/17242828/python-subprocess-and-running-a-bash-script-with-multiple-arguments)
I am trying to run this command using python
**nccopy -k 4 "<http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]>" foo.nc**
When I run the above command I should be able to see a file called foo.nc on my disk or a network error stating unable to access that URL or remote URL not found.
Currently the ESRL NOAA server is down - so when I run the above command I get
syntax error, unexpected $end, expecting SCAN\_ATTR or SCAN\_DATASET or SCAN\_ERROR
context: ^
NetCDF: Access failure
Location: file nccopy.c; line 1348
I should get the same error when I run the python script
This is the code I have and I am unable to figure out exactly how to proceed further -
I tried splitting up "-k 4" into two arguments and removing the quotes and I still get this error nccopy : invalid format : 4
Results of print(sys.argv) data.py
['data.py', '-k', '4', '<http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[480:603][20:34][26:40]>', 'foo.nc']
```
import numpy as np
import subprocess
import sys
url = '"http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]"'
outputFile = 'foo.nc'
arg1 = "-k 4"
arg3 = url
arg4 = outputFile
print (input)
subprocess.check_call(["nccopy",arg1,arg3,arg4])
``` | 2016/05/30 | [
"https://Stackoverflow.com/questions/37518997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4033876/"
] | There are two dilemmas here.
One is that `subprocess` passes `"-k 4"` through as one single argument instead of the two separate ones `nccopy` expects.
The other is that system calls still go through normal shell rules, meaning that parameters and commands will be parsed for [metacharacters](http://www.tutorialspoint.com/unix/unix-quoting-mechanisms.htm), a.k.a. special characters. In this case you're wrapping `[` and `]`.
Therefore you need to separate each parameter and its value into separate objects in the parameter list, for instance `-k 4` should be `['-k', '4']`, and you need to wrap parameters/values in `'...'` instead of `"..."`.
Try this; `shlex.split()` does the grunt work for you, and I swapped the encapsulation characters around the URL:
```
import numpy as np
import subprocess
import sys
import shlex
url = "'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]'"
outputFile = 'foo.nc'
command_list = shlex.split('nccopy -k 4 ' + url + ' ' + outputFile)
print(command_list)
subprocess.check_call(command_list)
``` | If you have a working shell command that runs a single program with multiple arguments and you want to parameterized it e.g., to use a variable filename instead of the hardcoded value then you could use `shlex.split()` to create a list of command-line arguments that you could pass to `subprocess` module and replace the desired argument with a variable e.g.:
```
>>> shell_command = "python -c 'import sys; print(sys.argv)' 1 't w o'"
>>> import shlex
>>> shlex.split(shell_command)
['python', '-c', 'import sys; print(sys.argv)', '1', 't w o']
```
To run the command using the same Python interpreter as the parent script, `sys.executable` could be used and we can pass a `variable` instead of `'1'`:
```
#!/usr/bin/env python
import random
import sys
import subprocess
variable = random.choice('ab')
subprocess.check_call([sys.executable, '-c', 'import sys; print(sys.argv)',
variable, 't w o'])
```
Note:
* one command-line argument per list item
* no `shlex.split()` in the final code
* there are no quotes inside `'t w o'` i.e., `'t w o'` is used instead of `'"t w o"'` or `"'t w o'"`
`subprocess` module does not run the shell by default and therefore you don't need to escape shell meta-characters such as a space inside the command-line arguments. And in reverse, if your command uses some shell functionality (e.g., file patterns) then either reimplement the corresponding features in Python (e.g., using `glob` module) or use `shell=True` and pass the command as a string as is. You might need `pipes.quote()`, to escape variable arguments in this case. [Wildcard not working in subprocess call using shlex](https://stackoverflow.com/q/7156892/4279) |
54,677,761 | The following code generates the warning below in the TensorFlow r1.12 Python API:
```
#!/usr/bin/python3
import tensorflow as tf
M = tf.keras.models.Sequential();
M.add(tf.keras.layers.Dense(2));
```
The complete warning text is this:
```
WARNING: Logging before flag parsing goes to stderr.
W0213 15:50:07.239809 140701996246848 deprecation.py:506] From /home/matias/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1253: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
```
I have tried different approaches like initializing and calling a kernel initializer before adding Dense layer and passing it to Dense constructor, but it seems to not change anything. Is this warning inevitable? A 'yes' as an answer would be enough for me. | 2019/02/13 | [
"https://Stackoverflow.com/questions/54677761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7906266/"
] | This is a TensorFlow 2.0 transition warning: calling `VarianceScaling.__init__` with a `dtype` argument is deprecated. It might mean that `Sequential` will need to be initialized more explicitly in the future.
for example:
```py
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
``` | This is just a warning based on the [changes in Tensorflow 2.0](https://www.tensorflow.org/beta/guide/effective_tf2).
If you don't want to see these warnings, upgrade to TensorFlow 2.0. You can install the beta version via pip:
```
pip install tensorflow==2.0.0-beta1
``` |
54,677,761 | The following code generates the warning below in the TensorFlow r1.12 Python API:
```
#!/usr/bin/python3
import tensorflow as tf
M = tf.keras.models.Sequential();
M.add(tf.keras.layers.Dense(2));
```
The complete warning text is this:
```
WARNING: Logging before flag parsing goes to stderr.
W0213 15:50:07.239809 140701996246848 deprecation.py:506] From /home/matias/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1253: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
```
I have tried different approaches like initializing and calling a kernel initializer before adding Dense layer and passing it to Dense constructor, but it seems to not change anything. Is this warning inevitable? A 'yes' as an answer would be enough for me. | 2019/02/13 | [
"https://Stackoverflow.com/questions/54677761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7906266/"
] | The warning may be caused upstream by `abseil-py`, a dependency of `tensorflow`.
See the details [here](https://github.com/tensorflow/tensorflow/issues/26691).
An easy fix may be to update `abseil-py` by running:
`pip install --upgrade absl-py`
(In my case, the conflicting version was `0.7.1` and the problem was fixed in the updated version, `0.8.1`) | This is just a warning based on the [changes in Tensorflow 2.0](https://www.tensorflow.org/beta/guide/effective_tf2).
If you don't want to see these warnings, upgrade to TensorFlow 2.0. You can install the beta version via pip:
```
pip install tensorflow==2.0.0-beta1
``` |
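If upgrading is not an option, the warning can usually be silenced through the ordinary logging machinery instead; a hedged sketch (the environment variable and logger name are the conventional ones, so verify them against your TensorFlow version):
```
import logging
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # hide C++-level INFO/WARNING before the import
import tensorflow as tf

logging.getLogger('tensorflow').setLevel(logging.ERROR)  # hide Python-level warnings

M = tf.keras.models.Sequential()
M.add(tf.keras.layers.Dense(2))
```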
2,641,665 | I've got a Django app that accepts uploads from [jQuery uploadify](http://www.uploadify.com/), a jQuery plugin that uses Flash to upload files and show a progress bar.
Files under about 150k work, but bigger files always fail and almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The Exception I get is below.
```
exceptions.IOError
request data read error
File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
self._load_post_and_files()
File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
self._post, self._files = self.parse_file_upload(self.META, self.environ[\'wsgi.input\'])
File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
return parser.parse()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
for chunk in field_stream:
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
output = self._producer.next()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
for bytes in stream:
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
output = self._producer.next()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
data = self.flo.read(self.chunk_size)
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
return self._file.read(num_bytes)
```
When running locally on the Django development server, big files work.
I've tried setting my `FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",)` in case it was the memory upload handler, but it made no difference.
Does anyone know how to fix this? | 2010/04/14 | [
"https://Stackoverflow.com/questions/2641665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246265/"
] | Html.DropDownList() accepts a SelectList as a parameter which has a SelectedValue property. Specify the selected item when you create the SelectList and pass the SelectList to the Html.DropDownList(). | Here's an example that has 7 drop downs on the page, each with the same 5 options. Each drop down can have a different option selected.
In my view, I have the following code inside my form:
```
<%= Html.DropDownListFor(m => m.ValueForList1, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList2, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList3, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList4, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList5, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList6, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList7, Model.AllItems)%>
```
Then I have a viewmodel like this:
```
public class HomePageViewModel
{
public List<SelectListItem> AllItems { get; set; }
public string ValueForList1 { get; set; }
public string ValueForList2 { get; set; }
public string ValueForList3 { get; set; }
public string ValueForList4 { get; set; }
public string ValueForList5 { get; set; }
public string ValueForList6 { get; set; }
public string ValueForList7 { get; set; }
public HomePageViewModel()
{
AllItems = new List<SelectListItem>
{
new SelectListItem {Text = "First", Value = "First"},
new SelectListItem {Text = "Second", Value = "Second"},
new SelectListItem {Text = "Third", Value = "Third"},
new SelectListItem {Text = "Fourth", Value = "Fourth"},
new SelectListItem {Text = "Fifth", Value = "Fifth"},
};
}
}
```
Now in your controller method, declared like this:
```
public ActionResult Submit(HomePageViewModel viewModel)
```
The value for viewModel.ValueForList1 will be set to the selected value.
Of course, I'd suggest using some kind of enum or ids from a database as your value. |
2,641,665 | I've got a Django app that accepts uploads from [jQuery uploadify](http://www.uploadify.com/), a jQuery plugin that uses Flash to upload files and show a progress bar.
Files under about 150k work, but bigger files always fail and almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The Exception I get is below.
```
exceptions.IOError
request data read error
File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
self._load_post_and_files()
File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
self._post, self._files = self.parse_file_upload(self.META, self.environ[\'wsgi.input\'])
File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
return parser.parse()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
for chunk in field_stream:
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
output = self._producer.next()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
for bytes in stream:
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
output = self._producer.next()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
data = self.flo.read(self.chunk_size)
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
return self._file.read(num_bytes)
```
When running locally on the Django development server, big files work.
I've tried setting my `FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",)` in case it was the memory upload handler, but it made no difference.
Does anyone know how to fix this? | 2010/04/14 | [
"https://Stackoverflow.com/questions/2641665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246265/"
] | Html.DropDownList() accepts a SelectList as a parameter which has a SelectedValue property. Specify the selected item when you create the SelectList and pass the SelectList to the Html.DropDownList(). | class 'Item' with properties 'Id' and 'Name'
ViewModel class has a Property 'SelectedItemId'
items = IEnumerable<Item>
```
<%=Html.DropDownList("selectname", items.Select(i => new SelectListItem{ Text = i.Name, Value = i.Id.ToString(), Selected = Model.SelectedItemId.HasValue ? i.Id == Model.SelectedItemId.Value : false })) %>
``` |
2,641,665 | I've got a Django app that accepts uploads from [jQuery uploadify](http://www.uploadify.com/), a jQuery plugin that uses Flash to upload files and show a progress bar.
Files under about 150k work, but bigger files always fail and almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The Exception I get is below.
```
exceptions.IOError
request data read error
File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
self._load_post_and_files()
File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
self._post, self._files = self.parse_file_upload(self.META, self.environ[\'wsgi.input\'])
File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
return parser.parse()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
for chunk in field_stream:
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
output = self._producer.next()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
for bytes in stream:
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
output = self._producer.next()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
data = self.flo.read(self.chunk_size)
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
return self._file.read(num_bytes)
```
When running locally on the Django development server, big files work.
I've tried setting my `FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",)` in case it was the memory upload handler, but it made no difference.
Does anyone know how to fix this? | 2010/04/14 | [
"https://Stackoverflow.com/questions/2641665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246265/"
] | Given a class 'Item' with properties 'Id' and 'Name', a ViewModel class with a property 'SelectedItemId', and items = IEnumerable<Item>:
```
<%=Html.DropDownList("selectname", items.Select(i => new SelectListItem{ Text = i.Name, Value = i.Id.ToString(), Selected = Model.SelectedItemId.HasValue ? i.Id == Model.SelectedItemId.Value : false })) %>
``` | Here's an example that has 7 drop downs on the page, each with the same 5 options. Each drop down can have a different option selected.
In my view, I have the following code inside my form:
```
<%= Html.DropDownListFor(m => m.ValueForList1, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList2, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList3, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList4, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList5, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList6, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList7, Model.AllItems)%>
```
Then I have a viewmodel like this:
```
public class HomePageViewModel
{
public List<SelectListItem> AllItems { get; set; }
public string ValueForList1 { get; set; }
public string ValueForList2 { get; set; }
public string ValueForList3 { get; set; }
public string ValueForList4 { get; set; }
public string ValueForList5 { get; set; }
public string ValueForList6 { get; set; }
public string ValueForList7 { get; set; }
public HomePageViewModel()
{
AllItems = new List<SelectListItem>
{
new SelectListItem {Text = "First", Value = "First"},
new SelectListItem {Text = "Second", Value = "Second"},
new SelectListItem {Text = "Third", Value = "Third"},
new SelectListItem {Text = "Fourth", Value = "Fourth"},
new SelectListItem {Text = "Fifth", Value = "Fifth"},
};
}
}
```
Now in your controller method, declared like this:
```
public ActionResult Submit(HomePageViewModel viewModel)
```
The value for viewModel.ValueForList1 will be set to the selected value.
Of course, I'd suggest using some kind of enum or ids from a database as your value. |
73,662,597 | I have set up Glue interactive sessions locally by following <https://docs.aws.amazon.com/glue/latest/dg/interactive-sessions.html>
However, I am not able to add any additional packages like Hudi to the interactive session.
There are a few magic commands available, but I'm not sure which one is appropriate or how to use it:
```
%additional_python_modules
%extra_jars
%extra_py_files
``` | 2022/09/09 | [
"https://Stackoverflow.com/questions/73662597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19958017/"
] | It is a bit hard to understand what problem you are actually facing, as this is very basic SQL.
Use `EXISTS`:
```
select *
from a
where type = 'F'
and exists (select null from b where b.id = a.id and dt >= date '2022-01-01');
```
Or `IN`:
```
select *
from a
where type = 'F'
and id in (select id from b where dt >= date '2022-01-01');
```
Or, as the IDs are unique in both tables, join:
```
select a.*
from a
join b on b.id = a.id
where a.type = 'F'
and b.dt >= date '2022-01-01';
```
My favorite here is the `IN` clause, because you want to select data from table A where conditions are met. So no join needed, just a where clause, and `IN` is easier to read than `EXISTS`. | ```
SELECT *
FROM A
WHERE type='F'
AND id IN (
SELECT id
FROM B
WHERE date >= '2022-01-01' -- '2022' imo should be enough, need to check
);
```
I don't think joining is necessary. |
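Back to the Glue question itself: each of the session magics listed there takes a comma-separated value on the same line and typically has to run before the session starts. A hedged sketch of a first notebook cell (the bucket path, jar name, and module versions are placeholders):
```
%additional_python_modules pandas==1.3.5,pyarrow
%extra_jars s3://my-bucket/jars/hudi-spark3-bundle_2.12-0.12.0.jar
```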
48,497,092 | I implemented multiple linear regression from scratch, but I can't find the slope and intercept; gradient descent gives me NaN values.
Here is my code and I also give ipython notebook file.
<https://drive.google.com/file/d/1NMUNL28czJsmoxfgeCMu3KLQUiBGiX1F/view?usp=sharing>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
x = np.array([[ 1, 2104, 3],
[ 1, 1600, 3],
[ 1, 2400, 3],
[ 1, 1416, 2],
[ 1, 3000, 4],
[ 1, 1985, 4]])
y = np.array([399900, 329900, 369000, 232000, 539900, 299900])
def gradient_runner(x, y, altha, b, theta1, theta2):
initial_m1 = 0
initial_m2 = 0
initial_b = 0
N = len(x)
for i in range(0, len(y)):
x0 = x[i, 0]
x1 = x[i, 1]
x2 = x[i, 2]
yi = y[i]
h_theta = (theta1 * x1 + theta2 * x2 + b)
initial_b += -(1/N) * x0 * (yi - h_theta)
initial_m1 += -(1/N) * x1 * (yi - h_theta)
initial_m2 += -(1/N) * x2 * (yi - h_theta)
new_b = b - (altha * initial_b)
new_m1 = theta1 - (altha * initial_m1)
new_m2 = theta2 - (altha * initial_m2)
return new_b, new_m1, new_m2
def fit(x, y, alpha, iteration, b, m1, m2):
for i in range(0, iteration):
b, m1, m2 = gradient_runner(x, y, alpha, b, m1, m2)
return b, m1, m2
fit(x,y, 0.001, 1500, 0,0,0)
``` | 2018/01/29 | [
"https://Stackoverflow.com/questions/48497092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5107898/"
] | This is not a programming issue, but an issue of your function. [Numpy can use different data types](https://docs.scipy.org/doc/numpy-1.10.1/user/basics.types.html). In your case it uses float64. You can check the largest number, you can represent with this data format:
```
>>>sys.float_info
>>>sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308,
min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15,
mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
```
Unfortunately, your iteration is not convergent for `b, m1, m2`, at least not with the provided data set. In iteration 83 the values become too large to be represented as a float and are displayed as `inf` and `-inf` for infinity. When this is fed into the next iterative step, Python returns `NaN` for "not a number".
Though there are ways in Python to overcome the limitations of float representation in terms of precision, that is not a strategy you need to explore here. The problem is that your fit is not convergent. Whether this is due to the function itself, your implementation of it, or the chosen initial guesses, I can't decide. A common reason for non-convergent fit behaviour is also that the fit function doesn't suit the data set. | Try scaling your x:
```py
def scale(x):
    # standardize each column: subtract the mean, divide by the sample standard deviation
    for j in range(x.shape[1]):
        mean_x = 0
        for i in range(len(x)):
            mean_x += x[i, j]
        mean_x = mean_x / len(x)
        sum_of_sq = 0
        for i in range(len(x)):
            sum_of_sq += (x[i, j] - mean_x) ** 2
        stdev = (sum_of_sq / (x.shape[0] - 1)) ** 0.5  # take the square root; without it this is the variance
        for i in range(len(x)):
            x[i, j] = (x[i, j] - mean_x) / stdev
    return x
```
or you can use a predefined standard scaler. |
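A hedged end-to-end sketch using the corrected `scale` helper above; note that the first column of ones must be left alone, since standardizing a constant column would divide by zero, and the learning rate below is a guess to tune:
```
x_scaled = x.astype(float)   # work on a float copy
scale(x_scaled[:, 1:])       # standardize the features, keep the bias column of ones
b, m1, m2 = fit(x_scaled, y, 0.01, 1500, 0, 0, 0)
print(b, m1, m2)
```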
70,964,456 | I had an issue like this on my Nano:
```
profiles = [ SERIAL_PORT_PROFILE ],
File "/usr/lib/python2.7/site-packages/bluetooth/bluez.py", line 176, in advertise_service
raise BluetoothError (str (e))
bluetooth.btcommon.BluetoothError: (2, 'No such file or directory')
```
I tried enabling compatibility mode in the bluetooth.service file, reloading the daemon, restarting Bluetooth, and then adding a serial port by doing
```
sudo sdptool add SP
```
These steps work fine on my Ubuntu 20.04 laptop, but on JetPack 4.5.1 they don't. I also checked that they don't work on the Jetson NX either.
I am really curious how to solve this issue; otherwise, another way to use Bluetooth from Python code is welcome.
Thanks | 2022/02/03 | [
"https://Stackoverflow.com/questions/70964456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18104741/"
] | You might want to have a look at the following article which shows how to do the connection with core Python Socket library
<https://blog.kevindoran.co/bluetooth-programming-with-python-3/>.
The way BlueZ does this now is with the [Profile](https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/profile-api.txt) API.
There is a Python example of using the Profile API at <https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/test/test-profile>
`hciattach`, `hciconfig`, `hcitool`, `hcidump`, `rfcomm`, `sdptool`, `ciptool`, and `gatttool` were [deprecated by the BlueZ](https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/?id=b1eb2c4cd057624312e0412f6c4be000f7fc3617) project in 2017. If you are following a tutorial that uses them, there is a chance that it might be out of date and that Linux systems will choose not to support them. | The solution was in the path of the bluetooth configuration file (inspired from this <https://developer.nvidia.com/embedded/learn/tutorials/connecting-bluetooth-audio>)
this answer : [bluetooth.btcommon.BluetoothError: (2, 'No such file or directory')](https://stackoverflow.com/questions/36675931/bluetooth-btcommon-bluetootherror-2-no-such-file-or-directory)
is not enough for Jetson devices (JetPack), although I didn't test whether it works without also changing the file mentioned in that link.
There is a `.conf` file that needs to be changed also : `/lib/systemd/system/bluetooth.service.d/nv-bluetooth-service.conf`
modify :
```
ExecStart=/usr/lib/bluetooth/bluetoothd -d --noplugin=audio,a2dp,avrcp
```
to :
```
ExecStart=/usr/lib/bluetooth/bluetoothd -C
```
after that it is necessary to do:
```
sudo systemctl daemon-reload
sudo systemctl restart bluetooth
```
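To confirm the daemon actually picked up the new flags, a quick check (not part of the original steps) is:
```
systemctl status bluetooth | grep bluetoothd
# the CGroup line should now show: /usr/lib/bluetooth/bluetoothd -C
```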
Tested on Jetson Nano and NX with JetPack 4.5.1.
Thanks for the help ! |
24,879,641 | I've been looking everywhere for a step-by-step explanation for how to set up the following on an EC2 instance. For a new user I want things to be clean and correct but all of the 'guides' have different information and are really confusing.
My first thought is that I need to do the following
* Upgrade to latest version of Python2.7(finding the download is easy but installing on linux isn't clear)
* Add Pip
* Add Easy\_Install
* Add Virtualenv
* Change default Python to be 2.7 instead of 2.x
* Install other packages(mechanize, beautifulsoup, etc in virtualenv)
Things that are unclear:
* Do I need yum? Is that there by default?
* Do I need to update .bashrc with anything?
* What is the 'preferred' method of installing additional python packages? How can I make sure I've done it right? is `sudo pip package_name` enough?
* What am I missing?
* when do I use sudo vs not?
* Do I need to add a site-packages directory or is that done by default? Why/why not? | 2014/07/22 | [
"https://Stackoverflow.com/questions/24879641",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3195487/"
] | I assume you may be unfamiliar with EC2, so I suggest going through this [FAQ](https://wiki.debian.org/Amazon/EC2/FAQ) before continuing with deploying an EC2 instance to run your Python2.7 application.
Anyway, now that you are somewhat more familiar with that, here's how I normally deploy a one-off instance through the EC2 web-interface in brief:
1. Log into the EC2 Dashboard with your credentials
2. Select the Launch Instance button
3. Pick a modern Linux distribution (since `sudo` is a \*nix command)
4. Select the specifications needed based on needs/costs.
5. Deploy the instance
6. Once the instance is started, log into the console as per the connect instructions for a standalone SSH client (select the running instance, then select the Connect button).
7. Once logged into the server using ssh you may administer that as a standard headless Linux server system.
My recommendation is rather than spending money (unless you are eligible for the free tier) on running an EC2 instance to learn all this, I suggest downloading VirtualBox or VMWare Player and play and learn with a locally running Linux image on your machine.
Now for your unclear bits: They are not much different than normal environments.
1. `yum` is a package management system built on top of `RPM`, or RedHat Package Manager. If you use other distributions they may have different package managers. For instance, other common server distributions like Debian and Ubuntu they will have `aptitude` or `apt-get`, ArchLinux will have `pacman`.
Also, in general you can rely on the distro's python packages, which you can install using `[sudo] yum install python27` or `[sudo] apt-get install python-2.7`, depending on the Linux distribution that is being used.
2. `.bashrc` controls settings for your running shell, generally it won't do anything for your server processes. So no, you may safely leave that alone if you are following best practices for working with Python (which will follow).
3. Best practice generally is to have localized environments using `virtualenv` and not to install Python packages at the system level (a rough sketch of that workflow follows this list).
4. `sudo` is for tasks that require system level (root) privileges. You generally want to avoid using `sudo` unless necessary (such as installing system level packages).
5. No, `virtualenv` should take care of that for you. Since 1.4.1 it distributes its own version of `pip` and it will be installed from there.
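As a rough illustration of that virtualenv workflow (the paths and package names here are just examples):
```
# create an isolated environment for the project
virtualenv ~/envs/myproject
# activate it for the current shell session
source ~/envs/myproject/bin/activate
# packages now install into the environment - no sudo needed
pip install mechanize beautifulsoup4
```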
So, what you seem to be missing is experience with running Python in a virtualenv. There are [good instructions](http://virtualenv.readthedocs.org/en/latest/) on the package's website that you might want to familiarize yourself with. | A script to build python in case the version you need is not in an available repo:
<https://gist.github.com/AvnerCohen/3e5cbe09bc40231869578ce7cbcbe9cc>
```
#!/bin/bash -e
# Build and install Python 2.7.13 from source, then make it the system default.
NEW_VERSION="2.7.13"
CURRENT_VERSION="$(python -V 2>&1)"

if [[ "$CURRENT_VERSION" == "Python $NEW_VERSION" ]]; then
  echo "Python $NEW_VERSION already installed, aborting."
  exit 1
fi

echo "Starting upgrade from ${CURRENT_VERSION} to ${NEW_VERSION}"

# Download and unpack the source tarball (skipped if already present)
if [ ! -d "python_update" ]; then
  mkdir python_update
  cd python_update
  wget https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
  tar xfz Python-2.7.13.tgz
  cd Python-2.7.13/
else
  cd python_update
  cd Python-2.7.13/
fi

./configure --prefix /usr/local/lib/python2.7.13 --enable-ipv6
make && make install

# Register the new build with the alternatives system and select it
alternatives --install /usr/bin/python python /usr/local/lib/python2.7.13/bin/python 27130
update-alternatives --refresh python
update-alternatives --auto python

# Bootstrap pip for the new interpreter and install virtualenv
curl --silent --show-error --retry 5 https://bootstrap.pypa.io/get-pip.py | sudo python
ln -sf /usr/local/lib/python2.7.13/bin/pip /usr/bin/pip
pip install -U virtualenv
ln -sf /usr/local/lib/python2.7.13/bin/virtualenv /usr/bin/virtualenv

echo "DONE!"
``` |
40,145,127 | I'm trying to construct a URL based on what I get from a initial URL.
Example:
*URL1:*
```
http://some-url/rest/ids?configuration_path=project/Main/10-deploy
```
**Response here is** 123
*URL2:*
```
http://abc-bld/download/{RESPONSE_FROM_URL1_HERE}.latest_successful/artifacts/build-info.props
```
so my final URL will be:
```
http://tke-bld/download/123.latest_successful/artifacts/build-info.props
```
**Response here is** Some.Text.here.123
Then I'd like to grab 'Some.Text.here.123' and store it in a variable.
How can I accomplish this with python?
Any help would be much appreciated. Thanks | 2016/10/20 | [
"https://Stackoverflow.com/questions/40145127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5622743/"
] | First, here is how you can import your variable without modifying extra.py, if you really want to.
You will need the sys module to get a reference to foo in the extra module.
```
import sys
from extra import *
print('1. Foo in globals ? {0}'.format('foo' in globals()))
setfoo()
print('2. Foo in globals ? {0}'.format('foo' in globals()))
# Check if extra has foo in it
print('2. Foo in extra ? {0}'.format(hasattr(sys.modules['extra'], 'foo')))
# Getting foo explicitly from extra module
foo = sys.modules['extra'].foo
print('3. Foo in globals ? {0}'.format('foo' in globals()))
print("Foo={0}".format(foo))
```
Output:
```
1. Foo in globals ? False
2. Foo in globals ? False
2. Foo in extra ? True
3. Foo in globals ? True
Foo=5
```
**Update for later usecase :**
Modify extra.py so that it looks up the importing module and updates that module's global variables:
```
# extra.py
import sys
def use(**kwargs):
    _mod = sys.modules['__main__']
    for k, v in kwargs.items():
        setattr(_mod, k, v)
```
The import in any file then remains the same:
```
#myfile.py
from extra import *
print use(x = 5, y = 8), str(x) + " times " + str(y) + " equals " + str(x*y)
```
Output:
```
None 5 times 8 equals 40
```
`None` appears because the `use` function returns nothing.
Note: it would be better to choose a more Pythonic solution for your use case, unless you are just having a little fun with Python.
For Python scope rules, refer to:
[Short Description of the Scoping Rules?](https://stackoverflow.com/questions/291978/short-description-of-scoping-rules?answertab=active#tab-top) | Modules have namespaces which are variable names bound to objects. When you do `from extra import *`, you take the objects found in `extra`'s namespace and bind them to new variables in the new module. If `setfoo` has never been called, then `extra` doesn't have a variable called `foo` and there is nothing to bind in the new module namespace.
Had `setfoo` been called, then `from extra import *` would have found it. But things can still be funky. Suppose some assignment sets `extra.foo` to `42`. Well, the other module namespace doesn't know about that, so in the other module, `foo` would still be `5` but `extra.foo` would be `42`.
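A minimal illustration of those separate bindings (this assumes `extra.foo` already exists when the import runs):
```
# demo.py (hypothetical)
import extra
from extra import *  # copies extra's current bindings into this namespace

extra.foo = 42    # rebinds the name inside the extra module only
print(foo)        # still 5 - this module's own binding is untouched
print(extra.foo)  # 42
```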
Always keep in mind the difference between an object and the things that may be referencing the object at any given time. Objects have no idea which variables or containers happen to reference them (though they do keep a count of the number of references). If a variable or container is rebound to a different object, it doesn't change the binding of other variables or containers. |
40,145,127 | I'm trying to construct a URL based on what I get from a initial URL.
Example:
*URL1:*
```
http://some-url/rest/ids?configuration_path=project/Main/10-deploy
```
**Response here is** 123
*URL2:*
```
http://abc-bld/download/{RESPONSE_FROM_URL1_HERE}.latest_successful/artifacts/build-info.props
```
so my final URL will be:
```
http://tke-bld/download/123.latest_successful/artifacts/build-info.props
```
**Response here is** Some.Text.here.123
Then I'd like to grab 'Some.Text.here.123' and store it in a variable.
How can I accomplish this with python?
Any help would be much appreciated. Thanks | 2016/10/20 | [
"https://Stackoverflow.com/questions/40145127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5622743/"
] | First, here is how you can import your variable without modifying extra.py, if you really want to.
You will need the sys module to get a reference to foo in the extra module.
```
import sys
from extra import *
print('1. Foo in globals ? {0}'.format('foo' in globals()))
setfoo()
print('2. Foo in globals ? {0}'.format('foo' in globals()))
# Check if extra has foo in it
print('2. Foo in extra ? {0}'.format(hasattr(sys.modules['extra'], 'foo')))
# Getting foo explicitly from extra module
foo = sys.modules['extra'].foo
print('3. Foo in globals ? {0}'.format('foo' in globals()))
print("Foo={0}".format(foo))
```
Output:
```
1. Foo in globals ? False
2. Foo in globals ? False
2. Foo in extra ? True
3. Foo in globals ? True
Foo=5
```
**Update for later usecase :**
Modify extra.py so that it looks up the importing module and updates that module's global variables:
```
# extra.py
import sys
def use(**kwargs):
    _mod = sys.modules['__main__']
    for k, v in kwargs.items():
        setattr(_mod, k, v)
```
The import in any file then remains the same:
```
#myfile.py
from extra import *
print use(x = 5, y = 8), str(x) + " times " + str(y) + " equals " + str(x*y)
```
Output:
```
None 5 times 8 equals 40
```
`None` appears because the `use` function returns nothing.
Note: it would be better to choose a more Pythonic solution for your use case, unless you are just having a little fun with Python.
For Python scope rules, refer to:
[Short Description of the Scoping Rules?](https://stackoverflow.com/questions/291978/short-description-of-scoping-rules?answertab=active#tab-top) | Without knowing more details, one possible solution would be to return `foo` from the `setfoo()` function in extra.py instead of declaring it as a global variable.
Then declare `foo` in main.py and feed in the value from the external function `setfoo()`.
Here is the setup:
```
#extra.py
def setfoo(): # sets "foo" to 5 even if it is unassigned
    #global foo
    foo = 5
    return foo

#main.py
from extra import setfoo
global foo
foo = setfoo()
print foo
```
**Result:**
```
Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> ================================ RESTART ================================
>>>
5
>>>
```
**EDIT - 1**
OK, take 2 at this problem.
I don't endorse it, but if there is a specific need: if you add a variable to the `__builtin__` module, it will be accessible as a global from any other module that includes `__builtin__`.
```
#extra.py
import __builtin__

def setfoo(): # sets "foo" to 5 even if it is unassigned
    global foo
    __builtin__.foo = 5

#main.py
from extra import *
setfoo()
print foo
```
Output:
```
>>>
5
>>>
``` |
25,585,785 | I'm using python 3.3. Consider this function:
```
def foo(action, log=False, *args):
    print(action)
    print(log)
    print(args)
    print()
```
The following call works as expected:
```
foo("A",True,"C","D","E")
A
True
('C', 'D', 'E')
```
But this one doesn't.
```
foo("A",log=True,"C","D","E")
SyntaxError: non-keyword arg after keyword arg
```
Why is this the case?
Does this somehow introduce ambiguity? | 2014/08/30 | [
"https://Stackoverflow.com/questions/25585785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/888862/"
] | Consider the following:
```
def foo(bar="baz", bat=False, *args):
    ...
```
Now if I call
```
foo(bat=True, "bar")
```
Where does "bar" go? Either:
* `bar = "bar", bat = True, args = ()`, or
* `bar = "baz", bat = True, args = ("bar",)`, or even
* `bar = "baz", bat = "bar", args = ()`
and there's no obvious choice (at least between the first two) as to which one it should be. We want `bat = True` to 'consume' the second argument slot, but it's not clear which order the remaining arguments should be consumed in: treating it as if `bat` doesn't exist at all and moving everything to the left, or treating it as if `bat` moved the "cursor" past itself on to the next argument. Or, if we wanted to do something truly strange, we could defend the decision to say that the second argument in the argument tuple *always* goes with the second positional argument, whether or not other keyword arguments were passed.
Regardless, we're left with something pretty confusing, and someone is going to be surprised which one we picked regardless of which one it is. Python aims to be simple and clean, and it wants to avoid making any language design choices that might be unintuitive. [There should be one-- and preferably only one --**obvious** way to do it](http://legacy.python.org/dev/peps/pep-0020/). | The function of keyword arguments is twofold:
1. To provide an interface to functions that does not rely on the order of the parameters.
2. To provide a way to reduce ambiguity when passing parameters to a function.
Providing a mixture of keyword and ordered arguments is only a problem when you provide the keyword arguments **before** the ordered arguments. Why is this?
Two reasons:
1. It is confusing to read. If you're providing ordered parameters, why would you label some of them and not others?
2. The algorithm to process the arguments would be needless and complicated. You can provide keyword args after your 'ordered' arguments. This makes sense because it is clear that everything is **ordered** up until the point that you employ keywords. However; if you employ keywords between ordered arguments, there is no clear way to determine whether you are still ordering your arguments. |
53,965,764 | Hi, I'm learning to code in Python and thought it would be cool to automate a task I usually do for my roommates. I write out a list of names and the date for each month so that everyone knows whose turn it is for dishes.
Here's my code:
```
def dish_day_cycle(month, days):
    print('Dish Cycle For %s:' % month)
    dish_list = ['Jen', 'Zack', 'Hector', 'Arron']
    days = days + 1
    for day in range(1, days):
        for i in dish_list:
            print('%s %s : %s' % (month, day, i))
```
The problem is that it repeats everyone's name for each and every day, obviously not what I want. I need it to print only one name per day. Not this:
```
>>> dish_day_cycle(month, days)
Dish Cycle For December:
December 1 : Jen
December 1 : Zack
December 1 : Hector
December 1 : Arron
December 2 : Jen
December 2 : Zack
December 2 : Hector
December 2 : Arron
December 3 : Jen
December 3 : Zack
December 3 : Hector
December 3 : Arron
December 4 : Jen
December 4 : Zack
December 4 : Hector
December 4 : Arron
December 5 : Jen
December 5 : Zack
December 5 : Hector
December 5 : Arron
```
Please let me know how I could correct this function to work properly. | 2018/12/29 | [
"https://Stackoverflow.com/questions/53965764",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10844873/"
] | You used a nested for loop, therefore for every day each of the names is printed along with that day. Use only the outer loop, and calculate whose turn it is. It should be something like:
```
for day in range(1, days):
    print('%s %s : %s' % (month, day, dish_list[day % len(dish_list)]))
```
assuming you and your roommates split the dishes equally. | You can loop through both lists together, repeating the shorter one with `itertools.cycle`:
```
import itertools

for day, person in zip(range(1, days), itertools.cycle(dish_list)):
    print('{} {} : {}'.format(month, day, person))
```
Update:
`zip` pairs elements from the two iterables--the `range` object of days and `dish_list`--into a sequence of tuple pairs. However, `zip` only yields pairs up to the shortest iterable. `itertools.cycle` circumvents this problem, so `zip` can keep cycling back through `dish_list`. The for loop will now step through the two together, rather than in a nested fashion as in your original code.
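A tiny demonstration of that truncation behaviour:
```
import itertools

names = ['Jen', 'Zack', 'Hector']
print(list(zip(range(1, 6), itertools.cycle(names))))
# [(1, 'Jen'), (2, 'Zack'), (3, 'Hector'), (4, 'Jen'), (5, 'Zack')]
```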
Documentation will probably explain better than I just did: [`zip`](https://docs.python.org/3/library/functions.html#zip), [`itertools.cycle`](https://docs.python.org/3/library/itertools.html#itertools.cycle) |
53,965,764 | Hi, I'm learning to code in Python and thought it would be cool to automate a task I usually do for my roommates. I write out a list of names and the date for each month so that everyone knows whose turn it is for dishes.
Here's my code:
```
def dish_day_cycle(month, days):
    print('Dish Cycle For %s:' % month)
    dish_list = ['Jen', 'Zack', 'Hector', 'Arron']
    days = days + 1
    for day in range(1, days):
        for i in dish_list:
            print('%s %s : %s' % (month, day, i))
```
The problem is that it repeats everyone's name for each and every day, obviously not what I want. I need it to print only one name per day. Not this:
```
>>> dish_day_cycle(month, days)
Dish Cycle For December:
December 1 : Jen
December 1 : Zack
December 1 : Hector
December 1 : Arron
December 2 : Jen
December 2 : Zack
December 2 : Hector
December 2 : Arron
December 3 : Jen
December 3 : Zack
December 3 : Hector
December 3 : Arron
December 4 : Jen
December 4 : Zack
December 4 : Hector
December 4 : Arron
December 5 : Jen
December 5 : Zack
December 5 : Hector
December 5 : Arron
```
Please let me know how I could correct this function to work properly. | 2018/12/29 | [
"https://Stackoverflow.com/questions/53965764",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10844873/"
] | You used a nested for loop, therefore for every day each of the names is printed along with that day. Use only the outer loop, and calculate whose turn it is. It should be something like:
```
for day in range(1, days):
    print('%s %s : %s' % (month, day, dish_list[day % len(dish_list)]))
```
assuming you and your roommates split the dishes equally. | The problem is that you're iterating over the days and then, inside that loop, over the list of names. Imagine running it line by line: an inner for loop has to finish all of its items before the outer loop moves on. So you've effectively said in your function that for each day, every person has to do the dishes, which is why you get all this repetition.
Much easier would be to have only one for loop, and advance both the day and the `dish_list` index inside that loop, as shown below:
```
person = 0 # this will serve as the index of the list
for day in range(days):
    print("%s %s : %s" % (month, day + 1, dish_list[person]))
    # wrap back to the first person after the last one
    if person == 3:
        person = 0
    else:
        person += 1
```
Also, just to point out: there should be an indent after defining your function, otherwise it will throw an error. Hope this helps |
44,861,989 | I have an xlsx file, with columns with various coloring.
I want to read only the white columns of this Excel file in Python using pandas, but I have no clue how to do this.
I am able to read the full excel into a dataframe, but then I miss the information about the coloring of the columns and I don't know which columns to remove and which not. | 2017/07/01 | [
"https://Stackoverflow.com/questions/44861989",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4402942/"
] | **(Disclosure: I'm one of the authors of the library I'm going to suggest)**
With [StyleFrame](https://github.com/DeepSpace2/StyleFrame) (which wraps pandas) you can read an excel file into a dataframe without losing the style data.
Consider the following sheet:
[](https://i.stack.imgur.com/SQ96I.png)
And the following code:
```
from styleframe import StyleFrame, utils
# from StyleFrame import StyleFrame, utils (if using version < 3.X)
sf = StyleFrame.read_excel('test.xlsx', read_style=True)
print(sf)
# b p y
# 0 nan 3 1000.0
# 1 3.0 4 2.0
# 2 4.0 5 42902.72396767148
sf = sf[[col for col in sf.columns
if col.style.fill.fgColor.rgb in ('FFFFFFFF', utils.colors.white)]]
# "white" can be represented as 'FFFFFFFF' or
# '00FFFFFF' (which is what utils.colors.white is set to)
print(sf)
# b
# 0 nan
# 1 3.0
# 2 4.0
``` | This cannot be done in pandas alone. You will need another library to read the xlsx file and determine which columns are white. I'd suggest the `openpyxl` library.
Then your script will follow these steps (a rough sketch follows the list):
1. Open xlsx file
2. Read and filter the data (you can access the cell color) and save the results
3. Create pandas dataframe
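A rough sketch of those steps (the file name and the notion of "white" here are assumptions; unstyled columns often report a fill type of `None` rather than an explicit white RGB value):
```
import pandas as pd
from openpyxl import load_workbook

wb = load_workbook('input.xlsx')  # hypothetical file name
ws = wb.active

def is_white(cell):
    # unstyled cells usually have no fill at all
    fill = cell.fill
    return fill.fill_type is None or fill.fgColor.rgb in ('FFFFFFFF', '00FFFFFF')

# keep a column if its header cell (row 1) is white
keep = [i for i, cell in enumerate(ws[1]) if is_white(cell)]

rows = list(ws.values)
df = pd.DataFrame(rows[1:], columns=rows[0]).iloc[:, keep]
print(df)
```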
Edit: Switched `xlrd` to `openpyxl` as `xlrd` is no longer actively maintained |
51,165,672 | When I execute the code below, is there any way to keep the Python interpreter running the code without error messages popping up?
Since I don't know how to differentiate integers and strings,
when `int(result)` executes and `result` contains letters, it spits out an error message that stops the program.
Is there any way around this?
Here is my code:
```
result = input('Type in your number,type y when finished.\n')
int(result)
if isinstance(result,str):
    print('finished')
``` | 2018/07/04 | [
"https://Stackoverflow.com/questions/51165672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10029884/"
] | Actually, with Python and many other languages, you can differentiate types.
When you execute `int(result)`, the `int` builtin assumes the parameter value is able to be turned into an integer. If not, say the string is `abc123`, it can not turn that string into an integer and will raise an exception.
An easy way around this is to check first with one of the many builtins `isdigit()`, before we evaluate `int(result)`.
```
# We assume result is always a string, and therefore always has the method `.isdigit`
if result.isdigit():
    int(result)
else:
    # Choose what happens if it is not of the correct type. Remove this statement if nothing.
    pass
```
Note that `.isdigit()` will only work on whole numbers: `10.4` will be seen as *not* an integer, whereas `10` will be.
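A couple of quick checks of that behaviour:
```
print("10".isdigit())    # True
print("10.4".isdigit())  # False - the dot is not a digit
print("-5".isdigit())    # False - neither is the minus sign
```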
I recommend this approach over `try` and `except` clauses, however that is a valid solution too. | You can put everything that might throw an error in a try block, and have an except block that keeps the program flowing.
By the way, I think in your code it should be `isinstance(result, int)`, not `isinstance(result, str)`.
In your case,
```
result = input('Type in your number,type y when finished.\n')
try:
    result = int(result)
except:
    pass
if isinstance(result,int):
    print('finished')
``` |
51,165,672 | When I execute the code below, is there any way to keep the Python interpreter running the code without error messages popping up?
Since I don't know how to differentiate integers and strings,
when `int(result)` executes and `result` contains letters, it spits out an error message that stops the program.
Is there any way around this?
Here is my code:
```
result = input('Type in your number,type y when finished.\n')
int(result)
if isinstance(result,str):
    print('finished')
``` | 2018/07/04 | [
"https://Stackoverflow.com/questions/51165672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10029884/"
] | Let's look at your code:
```
int(result)
```
All that will do is raise an exception if `result` cannot be converted to an `int`. It does **not** change `result`. Why not? Because in python string (and int) objects cannot be changed, they are *immutable*. So:
```
if isinstance(result,str):
    print('finished')
```
this test is pointless, because `result` will *always* be a `str` because you have not changed it - that's the type returned by `input()`.
The way to deal with error messages is to fix or handle them. There are two general approaches, "look before you leap" and "exception handling". In "look before you leap" you would check to see if `result` can be turned into an `int` by using a string test like `str.isdigit()`. In python the usual way is to use exception handling, for example:
```
result = input('Type in your number,type y when finished.\n')
try:
    # convert result to an int - not sure if this is what you want
    result = int(result)
except ValueError:
    print("result is not an int")
if isinstance(result, int):
    print("result is an int")
```
You can see I specifically tested for `ValueError`. If you don't have this and just have `except` then it would trap *any* error, which could mask other issues. | Actually, with Python and many other languages, you can differentiate types.
When you execute `int(result)`, the `int` builtin assumes the parameter value is able to be turned into an integer. If not, say the string is `abc123`, it can not turn that string into an integer and will raise an exception.
An easy way around this is to check first with one of the many builtins `isdigit()`, before we evaluate `int(result)`.
```
# We assume result is always a string, and therefore always has the method `.isdigit`
if result.isdigit():
    int(result)
else:
    # Choose what happens if it is not of the correct type. Remove this statement if nothing.
    pass
```
Note that `.isdigit()` will only work on whole numbers: `10.4` will be seen as *not* an integer, whereas `10` will be.
I recommend this approach over `try` and `except` clauses, however that is a valid solution too. |
51,165,672 | When I execute the code below, is there any way to keep the Python interpreter running the code without error messages popping up?
Since I don't know how to differentiate integers and strings,
when `int(result)` executes and `result` contains letters, it spits out an error message that stops the program.
Is there any way around this?
Here is my code:
```
result = input('Type in your number,type y when finished.\n')
int(result)
if isinstance(result,str):
    print('finished')
``` | 2018/07/04 | [
"https://Stackoverflow.com/questions/51165672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10029884/"
] | Let's look at your code:
```
int(result)
```
All that will do is raise an exception if `result` cannot be converted to an `int`. It does **not** change `result`. Why not? Because in python string (and int) objects cannot be changed, they are *immutable*. So:
```
if isinstance(result,str):
    print('finished')
```
this test is pointless, because `result` will *always* be a `str` because you have not changed it - that's the type returned by `input()`.
The way to deal with error messages is to fix or handle them. There are two general approaches, "look before you leap" and "exception handling". In "look before you leap" you would check to see if `result` can be turned into an `int` by using a string test like `str.isdigit()`. In python the usual way is to use exception handling, for example:
```
result = input('Type in your number,type y when finished.\n')
try:
    # convert result to an int - not sure if this is what you want
    result = int(result)
except ValueError:
    print("result is not an int")
if isinstance(result, int):
    print("result is an int")
```
You can see I specifically tested for `ValueError`. If you don't have this and just have `except` then it would trap *any* error, which could mask other issues. | You can put everything that might throw an error in a try block, and have an except block that keeps the program flowing.
By the way, I think in your code it should be `isinstance(result, int)`, not `isinstance(result, str)`.
In your case,
```
result = input('Type in your number,type y when finished.\n')
try:
    result = int(result)
except:
    pass
if isinstance(result,int):
    print('finished')
``` |
69,216,484 | Hello, I'm trying to sort my microscopy images.
I'm using Python 3.7.
The file names are like this: t0, t1, t2
```
S18_b0s17t0c0x62672-1792y6689-1024.tif
S18_b0s17t1c0x62672-1792y6689-1024.tif
S18_b0s17t2c0x62672-1792y6689-1024.tif
.
.
.
S18_b0s17t145c0x62672-1792y6689-1024
```
I tried `sorted` on the list, but the result was like this:
[](https://i.stack.imgur.com/SNJHw.png)
Can someone give me some tips on how to sort by the sequence number? | 2021/09/17 | [
"https://Stackoverflow.com/questions/69216484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16075554/"
] | **Updated answer for your updated question:**
**The simple answer to your question is that you can use [string.Split](https://learn.microsoft.com/en-us/dotnet/api/system.string.split?view=net-5.0) to separate that string at the commas.** But the fact that you have to do this is indicative of a larger problem with your database schema.
Right now I'm inferring that your table looks something like this:
**ai**
| command | properties |
| --- | --- |
| command1 | property1,property2,property3,property4 |
| command2 | property1,property2 |
You should never put comma delimited values into a database. Try something like this:
**ai**
| command | property |
| --- | --- |
| command1 | property1 |
| command1 | property2 |
| command1 | property3 |
| command1 | property4 |
| command2 | property1 |
| command2 | property2 |
Your query becomes: `SELECT property FROM ai WHERE command = @command`
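For illustration, running that parameterized query from C# could look like this (a sketch; `connectionString` and the literal `"command1"` are placeholders):
```
using System.Collections.Generic;
using System.Data.SqlClient;

var properties = new List<string>();
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT property FROM ai WHERE command = @command", conn))
{
    cmd.Parameters.AddWithValue("@command", "command1");
    conn.Open();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            properties.Add(reader.GetString(0));
}
```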
I would however like to add that even this improved schema is problematic. You don't want to duplicate strings and use them as id's. It's prone to typos and problems when renaming. Instead do something like this:
**command**
| id (int) | name (varchar) |
| --- | --- |
| 1 | command1 |
| 2 | command2 |
**property**
| id (int) | name (varchar) |
| --- | --- |
| 1 | property1 |
| 2 | property2 |
| 3 | property3 |
| 4 | property4 |
**commandproperty**
| commandID (int) | propertyID (int) |
| --- | --- |
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 1 | 4 |
| 2 | 1 |
| 2 | 2 |
Your query roughly becomes: `SELECT command.name as command, property.name as property FROM commandproperty LEFT JOIN command ON command.id = commandID LEFT JOIN property ON property.id = propertyID WHERE commandID = (SELECT TOP 1 id FROM command WHERE name = @command)`
There might be typos in that query. I haven't actually executed it. Also, it would be best practice to turn these tables into a view that looks like my second example.
**My old answer:**
There seems to be something missing in the question.
Is the problem that the array can not be expanded beyond `property4`? If so try using a `List<string>`.
Is the problem that you want to associate column values with column names? In that case try using a [`Dictionary<string,object>`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.dictionary-2?view=net-5.0) (or `Dictionary<string,T>` where `T` is a datatype common to all the columns).
Alternatively, you can try using C#'s built in [`DataTable`](https://learn.microsoft.com/en-us/dotnet/api/system.data.datatable?view=net-5.0). I find them to be a bit verbose to use, but they will probably work for your needs. | I'm editing the answer based on the new information.
I'd still consider using my Dapper wrapper package.
<https://www.nuget.org/packages/Cworth.DapperExtensions/#>
Create a model class that matches the fields returned by your select.
```
public class MyModel
{
    public string Command { get; set; }
    public string Properties { get; set; }
}
```
Use the NuGet package manager to install the package referenced above.
Update your data access class to add a using statement:
`using Cworth.DapperExtensions;`
Update your method
```
public static async Task<string[]> SelectData(string data)
{
    var sqlRepo = new SqlRepo(_connectionString);
    var results = await sqlRepo.GetList<MyModel>("MyStoredProc", new { command = data });
    return results.Select(r => r.Properties).ToArray();
}
```
Note: the above assumes you have created a stored procedure in SQL named "MyStoredProc" that matches your select, with a parameter named "command". |
59,126,742 | I am playing with wxPython and trying to set the position of a frame:
```
import wx
app = wx.App()
p = wx.Point(200, 200)
frame = wx.Frame(None, title = 'test position', pos = p)
frame.Show(True)
print('frame position: ', frame.GetPosition())
app.MainLoop()
```
even though `print('frame position: ', frame.GetPosition())` shows the correct position, the frame is shown in the top left corner of the screen.
Alternatively i tried
```
frame.SetPosition(p)
frame.Move(p)
```
without success.
my environment: ArchLinux 5.3.13, python 3.8.0, wxpython 4.0.7, openbox 3.6.1
On Cinnamon the code works as expected. How can I solve this on Openbox?
edit 07,12,2019:
I could set the position of a dialog in the Openbox config `~/.config/openbox/rc.xml`:
```
<application name="fahrplan.py"
             class="Fahrplan.py"
             groupname="fahrplan.py"
             groupclass="Fahrplan.py"
             title="Fahrplan *"
             type="dialog">
  <position force="no">
    <x>760</x>
    <y>415</y>
  </position>
</application>
```
I got the name, class etc. from obxprop. x and y are calculated to center a dialog of 400 x 250 px on a screen of 1920 x 1080 px.
This static solution is not suitable for me. I want to place dynamically generated popups. | 2019/12/01 | [
"https://Stackoverflow.com/questions/59126742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3455890/"
] | I had the same problem under Windows and played around with the style flags. With the wxICONIZE style set active, the window finally used the positioning information. | The position is provided to the window manager as a "hint". It is totally up to the window manager whether it will actually honor the hint or not. Check the Openbox settings or preferences and see if there is anything relevant that can be changed. |
56,451,482 | Within my main window I have a table of class QTreeView. The second column contains subjects of mails. With a click of a push button I want to search for a specific character, let's say "Y". Now I want the table to jump to the first found subject beginning with the letter "Y".
See the following example.
[](https://i.stack.imgur.com/rUtxL.png)
When you pick any cell in the second column ("subject") and start typing "y" this will work -> the table highlights the first occurrence. -> See the underlined item "Your Phone Bill". It would even scroll to that cell when it would be out of sight.
[](https://i.stack.imgur.com/BVVLB.png)
I want exactly this - but implemented on a push button, see "Search Subj 'Y'", signal "on\_pbSearch\_Y\_clicked()".
Full functional code (so far):
```
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
class App(QWidget):
    MAIL_RANGE = 4
    ID, FROM, SUBJECT, DATE = range(MAIL_RANGE)

    def __init__(self):
        super().__init__()
        self.left = 10
        self.top = 10
        self.width = 640
        self.height = 240
        self.initUI()
        self.dataView.setSelectionMode(QAbstractItemView.ExtendedSelection) # <- enable selection of rows in tree
        self.dataView.setEditTriggers(QAbstractItemView.NoEditTriggers) # <- disable editing items in tree
        for i in range(0, 2):
            self.dataView.resizeColumnToContents(i)
        self.pbSearch_Y = QPushButton(self)
        self.pbSearch_Y.setText("Search Subj 'Y'")
        self.pbSearch_Y.move(500,0)
        self.pbSearch_Y.show()
        # connect handlers
        self.pbSearch_Y.clicked.connect(self.on_pbSearch_Y_clicked)

    def on_pbSearch_Y_clicked(self):
        pass

    def initUI(self):
        self.setGeometry(self.left, self.top, self.width, self.height)
        self.dataGroupBox = QGroupBox("Inbox")
        self.dataView = QTreeView()
        self.dataView.setRootIsDecorated(False)
        self.dataView.setAlternatingRowColors(True)
        dataLayout = QHBoxLayout()
        dataLayout.addWidget(self.dataView)
        self.dataGroupBox.setLayout(dataLayout)
        model = self.createMailModel(self)
        self.dataView.setModel(model)
        self.addMail(model, 1, 'service@github.com', 'Your Github Donation','03/25/2017 02:05 PM')
        self.addMail(model, 2, 'support@github.com', 'Github Projects','02/02/2017 03:05 PM')
        self.addMail(model, 3, 'service@phone.com', 'Your Phone Bill','01/01/2017 04:05 PM')
        self.addMail(model, 4, 'service@abc.com', 'aaaYour Github Donation','03/25/2017 02:05 PM')
        self.addMail(model, 5, 'support@def.com', 'bbbGithub Projects','02/02/2017 03:05 PM')
        self.addMail(model, 6, 'service@xyz.com', 'cccYour Phone Bill','01/01/2017 04:05 PM')
        self.dataView.setColumnHidden(0, True)
        mainLayout = QVBoxLayout()
        mainLayout.addWidget(self.dataGroupBox)
        self.setLayout(mainLayout)
        self.show()

    def createMailModel(self, parent):
        model = QStandardItemModel(0, self.MAIL_RANGE, parent)
        model.setHeaderData(self.ID, Qt.Horizontal, "ID")
        model.setHeaderData(self.FROM, Qt.Horizontal, "From")
        model.setHeaderData(self.SUBJECT, Qt.Horizontal, "Subject")
        model.setHeaderData(self.DATE, Qt.Horizontal, "Date")
        return model

    def addMail(self, model, mailID, mailFrom, subject, date):
        model.insertRow(0)
        model.setData(model.index(0, self.ID), mailID)
        model.setData(model.index(0, self.FROM), mailFrom)
        model.setData(model.index(0, self.SUBJECT), subject)
        model.setData(model.index(0, self.DATE), date)


if __name__ == '__main__':
    app = QApplication(sys.argv)
    ex = App()
    sys.exit(app.exec_())
```
How can I achieve this? | 2019/06/04 | [
"https://Stackoverflow.com/questions/56451482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10598535/"
] | You have to do the following:
* Use the model's [`match()`](https://doc.qt.io/qt-5/qabstractitemmodel.html#match) method to find the QModelIndex for the given text.
* Use the view's [`scrollTo()`](https://doc.qt.io/qt-5/qabstractitemview.html#scrollTo) method to scroll to that QModelIndex.
* Use the [`select()`](https://doc.qt.io/qt-5/qitemselectionmodel.html#select-2) method of the view's [`selectionModel()`](https://doc.qt.io/qt-5/qabstractitemview.html#selectionModel) to select the row.
```py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
from PyQt5 import QtCore, QtGui, QtWidgets
class App(QtWidgets.QWidget):
    MAIL_RANGE = 4
    ID, FROM, SUBJECT, DATE = range(MAIL_RANGE)

    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        self.setGeometry(10, 10, 640, 240)
        self.dataGroupBox = QtWidgets.QGroupBox("Inbox")
        self.dataView = QtWidgets.QTreeView(
            rootIsDecorated=False,
            alternatingRowColors=True,
            selectionMode=QtWidgets.QAbstractItemView.ExtendedSelection,
            editTriggers=QtWidgets.QAbstractItemView.NoEditTriggers,
            selectionBehavior=QtWidgets.QAbstractItemView.SelectRows,
        )
        dataLayout = QtWidgets.QHBoxLayout()
        dataLayout.addWidget(self.dataView)
        self.dataGroupBox.setLayout(dataLayout)
        model = App.createMailModel(self)
        self.dataView.setModel(model)
        for i in range(0, 2):
            self.dataView.resizeColumnToContents(i)
        self.addMail(model, 1, 'service@github.com', 'Your Github Donation','03/25/2017 02:05 PM')
        self.addMail(model, 2, 'support@github.com', 'Github Projects','02/02/2017 03:05 PM')
        self.addMail(model, 3, 'service@phone.com', 'Your Phone Bill','01/01/2017 04:05 PM')
        self.addMail(model, 4, 'service@abc.com', 'aaaYour Github Donation','03/25/2017 02:05 PM')
        self.addMail(model, 5, 'support@def.com', 'bbbGithub Projects','02/02/2017 03:05 PM')
        self.addMail(model, 6, 'service@xyz.com', 'cccYour Phone Bill','01/01/2017 04:05 PM')
        self.dataView.setColumnHidden(0, True)
        self.leSearch = QtWidgets.QLineEdit()
        self.pbSearch = QtWidgets.QPushButton(
            "Search", clicked=self.on_pbSearch_clicked
        )
        hlay = QtWidgets.QHBoxLayout()
        hlay.addWidget(self.leSearch)
        hlay.addWidget(self.pbSearch)
        mainLayout = QtWidgets.QVBoxLayout(self)
        mainLayout.addLayout(hlay)
        mainLayout.addWidget(self.dataGroupBox)

    @staticmethod
    def createMailModel(parent):
        model = QtGui.QStandardItemModel(0, App.MAIL_RANGE, parent)
        for c, text in zip(
            (App.ID, App.FROM, App.SUBJECT, App.DATE),
            ("ID", "From", "Subject", "Date"),
        ):
            model.setHeaderData(c, QtCore.Qt.Horizontal, text)
        return model

    def addMail(self, model, mailID, mailFrom, subject, date):
        model.insertRow(0)
        for c, text in zip(
            (App.ID, App.FROM, App.SUBJECT, App.DATE),
            (mailID, mailFrom, subject, date),
        ):
            model.setData(model.index(0, c), text)

    @QtCore.pyqtSlot()
    def on_pbSearch_clicked(self):
        text = self.leSearch.text()
        self.leSearch.clear()
        if text:
            # find index
            start = self.dataView.model().index(0, 2)
            ixs = self.dataView.model().match(
                start,
                QtCore.Qt.DisplayRole,
                text,
                hits=1,
                flags=QtCore.Qt.MatchStartsWith,
            )
            if ixs:
                ix = ixs[0]
                # scroll to index
                self.dataView.scrollTo(ix)
                # select row
                ix_from = ix.sibling(ix.row(), 0)
                ix_to = ix.sibling(
                    ix.row(), self.dataView.model().columnCount() - 1
                )
                self.dataView.selectionModel().select(
                    QtCore.QItemSelection(ix_from, ix_to),
                    QtCore.QItemSelectionModel.SelectCurrent,
                )
            else:
                self.dataView.clearSelection()


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    ex = App()
    ex.show()
    sys.exit(app.exec_())
``` | I'll be honest, I don't use GUIs with Python, but here is how you could do it by replacing my arbitrary functions with the needed PyQt ones:
```py
mostWantedChar = 'Y'
foundElements = []
for element in dataView.listElements():
    if element[0] == mostWantedChar:
        foundElements.append(element + '@' + element.Line()) # in case you need the line and its content later (just do a split('@'))
        element.Line().Highlight()
        waitClickFromPushButton()
return foundElements
``` |
4,089,843 | I'm looking to implement a SOAP web service in python on top of IIS. Is there a recommended library that would take a given Python class and expose its functions as web methods? It would be great if said library would also auto-generate a WSDL file based on the interface. | 2010/11/03 | [
"https://Stackoverflow.com/questions/4089843",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11208/"
] | There is an article by Doug Hellmann that evaluates various SOAP Tools
* <http://doughellmann.com/2009/09/01/evaluating-tools-for-developing-with-soap-in-python.html>
Other ref:
* <http://wiki.python.org/moin/WebServices>
* <http://pywebsvcs.sourceforge.net/> | Take a look at SOAPpy (<http://pywebsvcs.sourceforge.net/>). It allows you to expose your functions as web methods, but you have to add a line of code (manually) to register your function with the exposed web service. It is fairly easy to do. Also, it doesn't auto-generate WSDL for you.
Here's an example of how to create your web service, and expose a function:
```
server = SOAPpy.SOAPServer(("", 8080))
server.registerFunction(self.hello)
``` |
4,089,843 | I'm looking to implement a SOAP web service in python on top of IIS. Is there a recommended library that would take a given Python class and expose its functions as web methods? It would be great if said library would also auto-generate a WSDL file based on the interface. | 2010/11/03 | [
"https://Stackoverflow.com/questions/4089843",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11208/"
] | You might want to take a look at <https://github.com/stepank/pyws>, it can expose python functions as SOAP methods and provides a WSDL description. I've just released version 1.0; its interoperability was tested on several clients, so it seems to be quite friendly. | Take a look at SOAPpy (<http://pywebsvcs.sourceforge.net/>). It allows you to expose your functions as web methods, but you have to add a line of code (manually) to register your function with the exposed web service. It is fairly easy to do. Also, it doesn't auto-generate WSDL for you.
Here's an example of how to create your web service, and expose a function:
```
server = SOAPpy.SOAPServer(("", 8080))
server.registerFunction(self.hello)
``` |
11,387,575 | The [python sample source code](https://developers.google.com/drive/examples/python#complete_source_code) goes thru the details of authentication/etc. I am looking for a simple upload to the Google Drive folder that has public writable permissions. (Plan to implement authorization at a later point).
I want to replace the code below so that it uploads the file to a Google Drive folder instead.
```
f = open('output.txt', 'w')
for line in allLines:
    f.write(line)
f.close()
```
(If it makes any difference, I plan to run this thru Google App Engine).
Thanks. | 2012/07/08 | [
"https://Stackoverflow.com/questions/11387575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055761/"
] | You can't. All requests to the Drive API need authentication (source: <http://developers.google.com/drive/about_auth>) | As Wooble said, you cannot do this without authentication. You can use this service + file-upload widget to let your website visitors upload files to your Google Drive folder: <https://github.com/cloudwok/file-upload-embed/> |
32,341,972 | I'm creating a small python program that iterates through a folder structure and performs a task on every audio file that it finds.
I need to identify which files are audio and which are 'other' (e.g. jpegs of the album cover) that I want the process to ignore and just move onto the next file.
From searching on StackOverflow/Google/etc the sndhdr module appears at the top of most lists - I can't seem to get the sndhdr.what() method to return anything but 'None' though, no matter how many \*.mp3 files I throw at it. My outline implementation is below, can anyone tell me what I'm doing wrong?
```
def import_folder(folder_path):
    ''' Imports all audio files found in a folder structure
    :param folder_path: The absolute path of the folder
    :return: True/False depending on whether the process was successful
    '''
    # Remove any spaces to ensure the folder is located correctly
    folder_path = folder_path.strip()
    for subdir, dirs, files in os.walk(folder_path):
        for file in files:
            audio_file = os.path.join(subdir, file)
            print sndhdr.what(audio_file)
            # The 'real' method will perform the task here
```
For example:
```
rootdir = '/home/user/FolderFullOfmp3Files'
import_folder(rootdir)
>>> None
>>> None
>>> None
...etc
``` | 2015/09/01 | [
"https://Stackoverflow.com/questions/32341972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1701514/"
] | This is likely happening because you are drawing your screenshot in your `Activity#onCreate()`. At this point, your View has not measured its dimensions, so `View#getDrawingCache()` will return null because width and height of the view will be 0.
You can move your screenshot code away from `onCreate()` or you could use a `ViewTreeObserver.OnGlobalLayoutListener` to listen for when the view is about to be drawn.
Only after `View#getWidth()` returns a non-zero integer can you get your screenshot. | I got the solution from @ugo's suggestion.
Put this in your onCreate function:
```
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_share);
    //
    ...
    //
    myLayout.getViewTreeObserver().addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() {
        @Override
        public void onGlobalLayout() {
            // take a screenshot once the layout pass has finished
            screenShot();
        }
    });
}
``` |
60,410,173 | I have a pip requirements file that includes specific cpu-only versions of torch and torchvision. I can use the following pip command to successfully install my requirements.
```bash
pip install --requirement azure-pipelines-requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html
```
My requirements file looks like this
```none
coverage
dataclasses
joblib
matplotlib
mypy
numpy
pandas
param
pylint
pyro-ppl==1.2.1
pyyaml
scikit-learn
scipy
seaborn
torch==1.4.0+cpu
torchvision==0.5.0+cpu
visdom
```
This works from bash, but how do I invoke pip with the `find-links` option from inside a conda environment yaml file? My current attempt looks like this
```yaml
name: build
dependencies:
  - python=3.6
  - pip
  - pip:
    - --requirement azure-pipelines-requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html
```
But when I invoke
```bash
conda env create --file azure-pipeline-environment.yml
```
I get this error.
>
> Pip subprocess error:
>
> ERROR: Could not find a version that satisfies the requirement torch==1.4.0+cpu (from -r E:\Users\tim\Source\Talia\azure-pipelines-requirements.txt (line 25)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
>
> ERROR: No matching distribution found for torch==1.4.0+cpu (from -r E:\Users\tim\Source\Talia\azure-pipelines-requirements.txt (line 25))
>
>
> CondaEnvException: Pip failed
>
>
>
How do I specify the `find-links` option when invoking pip from a conda environment yaml file? | 2020/02/26 | [
"https://Stackoverflow.com/questions/60410173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/575530/"
] | [This example](https://github.com/conda/conda/blob/54e4a91d0da4d659a67e3097040764d3a2f6aa16/tests/conda_env/support/advanced-pip/environment.yml) shows how to specify options for pip
Specify the global pip option first:
```
name: build
dependencies:
  - python=3.6
  - pip
  - pip:
    - --find-links https://download.pytorch.org/whl/torch_stable.html
    - --requirement azure-pipelines-requirements.txt
``` | Found the answer in the pip documentation [here](https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers). I can add the `find-links` option to my requirements file, so my conda environment yaml file becomes
```yaml
name: build
dependencies:
  - python=3.6
  - pip
  - pip:
    - --requirement azure-pipelines-requirements.txt
```
and my pip requirements file becomes
```none
--find-links https://download.pytorch.org/whl/torch_stable.html
coverage
dataclasses
joblib
matplotlib
mypy
numpy
pandas
param
pylint
pyro-ppl==1.2.1
pyyaml
scikit-learn
scipy
seaborn
torch==1.4.0+cpu
torchvision==0.5.0+cpu
visdom
``` |
45,894,208 | I'm using Spyder to do some small projects with Keras, and every now and then (I haven't pinned down what it is in the code that makes it appear) I get this message:
```
File "~/.local/lib/python3.5/site-packages/google/protobuf/descriptor_pb2.py", line 1771, in <module>
__module__ = 'google.protobuf.descriptor_pb2'
TypeError: A Message class can only inherit from Message
```
Weirdly, this exception is not raised if I execute the program outside of Spyder, using the terminal. I've looked around and I have found no one who has encountered this error while using Keras.
Restarting Spyder makes it go away, but it's frustrating. What could be causing it? | 2017/08/26 | [
"https://Stackoverflow.com/questions/45894208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1718331/"
] | Ok, I found the cause: interrupting the execution before Keras fully loads.
As said before, restarting Spyder (or just the console) solves it. | I had the same problem with Spyder, which happened when it was trying to reload modules that were already loaded. I solved it by disabling the UMR (User Module Reloader) option in "preferences -> python interpreter". |
45,894,208 | I'm using Spyder to do some small projects with Keras, and every now and then (I haven't pinned down what it is in the code that makes it appear) I get this message:
```
File "~/.local/lib/python3.5/site-packages/google/protobuf/descriptor_pb2.py", line 1771, in <module>
__module__ = 'google.protobuf.descriptor_pb2'
TypeError: A Message class can only inherit from Message
```
Weirdly, this exception is not raised if I execute the program outside of Spyder, using the terminal. I've looked around and I have found no one who has encountered this error while using Keras.
Restarting Spyder makes it go away, but it's frustrating. What could be causing it? | 2017/08/26 | [
"https://Stackoverflow.com/questions/45894208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1718331/"
] | Ok, I found the cause: interrupting the execution before Keras fully loads.
As said before, restarting Spyder (or just the console) solves it. | Restarting Spyder works, or run your script using the console only.
Don't forget to use at the top:
```
from google.cloud import bigquery
from google.oauth2 import service_account
from google.auth.transport import requests
``` |
45,894,208 | I'm using Spyder to do some small projects with Keras, and every now and then (I haven't pinned down what it is in the code that makes it appear) I get this message:
```
File "~/.local/lib/python3.5/site-packages/google/protobuf/descriptor_pb2.py", line 1771, in <module>
__module__ = 'google.protobuf.descriptor_pb2'
TypeError: A Message class can only inherit from Message
```
Weirdly, this exception is not raised if I execute the program outside of Spyder, using the terminal. I've looked around and I have found no one who has encountered this error while using Keras.
Restarting Spyder makes it go away, but it's frustrating. What could be causing it? | 2017/08/26 | [
"https://Stackoverflow.com/questions/45894208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1718331/"
] | I had the same problem with Spyder, which happened when it was trying to reload modules that were already loaded. I solved it by disabling the UMR (User Module Reloader) option in "preferences -> python interpreter". | Restarting Spyder works, or run your script using the console only.
Don't forget to use at the top:
```
from google.cloud import bigquery
from google.oauth2 import service_account
from google.auth.transport import requests
``` |
65,266,224 | I'm new to python so please kindly help, I don't know much.
I'm working on a project which asks for a command; if the command is equal to "help", then it will say how to use the program. I can't seem to do this: every time I try to use the if statement, it still prints the help section whether the command exists or not.
**example: someone enters a command that doesn't exist on the script, it still prints the help section.**
```
print("welcome, to use this, please input the options below")
print ("help | exit")
option = input("what option would you like to use? ")
if help:
    print("this is a test, there will be an actual help section soon.")
else:
    print("no such command")
``` | 2020/12/12 | [
"https://Stackoverflow.com/questions/65266224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14759499/"
] | If you are using gnu-efi, use `uefi_call_wrapper()` to call UEFI functions.
```c
RT->GetTime(time, NULL); // Program hangs
uefi_call_wrapper(RT->GetTime, 2, time, NULL); // Okay
```
The reason is the different calling convention between UEFI (which uses Microsoft x64 calling convention) and Linux (which uses System V amd64 ABI). By default, gcc will generate the code in Linux format, so we need to explicitly tell it to generate it in UEFI format.
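For reference, the same effect can be achieved at the compiler level by giving the call the Microsoft ABI explicitly (a sketch; GCC supports this on x86-64 via the `ms_abi` attribute, which is what the `EFIAPI` macro typically expands to):
```c
/* Declare a function pointer type that uses the Microsoft x64 calling convention. */
typedef EFI_STATUS (__attribute__((ms_abi)) *GET_TIME)(EFI_TIME *Time,
                                                       EFI_TIME_CAPABILITIES *Capabilities);

GET_TIME get_time = (GET_TIME) RT->GetTime;
get_time(time, NULL); /* now emitted with the UEFI calling convention */
```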
You can see the difference by performing an `objdump`. | I think you missed initializing RT.
```
RT = SystemTable->RuntimeServices;
```
Your code is very similar to one of the examples (the one at section 4.7.1) of the Unified Extensible Firmware Interface Specification 2.6. I doubt you haven't read it, but just in case:
<https://www.uefi.org/sites/default/files/resources/UEFI%20Spec%202_6.pdf> |
25,310,746 | I have a large set of images. I want to change their background to a specific color, let's say green. All of the images have a transparent background. Is there a way to perform this action using Python-Fu scripting in GIMP, or some other tool available to do this specific task in an automated fashion? | 2014/08/14 | [
"https://Stackoverflow.com/questions/25310746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/811502/"
] | The fact is that when you query a model (via a QuerySet method, or indirectly via a ForeignKey) **you get non-polymorphic instances** - in contrast to SQLAlchemy, where you get polymorphic instances.
This is because the fetched data corresponds only to the data you're accessing (and its ancestors, since they are known beforehand). By default, Django does not do any kind of `select_related` to get the children, so you're stuck with the base (i.e. current) class model of the foreign key or query set.
This means:
```
Vehicle.objects.get(pk=1).__class__ == Vehicle
```
will be always True, and:
```
Surprise.objects.get(pk=1).items.all()[0].__class__ == Vehicle
```
will be always True as well.
(**assume** for these examples that vehicle with pk=1 exists, surprise with pk=1 exists, and has at least one item)
There's no clean solution for this EXCEPT by knowing your child classes. As you said: accessing attributes like .car or .truck (assuming classes Car and Truck exist) is the way. **However**, if you hit the wrong child class (e.g. you access `vehicle.car` when `vehicle` should actually be a `Truck` instance) you will get an `ObjectDoesNotExist` error. **Disclaimer**: I don't know what would happen if you have two child classes with the same name in different modules.
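In code, that per-child probing could look like this (a sketch; `Car` and `Truck` stand in for whatever subclasses actually exist):
```
from django.core.exceptions import ObjectDoesNotExist

def concrete(vehicle):
    # probe each known child accessor; Django raises DoesNotExist on a miss
    for attr in ('car', 'truck'):
        try:
            return getattr(vehicle, attr)
        except ObjectDoesNotExist:
            pass
    return vehicle  # it is a plain Vehicle
```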
If you want **polymorphic** behavior, which can spare you from testing every possible subclass, an application exists (I haven't actually used it): <https://django-polymorphic.readthedocs.org/en/latest/> | According to the Django documentation:
`If you have a Place that is also a Restaurant, you can get from the Place object to the Restaurant object by using the lower-case version of the model name:`
```
p = Place.objects.get(id=12)
p.restaurant
```
Further to that:
>
> **However, if p in the above example was not a Restaurant (it had been created directly as a Place object or was the parent of some other class), referring to p.restaurant would raise a Restaurant.DoesNotExist exception.**
>
>
>
So you answered the question on your own: you need to check the car attr, because that is what points to the model you are looking for; if there is no car attr, then the object was not created by the Car class. |
25,310,746 | I've a large set of images. I want to change their background to a specific color, let's say green. All of the images have a transparent background. Is there a way to perform this action using python-fu scripting in GIMP, or some other tool available to do this specific task in an automated fashion? | 2014/08/14 | [
"https://Stackoverflow.com/questions/25310746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/811502/"
] | The fact is that when you query a model (via a QuerySet method, or indirectly via a ForeignKey) **you get non-polymorphic instances** - in contrast to SQLAlchemy, where you get polymorphic instances.
This is because the fetched data corresponds only to the data you're accessing (and its ancestors, since they are known beforehand). By default, Django does not do any kind of `select_related` to get the children, so you're stuck with the base (i.e. current) model class of the foreign key or query set.
This means:
```
Vehicle.objects.get(pk=1).__class__ == Vehicle
```
will be always True, and:
```
Surprise.objects.get(pk=1).items.all()[0].__class__ == Vehicle
```
will be always True as well.
(**assume** for these examples that vehicle with pk=1 exists, surprise with pk=1 exists, and has at least one item)
There's no clean solution for this EXCEPT by knowing your child classes. As you said: accessing attributes like .car or .truck (assuming classes Car and Truck exist) is the way. **However**, if you hit the wrong child class (e.g. you access `vehicle.car` when `vehicle` should actually be a `Truck` instance) you will get an `ObjectDoesNotExist` error. **Disclaimer**: I don't know what would happen if you have two child classes with the same name in different modules.
If you want **polymorphic** behavior, which can spare you from testing every possible subclass, an application exists (I haven't actually used it): <https://django-polymorphic.readthedocs.org/en/latest/> | As another workaround, I wrote this function that can be used for the same purpose without needing `django-polymorphic`:
```
def is_model_instance(object, model):
'''
    `object` is expected to be an instance of a models.Model subclass.
    `model` should be a string containing the name of a models.Model subclass.
    Return True if `object` has a reference to a `model` instance, False otherwise.
'''
model_name = model.lower()
if model_name in dir(object):
        try:
            # Plain attribute access raises DoesNotExist when the related
            # child row does not exist (more idiomatic than exec()).
            getattr(object, model_name)
            return True
        except object.DoesNotExist:
return False
    else:
return False
```
Then I can easily do something like
```
>>> is_model_instance(Surprise.objects.get(pk=1).items.all()[0], 'Car')
True # this is indeed a Car
``` |
5,627,954 | A simple program for reading a CSV file inside a ZIP archive:
```py
import csv, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
for row in csv.DictReader(items_file):
pass
```
works in Python 2.7:
```none
$ python2.7 test_zip_file_py3k.py ~/data.zip
$
```
but not in Python 3.2:
```none
$ python3.2 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 8, in <module>
for row in csv.DictReader(items_file):
File "/somedir/python3.2/csv.py", line 109, in __next__
self.fieldnames
File "/somedir/python3.2/csv.py", line 96, in fieldnames
self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file
in text mode?)
```
The `csv` module in Python 3 wants to see a text file, but `zipfile.ZipFile.open` returns a `zipfile.ZipExtFile` that is always treated as binary data.
How does one make this work in Python 3? | 2011/04/11 | [
"https://Stackoverflow.com/questions/5627954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638434/"
] | You can wrap it in an [io.TextIOWrapper](http://docs.python.org/library/io.html#io.TextIOWrapper).
```
items_file = io.TextIOWrapper(items_file, encoding='your-encoding', newline='')
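# e.g., in the question's context (encoding assumed UTF-8), a sketch:
#   items_file = io.TextIOWrapper(zip_file.open('items.csv'), encoding='utf-8', newline='')
#   for row in csv.DictReader(items_file):
#       ...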
```
Should work. | [Lennart's answer](https://stackoverflow.com/questions/5627954/py3k-how-do-you-read-a-file-inside-a-zip-file-as-text-not-bytes/5631786#5631786) is on the right track (Thanks, Lennart, I voted up your answer) and it **almost** works:
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(items_file, encoding='iso-8859-1', newline='')
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 7, in <module>
items_file = io.TextIOWrapper(items_file,
encoding='iso-8859-1',
newline='')
AttributeError: readable
```
The problem appears to be that [io.TextIOWrapper](http://docs.python.org/library/io.html#io.TextIOWrapper)'s first required parameter is a **buffer**, not a file object.
This appears to work:
```
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
```
This seems a little convoluted, and it is annoying to have to read a whole (perhaps huge) zip file into memory. Any better way?
Here it is in action:
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Processing row 0
Processing row 1
Processing row 2
...
Processing row 250
``` |
5,627,954 | A simple program for reading a CSV file inside a ZIP archive:
```py
import csv, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
for row in csv.DictReader(items_file):
pass
```
works in Python 2.7:
```none
$ python2.7 test_zip_file_py3k.py ~/data.zip
$
```
but not in Python 3.2:
```none
$ python3.2 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 8, in <module>
for row in csv.DictReader(items_file):
File "/somedir/python3.2/csv.py", line 109, in __next__
self.fieldnames
File "/somedir/python3.2/csv.py", line 96, in fieldnames
self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file
in text mode?)
```
The `csv` module in Python 3 wants to see a text file, but `zipfile.ZipFile.open` returns a `zipfile.ZipExtFile` that is always treated as binary data.
How does one make this work in Python 3? | 2011/04/11 | [
"https://Stackoverflow.com/questions/5627954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638434/"
] | I just noticed that [Lennart's answer](https://stackoverflow.com/questions/5627954/py3k-how-do-you-read-a-file-inside-a-zip-file-as-text-not-bytes/5631786#5631786) didn't work with Python **3.1**, but it **does** work with [Python **3.2**](http://www.python.org/download/releases/3.2/). They've enhanced [`zipfile.ZipExtFile`](http://docs.python.org/py3k/library/zipfile.html#zipfile.ZipFile.open) in Python 3.2 (see [release notes](http://docs.python.org/dev/whatsnew/3.2.html#gzip-and-zipfile)). These changes appear to make `zipfile.ZipExtFile` work nicely with [`io.TextIOWrapper`](http://docs.python.org/py3k/library/io.html#io.TextIOWrapper).
Incidentally, it works in Python 3.1, if you uncomment the hacky lines below to monkey-patch `zipfile.ZipExtFile`, not that I would recommend this sort of hackery. I include it only to illustrate the essence of what was done in Python 3.2 to make things work nicely.
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
# items_file.readable = lambda: True
# items_file.writable = lambda: False
# items_file.seekable = lambda: False
# items_file.read1 = items_file.read
items_file = io.TextIOWrapper(items_file)
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0} -- row = {1}'.format(idx, row))
```
If I had to support py3k < 3.2, then I would go with the solution in [my other answer](https://stackoverflow.com/questions/5627954/py3k-how-do-you-read-a-file-inside-a-zip-file-as-text-not-bytes/5639578#5639578).
**Update for 3.6+**
Starting w/3.6, support for `mode='U'` was [removed](https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile.open):
>
> *Changed in version 3.6:* Removed support of `mode='U'`. Use io.TextIOWrapper for reading compressed text files in universal newlines mode.
>
>
>
Starting w/3.8, a [Path object](https://docs.python.org/3/library/zipfile.html#zipfile.Path) was added which gives us an `open()` method that we can call like the built-in `open()` function (passing `newline=''` in the case of our CSV) and we get back an io.TextIOWrapper object the csv readers accept. See Yuri's answer, [here](https://stackoverflow.com/a/70583472). | You can wrap it in a [io.TextIOWrapper](http://docs.python.org/library/io.html#io.TextIOWrapper).
```
items_file = io.TextIOWrapper(items_file, encoding='your-encoding', newline='')
```
Should work. |
5,627,954 | A simple program for reading a CSV file inside a ZIP archive:
```py
import csv, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
for row in csv.DictReader(items_file):
pass
```
works in Python 2.7:
```none
$ python2.7 test_zip_file_py3k.py ~/data.zip
$
```
but not in Python 3.2:
```none
$ python3.2 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 8, in <module>
for row in csv.DictReader(items_file):
File "/somedir/python3.2/csv.py", line 109, in __next__
self.fieldnames
File "/somedir/python3.2/csv.py", line 96, in fieldnames
self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file
in text mode?)
```
The `csv` module in Python 3 wants to see a text file, but `zipfile.ZipFile.open` returns a `zipfile.ZipExtFile` that is always treated as binary data.
How does one make this work in Python 3? | 2011/04/11 | [
"https://Stackoverflow.com/questions/5627954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638434/"
] | You can wrap it in an [io.TextIOWrapper](http://docs.python.org/library/io.html#io.TextIOWrapper).
```
items_file = io.TextIOWrapper(items_file, encoding='your-encoding', newline='')
```
Should work. | And if you just want to read a file into a string:
```
with ZipFile('spam.zip') as myzip:
with myzip.open('eggs.txt') as myfile:
        eggs = myfile.read().decode('UTF-8')
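        # e.g. the decoded text can feed csv directly (a sketch):
        # rows = list(csv.DictReader(eggs.splitlines()))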
``` |
5,627,954 | A simple program for reading a CSV file inside a ZIP archive:
```py
import csv, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
for row in csv.DictReader(items_file):
pass
```
works in Python 2.7:
```none
$ python2.7 test_zip_file_py3k.py ~/data.zip
$
```
but not in Python 3.2:
```none
$ python3.2 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 8, in <module>
for row in csv.DictReader(items_file):
File "/somedir/python3.2/csv.py", line 109, in __next__
self.fieldnames
File "/somedir/python3.2/csv.py", line 96, in fieldnames
self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file
in text mode?)
```
The `csv` module in Python 3 wants to see a text file, but `zipfile.ZipFile.open` returns a `zipfile.ZipExtFile` that is always treated as binary data.
How does one make this work in Python 3? | 2011/04/11 | [
"https://Stackoverflow.com/questions/5627954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638434/"
] | You can wrap it in an [io.TextIOWrapper](http://docs.python.org/library/io.html#io.TextIOWrapper).
```
items_file = io.TextIOWrapper(items_file, encoding='your-encoding', newline='')
```
Should work. | Starting with Python 3.8, the zipfile module has the [Path object](https://docs.python.org/3.8/library/zipfile.html#path-objects), whose open() method returns an io.TextIOWrapper object that can be passed to the csv readers:
```py
import csv, sys, zipfile
# Give a string path to the ZIP archive, and
# the archived file to read from
items_zipf = zipfile.Path(sys.argv[1], at='items.csv')
# Then use the open method, like you'd usually
# use the built-in open()
items_f = items_zipf.open(newline='')
# Pass the TextIO-like file to your reader as normal
for row in csv.DictReader(items_f):
print(row)
``` |
5,627,954 | A simple program for reading a CSV file inside a ZIP archive:
```py
import csv, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
for row in csv.DictReader(items_file):
pass
```
works in Python 2.7:
```none
$ python2.7 test_zip_file_py3k.py ~/data.zip
$
```
but not in Python 3.2:
```none
$ python3.2 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 8, in <module>
for row in csv.DictReader(items_file):
File "/somedir/python3.2/csv.py", line 109, in __next__
self.fieldnames
File "/somedir/python3.2/csv.py", line 96, in fieldnames
self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file
in text mode?)
```
The `csv` module in Python 3 wants to see a text file, but `zipfile.ZipFile.open` returns a `zipfile.ZipExtFile` that is always treated as binary data.
How does one make this work in Python 3? | 2011/04/11 | [
"https://Stackoverflow.com/questions/5627954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638434/"
] | I just noticed that [Lennart's answer](https://stackoverflow.com/questions/5627954/py3k-how-do-you-read-a-file-inside-a-zip-file-as-text-not-bytes/5631786#5631786) didn't work with Python **3.1**, but it **does** work with [Python **3.2**](http://www.python.org/download/releases/3.2/). They've enhanced [`zipfile.ZipExtFile`](http://docs.python.org/py3k/library/zipfile.html#zipfile.ZipFile.open) in Python 3.2 (see [release notes](http://docs.python.org/dev/whatsnew/3.2.html#gzip-and-zipfile)). These changes appear to make `zipfile.ZipExtFile` work nicely with [`io.TextIOWrapper`](http://docs.python.org/py3k/library/io.html#io.TextIOWrapper).
Incidentally, it works in Python 3.1, if you uncomment the hacky lines below to monkey-patch `zipfile.ZipExtFile`, not that I would recommend this sort of hackery. I include it only to illustrate the essence of what was done in Python 3.2 to make things work nicely.
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
# items_file.readable = lambda: True
# items_file.writable = lambda: False
# items_file.seekable = lambda: False
# items_file.read1 = items_file.read
items_file = io.TextIOWrapper(items_file)
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0} -- row = {1}'.format(idx, row))
```
If I had to support py3k < 3.2, then I would go with the solution in [my other answer](https://stackoverflow.com/questions/5627954/py3k-how-do-you-read-a-file-inside-a-zip-file-as-text-not-bytes/5639578#5639578).
**Update for 3.6+**
Starting w/3.6, support for `mode='U'` was [removed](https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile.open):
>
> *Changed in version 3.6:* Removed support of `mode='U'`. Use io.TextIOWrapper for reading compressed text files in universal newlines mode.
>
>
>
Starting w/3.8, a [Path object](https://docs.python.org/3/library/zipfile.html#zipfile.Path) was added which gives us an `open()` method that we can call like the built-in `open()` function (passing `newline=''` in the case of our CSV) and we get back an io.TextIOWrapper object the csv readers accept. See Yuri's answer, [here](https://stackoverflow.com/a/70583472). | [Lennart's answer](https://stackoverflow.com/questions/5627954/py3k-how-do-you-read-a-file-inside-a-zip-file-as-text-not-bytes/5631786#5631786) is on the right track (Thanks, Lennart, I voted up your answer) and it **almost** works:
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(items_file, encoding='iso-8859-1', newline='')
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 7, in <module>
items_file = io.TextIOWrapper(items_file,
encoding='iso-8859-1',
newline='')
AttributeError: readable
```
The problem appears to be that [io.TextIOWrapper](http://docs.python.org/library/io.html#io.TextIOWrapper)'s first required parameter is a **buffer**, not a file object.
This appears to work:
```
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
```
This seems a little convoluted, and it is annoying to have to read a whole (perhaps huge) zip file into memory. Any better way?
Here it is in action:
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Processing row 0
Processing row 1
Processing row 2
...
Processing row 250
``` |
5,627,954 | A simple program for reading a CSV file inside a ZIP archive:
```py
import csv, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
for row in csv.DictReader(items_file):
pass
```
works in Python 2.7:
```none
$ python2.7 test_zip_file_py3k.py ~/data.zip
$
```
but not in Python 3.2:
```none
$ python3.2 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 8, in <module>
for row in csv.DictReader(items_file):
File "/somedir/python3.2/csv.py", line 109, in __next__
self.fieldnames
File "/somedir/python3.2/csv.py", line 96, in fieldnames
self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file
in text mode?)
```
The `csv` module in Python 3 wants to see a text file, but `zipfile.ZipFile.open` returns a `zipfile.ZipExtFile` that is always treated as binary data.
How does one make this work in Python 3? | 2011/04/11 | [
"https://Stackoverflow.com/questions/5627954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638434/"
] | And if you just want to read a file into a string:
```
with ZipFile('spam.zip') as myzip:
with myzip.open('eggs.txt') as myfile:
        eggs = myfile.read().decode('UTF-8')
``` | [Lennart's answer](https://stackoverflow.com/questions/5627954/py3k-how-do-you-read-a-file-inside-a-zip-file-as-text-not-bytes/5631786#5631786) is on the right track (Thanks, Lennart, I voted up your answer) and it **almost** works:
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(items_file, encoding='iso-8859-1', newline='')
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 7, in <module>
items_file = io.TextIOWrapper(items_file,
encoding='iso-8859-1',
newline='')
AttributeError: readable
```
The problem appears to be that [io.TextIOWrapper](http://docs.python.org/library/io.html#io.TextIOWrapper)'s first required parameter is a **buffer**, not a file object.
This appears to work:
```
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
```
This seems a little convoluted, and it is annoying to have to read a whole (perhaps huge) zip file into memory. Any better way?
Here it is in action:
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Processing row 0
Processing row 1
Processing row 2
...
Processing row 250
``` |