qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
8,437,964 | I was wondering whether we can print row-wise in Python.
Basically I have a loop which might run a million times, and I am printing out some strategic counts in that loop, so it would be really cool if I could print row-wise
```
print x
# currently gives
# 3
# 4
#.. and so on
```
and I am looking for something like
```
print x
# 3 4
``` | 2011/12/08 | [
"https://Stackoverflow.com/questions/8437964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | If you add a comma at the end, it should work for you.
```
>>> def test():
... print 1,
... print 2,
...
>>> test()
1 2
``` | In Python 3, use this for your print; a space as `end` keeps the values separated:
`print(x, end=" ")` |
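A minimal sketch tying the two answers above together, assuming Python 3 (where the `end` parameter replaces the Python 2 trailing comma); passing a space as `end` keeps the values separated:
```py
# Python 3: print values on one line, separated by spaces
for x in [3, 4]:
    print(x, end=" ")  # end=" " suppresses the newline and adds a space
print()                # emit the final newline once, after the loop
# Output: 3 4
```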
8,437,964 | I was wondering whether we can print row-wise in Python.
Basically I have a loop which might run a million times, and I am printing out some strategic counts in that loop, so it would be really cool if I could print row-wise
```
print x
# currently gives
# 3
# 4
#.. and so on
```
and I am looking for something like
```
print x
# 3 4
``` | 2011/12/08 | [
"https://Stackoverflow.com/questions/8437964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | In Python 3, pass `end=' '` to keep values on one line (a bare trailing comma, as in `print(x,)`, has no effect in Python 3).
```
print(x, end=' ')
# 3 4
```
Or in Python 2, add a trailing `,`:
```
print x,
# 3 4
``` | ```
# Python 2 (xrange and the print statement)
my_list = ['keyboard', 'mouse', 'led', 'monitor', 'headphones', 'dvd']
for i in xrange(0, len(my_list), 4):
print '\t'.join(my_list[i:i+4])
``` |
8,437,964 | I was wondering whether we can print row-wise in Python.
Basically I have a loop which might run a million times, and I am printing out some strategic counts in that loop, so it would be really cool if I could print row-wise
```
print x
# currently gives
# 3
# 4
#.. and so on
```
and I am looking for something like
```
print x
# 3 4
``` | 2011/12/08 | [
"https://Stackoverflow.com/questions/8437964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | In Python 3, pass `end=' '` to keep values on one line (a bare trailing comma, as in `print(x,)`, has no effect in Python 3).
```
print(x, end=' ')
# 3 4
```
Or in Python 2, add a trailing `,`:
```
print x,
# 3 4
``` | ```
a = int(input("RangeFinal "))
print("Prime Numbers in the range")
for n in range(2, a):
    for x in range(2, n):
        if n % x == 0:
            break
    else:
        # for/else: the else runs only when no divisor was found, so n is prime
        print(n, end=' ')
```
Answer
```
RangeFinal 19
Prime Numbers in the range
2 3 5 7 11 13 17
``` |
8,437,964 | I was wondering whether we can print row-wise in Python.
Basically I have a loop which might run a million times, and I am printing out some strategic counts in that loop, so it would be really cool if I could print row-wise
```
print x
# currently gives
# 3
# 4
#.. and so on
```
and I am looking for something like
```
print x
# 3 4
``` | 2011/12/08 | [
"https://Stackoverflow.com/questions/8437964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | In Python 3, use this for your print; a space as `end` keeps the values separated:
`print(x, end=" ")` | Python 3:
```
l = [3.14, 'string', ('tuple', 'of', 'items')]
print(', '.join(map(repr, l)))
```
Output:
> ```
> 3.14, 'string', ('tuple', 'of', 'items')
> ```
> |
8,437,964 | I was wondering whether we can print row-wise in Python.
Basically I have a loop which might run a million times, and I am printing out some strategic counts in that loop, so it would be really cool if I could print row-wise
```
print x
# currently gives
# 3
# 4
#.. and so on
```
and I am looking for something like
```
print x
# 3 4
``` | 2011/12/08 | [
"https://Stackoverflow.com/questions/8437964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | ```
# Python 2 (xrange and the print statement)
my_list = ['keyboard', 'mouse', 'led', 'monitor', 'headphones', 'dvd']
for i in xrange(0, len(my_list), 4):
print '\t'.join(my_list[i:i+4])
``` | Python 3:
```
l = [3.14, 'string', ('tuple', 'of', 'items')]
print(', '.join(map(repr, l)))
```
Output:
> ```
> 3.14, 'string', ('tuple', 'of', 'items')
> ```
> |
8,437,964 | I was wondering whether we can print row-wise in Python.
Basically I have a loop which might run a million times, and I am printing out some strategic counts in that loop, so it would be really cool if I could print row-wise
```
print x
# currently gives
# 3
# 4
#.. and so on
```
and I am looking for something like
```
print x
# 3 4
``` | 2011/12/08 | [
"https://Stackoverflow.com/questions/8437964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | If you add a comma at the end, it should work for you.
```
>>> def test():
... print 1,
... print 2,
...
>>> test()
1 2
``` | ```
# Python 2 (xrange and the print statement)
my_list = ['keyboard', 'mouse', 'led', 'monitor', 'headphones', 'dvd']
for i in xrange(0, len(my_list), 4):
print '\t'.join(my_list[i:i+4])
``` |
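The chunking answer above is Python 2 (`xrange` and the `print` statement); a sketch of the Python 3 equivalent:
```py
my_list = ['keyboard', 'mouse', 'led', 'monitor', 'headphones', 'dvd']
# range() replaces xrange(), and print() is a function in Python 3
for i in range(0, len(my_list), 4):
    print('\t'.join(my_list[i:i + 4]))
```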
8,437,964 | I was wondering whether we can print row-wise in Python.
Basically I have a loop which might run a million times, and I am printing out some strategic counts in that loop, so it would be really cool if I could print row-wise
```
print x
# currently gives
# 3
# 4
#.. and so on
```
and I am looking for something like
```
print x
# 3 4
``` | 2011/12/08 | [
"https://Stackoverflow.com/questions/8437964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | In Python 2:
```
data = [3, 4]
for x in data:
print x, # notice the comma at the end of the line
```
or in Python 3:
```
for x in data:
print(x, end=' ')
```
prints
```
3 4
``` | You don't need to use a for loop to do that!
--------------------------------------------
```py
mylist = list('abcdefg')
print(*mylist, sep=' ')
# Output:
# a b c d e f g
```
Here I'm using the iterable-unpacking operator `*`. Under the hood, the print function is being called like this: `print('a', 'b', 'c', 'd', 'e', 'f', 'g', sep=' ')`.
Also, if you change the value of the `sep` parameter you can customize the way the values are printed, for example:
```py
print(*mylist, sep='\n')
# Output:
# a
# b
# c
# d
# e
# f
# g
``` |
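Since the question mentions a loop that may run a million times, here is a hedged sketch of a lower-overhead variant using `sys.stdout.write` (the assumption that throughput matters is mine; plain `print` is fine in most cases):
```py
import sys

# Writing straight to stdout avoids some per-call print() overhead;
# everything stays on one line because no newline is written.
write = sys.stdout.write
for x in range(5):
    write("%d " % x)
write("\n")  # terminate the line once, at the end
# Output: 0 1 2 3 4
```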
7,843,497 | I am trying to run an awk script using Python, so I can process some data.
Is there any way to get an awk script to run from a Python class without invoking it as a shell process? The framework where I run these Python scripts does not allow subprocess calls, so I am stuck either figuring out a way to convert my awk script to Python or, if it is possible, running the awk script inside Python.
Any suggestions? My awk script basically reads a text file, isolates the blocks of proteins that contain a specific chemical compound (the output is generated by our framework; I've added an example of what it looks like below), and prints them out to a different file.
```
buildProtein compoundA compoundB
begin fusion
Calculate : (lots of text here on multiple lines)
(more lines)
Final result - H20: value CO2: value Compound: value
Other Compounds X: Value Y: value Z:value
[...another similar block]
```
So, for example, when I build a protein I need to check whether CH3COOH appears among the compounds in the final result line. If it does, I have to take the whole block, starting from the command "buildProtein" until the beginning of the next block, and save it to a file; then I move on and check the next block for the compound I am looking for. If a block does not have it, I skip to the next one, until the end of the file (the file has multiple occurrences of the compound I search for; sometimes the matching blocks are contiguous, while other times they alternate with blocks that lack the compound).
Any help is more than welcome; I've been banging my head on this for weeks now, and after finding this site I decided to ask for some help.
Thanks in advance for your kindness! | 2011/10/20 | [
"https://Stackoverflow.com/questions/7843497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1006198/"
] | If you can't use the *subprocess* module, the best bet is to recode your AWK script in Python. To that end, the *fileinput* module is a great transition tool with an AWK-like feel. | [Python's re module](http://docs.python.org/library/re.html) can help, or, if you can't be bothered with regular expressions and just need to do some quick field separation, you can use the built-in [`str.split()`](http://docs.python.org/library/stdtypes.html#str.split) and [`.find()`](http://docs.python.org/library/stdtypes.html#str.find) functions. |
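A minimal sketch of the `fileinput` approach suggested in the first answer, applied to the block-extraction task in the question; the filenames and the target compound are illustrative assumptions, not names from the framework:
```py
import fileinput

# AWK-like single pass: buffer lines per "buildProtein" block and
# keep a block only if its "Final result" line mentions the compound.
TARGET = "CH3COOH"                                 # hypothetical compound
block, keep = [], False
with open("matching_blocks.txt", "w") as out:      # hypothetical output file
    for line in fileinput.input("proteins.txt"):   # hypothetical input file
        if line.startswith("buildProtein"):
            if keep:
                out.writelines(block)              # flush the previous block
            block, keep = [], False
        block.append(line)
        if line.startswith("Final result") and TARGET in line:
            keep = True
    if keep:                                       # flush the final block
        out.writelines(block)
```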
7,843,497 | I am trying to run an awk script using Python, so I can process some data.
Is there any way to get an awk script to run from a Python class without invoking it as a shell process? The framework where I run these Python scripts does not allow subprocess calls, so I am stuck either figuring out a way to convert my awk script to Python or, if it is possible, running the awk script inside Python.
Any suggestions? My awk script basically reads a text file, isolates the blocks of proteins that contain a specific chemical compound (the output is generated by our framework; I've added an example of what it looks like below), and prints them out to a different file.
```
buildProtein compoundA compoundB
begin fusion
Calculate : (lots of text here on multiple lines)
(more lines)
Final result - H20: value CO2: value Compound: value
Other Compounds X: Value Y: value Z:value
[...another similar block]
```
So, for example, when I build a protein I need to check whether CH3COOH appears among the compounds in the final result line. If it does, I have to take the whole block, starting from the command "buildProtein" until the beginning of the next block, and save it to a file; then I move on and check the next block for the compound I am looking for. If a block does not have it, I skip to the next one, until the end of the file (the file has multiple occurrences of the compound I search for; sometimes the matching blocks are contiguous, while other times they alternate with blocks that lack the compound).
Any help is more than welcome; I've been banging my head on this for weeks now, and after finding this site I decided to ask for some help.
Thanks in advance for your kindness! | 2011/10/20 | [
"https://Stackoverflow.com/questions/7843497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1006198/"
] | If you can't use the *subprocess* module, the best bet is to recode your AWK script in Python. To that end, the *fileinput* module is a great transition tool with an AWK-like feel. | I have barely started learning AWK, so I can't offer any advice on that front. However, here is some Python code that does what you need:
```
class ProteinIterator():
def __init__(self, file):
self.file = open(file, 'r')
self.first_line = self.file.readline()
def __iter__(self):
return self
def __next__(self):
"returns the next protein build"
if not self.first_line: # reached end of file
raise StopIteration
file = self.file
protein_data = [self.first_line]
while True:
line = file.readline()
if line.startswith('buildProtein ') or not line:
self.first_line = line
break
protein_data.append(line)
return Protein(protein_data)
class Protein():
def __init__(self, data):
self._data = data
for line in data:
if line.startswith('buildProtein '):
self.initial_compounds = tuple(line[13:].split())
elif line.startswith('Final result - '):
pieces = line[15:].split()[::2] # every other piece is a name
self.final_compounds = tuple([p[:-1] for p in pieces])
elif line.startswith('Other Compounds '):
pieces = line[16:].split()[::2] # every other piece is a name
self.other_compounds = tuple([p[:-1] for p in pieces])
def __repr__(self):
return ("Protein(%s)"% self._data[0])
@property
def data(self):
return ''.join(self._data)
```
What we have here is an iterator for the buildProtein text file which returns one protein at a time as a `Protein` object. This `Protein` object is smart enough to know its inputs, final results, and other results. You may have to modify some of the code if the actual text in the file is not exactly as represented in the question. Following is a short test of the code with example usage:
```
if __name__ == '__main__':
test_data = """\
buildProtein compoundA compoundB
begin fusion
Calculate : (lots of text here on multiple lines)
(more lines)
Final result - H20: value CO2: value Compound: value
Other Compounds X: Value Y: value Z: value"""
open('testPI.txt', 'w').write(test_data)
for protein in ProteinIterator('testPI.txt'):
print(protein.initial_compounds)
print(protein.final_compounds)
print(protein.other_compounds)
print()
if 'CO2' in protein.final_compounds:
print(protein.data)
```
I didn't bother saving values, but you can add that in if you like. Hopefully this will get you going. |
40,617,324 | So I have an assignment, and for a specific section we are supposed to import a .py file into our program: "You will need to import histogram.py into your program."
Does that simply mean to create a new Python file and just copy and paste whatever is in histogram.py into the file?
This part of my assignment is to create a graphical display with the contents of the .py file (which confuses me too). I was reading the chapters from the textbook, and it states how to create a window, but I haven't seen anything about importing. Sorry if this is a dumb question. | 2016/11/15 | [
"https://Stackoverflow.com/questions/40617324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7045663/"
] | With a small tweak in your Plan model, it is indeed possible to do what you want.
First of all, you'll need to change your Plan `days` field, which is probably an `IntegerField`, to [DurationField](https://docs.djangoproject.com/en/1.10/ref/models/fields/#durationfield).
Now the catch is that we have to use [ExpressionWrapper](https://docs.djangoproject.com/en/1.8/ref/models/expressions/#django.db.models.ExpressionWrapper) to achieve the exact same result inside Postgres as the result you'd achieve in Python if you were to get the plan in a separate query.
Finally, your query should be something like:
```
from django.db.models import F, ExpressionWrapper, DateTimeField
from django.utils import timezone
Post.objects.annotate(target_date=ExpressionWrapper(timezone.now() - F('plan__days'), output_field=DateTimeField())).filter(createdAt__lte=F('target_date'))
``` | In my view, you must first grab the plan object.
```
plan = Plan.objects.get(...)  # use get(): filter() returns a queryset, not a single Plan
```
and then reference the days
```
Post.objects.filter(createdAt__lte=datetime.now() - timedelta(days=plan.days))
``` |
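The question itself is about importing a module; a minimal sketch, assuming histogram.py sits in the same directory as your own script (the function name below is hypothetical, so call whatever the file really defines):
```py
import histogram  # runs histogram.py once and binds its names to the module

# histogram.draw_histogram(data)  # hypothetical: call the functions it defines
```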
40,617,324 | So I have an assignment, and for a specific section we are supposed to import a .py file into our program: "You will need to import histogram.py into your program."
Does that simply mean to create a new Python file and just copy and paste whatever is in histogram.py into the file?
This part of my assignment is to create a graphical display with the contents of the .py file (which confuses me too). I was reading the chapters from the textbook, and it states how to create a window, but I haven't seen anything about importing. Sorry if this is a dumb question. | 2016/11/15 | [
"https://Stackoverflow.com/questions/40617324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7045663/"
] | Assuming Postgres database:
```
table_post = Post._meta.db_table
table_plan = Plan._meta.db_table
old_posts = Post.objects.select_related('plan')\
.extra(where=["%s.created_at <= NOW() - INTERVAL '1 day' * %s.days"
% (table_post, table_plan)])
``` | In my view, you must first grab the plan object.
```
plan = Plan.objects.get(...)  # use get(): filter() returns a queryset, not a single Plan
```
and then reference the days
```
Post.objects.filter(createdAt__lte=datetime.now() - timedelta(days=plan.days))
``` |
40,617,324 | So I have an assignment, and for a specific section we are supposed to import a .py file into our program: "You will need to import histogram.py into your program."
Does that simply mean to create a new Python file and just copy and paste whatever is in histogram.py into the file?
This part of my assignment is to create a graphical display with the contents of the .py file (which confuses me too). I was reading the chapters from the textbook, and it states how to create a window, but I haven't seen anything about importing. Sorry if this is a dumb question. | 2016/11/15 | [
"https://Stackoverflow.com/questions/40617324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7045663/"
] | With a small tweak in your Plan model, it is indeed possible to do what you want.
First of all, you'll need to change your Plan `days` field, which is probably an `IntegerField`, to [DurationField](https://docs.djangoproject.com/en/1.10/ref/models/fields/#durationfield).
Now the catch is that we have to use [ExpressionWrapper](https://docs.djangoproject.com/en/1.8/ref/models/expressions/#django.db.models.ExpressionWrapper) to achieve the exact same result inside Postgres as the result you'd achieve in Python if you were to get the plan in a separate query.
Finally, your query should be something like:
```
from django.db.models import F, ExpressionWrapper, DateTimeField
from django.utils import timezone
Post.objects.annotate(target_date=ExpressionWrapper(timezone.now() - F('plan__days'), output_field=DateTimeField())).filter(createdAt__lte=F('target_date'))
``` | Assuming Postgres database:
```
table_post = Post._meta.db_table
table_plan = Plan._meta.db_table
old_posts = Post.objects.select_related('plan')\
.extra(where=["%s.created_at <= NOW() - INTERVAL '1 day' * %s.days"
% (table_post, table_plan)])
``` |
2,262,482 | I have made my own PHP MVC framework and have also written its documentation; it is about 80% complete. Now, basically, I am looking for a way for other developers to analyze my code, possibly join hands in its further development and improvement, and browse the documentation (HTML files).
I know about Google App Engine, but it is currently and mainly for Python. So where should I upload my PHP code so that it is runnable, and the documentation (HTML files) browsable? | 2010/02/14 | [
"https://Stackoverflow.com/questions/2262482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/139459/"
] | [**Github**](http://github.com/) comes to mind. It's free for Open Source projects, and supports a lot of "social coding" functions.
If you prefer Subversion Version Control, take a look at [**Google Code**](http://code.google.com/).
**HTML Hosting**
Github can even [**host static HTML pages**](http://github.com/blog/272-github-pages):
> GitHub Pages allow you to publish web content to a github.com subdomain named after your username. With Pages, publishing web content becomes as easy as pushing to your GitHub repository.
**Running PHP**
Running PHP files is not possible on either GitHub or Google Code. I don't know of any free, ad-free PHP hosting offers that are worth their salt - probably because of the huge danger of misuse. If it's an option at all, I think the best thing to do is chip in a few dollars/euros and get a small commercial hosting package somewhere. | [GitHub](http://github.com), [SourceForge](http://sourceforge.com) and [Google Code](http://code.google.com) are all great places to make your project public and get others involved.
But these sites will only host your code and documentation, and maybe provide a forum, a mailing list, and a bug tracker. They usually do not offer hosting for a running instance of your app. (It would be costly and difficult to do that: every project has very specific runtime requirements, and most of them are not in PHP, or are not web apps at all.) But you could easily google for "free php web hosting", upload your site there, and then link to it from the project site.
(Btw. google app engine is also for Java!) |
2,262,482 | I have made my own PHP MVC framework and have also written its documentation; it is about 80% complete. Now, basically, I am looking for a way for other developers to analyze my code, possibly join hands in its further development and improvement, and browse the documentation (HTML files).
I know about Google App Engine, but it is currently and mainly for Python. So where should I upload my PHP code so that it is runnable, and the documentation (HTML files) browsable? | 2010/02/14 | [
"https://Stackoverflow.com/questions/2262482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/139459/"
] | [**Github**](http://github.com/) comes to mind. It's free for Open Source projects, and supports a lot of "social coding" functions.
If you prefer Subversion Version Control, take a look at [**Google Code**](http://code.google.com/).
**HTML Hosting**
Github can even [**host static HTML pages**](http://github.com/blog/272-github-pages):
> GitHub Pages allow you to publish web content to a github.com subdomain named after your username. With Pages, publishing web content becomes as easy as pushing to your GitHub repository.
**Running PHP**
Running PHP files is not possible on either GitHub or Google Code. I don't know of any free, ad-free PHP hosting offers that are worth their salt - probably because of the huge danger of misuse. If it's an option at all, I think the best thing to do is chip in a few dollars/euros and get a small commercial hosting package somewhere. | ```
#include<stdio.h>
int main()
{ int selection;
printf("this is a program to build a calculator program \n");
printf("for addition press 1 \n");
printf("for multiplication press 2 \n");
printf("for subtraction enter 3 \n");
printf("for division enter 4 \n"); /* this is cool */
scanf("%d",&selection);
switch(selection)
{
case 1: printf("ADDITION \n"); /* this is for addition */
int a,b,c;
printf("enter a value into a \n");
scanf("%d",&a);
printf("enter a value into b \n");
scanf("%d",&b);
c=a+b; /*logic of the addition phase */
printf("the answer is %d \n",c);
break;
case 2: printf("MULTIPLICATION \n"); /* this is for multiplication */
int e,f,g; /* here we took variables to store values in it */
printf("enter a value for a \n");
scanf("%d",&e);
printf("enter a value for b \n");
scanf("%d",&f);
g=e*f; /* logic of the multiplication phase */
printf("the answer is %d \n",g);
break;
case 3: printf("SUBTRACTION \n");
int h,i,j;
printf("enter a value for a /n");
scanf("%d",&h);
printf("enter a value for b \n");
scanf("%d",&i);
j=h-i; /* this is the logic for subtraction */
printf("the answer is %d ",j);
break ;
case 4: printf("DIVISION \n");
float k,l,m;
printf("enter a value into a \n");
scanf("%f",&i);
printf("enter a value into b \n");
scanf("%f",&m);
k=i/m; /*this is the logic used for division */
printf("the answer is %.2f \n",k);
break;
default: printf("error \n");
break; /* this is used to break the execution of the program */
}
getchar();
return 0; /* this return a value */
}
``` |
2,262,482 | I have made my own PHP MVC framework and have also written its documentation; it is about 80% complete. Now, basically, I am looking for a way for other developers to analyze my code, possibly join hands in its further development and improvement, and browse the documentation (HTML files).
I know about Google App Engine, but it is currently and mainly for Python. So where should I upload my PHP code so that it is runnable, and the documentation (HTML files) browsable? | 2010/02/14 | [
"https://Stackoverflow.com/questions/2262482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/139459/"
] | [GitHub](http://github.com), [SourceForge](http://sourceforge.com) and [Google Code](http://code.google.com) are all great places to make your project public and get others involved.
But these sites will only host your code and documentation, and maybe provide a forum, a mailing list, and a bug tracker. They usually do not offer hosting for a running instance of your app. (It would be costly and difficult to do that: every project has very specific runtime requirements, and most of them are not in PHP, or are not web apps at all.) But you could easily google for "free php web hosting", upload your site there, and then link to it from the project site.
(Btw. google app engine is also for Java!) | ```
#include<stdio.h>
int main()
{ int selection;
printf("this is a program to build a calculator program \n");
printf("for addition press 1 \n");
printf("for multiplication press 2 \n");
printf("for subtraction enter 3 \n");
printf("for division enter 4 \n"); /* this is cool */
scanf("%d",&selection);
switch(selection)
{
case 1: printf("ADDITION \n"); /* this is for addition */
int a,b,c;
printf("enter a value into a \n");
scanf("%d",&a);
printf("enter a value into b \n");
scanf("%d",&b);
c=a+b; /*logic of the addition phase */
printf("the answer is %d \n",c);
break;
case 2: printf("MULTIPLICATION \n"); /* this is for multiplication */
int e,f,g; /* here we took variables to store values in it */
printf("enter a value for a \n");
scanf("%d",&e);
printf("enter a value for b \n");
scanf("%d",&f);
g=e*f; /* logic of the multiplication phase */
printf("the answer is %d \n",g);
break;
case 3: printf("SUBTRACTION \n");
int h,i,j;
printf("enter a value for a /n");
scanf("%d",&h);
printf("enter a value for b \n");
scanf("%d",&i);
j=h-i; /* this is the logic for subtraction */
printf("the answer is %d ",j);
break ;
case 4: printf("DIVISION \n");
float k,l,m;
printf("enter a value into a \n");
scanf("%f",&i);
printf("enter a value into b \n");
scanf("%f",&m);
k=i/m; /*this is the logic used for division */
printf("the answer is %.2f \n",k);
break;
default: printf("error \n");
break; /* this is used to break the execution of the program */
}
getchar();
return 0; /* this return a value */
}
``` |
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes, after @Amber pointed it out I realize I totally PEBKAC'd and the loop counters are an order of magnitude apart. Even so, the answers were really interesting, so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | They're both within an order of magnitude of each other, when you run them with identical cycle counts rather than having the Python counts being larger by an order of magnitude:
### PHP: <https://ideone.com/3ebkai> 2.7089s
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000000);
```
### Python: <https://ideone.com/pRFVfk> 4.5708s
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(100000000))
``` | The loop itself appears to be twice as slow in CPython 3:
<https://ideone.com/bI6jzD>
```php
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; ++$i)
{
//1.40s Reassign and use $a.
//$a += 1;
//1.15s Use and increment $a.
//$a++;
//0.88s Increment and use $a.
//++$a;
//0.69s Do nothing.
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(1e8);
```
<https://ideone.com/l35EBc>
```py
import time
def test(x):
t1 = time.clock()
#>5s
#from functools import reduce
#a = reduce(lambda a, i: a + i, (1 for i in range(x)), 0)
a = 0
for i in range(x):
#4.38s
#a += 1
#1.89s
pass
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(int(1e8)))
```
However, that is only the standard implementation of Python, which cares more about being easy to understand than about being fast. [PyPy3.5 v6.0.0](https://pypy.org/download.html#default-with-a-jit-compiler), for instance, runs that empty loop in 0.06s instead of 1.70s on my laptop. |
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes, after @Amber pointed it out I realize I totally PEBKAC'd and the loop counters are an order of magnitude apart. Even so, the answers were really interesting, so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | They're both within an order of magnitude of each other, when you run them with identical cycle counts rather than having the Python counts being larger by an order of magnitude:
### PHP: <https://ideone.com/3ebkai> 2.7089s
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000000);
```
### Python: <https://ideone.com/pRFVfk> 4.5708s
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(100000000))
``` | You guys are not being fair. The two pieces of code are NOT doing the same thing.
While PHP only increments two variables ($a and $i), Python is generating a range before it loops.
So, to have a fair comparison your Python code should be:
```
import time
def test2(x):
r = range(x) #please generate this first
a = 0
#now you count only the loop time
t1 = time.clock()
for i in r:
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return a
```
Aaaaaaand, it's MUCH faster:
```
>>> print(test(100000000))
Time for 100000000 was 6.214772
```
**VS**
```
>>> print(test2(100000000))
Time for 100000000 was 3.079545
``` |
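For measurements like the ones in this thread, a sketch using the standard `timeit` module, which picks a suitable clock and supports repeats (`time.clock`, used above, is deprecated in newer Pythons):
```py
import timeit

# Time the bare increment loop; take the best of three runs to reduce noise.
stmt = """
a = 0
for i in r:
    a += 1
"""
best = min(timeit.repeat(stmt, setup="r = range(10**7)", repeat=3, number=1))
print("Best of 3: {:.3f}s".format(best))
```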
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes, after @Amber pointed it out I realize I totally PEBKAC'd and the loop counters are an order of magnitude apart. Even so, the answers were really interesting, so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | They're both within an order of magnitude of each other, when you run them with identical cycle counts rather than having the Python counts being larger by an order of magnitude:
### PHP: <https://ideone.com/3ebkai> 2.7089s
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000000);
```
### Python: <https://ideone.com/pRFVfk> 4.5708s
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(100000000))
] | As others have pointed out, your arguments are an order of magnitude off for the Python code. But I just want to add that callables in loop conditions should be avoided as much as possible in any kind of code, as Rafael Beckel pointed out in his answer. When the callable is executed on every iteration, the result is bad performance, as is evident in the OP's benchmark results.
While this question is relatively old, this is a tiny but useful bit of info for budding programmers in any language. |
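A small sketch of that hoisting idea: move work that cannot change out of a loop condition. Note this applies to `while` conditions and repeated lookups; a plain `for i in range(n)` evaluates `range(n)` only once:
```py
data = list(range(1000))

# Slower: len(data) is re-evaluated on every pass through the condition.
i = 0
while i < len(data):
    i += 1

# Hoisted: the length is computed once, before the loop.
i, n = 0, len(data)
while i < n:
    i += 1
```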
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes, after @Amber pointed it out I realize I totally PEBKAC'd and the loop counters are an order of magnitude apart. Even so, the answers were really interesting, so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | They're both within an order of magnitude of each other, when you run them with identical cycle counts rather than having the Python counts being larger by an order of magnitude:
### PHP: <https://ideone.com/3ebkai> 2.7089s
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000000);
```
### Python: <https://ideone.com/pRFVfk> 4.5708s
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(100000000))
``` | PHP code iterating over a pre-built `range()` array is faster than the plain counting loop. My version:
```php
<?php
declare(strict_types=1);
function test(int $x): int
{
$range = range(1, $x);
$a = 0;
$t1 = microtime(true);
foreach($range as $i)
{
$a++;
}
$t2 = microtime(true);
echo 'Time for ' . $x . ' was ' . ($t2 - $t1) . PHP_EOL;
return $a;
}
echo test(100000000);
``` |
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes, after @Amber pointed it out I realize I totally PEBKAC'd and the loop counters are an order of magnitude apart. Even so, the answers were really interesting, so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | You guys are not being fair. The two pieces of code are NOT doing the same thing.
While PHP only increments two variables ($a and $i), Python is generating a range before it loops.
So, to have a fair comparison your Python code should be:
```
import time
def test2(x):
r = range(x) #please generate this first
a = 0
#now you count only the loop time
t1 = time.clock()
for i in r:
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return a
```
Aaaaaaand, it's MUCH faster:
```
>>> print(test(100000000))
Time for 100000000 was 6.214772
```
**VS**
```
>>> print(test2(100000000))
Time for 100000000 was 3.079545
``` | The loop itself appears to be twice as slow in CPython 3:
<https://ideone.com/bI6jzD>
```php
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; ++$i)
{
//1.40s Reassign and use $a.
//$a += 1;
//1.15s Use and increment $a.
//$a++;
//0.88s Increment and use $a.
//++$a;
//0.69s Do nothing.
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(1e8);
```
<https://ideone.com/l35EBc>
```py
import time
def test(x):
t1 = time.clock()
#>5s
#from functools import reduce
#a = reduce(lambda a, i: a + i, (1 for i in range(x)), 0)
a = 0
for i in range(x):
#4.38s
#a += 1
#1.89s
pass
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(int(1e8)))
```
However, that is only the standard implementation of Python which cares more about being easy to understand than being fast. [PyPy3.5 v6.0.0](https://pypy.org/download.html#default-with-a-jit-compiler) for instance, runs that empty loop in 0.06s instead of 1.70s on my laptop. |
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes, after @Amber pointed it out I realize I totally PEBKAC'd and the loop counters are an order of magnitude apart. Even so, the answers were really interesting, so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | The loop itself appears to be twice as slow in CPython 3:
<https://ideone.com/bI6jzD>
```php
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; ++$i)
{
//1.40s Reassign and use $a.
//$a += 1;
//1.15s Use and increment $a.
//$a++;
//0.88s Increment and use $a.
//++$a;
//0.69s Do nothing.
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(1e8);
```
<https://ideone.com/l35EBc>
```py
import time
def test(x):
t1 = time.clock()
#>5s
#from functools import reduce
#a = reduce(lambda a, i: a + i, (1 for i in range(x)), 0)
a = 0
for i in range(x):
#4.38s
#a += 1
#1.89s
pass
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(int(1e8)))
```
However, that is only the standard implementation of Python, which cares more about being easy to understand than about being fast. [PyPy3.5 v6.0.0](https://pypy.org/download.html#default-with-a-jit-compiler), for instance, runs that empty loop in 0.06s instead of 1.70s on my laptop. | As others have pointed out, your arguments are an order of magnitude off for the Python code. But I just want to add that callables in loop conditions should be avoided as much as possible in any kind of code, as Rafael Beckel pointed out in his answer. When the callable is executed on every iteration, the result is bad performance, as is evident in the OP's benchmark results.
While this question is relatively old, this is a tiny but useful bit of info for budding programmers in any language. |
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes, after @Amber pointed it out I realize I totally PEBKAC'd and the loop counters are an order of magnitude apart. Even so, the answers were really interesting, so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | The loop itself appears to be twice as slow in CPython 3:
<https://ideone.com/bI6jzD>
```php
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; ++$i)
{
//1.40s Reassign and use $a.
//$a += 1;
//1.15s Use and increment $a.
//$a++;
//0.88s Increment and use $a.
//++$a;
//0.69s Do nothing.
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(1e8);
```
<https://ideone.com/l35EBc>
```py
import time
def test(x):
t1 = time.clock()
#>5s
#from functools import reduce
#a = reduce(lambda a, i: a + i, (1 for i in range(x)), 0)
a = 0
for i in range(x):
#4.38s
#a += 1
#1.89s
pass
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(int(1e8)))
```
However, that is only the standard implementation of Python, which cares more about being easy to understand than about being fast. [PyPy3.5 v6.0.0](https://pypy.org/download.html#default-with-a-jit-compiler), for instance, runs that empty loop in 0.06s instead of 1.70s on my laptop. | PHP code iterating over a pre-built `range()` array is faster than the plain counting loop. My version:
```php
<?php
declare(strict_types=1);
function test(int $x): int
{
$range = range(1, $x);
$a = 0;
$t1 = microtime(true);
foreach($range as $i)
{
$a++;
}
$t2 = microtime(true);
echo 'Time for ' . $x . ' was ' . ($t2 - $t1) . PHP_EOL;
return $a;
}
echo test(100000000);
``` |
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes, after @Amber pointed it out I realize I totally PEBKAC'd and the loop counters are an order of magnitude apart. Even so, the answers were really interesting, so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | You guys are not being fair. The two pieces of code are NOT doing the same thing.
While PHP only increments two variables ($a and $i), Python is generating a range before it loops.
So, to have a fair comparison your Python code should be:
```
import time
def test2(x):
r = range(x) #please generate this first
a = 0
#now you count only the loop time
t1 = time.clock()
for i in r:
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return a
```
Aaaaaaand, it's MUCH faster:
```
>>> print(test(100000000))
Time for 100000000 was 6.214772
```
**VS**
```
>>> print(test2(100000000))
Time for 100000000 was 3.079545
``` | As others have pointed out, your arguments are an order of magnitude off for the Python code. But I just want to add that callables in loop conditions should be avoided as much as possible in any kind of code, as Rafael Beckel pointed out in his answer. When the callable is executed on every iteration, the result is bad performance, as is evident in the OP's benchmark results.
While this question is relatively old, this is a tiny but useful bit of info for budding programmers in any language. |
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes, after @Amber pointed it out I realize I totally PEBKAC'd and the loop counters are an order of magnitude apart. Even so, the answers were really interesting, so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | You guys are not being fair. The two pieces of code are NOT doing the same thing.
While PHP only increments two variables ($a and $i), Python is generating a range before it loops.
So, to have a fair comparison your Python code should be:
```
import time
def test2(x):
r = range(x) #please generate this first
a = 0
#now you count only the loop time
t1 = time.clock()
for i in r:
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return a
```
Aaaaaaand, it's MUCH faster:
```
>>> print(test(100000000))
Time for 100000000 was 6.214772
```
**VS**
```
>>> print(test2(100000000))
Time for 100000000 was 3.079545
``` | PHP code iterating over a pre-built `range()` array is faster than the plain counting loop. My version:
```php
<?php
declare(strict_types=1);
function test(int $x): int
{
$range = range(1, $x);
$a = 0;
$t1 = microtime(true);
foreach($range as $i)
{
$a++;
}
$t2 = microtime(true);
echo 'Time for ' . $x . ' was ' . ($t2 - $t1) . PHP_EOL;
return $a;
}
echo test(100000000);
``` |
19,943,977 | I am a somewhat Python/programming newbie, and I am attempting to use a Python class for the first time.
In this code I am trying to create a script to back up some files. I have 6 files in total that I want to back up regularly with this script, so I thought I would try to use a Python class to save me writing things out 6 times, and also to get practice using classes.
In my code below I have things set up to create just 1 instance of the class for now, to test things. However, I have hit a snag: I can't seem to use the `%` string-formatting operator to assign the original filename and the back-up filename.
Is it not possible to use the `%` operator for a filename when opening a file? Or am I doing things wrong?
```
class Back_up(object):
def __init__(self, file_name, back_up_file):
self.file_name = file_name
self.back_up_file = back_up_file
print "I %s and me %s" % (self.file_name, self.back_up_file)
with open('%s.txt', 'r') as f, open('{}.txt', 'w') as f2 % (self.file_name, self.back_up_file):
f_read = read(f)
f2.write(f_read)
first_back_up = Back_up("syn1_ready", "syn1_backup")
```
Also, line #7 is really long; any tips on how to shorten it are appreciated.
Thanks
Darren | 2013/11/13 | [
"https://Stackoverflow.com/questions/19943977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2680443/"
] | If you just want your files backed up, may I suggest using `shutil.copy()`?
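That is essentially a one-liner; here is a sketch using the filenames from the question (the `.txt` extension is assumed):

```
import shutil

shutil.copy("syn1_ready.txt", "syn1_backup.txt")
```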
As for your program:
If you want to substitute in a string to build a filename, you can do it. But your code doesn't do it.
You have this:
```
with open('%s.txt', 'r') as f, open('{}.txt', 'w') as f2 % (self.file_name, self.back_up_file):
```
Try this instead:
```
src = "%s.txt" % self.file_name
dest = "{}.txt".format(self.back_up_file)
with open(src, "rb") as f, open(dest, "wb") as f2:
# copying code goes here
```
The `%` operator operates on a string. The `.format()` call is a method on a string. Either way, you need to apply the operation to the string itself; you can't open two files in one `with` statement and then try to apply these operators at the end of the line.
You don't have to use explicit temp variables like I show here, but it's a good way to make the code easy to read, while greatly shortening the length of the `with` statements line.
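For instance, both spellings build the same kind of filename (a throwaway illustration using the names from the question):

```
file_name = "syn1_ready"
back_up_file = "syn1_backup"
src = "%s.txt" % file_name             # old-style % formatting
dest = "{}.txt".format(back_up_file)   # str.format()
print(src)   # syn1_ready.txt
print(dest)  # syn1_backup.txt
```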
Your code to copy the files will read all the file data into memory at one time. That will be fine for a small file. For a large file, you should use a loop that calls `.read(CHUNK_SIZE)` where `CHUNK_SIZE` is a maximum amount to read in a single chunk. That way if you ever back up a really large file on a computer with limited memory, it will simply work rather than filling the computer's memory and making the computer start swapping to disk. | Try simplicity :)
Your line 7 is not going to parse. Split it using intermediate variables:
```
source_fname = "%s.txt" % self.file_name
target_fname = "%s.txt" % self.back_up_file
with open(source_fname, "rb") as source, open(target_fname, "wb") as target:
# do your thing
```
Also, try hard to avoid inconsistent and overly generic attribute names, like `file_name`, when you have *two* files to operate on.
Your copy routine is not going to be very efficient either. It tries to read the entire file into memory, then write it. If I were you I'd call `rsync` or something similar via `popen()` and feed it a proper list of files to operate on. Most probably I'd use `bash` for that, though Python may be fine, too. |
12,548,572 | I'm trying to make a simple derived class based on `str`, with the addition of an instance variable, `flag`. For reasons I do not understand, I get an error if I try to pass the flag to the constructor:
```
>>> class Strvalue(str):
def __init__(self, content, flag=None):
str.__init__(self, content)
self.flag = flag
>>> Strvalue("No problem")
'No problem'
>>> Strvalue("Problem", flag=None)
Traceback (most recent call last):
File "<pyshell#113>", line 1, in <module>
Strvalue("Problem", flag=None)
TypeError: str() takes at most 1 argument (2 given)
```
I've checked that in the successful calls, the `Strvalue` constructor really does get called -- I haven't mistyped `__init__` or something of that sort. So what is going on?
**Edit:** According to [this question](https://stackoverflow.com/questions/12231894/overloading-str-in-python) (and @Martijn's answer), the problem is avoided by overriding `__new__` as well. The question was *why* this was happening. | 2012/09/22 | [
"https://Stackoverflow.com/questions/12548572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/699305/"
] | You need to use `__new__` instead of `__init__` when subclassing `str`, see [basic customization](http://docs.python.org/reference/datamodel.html#basic-customization).
```
>>> class Strvalue(str):
... def __new__(cls, content, flag=None):
... inst = str.__new__(cls, content)
... inst.flag = flag
... return inst
...
>>> Strvalue('foo', True)
'foo'
>>> foo = Strvalue('foo', True)
>>> foo
'foo'
>>> foo.flag
True
```
Your code doesn't override `str.__new__`, so the original `str.__new__` constructor is called with your two arguments, and it only accepts one.
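You can reproduce the error without the subclass by calling `str` itself with two arguments (an illustrative session):

```
>>> str("Problem", None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: str() takes at most 1 argument (2 given)
```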
`str` objects are immutable; the new instance is constructed in `__new__` and cannot be changed afterwards. By the time `__init__` is called, `self` is already an immutable object, so `__init__` for `str` doesn't make sense. You can still *also* define an `__init__` method, but since you already have `__new__`, there is really no need to divide the work up across two methods. | You need to override `__new__` instead of (or as well as) `__init__`.
9,052,588 | I am new to Python and new to programming. I have a question: how can I use variables from method1 in method2?
Example
```
class abc(self):
def method1 (self,v1):
v1 = a+b
return v1 # want to use this value in method 2
def method2(self)
v2 * v1 = v3
```
Thanks | 2012/01/29 | [
"https://Stackoverflow.com/questions/9052588",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172372/"
] | let `method2` "know" it is waiting for an argument:
```
def method2(self,v1): #note v1 was added here
    v2 * v1 = v3 # what is that supposed to do? [see my "third note"]
```
also note: you also need to pass `v2` to `method2()`
third note: what exactly are you trying to do in `v2 * v1 = v3`? Maybe you meant `v3 = v1 * v2`? | To use a value throughout a class, you need to bind that value to an attribute of its instance.
For example:
```
class Abc(object): # put object here, not self
def method1(self):
self.v1 = 3 + 7 # now v1 is an attribute
    def method2(self):
return 4 * self.v1
a = Abc()
a.method1()
a.v1 # -> 10
a.method2() # -> 40
```
But it's usually not good practice to have an attribute appear only after some method call, so you should also provide a default value for v1, placing it in `__init__`:
```
class Abc(object):
def __init__(self):
self.v1 = 0
def method1(self):
self.v1 = 3 + 7
    def method2(self):
return 4 * self.v1
a = Abc()
a.v1 # -> 0
a.method1()
a.v1 # -> 10
a.method2() # -> 40
``` |
9,052,588 | I am new to Python and new to programming. I have a question: how can I use variables from method1 in method2?
Example
```
class abc(self):
def method1 (self,v1):
v1 = a+b
return v1 # want to use this value in method 2
def method2(self)
v2 * v1 = v3
```
Thanks | 2012/01/29 | [
"https://Stackoverflow.com/questions/9052588",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172372/"
] | let `method2` "know" it is waiting for an argument:
```
def method2(self,v1): #note v1 was added here
    v2 * v1 = v3 # what is that supposed to do? [see my "third note"]
```
also note: you also need to pass `v2` to `method2()`
third note: what exactly are you trying to do in `v2 * v1 = v3`? Maybe you meant `v3 = v1 * v2`? | One more way is to use a global variable.
```
def a():
global v
    v = 10
def b():
print v
if __name__=='__main__':
a()
b()
``` |
9,052,588 | I am new to Python and new to programming. I have a question: how can I use variables from method1 in method2?
Example
```
class abc(self):
def method1 (self,v1):
v1 = a+b
return v1 # want to use this value in method 2
def method2(self)
v2 * v1 = v3
```
Thanks | 2012/01/29 | [
"https://Stackoverflow.com/questions/9052588",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172372/"
] | Make `v1` an instance variable by using `self`, i.e. `self.v1 = a+b` and `v2 * self.v1 = v3`. But that second command should look like this: `v3 = v2 * self.v1`. And there is still the problem of `v2` not being defined.
Note that with this approach, `method1` must be called before `method2`, otherwise `self.v1` will not be defined during processing of method2 (and it has to be). The approach of amit is cleaner.
Good luck learning Python. It is a great language. | To use a value throughout a class, you need to bind that value to an attribute of its instance.
For example:
```
class Abc(object): # put object here, not self
def method1(self):
self.v1 = 3 + 7 # now v1 is an attribute
    def method2(self):
return 4 * self.v1
a = Abc()
a.method1()
a.v1 # -> 10
a.method2() # -> 40
```
But it's usually not good practice to have an attribute appear only after some method call, so you should also provide a default value for v1, placing it in `__init__`:
```
class Abc(object):
def __init__(self):
self.v1 = 0
def method1(self):
self.v1 = 3 + 7
    def method2(self):
return 4 * self.v1
a = Abc()
a.v1 # -> 0
a.method1()
a.v1 # -> 10
a.method2() # -> 40
``` |
9,052,588 | I am new to Python and new to programming. I have a question: how can I use variables from method1 in method2?
Example
```
class abc(self):
def method1 (self,v1):
v1 = a+b
return v1 # want to use this value in method 2
def method2(self)
v2 * v1 = v3
```
Thanks | 2012/01/29 | [
"https://Stackoverflow.com/questions/9052588",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172372/"
] | Make `v1` an instance variable by using `self`, i.e. `self.v1 = a+b` and `v2 * self.v1 = v3`. But that second command should look like this: `v3 = v2 * self.v1`. And there is still the problem of `v2` not being defined.
Note that with this approach, `method1` must be called before `method2`, otherwise `self.v1` will not be defined during processing of method2 (and it has to be). The approach of amit is cleaner.
Good luck learning Python. It is a great language. | One more way is to use a global variable.
```
def a():
global v
    v = 10
def b():
print v
if __name__=='__main__':
a()
b()
``` |
63,894,460 | An example is something like [Desmos](https://www.desmos.com/calculator) (but as a desktop application). The function is given by the user as text, so it cannot be written at compile-time. Furthermore, the function may be reused thousands of times before it changes. However, a true example would be something where the function could change more frequently than desmos, and its values could be used more as well.
I see four methods for writing this code:
1. Parse the user-defined function with a grammar every single time the function is called. (Slow with many function calls)
2. Construct the syntax tree of the math expression so that the nodes contain function pointers to the appropriate math operations, allowing the program to skip parsing the text every single time the function is called. This should be faster than #1 for many function calls, but it still involves function pointers and a tree, which adds indirection and isn't as fast as if the functions were pre-compiled (and optimized).
3. Use something like [The Tiny C Compiler](https://bellard.org/tcc/) as the backend for dynamic code generation with libtcc to quickly compile the user's function after translating it into C code, and then use it in the main program. Since this compiler can compile something like 10,000 very simple programs on my machine per second, there should be next to no delay with parsing new functions. Furthermore, this compiler generates machine code for the function, so there are no pointers or trees involved, and optimization is done by TinyCC. This method is more daunting for an intermediate programmer like me.
4. Write my own tiny compiler (not of C, but tailored specifically to my problem) to generate machine code almost instantly. This is probably 20x more work than #3, and doesn't do much in the way of future improvements (adding a summation operation generator would require me to write more assembly code for that).
Is there any easier, yet equally or more efficient method than #3, while staying in the realm of C++? I'm not experienced enough with lambdas and templates and the standard library to tell for sure if there isn't some abstract way to write this code easily and efficiently.
Even a method that is faster than #2 but slower than #3, and requires no dynamic code generation would be an improvement.
This is more of an intellectual curiosity than a real-world problem, which is why I am concerned so much with performance, and is why I wouldn't use someone else's math parsing library. It's also why I wouldn't consider using javascript or python interpreter which can interpret this kind of thing on-the-fly. | 2020/09/15 | [
"https://Stackoverflow.com/questions/63894460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10868964/"
] | The paper [“A Killer Adversary for Quicksort”](https://www.cs.dartmouth.edu/%7Edoug/mdmspe.pdf) gives an algorithm that, for any quicksort implementation that satisfies certain “reasonable” requirements and runs deterministically, produces arbitrarily long input sequences that cause the algorithm to run in quadratic time. So while you’re correct that using the middle value as the pivot will prevent your algorithm from running in quadratic time on an already-sorted array, the fact that the pivots are picked deterministically means that there will be some input to the algorithm that causes the performance to degrade, and the linked paper can be used to construct such a pathological input. | The worst case of quicksort is when the pivot chosen at each step is the max or min value in the array.
In this case the regular version of quicksort runs in O(n^2).
However, there's a version of quicksort that uses a median-selection algorithm (median of medians) to choose better pivots. In that version the worst case is O(n log n). |
29,124,435 | So I'm having this issue where I'm trying to convert something such as
```
[0]['question']: "what is 2+2",
[0]['answers'][0]: "21",
[0]['answers'][1]: "312",
[0]['answers'][2]: "4"
```
into an actual formatted JSON object like so
```
[
{
'question': 'what is 2+2',
'answers': ["21", "312", "4"]
}
]
```
but I'm not too sure what approach to take to make this work.
I'm planning on parsing the key-values in the first snippet through JavaScript and decoding it into a JSON object like in the second snippet through Python.
Have you got any idea on how to do this? I'd accept an example in pretty much any language, since the concept should be easy to follow in any of them. | 2015/03/18 | [
"https://Stackoverflow.com/questions/29124435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3254198/"
] | Something like this. You need to handle input errors.
A function to take a data structure and add stuff to it based on input
```
function add(old, input) {
var index = input[0];
var section = input[1];
if (old[index] == undefined) {
old[index] = {}
};
if (section == "question") {
old[index]['question'] = input[2];
}
if (section == "answers") {
var answerIndex = input[2];
var answerValue = input[3];
if (old[index]["answers"] == undefined) {
old[index]["answers"] = []
};
old[index]["answers"][answerIndex] = answerValue
}
return old;
}
```
Some inputs:
```
var inputs = [[0, "question", "what"],
[0, "answers", 0, "21"],
[0, "answers", 1, "22"]];
var result = {};
inputs.forEach(function(input) { add(result, input) })
JSON.stringify(result)
"{"0":{"question":"what","answers":["21","22"]}}"
``` | I think you should format the JSON as follows:
```
{
"questions": [
{
"question": "What is 2+2",
"possible_answers": [
{
"value": 1,
"correct": false
},
{
"value": 4,
"correct": true
},
{
"value": 3,
"correct": false
}
]
},
{
"question": "What is 5+5",
"possible_answers": [
{
"value": 6,
"correct": false
},
{
"value": 7,
"correct": false
},
{
"value": 10,
"correct": true
}
]
}
]
}
```
To build that, you can do:
```
var result = {}
result.questions = []; //the questions collection
var question = {}; //the first question object
question.question = "what is 2 + 2";
question.possible_answers = [];
var answer1 = {};
answer1.value = 1;
answer1.correct = false;
var answer2 = {};
answer2.value = 2;
answer2.correct = true;
var answer3 = {};
answer3.value = 3;
answer3.correct = false;
question.possible_answers.push(answer1);
question.possible_answers.push(answer2);
question.possible_answers.push(answer3);
result.questions.push(question); //add the first question with its possible answer to the result.
```
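Equivalently, and more compactly, you could build the same structure with object and array literals (same data, just less ceremony):

```
var result = {
    questions: [
        {
            question: "What is 2+2",
            possible_answers: [
                { value: 1, correct: false },
                { value: 4, correct: true },
                { value: 3, correct: false }
            ]
        }
    ]
};
```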
You can use [jsonlint](http://jsonlint.com/) to format the JSON, and then build your JavaScript object to produce the JSON you want.
Hope this helps! |
55,994,238 | I have code to scrape hotel reviews in Python (from Yelp).
The code scrapes the first page of reviews perfectly, but I am struggling to scrape the next pages.
The while loop doesn't work; the data scraped in each loop is the same (the data of the first page).
```
import requests
from lxml import html
from bs4 import BeautifulSoup
url = 'https://www.yelp.com/biz/fairmont-san-francisco-san-francisco?sort_by=rating_desc'
while url:
r = requests.get(url)
t = html.fromstring(r.content)
for i in t.xpath("//div[@class='review-list']/ul/li[position()>1]"):
rev = i.xpath('.//p[@lang="en"]/text()')[0].strip()
date = i.xpath('.//span[@class="rating-qualifier"]/text()')[0].strip()
stars = i.xpath('.//img[@class="offscreen"]/@alt')[0].strip().split(' ')[0]
print(rev)
print(date)
print(stars)
next_page = soup.find('a',{'class':'next'})
if next_page:
url = next_page['href']
else:
url = None
sleep(5)
```
Here **sleep(5)** before requesting a new url is to avoid rate limiting by the website. | 2019/05/05 | [
"https://Stackoverflow.com/questions/55994238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5539782/"
] | The following is one of the ways you can get your job done. I've slightly modified your existing logic of traversing next pages. Give it a shot.
```
import requests
from lxml.html import fromstring
url = 'https://www.yelp.com/biz/fairmont-san-francisco-san-francisco?sort_by=rating_desc'
while True:
res = requests.get(url)
root = fromstring(res.text)
for item in root.xpath("//div[@class='review-list']/ul/li[position()>1]"):
rev = item.xpath('.//p[@lang="en"]/text()')[0].strip()
print(rev)
next_page = root.cssselect(".pagination-links a.next")
if not len(next_page): break
url = next_page[0].get('href')
``` | You just need to be smart about looking at the URL. Most websites follow a scheme with their page progression. In this case, it seems like it changes to the following format for the next pages:
```
https://www.yelp.com/biz/fairmont-san-francisco-san-francisco?start=20&sort_by=rating_desc
```
Where the start=20 is where we should be looking. Rewrite the url at the end of the while loop. Once it gets to the end of the page, it should add 20 to that number, and then put it in the string. Like so:
```py
pagenum = 0
while url:
    pagenum += 20
    url = "https://www.yelp.com/biz/fairmont-san-francisco-san-francisco?start=" + str(pagenum) + "&sort_by=rating_desc"
```
And then terminate the program with a try/except, catching the failure when the url won't load because there are no more pages. |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, my summary writers report that it returns NaN. Why is this? Is there a way to at least inspect the value of Z after it has been square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)` but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops) but it doesn't say how to use it (plus the docs seem to not be super helpful). Does anyone know how to use this?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. However, I am computing a quantity that is positive (pairwise Euclidean distance) and then square rooting it. Thus, I don't see how the data specifically would be an issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | There are a couple of reasons WHY you can get a NaN result. Often it is because of too high a learning rate, but plenty of other reasons are possible, for example corrupt data in your input queue or a log-of-0 calculation.
Anyhow, debugging with a print as you describe cannot be done by a simple print (as this would only print the tensor information inside the graph, not any actual values).
However, if you use tf.Print as an op in building the graph ([tf.Print](https://www.tensorflow.org/versions/r0.10/api_docs/python/control_flow_ops.html#Print)), then when the graph gets executed you will get the actual values printed (and it IS a good exercise to watch these values to debug and understand the behavior of your net).
However, you are not using the print statement entirely correctly. It is an op, so you need to pass it a tensor and request a result tensor that you need to work with later on in the executing graph. Otherwise the op is not going to be executed and no printing occurs. Try this:
```
Z = tf.sqrt(Delta_tilde)
Z = tf.Print(Z, [Z], message="my Z-values:") # <-------- TF PRINT STATEMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
``` | I used to find it's much tougher to pinpoint where the NaNs and Infs may occur than to fix the bug. As a complement to @scai's answer, I'd like to add some points here:
The debug module, which you can import by:
```
from tensorflow.python import debug as tf_debug
```
is much better than any print or assert.
You can add the debug function simply by wrapping your session:
```
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)
```
And you'll get a command-line interface prompt; then you enter:
`run -f has_inf_or_nan` and `lt -f has_inf_or_nan` to find where the nans or infs are. The first one is the first place where the catastrophe occurs. By the variable name you can trace the origin in your code.
Reference: <https://developers.googleblog.com/2017/02/debug-tensorflow-models-with-tfdbg.html> |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, my summary writers report that it returns NaN. Why is this? Is there a way to at least inspect the value of Z after it has been square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)` but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops) but it doesn't say how to use it (plus the docs seem to not be super helpful). Does anyone know how to use this?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. However, I am computing a quantity that is positive (pairwise Euclidean distance) and then square rooting it. Thus, I don't see how the data specifically would be an issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | For TensorFlow 2, inject some `x = tf.debugging.check_numerics(x, 'x is nan')` into your code. It will throw an `InvalidArgument` error if `x` has any values that are not a number (NaN) or infinity (Inf).
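A minimal sketch of that injection (the tensor here is made up just to trigger the check):

```
import tensorflow as tf

x = tf.constant([1.0, 2.0, float('nan')])
# Raises InvalidArgumentError with the message 'x is nan', because x contains a NaN
x = tf.debugging.check_numerics(x, 'x is nan')
```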
Oh and for the next person finding this when hunting a TF2 NaN issue: my case turned out to be an exploding gradient. The gradient itself got to 1e+20, which was not quite NaN yet, but adding it to the variable made the variable too big. The diagnosis that I did was
```
gradients = tape.gradient(loss, training_variables)
for g,v in zip(gradients, training_variables):
tf.print(v.name, tf.reduce_max(g))
optimizer.apply_gradients(zip(gradients, training_variables))
```
which revealed the overly large numbers. Running the exact same network on CPU worked fine, but it failed on the GTX 1080 TI in my workstation, thus making a CUDA numerical stability issue likely as the root cause. But since it only occurred sometimes, I duct-taped the whole thing by going with:
```
gradients = tape.gradient(loss, training_variables)
gradients = [tf.clip_by_norm(g, 10.0) for g in gradients]
optimizer.apply_gradients(zip(gradients, training_variables))
```
which will just clip exploding gradients to a sane value. For a network where gradients are always high, that wouldn't help, but since the magnitudes were only sporadically high, this fixed the problem and now the network also trains nicely on GPU. | I was able to fix my NaN issues by getting rid of all of my dropout layers in the network model. I suspected that maybe for some reason a unit (neuron?) in the network lost too many input connections (so it had zero after the dropout), so then when information was fed through, it had a value of NaN. I don't see how that could happen over and over again with dropout=0.8 on layers with more than a hundred units each, so the problem was probably fixed for a different reason. Either way, commenting out the dropout layers fixed my issue.
EDIT: Oops! I realized that I added a dropout layer after my final output layer which consists of three units. Now that makes more sense. So, don't do that! |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, my summary writers report that it returns NaN. Why is this? Is there a way to at least inspect the value of Z after it has been square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)` but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops) but it doesn't say how to use it (plus the docs seem to not be super helpful). Does anyone know how to use this?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. However, I am computing a quantity that is positive (pairwise Euclidean distance) and then square rooting it. Thus, I don't see how the data specifically would be an issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | For TensorFlow 2, inject some `x = tf.debugging.check_numerics(x, 'x is nan')` into your code. It will throw an `InvalidArgument` error if `x` has any values that are not a number (NaN) or infinity (Inf).
Oh and for the next person finding this when hunting a TF2 NaN issue: my case turned out to be an exploding gradient. The gradient itself got to 1e+20, which was not quite NaN yet, but adding it to the variable made the variable too big. The diagnosis that I did was
```
gradients = tape.gradient(loss, training_variables)
for g,v in zip(gradients, training_variables):
tf.print(v.name, tf.reduce_max(g))
optimizer.apply_gradients(zip(gradients, training_variables))
```
which revealed the overly large numbers. Running the exact same network on CPU worked fine, but it failed on the GTX 1080 TI in my workstation, thus making a CUDA numerical stability issue likely as the root cause. But since it only occurred sometimes, I duct-taped the whole thing by going with:
```
gradients = tape.gradient(loss, training_variables)
gradients = [tf.clip_by_norm(g, 10.0) for g in gradients]
optimizer.apply_gradients(zip(gradients, training_variables))
```
which will just clip exploding gradients to a sane value. For a network where gradients are always high, that wouldn't help, but since the magnitudes were only sporadically high, this fixed the problem and now the network also trains nicely on GPU. | NANs occurring in the forward process are one thing and those occurring in the backward process are another.
Step 0: data
============
Make sure there are no extreme inputs, such as NaN inputs or negative labels, in the prepared dataset; check with NumPy tools, for instance: `assert not np.any(np.isnan(x))`.
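A quick sanity check of that kind might look like this (the file names are hypothetical):

```
import numpy as np

x = np.load('features.npy')  # hypothetical prepared features
y = np.load('labels.npy')    # hypothetical prepared labels
assert not np.any(np.isnan(x)), 'NaN found in features'
assert np.all(y >= 0), 'negative label found'
```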
Step 1: the forward
===================
Switch to a CPU environment to get a more detailed traceback, and test the forward pass only by `loss = tf.stop_gradient(loss)` before calculating the gradients to see if you can run several batches with no errors. If an error occurs, there are several types of potential bugs and methods:
1. 0 in the log for the cross-entropy loss functions(please refer to [this answer](https://stackoverflow.com/a/33713196/3552975))
2. 0/0 problem
3. out of class problem as issued [here](https://github.com/tensorflow/tensorflow/issues/8484#issuecomment-354376609).
4. try `tensor = tf.check_numerics(tensor, 'tensor')` in some suspicious places (see the sketch after this list).
5. try `tf_debug` as written in [this answer](https://stackoverflow.com/a/48729304/3552975).
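A minimal sketch of the `tf.check_numerics` idea from item 4, in a tiny TF1 graph (the names are made up; the square root of a negative number is used just to force a NaN):

```
import tensorflow as tf

a = tf.placeholder(tf.float32, name='a')
z = tf.sqrt(a)
z = tf.check_numerics(z, 'z has NaN/Inf')  # fails loudly at run time if z is bad

with tf.Session() as sess:
    sess.run(z, feed_dict={a: -1.0})  # sqrt(-1) -> NaN, so this run raises InvalidArgumentError
```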
Step 2: the backward
====================
If everything goes well, remove the `loss = tf.stop_gradient(loss)`.
1. try very small learning rate
2. replace complex blocks of code by simple computations, like full connection, with the same shape of inputs and outputs to zoom in where the bug lies. You may encounter backward bugs like [this](https://stackoverflow.com/q/54346263/3552975).
As an aside, it's always helpful to make sure that the shape of every tensor is as expected. You can try to feed fixed-size batches (drop the remainders) and reshape the feature tensors (where the graph receives data from the Dataset) to the shapes you expect (otherwise the first dimension would sometimes be None), and then print the shape of the very tensor in the graph with fixed numbers.
[A Recipe for Training Neural Networks](http://karpathy.github.io/2019/04/25/recipe/) by Andrej Karpathy is a great article on training/debugging neural networks. |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, my summary writers report that it returns NaN. Why is this? Is there a way to at least inspect the value of Z after it has been square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)` but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops) but it doesn't say how to use it (plus the docs seem to not be super helpful). Does anyone know how to use this?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. However, I am computing a quantity that is positive (pairwise Euclidean distance) and then square rooting it. Thus, I don't see how the data specifically would be an issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | There are a couple of reasons WHY you can get a NaN result. Often it is because of too high a learning rate, but plenty of other reasons are possible, for example corrupt data in your input queue or a log-of-0 calculation.
Anyhow, debugging with a print as you describe cannot be done by a simple print (as this would only print the tensor information inside the graph, not any actual values).
However, if you use tf.Print as an op in building the graph ([tf.Print](https://www.tensorflow.org/versions/r0.10/api_docs/python/control_flow_ops.html#Print)), then when the graph gets executed you will get the actual values printed (and it IS a good exercise to watch these values to debug and understand the behavior of your net).
However, you are not using the print statement entirely correctly. It is an op, so you need to pass it a tensor and request a result tensor that you need to work with later on in the executing graph. Otherwise the op is not going to be executed and no printing occurs. Try this:
```
Z = tf.sqrt(Delta_tilde)
Z = tf.Print(Z, [Z], message="my Z-values:") # <-------- TF PRINT STATEMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
``` | For TensorFlow 2, inject some `x = tf.debugging.check_numerics(x, 'x is nan')` into your code. It will throw an `InvalidArgument` error if `x` has any values that are not a number (NaN) or infinity (Inf).
Oh and for the next person finding this when hunting a TF2 NaN issue: my case turned out to be an exploding gradient. The gradient itself got to 1e+20, which was not quite NaN yet, but adding it to the variable made the variable too big. The diagnosis that I did was
```
gradients = tape.gradient(loss, training_variables)
for g,v in zip(gradients, training_variables):
tf.print(v.name, tf.reduce_max(g))
optimizer.apply_gradients(zip(gradients, training_variables))
```
which revealed the overly large numbers. Running the exact same network on CPU worked fine, but it failed on the GTX 1080 TI in my workstation, thus making a CUDA numerical stability issue likely as the root cause. But since it only occurred sometimes, I duct-taped the whole thing by going with:
```
gradients = tape.gradient(loss, training_variables)
gradients = [tf.clip_by_norm(g, 10.0) for g in gradients]
optimizer.apply_gradients(zip(gradients, training_variables))
```
which will just clip exploding gradients to a sane value. For a network where gradients are always high, that wouldn't help, but since the magnitudes were only sporadically high, this fixed the problem and now the network also trains nicely on GPU. |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, my summary writers report that it returns NaN. Why is this? Is there a way to at least inspect the value of Z after it has been square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)` but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops) but it doesn't say how to use it (plus the docs seem to not be super helpful). Does anyone know how to use this?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. However, I am computing a quantity that is positive (pairwise Euclidean distance) and then square rooting it. Thus, I don't see how the data specifically would be an issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | It looks like you can call it after you finish building the graph.
`check = tf.add_check_numerics_ops()`
I think this will add the check for all floating-point operations. Then in the session's run function you can add the check operation.
`sess.run([check, ...])` | I was able to fix my NaN issues by getting rid of all of my dropout layers in the network model. I suspected that maybe for some reason a unit (neuron?) in the network lost too many input connections (so it had zero after the dropout), so then when information was fed through, it had a value of NaN. I don't see how that could happen over and over again with dropout=0.8 on layers with more than a hundred units each, so the problem was probably fixed for a different reason. Either way, commenting out the dropout layers fixed my issue.
EDIT: Oops! I realized that I added a dropout layer after my final output layer which consists of three units. Now that makes more sense. So, don't do that! |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, my summary writers report that it returns NaN. Why is this? Is there a way to at least inspect the value of Z after it has been square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)` but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops) but it doesn't say how to use it (plus the docs seem to not be super helpful). Does anyone know how to use this?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. However, I am computing a quantity that is positive (pairwise Euclidean distance) and then square rooting it. Thus, I don't see how the data specifically would be an issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | NANs occurring in the forward process are one thing and those occurring in the backward process are another.
Step 0: data
============
Make sure there are no extreme inputs, such as NaN inputs or negative labels, in the prepared dataset; check with NumPy tools, for instance: `assert not np.any(np.isnan(x))`.
Step 1: the forward
===================
Switch to a CPU environment to get a more detailed traceback, and test the forward pass only by `loss = tf.stop_gradient(loss)` before calculating the gradients to see if you can run several batches with no errors. If an error occurs, there are several types of potential bugs and methods:
1. 0 in the log for the cross-entropy loss functions(please refer to [this answer](https://stackoverflow.com/a/33713196/3552975))
2. 0/0 problem
3. out of class problem as issued [here](https://github.com/tensorflow/tensorflow/issues/8484#issuecomment-354376609).
4. try `tensor = tf.check_numerics(tensor, 'tensor')` in some suspicious places.
5. try `tf_debug` as written in [this answer](https://stackoverflow.com/a/48729304/3552975).
Step 2: the backward
====================
If everything goes well, remove the `loss = tf.stop_gradient(loss)`.
1. try very small learning rate
2. replace complex blocks of code by simple computations, like full connection, with the same shape of inputs and outputs to zoom in where the bug lies. You may encounter backward bugs like [this](https://stackoverflow.com/q/54346263/3552975).
As an aside, it's always helpful to make sure that the shape of every tensor is as expected. You can try to feed fixed-size batches (drop the remainders) and reshape the feature tensors (where the graph receives data from the Dataset) to the shapes you expect (otherwise the first dimension would sometimes be None), and then print the shape of the very tensor in the graph with fixed numbers.
[A Recipe for Training Neural Networks](http://karpathy.github.io/2019/04/25/recipe/) by Andrej Karpathy is a great article on training/debugging neural networks. | I was able to fix my NaN issues by getting rid of all of my dropout layers in the network model. I suspected that maybe for some reason a unit (neuron?) in the network lost too many input connections (so it had zero after the dropout), so then when information was fed through, it had a value of NaN. I don't see how that could happen over and over again with dropout=0.8 on layers with more than a hundred units each, so the problem was probably fixed for a different reason. Either way, commenting out the dropout layers fixed my issue.
EDIT: Oops! I realized that I added a dropout layer after my final output layer which consists of three units. Now that makes more sense. So, don't do that! |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, my summary writers report that it returns NaN. Why is this? Is there a way to at least inspect the value of Z after it has been square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)` but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops) but it doesn't say how to use it (plus the docs seem to not be super helpful). Does anyone know how to use this?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. However, I am computing a quantity that is positive (pairwise Euclidean distance) and then square rooting it. Thus, I don't see how the data specifically would be an issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | NANs occurring in the forward process are one thing and those occurring in the backward process are another.
Step 0: data
============
Make sure there are no extreme inputs, such as NaN inputs or negative labels, in the prepared dataset; check with NumPy tools, for instance: `assert not np.any(np.isnan(x))`.
Step 1: the forward
===================
Switch to a CPU environment to get a more detailed traceback, and test the forward pass only by `loss = tf.stop_gradient(loss)` before calculating the gradients to see if you can run several batches with no errors. If an error occurs, there are several types of potential bugs and methods:
1. 0 in the log for the cross-entropy loss functions(please refer to [this answer](https://stackoverflow.com/a/33713196/3552975))
2. 0/0 problem
3. out of class problem as issued [here](https://github.com/tensorflow/tensorflow/issues/8484#issuecomment-354376609).
4. try `tensor = tf.check_numerics(tensor, 'tensor')` in some suspicious places.
5. try `tf_debug` as written in [this answer](https://stackoverflow.com/a/48729304/3552975).
Step 2: the backward
====================
If everything goes well, remove the `loss = tf.stop_gradient(loss)`.
1. try very small learning rate
2. replace complex blocks of code by simple computations, like full connection, with the same shape of inputs and outputs to zoom in where the bug lies. You may encounter backward bugs like [this](https://stackoverflow.com/q/54346263/3552975).
As an aside, it's always helpful to make sure that the shape of every tensor is as expected. You can try to feed fixed-size batches (drop the remainders) and reshape the feature tensors (where the graph receives data from the Dataset) to the shapes you expect (otherwise the first dimension would sometimes be None), and then print the shape of the very tensor in the graph with fixed numbers.
[A Recipe for Training Neural Networks](http://karpathy.github.io/2019/04/25/recipe/) by Andrej Karpathy is a great article on training/debugging neural networks. | The current implementation of `tfdbg.has_inf_or_nan` does not seem to break immediately on hitting a tensor containing `NaN`. When it does stop, the huge list of tensors displayed is *not* sorted in execution order.
A possible hack to find the first appearance of `NaN`s is to dump all tensors to a temporary directory and inspect them afterwards.
Here is a quick-and-dirty [example](https://gist.github.com/yuq-1s/ce63a306f1d39d1c0c80d33f7855f3b5) to do that. (Assuming the `NaN`s appear in the first few runs) |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, my summary writers report that it returns NaN. Why is this? Is there a way to at least inspect the value of Z after it has been square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)` but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops) but it doesn't say how to use it (plus the docs seem to not be super helpful). Does anyone know how to use this?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. However, I am computing a quantity that is positive (pairwise Euclidean distance) and then square rooting it. Thus, I don't see how the data specifically would be an issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | First of all, you need to check your input data properly. In most cases this is the reason. But not always, of course.
I usually use TensorBoard to see what's happening while training, so you can see the values on each step with:
```
Z = tf.pow(Z, 2.0)
summary_z = tf.scalar_summary('z', Z)
#etc..
summary_merge = tf.merge_all_summaries()
#on each desired step save:
summary_str = sess.run(summary_merge)
summary_writer.add_summary(summary_str, i)
```
Also you can simply eval and print the current value:
```
print(sess.run(Z))
``` | I was able to fix my NaN issues by getting rid of all of my dropout layers in the network model. I suspected that maybe for some reason a unit (neuron?) in the network lost too many input connections (so it had zero after the dropout), so then when information was fed through, it had a value of NaN. I don't see how that could happen over and over again with dropout=0.8 on layers with more than a hundred units each, so the problem was probably fixed for a different reason. Either way, commenting out the dropout layers fixed my issue.
EDIT: Oops! I realized that I added a dropout layer after my final output layer which consists of three units. Now that makes more sense. So, don't do that! |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, my summary writers report that it returns NaN. Why is this? Is there a way to at least inspect the value of Z after it has been square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)`, but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) # note: this quantity should always be positive because it's a pair-wise Euclidean distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATEMENT
Z = Transform(Z) # potentially some transform; currently I have it return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops), but the docs don't say how to use it (and they don't seem particularly helpful in general). Does anyone know how to use it?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. Moreover, I am computing a quantity that is positive (a pair-wise Euclidean distance) and then square-rooting it, so I don't see how the data specifically would be the issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | As of version 0.12, TensorFlow ships with a built-in debugger called `tfdbg`. It optimizes the workflow of debugging this type of bad-numerical-value issue (like `inf` and `nan`). The documentation is at:
<https://www.tensorflow.org/programmers_guide/debugger> | The current implementation of `tfdbg.has_inf_or_nan` does not seem to break immediately on hitting the first tensor containing `NaN`. When it does stop, the huge list of tensors displayed is *not* sorted in order of execution.
A possible hack to find the first appearance of `NaN`s is to dump all tensors to a temporary directory and inspect them afterwards.
Here is a quick-and-dirty [example](https://gist.github.com/yuq-1s/ce63a306f1d39d1c0c80d33f7855f3b5) to do that. (Assuming the `NaN`s appear in the first few runs) |
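In the same spirit, a hedged sketch of that dump-and-inspect workflow using tfdbg's dumping wrapper (TF 1.x API; the dump directory path is an arbitrary choice, not from the original answer):
```
# Hedged sketch: dump every run's tensors to disk, then inspect offline.
import tensorflow as tf
from tensorflow.python import debug as tf_debug

sess = tf.Session()
sess = tf_debug.DumpingDebugWrapperSession(sess, "/tmp/tfdbg_dumps")
# ... run training steps as usual; each run's tensors are dumped per run ...
# Afterwards, inspect with the offline analyzer, e.g.:
#   python -m tensorflow.python.debug.cli.offline_analyzer --dump_dir=/tmp/tfdbg_dumps/<run_dir>
```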
38,810,424 | I was running TensorFlow and I happened to have something yielding a NaN. I'd like to know what it is, but I do not know how to find out. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that, because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, or heuristics to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) # note: this quantity should always be positive because it's a pair-wise Euclidean distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform; currently I have it return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present, it returns NaN, as reported by my summary writers. Why is this? Is there a way to at least inspect what value Z has after it is square-rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)`, but with no success; it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) # note: this quantity should always be positive because it's a pair-wise Euclidean distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATEMENT
Z = Transform(Z) # potentially some transform; currently I have it return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is supposed to do. Why does it need two arguments? If I want to print one tensor, why would I need to pass two? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops), but the docs don't say how to use it (and they don't seem particularly helpful in general). Does anyone know how to use it?
---
Since I've had comments suggesting the data might be bad: I am using standard MNIST. Moreover, I am computing a quantity that is positive (a pair-wise Euclidean distance) and then square-rooting it, so I don't see how the data specifically would be the issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | I find it's much tougher to pinpoint where the NaNs and Infs occur than to fix the bug. As a complement to @scai's answer, I'd like to add some points here:
The debug module, which you can import with:
```
from tensorflow.python import debug as tf_debug
```
is much better than any print or assert statement.
You can enable it simply by wrapping your session:
```
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)
```
You'll then get a command-line interface, where you enter:
`run -f has_inf_or_nan` and `lt -f has_inf_or_nan` to find where the NaNs or Infs are. The first hit is the first place where the catastrophe occurs, and from the variable name you can trace the origin in your code.
Reference: <https://developers.googleblog.com/2017/02/debug-tensorflow-models-with-tfdbg.html> | For TensorFlow 2, inject some `x = tf.debugging.check_numerics(x, 'x is nan')` into your code. These will throw an `InvalidArgument` error if `x` has any values that are not a number (NaN) or infinity (Inf).
Oh, and for the next person finding this when hunting a TF2 NaN issue: my case turned out to be an exploding gradient. The gradient itself got to 1e+20, which was not quite NaN yet, but adding that to the variable made it overflow. The diagnosis I did was:
```
gradients = tape.gradient(loss, training_variables)
for g,v in zip(gradients, training_variables):
tf.print(v.name, tf.reduce_max(g))
optimizer.apply_gradients(zip(gradients, training_variables))
```
which revealed the overly large numbers. Running the exact same network on CPU worked fine, but it failed on the GTX 1080 TI in my workstation, thus making a CUDA numerical stability issue likely as the root cause. But since it only occurred sometimes, I duct-taped the whole thing by going with:
```
gradients = tape.gradient(loss, training_variables)
gradients = [tf.clip_by_norm(g, 10.0) for g in gradients]
optimizer.apply_gradients(zip(gradients, training_variables))
```
which will just clip exploding gradients to a sane value. For a network where gradients are always high that wouldn't help, but since the magnitudes were high only sporadically, this fixed the problem and now the network trains nicely on GPU as well.
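A related option, if your TF 2.x version provides it and you don't want to sprinkle `check_numerics` calls by hand, is the global instrumentation switch (a sketch; it slows execution, so enable it only while debugging):
```
# Hedged sketch: make every op raise as soon as it produces NaN/Inf.
import tensorflow as tf

tf.debugging.enable_check_numerics()
# From here on, any op yielding NaN or Inf raises an error naming the culprit op.
```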
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | You need to pass the driver:
```
TouchAction action = new TouchAction(driver);
action.longPress(webElement).release().perform();
``` | Here is the update for `Java Client: 5.0.4`
```
WebElement recBtn = driver.findElement(MobileBy.id("img_button"));
new TouchAction((MobileDriver) driver).press(recBtn).waitAction(Duration.ofMillis(10000)).release().perform();
``` |
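Since the question mentions Python, a hedged equivalent using the Appium Python client's `TouchAction` (this assumes an already-created `driver`; the element id is hypothetical, and `find_element_by_id` is the older client API):
```
from appium.webdriver.common.touch_action import TouchAction

el = driver.find_element_by_id("img_button")  # hypothetical locator; existing `driver` assumed
TouchAction(driver).long_press(el, duration=3000).release().perform()
```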
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | Here is the update for `Java Client: 5.0.4`
```
WebElement recBtn = driver.findElement(MobileBy.id("img_button"));
new TouchAction((MobileDriver) driver).press(recBtn).waitAction(Duration.ofMillis(10000)).release().perform();
``` | This works:
```
TouchActions action = new TouchActions(driver);
action.longPress(element);
action.perform();
``` |
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | It should be like this. The duration is specified in milliseconds, so you need to multiply by 1000 to get one second:
```
TouchAction action = new TouchAction(driver);
action.longPress(webElement,duration*1000).release().perform();
``` | Once you have identified the pageElement you want to longPress on.
```
//pageElement
editPreferenceButton = driver.whatever
//code for waiting for display of element
waitForDisplayed(editPreferenceButton, 10)
//this line is not required, keeping it here for easy readability
MobileElement longpress = editPreferenceButton;
//use the below code, it will do the trick; credits to wherever I found this
LongPressOptions longPressOptions = new LongPressOptions();
longPressOptions.withDuration(Duration.ofSeconds(3)).withElement(ElementOption.element(longpress));
TouchAction action = new TouchAction(driver);
action.longPress(longPressOptions).release().perform();
``` |
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | Here is the update for `Java Client: 5.0.4`
```
WebElement recBtn = driver.findElement(MobileBy.id("img_button"));
new TouchAction((MobileDriver) driver).press(recBtn).waitAction(Duration.ofMillis(10000)).release().perform();
``` | Once you have identified the pageElement you want to longPress on.
```
//pageElement
editPreferenceButton = driver.whatever
//code for waiting for display of element
waitForDisplayed(editPreferenceButton, 10)
//this line is not required, keeping it here for easy readability
MobileElement longpress = editPreferenceButton;
//use the below code, it will do the trick; credits to wherever I found this
LongPressOptions longPressOptions = new LongPressOptions();
longPressOptions.withDuration(Duration.ofSeconds(3)).withElement(ElementOption.element(longpress));
TouchAction action = new TouchAction(driver);
action.longPress(longPressOptions).release().perform();
``` |
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | Yes, you can use the TouchAction class to long-press any element. Try this:
```
TouchAction action = new TouchAction(driver); // the constructor requires the driver instance
action.longPress(webElement).release().perform();
``` | It should be like this. The duration is specified in milliseconds, so you need to multiply by 1000 to get one second:
```
TouchAction action = new TouchAction(driver);
action.longPress(webElement,duration*1000).release().perform();
``` |
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | You need to pass the driver:
```
TouchAction action = new TouchAction(driver);
action.longPress(webElement).release().perform();
``` | In the latest Java client versions, the code below will work:
```
AndroidTouchAction touch = new AndroidTouchAction (driver);
touch.longPress(LongPressOptions.longPressOptions()
.withElement (ElementOption.element (element)))
.perform ();
System.out.println("LongPressed Tapped");
``` |
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | Need to pass driver
```
TouchAction action = new TouchAction(driver);
action.longPress(webElement).release().perform();
``` | The following worked:
```
MobileElement longpress = driver.findElement({element find strategy})
LongPressOptions longPressOptions = new LongPressOptions();
longPressOptions.withDuration(Duration.ofSeconds(3)).withElement(ElementOption.element(longpress));
TouchAction action = new TouchAction(driver);
action.longPress(longPressOptions).release().perform();
``` |
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | It should be like this. The duration is specified in milliseconds, so you need to multiply by 1000 to get one second:
```
TouchAction action = new TouchAction(driver);
action.longPress(webElement,duration*1000).release().perform();
``` | The following worked:
```
MobileElement longpress = driver.findElement({element find strategy})
LongPressOptions longPressOptions = new LongPressOptions();
longPressOptions.withDuration(Duration.ofSeconds(3)).withElement(ElementOption.element(longpress));
TouchAction action = new TouchAction(driver);
action.longPress(longPressOptions).release().perform();
``` |
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | In the latest Java client versions, the code below will work:
```
AndroidTouchAction touch = new AndroidTouchAction (driver);
touch.longPress(LongPressOptions.longPressOptions()
.withElement (ElementOption.element (element)))
.perform ();
System.out.println("LongPressed Tapped");
``` | Once you have identified the pageElement you want to longPress on.
```
//pageElement
editPreferenceButton = driver.whatever
//code for waiting for display of element
waitForDisplayed(editPreferenceButton, 10)
//this line is not required, keeping it here for easy readability
MobileElement longpress = editPreferenceButton;
//use the below code, it will do the trick; credits to wherever I found this
LongPressOptions longPressOptions = new LongPressOptions();
longPressOptions.withDuration(Duration.ofSeconds(3)).withElement(ElementOption.element(longpress));
TouchAction action = new TouchAction(driver);
action.longPress(longPressOptions).release().perform();
``` |
29,298,096 | Is there any code to tap and hold in Appium? I use Python; is there any command that supports it?
For a double click I clicked on the element twice, but for tap and hold I have not found a solution. | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | Yes, you can use the TouchAction class to long-press any element. Try this:
```
TouchAction action = new TouchAction(driver); // the constructor requires the driver instance
action.longPress(webElement).release().perform();
``` | Once you have identified the pageElement you want to longPress on.
```
//pageElement
editPreferenceButton = driver.whatever
//code for waiting for display of element
waitForDisplayed(editPreferenceButton, 10)
//this line is not required, keeping it here for easy readability
MobileElement longpress = editPreferenceButton;
//use the below code, it will do the trick; credits to wherever I found this
LongPressOptions longPressOptions = new LongPressOptions();
longPressOptions.withDuration(Duration.ofSeconds(3)).withElement(ElementOption.element(longpress));
TouchAction action = new TouchAction(driver);
action.longPress(longPressOptions).release().perform();
``` |
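For completeness, another hedged Python option that avoids the older `TouchAction` API is Selenium's W3C `ActionChains`, which Appium also understands (again assuming an existing `driver` and a hypothetical locator):
```
from selenium.webdriver.common.action_chains import ActionChains

el = driver.find_element_by_id("img_button")  # hypothetical locator
ActionChains(driver).click_and_hold(el).pause(3).release(el).perform()
```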
31,321,906 | I have a string like this in Java:
`"\xd0\xb5\xd0\xbd\xd0\xb4\xd0\xbf\xd0\xbe\xd0\xb9\xd0\xbd\xd1\x82"`
How can I convert it to a human readable equivalent?
Note:
Actually it is `GWT`, and this string is coming from Python as part of JSON data.
The `JSONParser` transforms it to something that is totally irrelevant, so I want to be able to convert the string prior to parsing.
The expected result, what I call "human readable", should be "ендойнт" (<https://mothereff.in/utf-8#%D0%B5%D0%BD%D0%B4%D0%BF%D0%BE%D0%B9%D0%BD%D1%82>) | 2015/07/09 | [
"https://Stackoverflow.com/questions/31321906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2961166/"
] | It seems indeed that no search-exists endpoint exists, but I think you can use a simple alternative:
Use an empty "fields" array and count the results of your query: if == 0, false; if > 0, true.
```
GET /giata_index/giata_type/_search
{
"fields": [],
"query": {
"bool": {
"must": [
{
"term": {
"status": 2
}
},
{
"term": {
"ids": "26744"
}
}
]
}
}
}
```
Another alternative is to use \_count: <https://www.elastic.co/guide/en/elasticsearch/reference/1.6/search-count.html> | It should be possible with the [latest 2.x version](https://github.com/elastic/elasticsearch-php/blob/master/src/Elasticsearch/Endpoints/SearchExists.php).
A code sample could look something like this:
```
$clientBuilder = Elasticsearch\ClientBuilder::create();
// Additional client options, hosts, etc.
$client = $clientBuilder->build();
$index = 'your_index';
$type = 'your_type';
$params = [
'index' => $index,
'type' => $type,
'body' => [
'query' => [
'bool' => [
'must' => [
[
'term' => [
"status" => 2
]
],
[
'term' => [
'ids' => "26744"
]
]
]
]
]
];
try {
$client->searchExists($params);
} catch (Exception $e) {
// Not found. You might want to return FALSE if wrapped in a function.
// return FALSE;
}
// Found.
```
It is worth noting that if the search is not wrapped in a try/catch block, it can break execution and throw an exception (status code 4xx if not found).
Also, it cannot be used effectively in [future mode](https://www.elastic.co/guide/en/elasticsearch/client/php-api/current/_future_mode.html#_caveats_to_future_mode).
37,293,366 | I am trying to list the instances based on the tag values of different tag keys.
For example, one tag key is Environment and another tag key is Role.
My code is given below:
```
import argparse
import boto3
AWS_ACCESS_KEY_ID = '<Access Key>'
AWS_SECRET_ACCESS_KEY = '<Secret Key>'
def get_ec2_instances(Env,Role):
ec2 = boto3.client("ec2", region)
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
for reservation in reservations["Reservations"] :
for instance in reservation["Instances"]:
print "%s" % (instance.tags['Name'])
if __name__ == '__main__':
regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1',
'ap-southeast-1','ap-southeast-2','ap-northeast-1']
parser = argparse.ArgumentParser()
parser.add_argument('Env', default="environment", help='value for tag:environment');
parser.add_argument('Role', default="role", help='value for tag:role');
args = parser.parse_args()
for region in regions: get_ec2_instances(args.Env, args.Role)
```
After running this script: `python script.py arg1 arg2`
I am getting the following error:
```
Traceback (most recent call last):
File "script.py", line 27, in <module>
for region in regions: get_ec2_instances(args.Env, args.Role)
File "script.py", line 10, in get_ec2_instances
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 258, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 524, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 577, in _convert_to_request_dict
api_params, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/validate.py", line 270, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter Filters, value: {'tag:role': 'arg1', 'tag:environment': 'arg2'}, type: <type 'dict'>, valid types: <type 'list'>, <type 'tuple'>
``` | 2016/05/18 | [
"https://Stackoverflow.com/questions/37293366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6349605/"
] | This looks familiar; did I modify this for somebody somewhere? ;-) Actually, the code I wrote was rushed and not tested properly (and I didn't bother to amend the % string formatting and replace it with str.format()). In fact, using the Filters parameter is not properly documented in AWS.
Please refer to Russell Ballestrini's blog post [Filtering AWS resources with Boto3](http://russell.ballestrini.net/filtering-aws-resources-with-boto3/) to learn more about the correct boto Filters format.
1. Filters accepts a list value, and each filter inside should be a dict, thus [{}].
2. The Boto3 documentation is pretty ambiguous on how to specify the tag name; it is confusing without examples when they say you may use tag:key. Many people will just do `[{"tag:keyname","Values": [""] }]`, and it doesn't work. (In the original code I assumed the developer knew how the filters work, so I only amended the structure.)
3. You MUST explicitly specify a "Name" and "Values" pair, so the correct way to specify a tag name is `[{"Name" :"tag:keyname", "Values":[""] }]`. It is tricky.
So the correct way of formatting the filters for your example is:
```
filters = [{'Name':'tag:environment', 'Values':[Env]},
{'Name':'tag:role', 'Values':[Role]}
]
```
(Update)
And to make sure argparse takes string values, you just enforce the argument type:
```
parser.add_argument('Env', type=str, default="environment",
help='value for tag:environment');
parser.add_argument('Role', type=str,default="role",
help='value for tag:role');
``` | Fix the Env and Role, as I am not sure mine or mootmoot's answer will work because the Array for Values [expects](http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Client.describe_instances) strings.
```
reservations = ec2.describe_instances(
    Filters=[
        {'Name': 'tag:environment', 'Values': [Env]},
        {'Name': 'tag:role', 'Values': [Role]},
    ]
).get(
'Reservations', []
)
``` |
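Putting the corrected filter format back into the question's function, a self-contained sketch (credentials are assumed to come from `~/.aws/credentials` rather than being hard-coded):
```
import boto3

def get_ec2_instances(region, env, role):
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": [env]},
            {"Name": "tag:role", "Values": [role]},
        ]
    )
    for reservation in reservations["Reservations"]:
        for instance in reservation["Instances"]:
            # boto3 client calls return plain dicts, so tags live in the 'Tags' list
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            print(tags.get("Name", instance["InstanceId"]))
```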
37,293,366 | I am trying to list the instances based on the tag values of different tag keys.
For example, one tag key is Environment and another tag key is Role.
My code is given below:
```
import argparse
import boto3
AWS_ACCESS_KEY_ID = '<Access Key>'
AWS_SECRET_ACCESS_KEY = '<Secret Key>'
def get_ec2_instances(Env,Role):
ec2 = boto3.client("ec2", region)
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
for reservation in reservations["Reservations"] :
for instance in reservation["Instances"]:
print "%s" % (instance.tags['Name'])
if __name__ == '__main__':
regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1',
'ap-southeast-1','ap-southeast-2','ap-northeast-1']
parser = argparse.ArgumentParser()
parser.add_argument('Env', default="environment", help='value for tag:environment');
parser.add_argument('Role', default="role", help='value for tag:role');
args = parser.parse_args()
for region in regions: get_ec2_instances(args.Env, args.Role)
```
After running this script: `python script.py arg1 arg2`
I am getting the following error:
```
Traceback (most recent call last):
File "script.py", line 27, in <module>
for region in regions: get_ec2_instances(args.Env, args.Role)
File "script.py", line 10, in get_ec2_instances
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 258, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 524, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 577, in _convert_to_request_dict
api_params, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/validate.py", line 270, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter Filters, value: {'tag:role': 'arg1', 'tag:environment': 'arg2'}, type: <type 'dict'>, valid types: <type 'list'>, <type 'tuple'>
``` | 2016/05/18 | [
"https://Stackoverflow.com/questions/37293366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6349605/"
] | This looks familiar; did I modify this for somebody somewhere? ;-) Actually, the code I wrote was rushed and not tested properly (and I didn't bother to amend the % string formatting and replace it with str.format()). In fact, using the Filters parameter is not properly documented in AWS.
Please refer to Russell Ballestrini's blog post [Filtering AWS resources with Boto3](http://russell.ballestrini.net/filtering-aws-resources-with-boto3/) to learn more about the correct boto Filters format.
1. Filters accepts a list value, and each filter inside should be a dict, thus [{}].
2. The Boto3 documentation is pretty ambiguous on how to specify the tag name; it is confusing without examples when they say you may use tag:key. Many people will just do `[{"tag:keyname","Values": [""] }]`, and it doesn't work. (In the original code I assumed the developer knew how the filters work, so I only amended the structure.)
3. You MUST explicitly specify a "Name" and "Values" pair, so the correct way to specify a tag name is `[{"Name" :"tag:keyname", "Values":[""] }]`. It is tricky.
So the correct way of formatting the filters for your example is:
```
filters = [{'Name':'tag:environment', 'Values':[Env]},
{'Name':'tag:role', 'Values':[Role]}
]
```
(Update)
And to make sure argparse takes string values, you just enforce the argument type:
```
parser.add_argument('Env', type=str, default="environment",
help='value for tag:environment');
parser.add_argument('Role', type=str,default="role",
help='value for tag:role');
``` | In my own Python script I use the following:
```
import boto3
ec2client = boto3.client('ec2','us-east-1')
response = ec2client.describe_instances(Filters=[{'Name' : 'instance-state-name','Values' : ['running']}])
``` |
37,293,366 | I am trying to list the instances based on the tag values of different tag keys.
For example, one tag key is Environment and another tag key is Role.
My code is given below:
```
import argparse
import boto3
AWS_ACCESS_KEY_ID = '<Access Key>'
AWS_SECRET_ACCESS_KEY = '<Secret Key>'
def get_ec2_instances(Env,Role):
ec2 = boto3.client("ec2", region)
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
for reservation in reservations["Reservations"] :
for instance in reservation["Instances"]:
print "%s" % (instance.tags['Name'])
if __name__ == '__main__':
regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1',
'ap-southeast-1','ap-southeast-2','ap-northeast-1']
parser = argparse.ArgumentParser()
parser.add_argument('Env', default="environment", help='value for tag:environment');
parser.add_argument('Role', default="role", help='value for tag:role');
args = parser.parse_args()
for region in regions: get_ec2_instances(args.Env, args.Role)
```
After running this script: `python script.py arg1 arg2`
I am getting the following error:
```
Traceback (most recent call last):
File "script.py", line 27, in <module>
for region in regions: get_ec2_instances(args.Env, args.Role)
File "script.py", line 10, in get_ec2_instances
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 258, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 524, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 577, in _convert_to_request_dict
api_params, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/validate.py", line 270, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter Filters, value: {'tag:role': 'arg1', 'tag:environment': 'arg2'}, type: <type 'dict'>, valid types: <type 'list'>, <type 'tuple'>
``` | 2016/05/18 | [
"https://Stackoverflow.com/questions/37293366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6349605/"
] | This looks familiar; did I modify this for somebody somewhere? ;-) Actually, the code I wrote was rushed and not tested properly (and I didn't bother to amend the % string formatting and replace it with str.format()). In fact, using the Filters parameter is not properly documented in AWS.
Please refer to Russell Ballestrini's blog post [Filtering AWS resources with Boto3](http://russell.ballestrini.net/filtering-aws-resources-with-boto3/) to learn more about the correct boto Filters format.
1. Filters accepts a list value, and each filter inside should be a dict, thus [{}].
2. The Boto3 documentation is pretty ambiguous on how to specify the tag name; it is confusing without examples when they say you may use tag:key. Many people will just do `[{"tag:keyname","Values": [""] }]`, and it doesn't work. (In the original code I assumed the developer knew how the filters work, so I only amended the structure.)
3. You MUST explicitly specify a "Name" and "Values" pair, so the correct way to specify a tag name is `[{"Name" :"tag:keyname", "Values":[""] }]`. It is tricky.
So the correct way of formatting the filters for your example is:
```
filters = [{'Name':'tag:environment', 'Values':[Env]},
{'Name':'tag:role', 'Values':[Role]}
]
```
(Update)
And to make sure argparse takes string values, you just enforce the argument type:
```
parser.add_argument('Env', type=str, default="environment",
help='value for tag:environment');
parser.add_argument('Role', type=str,default="role",
help='value for tag:role');
``` | Although this is not actually the answer to your question: **DO NOT**, **EVER**, put your AWS credentials hard-coded in your scripts. With your AWS credentials, **anyone** can use your account. There are bots scouring GitHub and other git repositories looking for hard-coded AWS credentials.
Also, when rotating credentials, all your code will break or you will have a hard time updating all of it.
Some alternatives to hard-coding your AWS credentials:
1. Configure your ~/.aws/credentials file
2. Use IAM Roles
3. Use STS to 'assumeRole'
Follow the best practices described here: [Best Practices for Managing AWS Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html)
Now, for answering your question, here is an example on how to filter by tags:
```
argEnv = '<any_string_you_want_to_match_as_a_value_for_a_tag>'
ec2Client = boto3.client('ec2')
response = ec2Client.describe_instances(
Filters=[
{
'Name': 'tag:Projeto',
'Values': [argEnv]
}
]
)
```
Make sure 'Values' is a list and not a string. For example, if 'argEnv' is a string, make sure you use '[]' to enclose your variable.
Then if you want to consult the Tag:Name and get the Value of it (for example, the name you set up for a specific EC2 instance in the console):
```
for reservation in res['Reservations']:
for instance in reservation['Instances']:
for tag in instance['Tags']:
if tag['Key'] == 'Name':
consoleName = tag['Value']
print(consoleName)
```
The output will be the Value of the Name tag for every resource. As you can see, you have to loop through the results to extract the values. You can check the Response Syntax [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.describe_instances). |
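A hedged variant of the tag lookup above using the boto3 *resource* API, which exposes `.tags` directly (closer to what the question's `instance.tags['Name']` seems to expect; the filter value here is hypothetical):
```
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
for instance in ec2.instances.filter(
        Filters=[{"Name": "tag:Projeto", "Values": ["myproject"]}]):
    # instance.tags is a list of {'Key': ..., 'Value': ...} dicts, or None
    name = next((t["Value"] for t in (instance.tags or []) if t["Key"] == "Name"), None)
    print(instance.id, name)
```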
37,293,366 | I am trying to list the instances based on the tag values of different tag keys.
For example, one tag key is Environment and another tag key is Role.
My code is given below:
```
import argparse
import boto3
AWS_ACCESS_KEY_ID = '<Access Key>'
AWS_SECRET_ACCESS_KEY = '<Secret Key>'
def get_ec2_instances(Env,Role):
ec2 = boto3.client("ec2", region)
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
for reservation in reservations["Reservations"] :
for instance in reservation["Instances"]:
print "%s" % (instance.tags['Name'])
if __name__ == '__main__':
regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1',
'ap-southeast-1','ap-southeast-2','ap-northeast-1']
parser = argparse.ArgumentParser()
parser.add_argument('Env', default="environment", help='value for tag:environment');
parser.add_argument('Role', default="role", help='value for tag:role');
args = parser.parse_args()
for region in regions: get_ec2_instances(args.Env, args.Role)
```
After running this script: `python script.py arg1 arg2`
I am getting the following error:
```
Traceback (most recent call last):
File "script.py", line 27, in <module>
for region in regions: get_ec2_instances(args.Env, args.Role)
File "script.py", line 10, in get_ec2_instances
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 258, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 524, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 577, in _convert_to_request_dict
api_params, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/validate.py", line 270, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter Filters, value: {'tag:role': 'arg1', 'tag:environment': 'arg2'}, type: <type 'dict'>, valid types: <type 'list'>, <type 'tuple'>
``` | 2016/05/18 | [
"https://Stackoverflow.com/questions/37293366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6349605/"
] | In my own Python script I use the following:
```
import boto3
ec2client = boto3.client('ec2','us-east-1')
response = ec2client.describe_instances(Filters=[{'Name' : 'instance-state-name','Values' : ['running']}])
``` | Fix the Env and Role, as I am not sure mine or mootmoot's answer will work because the Array for Values [expects](http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Client.describe_instances) strings.
```
reservations = ec2.describe_instances(
    Filters=[
        {'Name': 'tag:environment', 'Values': [Env]},
        {'Name': 'tag:role', 'Values': [Role]},
    ]
).get(
'Reservations', []
)
``` |
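One caveat worth adding to the snippets above: `describe_instances` returns paged results, so accounts with many instances should iterate with a paginator instead of reading a single response. A hedged sketch:
```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"])
```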
37,293,366 | I am trying to list the instances based on the tag values of different tag keys.
For example, one tag key is Environment and another tag key is Role.
My code is given below:
```
import argparse
import boto3
AWS_ACCESS_KEY_ID = '<Access Key>'
AWS_SECRET_ACCESS_KEY = '<Secret Key>'
def get_ec2_instances(Env,Role):
ec2 = boto3.client("ec2", region)
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
for reservation in reservations["Reservations"] :
for instance in reservation["Instances"]:
print "%s" % (instance.tags['Name'])
if __name__ == '__main__':
regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1',
'ap-southeast-1','ap-southeast-2','ap-northeast-1']
parser = argparse.ArgumentParser()
parser.add_argument('Env', default="environment", help='value for tag:environment');
parser.add_argument('Role', default="role", help='value for tag:role');
args = parser.parse_args()
for region in regions: get_ec2_instances(args.Env, args.Role)
```
After running this script: `python script.py arg1 arg2`
I am getting the following error:
```
Traceback (most recent call last):
File "script.py", line 27, in <module>
for region in regions: get_ec2_instances(args.Env, args.Role)
File "script.py", line 10, in get_ec2_instances
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 258, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 524, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 577, in _convert_to_request_dict
api_params, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/validate.py", line 270, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter Filters, value: {'tag:role': 'arg1', 'tag:environment': 'arg2'}, type: <type 'dict'>, valid types: <type 'list'>, <type 'tuple'>
``` | 2016/05/18 | [
"https://Stackoverflow.com/questions/37293366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6349605/"
] | Although this is not actually the answer to your question: **DO NOT**, **EVER**, put your AWS credentials hard-coded in your scripts. With your AWS credentials, **anyone** can use your account. There are bots scouring GitHub and other git repositories looking for hard-coded AWS credentials.
Also, when rotating credentials, all your code will break or you will have a hard time updating all of it.
Some alternatives to hard-coding your AWS credentials:
1. Configure your ~/.aws/credentials file
2. Use IAM Roles
3. Use STS to 'assumeRole'
Follow the best practices described here: [Best Practices for Managing AWS Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html)
Now, for answering your question, here is an example on how to filter by tags:
```
argEnv = '<any_string_you_want_to_match_as_a_value_for_a_tag>'
ec2Client = boto3.client('ec2')
response = ec2Client.describe_instances(
Filters=[
{
'Name': 'tag:Projeto',
'Values': [argEnv]
}
]
)
```
Make sure 'Values' is a list and not a string. For example, if 'argEnv' is a string, make sure you use '[]' to enclose your variable.
Then if you want to consult the Tag:Name and get the Value of it (for example, the name you set up for a specific EC2 instance in the console):
```
for reservation in res['Reservations']:
for instance in reservation['Instances']:
for tag in instance['Tags']:
if tag['Key'] == 'Name':
consoleName = tag['Value']
print(consoleName)
```
The output will be the Value of the Name tag for every resource. As you can see, you have to loop through the results to extract the values. You can check the Response Syntax [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.describe_instances). | Fix the Env and Role, as I am not sure mine or mootmoot's answer will work, because the array for Values [expects](http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Client.describe_instances) strings.
```
reservations = ec2.describe_instances(
    Filters=[
        {'Name': 'tag:environment', 'Values': [Env]},
        {'Name': 'tag:role', 'Values': [Role]},
    ]
).get(
'Reservations', []
)
``` |
51,710,083 | * I am writing unit tests for a Python library using **pytest**
* I need to **specify a directory** for test files to avoid automatic test file discovery, because there is a large sub-directory structure, including many files in the library containing "\_test" or "test\_" in the name that are not intended for pytest
* Some files in the library use **argparse** for specifying command-line options
* The problem is that specifying the directory for pytest as a command-line argument seems to interfere with using command line options for argparse
To give an example, I have a file in the root directory called `script_with_args.py` as follows:
```
import argparse
def parse_args():
parser = argparse.ArgumentParser(description="description")
parser.add_argument("--a", type=int, default=3)
parser.add_argument("--b", type=int, default=5)
return parser.parse_args()
```
I also have a folder called `tests` in the root directory, containing a test-file called `test_file.py`:
```
import script_with_args
def test_script_func():
args = script_with_args.parse_args()
assert args.a == 3
```
If I call `python -m pytest` from the command line, the test passes fine. If I specify the test directory from the command line with `python -m pytest tests`, the following error is returned:
```
============================= test session starts =============================
platform win32 -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: C:\Users\Jake\CBAS\pytest-tests, inifile:
plugins: remotedata-0.2.1, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
collected 1 item
tests\test_file.py F [100%]
================================== FAILURES ===================================
______________________________ test_script_func _______________________________
def test_script_func():
# a = 1
# b = 2
> args = script_with_args.parse_args()
tests\test_file.py:13:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
script_with_args.py:9: in parse_args
return parser.parse_args()
..\..\Anaconda3\lib\argparse.py:1733: in parse_args
self.error(msg % ' '.join(argv))
..\..\Anaconda3\lib\argparse.py:2389: in error
self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = ArgumentParser(prog='pytest.py', usage=None, description='description', f
ormatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_h
elp=True)
status = 2, message = 'pytest.py: error: unrecognized arguments: tests\n'
def exit(self, status=0, message=None):
if message:
self._print_message(message, _sys.stderr)
> _sys.exit(status)
E SystemExit: 2
..\..\Anaconda3\lib\argparse.py:2376: SystemExit
---------------------------- Captured stderr call -----------------------------
usage: pytest.py [-h] [--a A] [--b B]
pytest.py: error: unrecognized arguments: tests
========================== 1 failed in 0.19 seconds ===========================
```
My question is, how do I specify the test file directory for pytest, without interfering with the command line options for argparse? | 2018/08/06 | [
"https://Stackoverflow.com/questions/51710083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8477566/"
] | `parse_args()` without argument reads the `sys.argv[1:]` list. That will include the 'tests' string.
`pytest` also uses that `sys.argv[1:]` with its own parser.
One way to make your parser testable is to provide an optional `argv`:
```
def parse_args(argv=None):
parser = argparse.ArgumentParser(description="description")
parser.add_argument("--a", type=int, default=3)
parser.add_argument("--b", type=int, default=5)
return parser.parse_args(argv)
```
Then you can test it with:
```
parse_args(['--a', '4'])
```
and use it for real with:
```
parse_args()
```
Changing `sys.argv` is also a good way. But if you are going to the work of putting the parser in a function like this, you might as well give it this added flexibility. | To add to hpaulj's answer, you can also use a library like [unittest.mock](https://docs.python.org/3/library/unittest.mock.html) to temporarily mask the value of `sys.argv`. That way your parse-args call will run using the "mocked" argv, but the *actual* `sys.argv` remains unchanged.
When your tests call `parse_args()` they could do it like this:
```
with unittest.mock.patch('sys.argv', ['prog', '--a', '1', '--b', '2']):
parse_args()
``` |
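A hedged alternative to `unittest.mock` here is pytest's built-in `monkeypatch` fixture; note that `sys.argv[0]` is the program name, so a placeholder is prepended:
```
import sys
import script_with_args

def test_parse_args_via_monkeypatch(monkeypatch):
    # replace argv just for this test; pytest restores it afterwards
    monkeypatch.setattr(sys, "argv", ["prog", "--a", "4"])
    args = script_with_args.parse_args()
    assert args.a == 4
```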
51,710,083 | * I am writing unit tests for a Python library using **pytest**
* I need to **specify a directory** for test files to avoid automatic test file discovery, because there is a large sub-directory structure, including many files in the library containing "\_test" or "test\_" in the name that are not intended for pytest
* Some files in the library use **argparse** for specifying command-line options
* The problem is that specifying the directory for pytest as a command-line argument seems to interfere with using command line options for argparse
To give an example, I have a file in the root directory called `script_with_args.py` as follows:
```
import argparse
def parse_args():
parser = argparse.ArgumentParser(description="description")
parser.add_argument("--a", type=int, default=3)
parser.add_argument("--b", type=int, default=5)
return parser.parse_args()
```
I also have a folder called `tests` in the root directory, containing a test-file called `test_file.py`:
```
import script_with_args
def test_script_func():
args = script_with_args.parse_args()
assert args.a == 3
```
If I call `python -m pytest` from the command line, the test passes fine. If I specify the test directory from the command line with `python -m pytest tests`, the following error is returned:
```
============================= test session starts =============================
platform win32 -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: C:\Users\Jake\CBAS\pytest-tests, inifile:
plugins: remotedata-0.2.1, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
collected 1 item
tests\test_file.py F [100%]
================================== FAILURES ===================================
______________________________ test_script_func _______________________________
def test_script_func():
# a = 1
# b = 2
> args = script_with_args.parse_args()
tests\test_file.py:13:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
script_with_args.py:9: in parse_args
return parser.parse_args()
..\..\Anaconda3\lib\argparse.py:1733: in parse_args
self.error(msg % ' '.join(argv))
..\..\Anaconda3\lib\argparse.py:2389: in error
self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = ArgumentParser(prog='pytest.py', usage=None, description='description', f
ormatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_h
elp=True)
status = 2, message = 'pytest.py: error: unrecognized arguments: tests\n'
def exit(self, status=0, message=None):
if message:
self._print_message(message, _sys.stderr)
> _sys.exit(status)
E SystemExit: 2
..\..\Anaconda3\lib\argparse.py:2376: SystemExit
---------------------------- Captured stderr call -----------------------------
usage: pytest.py [-h] [--a A] [--b B]
pytest.py: error: unrecognized arguments: tests
========================== 1 failed in 0.19 seconds ===========================
```
My question is, how do I specify the test file directory for pytest, without interfering with the command line options for argparse? | 2018/08/06 | [
"https://Stackoverflow.com/questions/51710083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8477566/"
] | `parse_args()` without argument reads the `sys.argv[1:]` list. That will include the 'tests' string.
`pytest` also uses that `sys.argv[1:]` with its own parser.
One way to make your parser testable is to provide an optional `argv`:
```
def parse_args(argv=None):
parser = argparse.ArgumentParser(description="description")
parser.add_argument("--a", type=int, default=3)
parser.add_argument("--b", type=int, default=5)
return parser.parse_args(argv)
```
Then you can test it with:
```
parse_args(['--a', '4'])
```
and use it for real with:
```
parse_args()
```
Changing `sys.argv` is also a good way. But if you are going to the work of putting the parser in a function like this, you might as well give it this added flexibility. | I ran into a similar problem with test discovery in VS Code. The run adapter in VS Code passes in parameters that my program does not understand. My solution was to make the parser accept unknown arguments.
Change:
```
return parser.parse_args()
```
To:
```
args, _ = parser.parse_known_args()
return args
``` |
51,710,083 | * I am writing unit tests for a Python library using **pytest**
* I need to **specify a directory** for test files to avoid automatic test file discovery, because there is a large sub-directory structure, including many files in the library containing "\_test" or "test\_" in the name that are not intended for pytest
* Some files in the library use **argparse** for specifying command-line options
* The problem is that specifying the directory for pytest as a command-line argument seems to interfere with using command line options for argparse
To give an example, I have a file in the root directory called `script_with_args.py` as follows:
```
import argparse
def parse_args():
parser = argparse.ArgumentParser(description="description")
parser.add_argument("--a", type=int, default=3)
parser.add_argument("--b", type=int, default=5)
return parser.parse_args()
```
I also have a folder called `tests` in the root directory, containing a test-file called `test_file.py`:
```
import script_with_args
def test_script_func():
args = script_with_args.parse_args()
assert args.a == 3
```
If I call `python -m pytest` from the command line, the test passes fine. If I specify the test directory from the command line with `python -m pytest tests`, the following error is returned:
```
============================= test session starts =============================
platform win32 -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: C:\Users\Jake\CBAS\pytest-tests, inifile:
plugins: remotedata-0.2.1, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
collected 1 item
tests\test_file.py F [100%]
================================== FAILURES ===================================
______________________________ test_script_func _______________________________
def test_script_func():
# a = 1
# b = 2
> args = script_with_args.parse_args()
tests\test_file.py:13:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
script_with_args.py:9: in parse_args
return parser.parse_args()
..\..\Anaconda3\lib\argparse.py:1733: in parse_args
self.error(msg % ' '.join(argv))
..\..\Anaconda3\lib\argparse.py:2389: in error
self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = ArgumentParser(prog='pytest.py', usage=None, description='description', f
ormatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_h
elp=True)
status = 2, message = 'pytest.py: error: unrecognized arguments: tests\n'
def exit(self, status=0, message=None):
if message:
self._print_message(message, _sys.stderr)
> _sys.exit(status)
E SystemExit: 2
..\..\Anaconda3\lib\argparse.py:2376: SystemExit
---------------------------- Captured stderr call -----------------------------
usage: pytest.py [-h] [--a A] [--b B]
pytest.py: error: unrecognized arguments: tests
========================== 1 failed in 0.19 seconds ===========================
```
My question is, how do I specify the test file directory for pytest, without interfering with the command line options for argparse? | 2018/08/06 | [
"https://Stackoverflow.com/questions/51710083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8477566/"
] | To add to hpaulj's answer, you can also use a library like [unittest.mock](https://docs.python.org/3/library/unittest.mock.html) to temporarily mask the value of `sys.argv`. That way your parse args command will run using the "mocked" argv but the *actual* `sys.argv` remains unchanged.
When your tests call `parse_args()` they could do it like this:
```
with unittest.mock.patch('sys.argv', ['prog', '--a', '1', '--b', '2']):
parse_args()
``` | I ran into a similar problem with test discovery in VS Code. The run adapter in VS Code passes in parameters that my program does not understand. My solution was to make the parser accept unknown arguments.
Change:
```
return parser.parse_args()
```
To:
```
args, _ = parser.parse_known_args()
return args
``` |
59,704,959 | I'm trying to count the number of dots in an email address using Python + Pandas.
The first record is "addison.shepherd@gmail.com". It should count 2 dots. Instead, it returns 26, the length of the string.
```
import pandas as pd
url = "http://profalibania.com.br/python/EmailsDoctors.xlsx"
docs = pd.read_excel(url)
docs["PosAt"] = docs["Email"].str.count('.')
```
Can anybody help me? Thanks in advance! | 2020/01/12 | [
"https://Stackoverflow.com/questions/59704959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5518389/"
] | [`pandas.Series.str.count`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.count.html) takes a regex expression as input. To match a literal period (`.`), you must escape it:
```
docs["Email"].str.count('\.')
```
Just specifying `.` will use the regex meaning of the period (matching any single character). | The [**`.str.count(..)`** method [pandas-doc]](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.count.html) works with a [*regular expression* [wiki]](https://en.wikipedia.org/wiki/Regular_expression). This is specified in the documentation:
>
> This function is used to count the number of times a particular **regex pattern** is repeated in each of the string elements of the `Series`.
>
>
>
For a regex, the dot means "all characters except new line". You can use a *character set* (by surrounding the dot with square brackets):
```
docs["PosAt"] = docs["Email"].str.count('[.]')
``` |
59,704,959 | I'm trying to count the number of dots in an email address using Python + Pandas.
The first record is "addison.shepherd@gmail.com". It should count 2 dots. Instead, it returns 26, the length of the string.
```
import pandas as pd
url = "http://profalibania.com.br/python/EmailsDoctors.xlsx"
docs = pd.read_excel(url)
docs["PosAt"] = docs["Email"].str.count('.')
```
Can anybody help me? Thanks in advance! | 2020/01/12 | [
"https://Stackoverflow.com/questions/59704959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5518389/"
] | [`pandas.Series.str.count`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.count.html) takes a regex expression as input. To match a literal period (`.`), you must escape it:
```
docs["Email"].str.count('\.')
```
Just specifying `.` will use the regex meaning of the period (matching any single character). | A variant here would be to compare the length of the original email column with the length of that column with all dots removed:
```
docs["Email"].str.len() - docs["Email"].str.replace(".", "", regex=False).str.len()
``` |
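A quick sanity check of both counting approaches on a toy Series (using the example address from the question):
```
import pandas as pd

emails = pd.Series(["addison.shepherd@gmail.com"])
print(emails.str.count(r"\."))  # escaped dot -> counts 2
print(emails.str.len() - emails.str.replace(".", "", regex=False).str.len())  # also 2
```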
57,718,512 | I'm trying to use this model to train on rock, paper, scissors pictures. However, it was trained on 1800 pictures and only reaches an accuracy of 30-40%. I then tried to use TensorBoard to see what's going on, but the error in the title appears.
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
model = Sequential()
model.add(Conv2D(256, kernel_size=(4, 4),
activation='relu',
input_shape=(64,64,3)))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
''' here it instantiates the TensorBoard callback '''
tensorboard = TensorBoard(log_dir="C:/Users/bamla/Desktop/RPS project/Logs")
model.compile(loss="sparse_categorical_crossentropy",
optimizer="SGD",
metrics=['accuracy'])
model.summary()
''' Here it's fitting the model '''
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
```
This outputs:
```
Traceback (most recent call last):
File "c:/Users/bamla/Desktop/RPS project/Testing.py", line 82, in <module>
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training.py", line 1178, in fit
validation_freq=validation_freq)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training_arrays.py", line 125, in fit_loop
callbacks.set_model(callback_model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\callbacks.py", line 68, in set_model
callback.set_model(model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\tensorflow\python\keras\callbacks.py", line 1509, in set_model
if not model.run_eagerly:
AttributeError: 'Sequential' object has no attribute 'run_eagerly'
```
Also, if you have any tips on how to improve the accuracy it would be appreciated! | 2019/08/29 | [
"https://Stackoverflow.com/questions/57718512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7858253/"
] | The problem is here:
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
```
Do not mix `keras` and `tf.keras` imports; they are **not compatible with each other** and produce weird errors like the ones you are seeing. | I changed `from tensorflow.python.keras.callbacks import TensorBoard`
to `from keras.callbacks import TensorBoard` and it worked for me. |
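For illustration, a minimal consistent-import sketch (assumption: the TensorFlow-bundled Keras is used throughout; the model is deliberately tiny, and `x_train`/`y_train` are the placeholders from the question, so the `fit` call is left commented out):

```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.callbacks import TensorBoard

# Model, compile step and callback all come from the same tf.keras package,
# so the callback's set_model() sees the attributes it expects.
model = Sequential([
    Flatten(input_shape=(64, 64, 3)),
    Dense(3, activation='softmax'),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="SGD",
              metrics=['accuracy'])
tensorboard = TensorBoard(log_dir="logs")
# model.fit(x_train, y_train, batch_size=50, epochs=3, callbacks=[tensorboard])
```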
57,718,512 | I'm trying to try using this model to train on rock, paper, scissor pictures. However, it was trained on 1800 pictures and only has an accuracy of 30-40%. I was then trying to use TensorBoard to see whats going on, but the error in the title appears.
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
model = Sequential()
model.add(Conv2D(256, kernel_size=(4, 4),
activation='relu',
input_shape=(64,64,3)))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
''' here it instantiates the tensorboard '''
tensorboard = TensorBoard(log_dir="C:/Users/bamla/Desktop/RPS project/Logs")
model.compile(loss="sparse_categorical_crossentropy",
optimizer="SGD",
metrics=['accuracy'])
model.summary()
''' Here its fitting the model '''
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
```
This outputs:
```
Traceback (most recent call last):
File "c:/Users/bamla/Desktop/RPS project/Testing.py", line 82, in <module>
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training.py", line 1178, in fit
validation_freq=validation_freq)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training_arrays.py", line 125, in fit_loop
callbacks.set_model(callback_model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\callbacks.py", line 68, in set_model
callback.set_model(model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\tensorflow\python\keras\callbacks.py", line 1509, in set_model
if not model.run_eagerly:
AttributeError: 'Sequential' object has no attribute 'run_eagerly'
```
Also, if you have any tips on how to improve the accuracy it would be appreciated! | 2019/08/29 | [
"https://Stackoverflow.com/questions/57718512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7858253/"
] | The problem is here:
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
```
Do not mix `keras` and `tf.keras` imports; they are **not compatible with each other** and produce weird errors like the ones you are seeing. | for me, this did the job:
```
from tensorflow.keras import datasets, layers, models
from tensorflow import keras
``` |
57,718,512 | I'm trying to try using this model to train on rock, paper, scissor pictures. However, it was trained on 1800 pictures and only has an accuracy of 30-40%. I was then trying to use TensorBoard to see whats going on, but the error in the title appears.
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
model = Sequential()
model.add(Conv2D(256, kernel_size=(4, 4),
activation='relu',
input_shape=(64,64,3)))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
''' here it instantiates the tensorboard '''
tensorboard = TensorBoard(log_dir="C:/Users/bamla/Desktop/RPS project/Logs")
model.compile(loss="sparse_categorical_crossentropy",
optimizer="SGD",
metrics=['accuracy'])
model.summary()
''' Here its fitting the model '''
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
```
This outputs:
```
Traceback (most recent call last):
File "c:/Users/bamla/Desktop/RPS project/Testing.py", line 82, in <module>
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training.py", line 1178, in fit
validation_freq=validation_freq)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training_arrays.py", line 125, in fit_loop
callbacks.set_model(callback_model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\callbacks.py", line 68, in set_model
callback.set_model(model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\tensorflow\python\keras\callbacks.py", line 1509, in set_model
if not model.run_eagerly:
AttributeError: 'Sequential' object has no attribute 'run_eagerly'
```
Also, if you have any tips on how to improve the accuracy it would be appreciated! | 2019/08/29 | [
"https://Stackoverflow.com/questions/57718512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7858253/"
] | The problem is here:
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
```
Do not mix `keras` and `tf.keras` imports; they are **not compatible with each other** and produce weird errors like the ones you are seeing. | It seems that you are mixing imports from `keras` and `tensorflow.keras` (the latter is preferred).
<https://www.pyimagesearch.com/2019/10/21/keras-vs-tf-keras-whats-the-difference-in-tensorflow-2-0/>
>
> And most importantly, going forward all deep learning practitioners
> should switch their code to TensorFlow 2.0 and the tf.keras package.
> The original keras package will still receive bug fixes, but moving
> forward, you should be using tf.keras.
>
>
>
Try with:
```
import tensorflow
Conv2D = tensorflow.keras.layers.Conv2D
MaxPooling2D = tensorflow.keras.layers.MaxPooling2D
Dense = tensorflow.keras.layers.Dense
Flatten = tensorflow.keras.layers.Flatten
Dropout = tensorflow.keras.layers.Dropout
TensorBoard = tensorflow.keras.callbacks.TensorBoard
model = tensorflow.keras.Sequential()
``` |
57,718,512 | I'm trying to try using this model to train on rock, paper, scissor pictures. However, it was trained on 1800 pictures and only has an accuracy of 30-40%. I was then trying to use TensorBoard to see whats going on, but the error in the title appears.
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
model = Sequential()
model.add(Conv2D(256, kernel_size=(4, 4),
activation='relu',
input_shape=(64,64,3)))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
''' here it instantiates the tensorboard '''
tensorboard = TensorBoard(log_dir="C:/Users/bamla/Desktop/RPS project/Logs")
model.compile(loss="sparse_categorical_crossentropy",
optimizer="SGD",
metrics=['accuracy'])
model.summary()
''' Here its fitting the model '''
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
```
This outputs:
```
Traceback (most recent call last):
File "c:/Users/bamla/Desktop/RPS project/Testing.py", line 82, in <module>
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training.py", line 1178, in fit
validation_freq=validation_freq)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training_arrays.py", line 125, in fit_loop
callbacks.set_model(callback_model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\callbacks.py", line 68, in set_model
callback.set_model(model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\tensorflow\python\keras\callbacks.py", line 1509, in set_model
if not model.run_eagerly:
AttributeError: 'Sequential' object has no attribute 'run_eagerly'
```
Also, if you have any tips on how to improve the accuracy it would be appreciated! | 2019/08/29 | [
"https://Stackoverflow.com/questions/57718512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7858253/"
] | I changed `from tensorflow.python.keras.callbacks import TensorBoard`
to `from keras.callbacks import TensorBoard` and it worked for me. | for me, this did the job:
```
from tensorflow.keras import datasets, layers, models
from tensorflow import keras
``` |
57,718,512 | I'm trying to try using this model to train on rock, paper, scissor pictures. However, it was trained on 1800 pictures and only has an accuracy of 30-40%. I was then trying to use TensorBoard to see whats going on, but the error in the title appears.
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
model = Sequential()
model.add(Conv2D(256, kernel_size=(4, 4),
activation='relu',
input_shape=(64,64,3)))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
''' here it instantiates the tensorboard '''
tensorboard = TensorBoard(log_dir="C:/Users/bamla/Desktop/RPS project/Logs")
model.compile(loss="sparse_categorical_crossentropy",
optimizer="SGD",
metrics=['accuracy'])
model.summary()
''' Here its fitting the model '''
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
```
This outputs:
```
Traceback (most recent call last):
File "c:/Users/bamla/Desktop/RPS project/Testing.py", line 82, in <module>
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training.py", line 1178, in fit
validation_freq=validation_freq)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training_arrays.py", line 125, in fit_loop
callbacks.set_model(callback_model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\callbacks.py", line 68, in set_model
callback.set_model(model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\tensorflow\python\keras\callbacks.py", line 1509, in set_model
if not model.run_eagerly:
AttributeError: 'Sequential' object has no attribute 'run_eagerly'
```
Also, if you have any tips on how to improve the accuracy it would be appreciated! | 2019/08/29 | [
"https://Stackoverflow.com/questions/57718512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7858253/"
] | I changed `from tensorflow.python.keras.callbacks import TensorBoard`
to `from keras.callbacks import TensorBoard` and it worked for me. | It seems that you are mixing imports from `keras` and `tensorflow.keras` (the latter is preferred).
<https://www.pyimagesearch.com/2019/10/21/keras-vs-tf-keras-whats-the-difference-in-tensorflow-2-0/>
>
> And most importantly, going forward all deep learning practitioners
> should switch their code to TensorFlow 2.0 and the tf.keras package.
> The original keras package will still receive bug fixes, but moving
> forward, you should be using tf.keras.
>
>
>
Try with:
```
import tensorflow
Conv2D = tensorflow.keras.layers.Conv2D
MaxPooling2D = tensorflow.keras.layers.MaxPooling2D
Dense = tensorflow.keras.layers.Dense
Flatten = tensorflow.keras.layers.Flatten
Dropout = tensorflow.keras.layers.Dropout
TensorBoard = tensorflow.keras.callbacks.TensorBoard
model = tensorflow.keras.Sequential()
``` |
51,664,292 | I'm getting the error below when I'm parsing the xml from the URL in the code. I won't post the XML because it's huge. The link is in the code below.
ERROR:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-70-77e5e1b79ccc> in <module>()
11
12 for child in root.iter('Materia'):
---> 13 if not child.find('EmentaMateria').text is None:
14 ementa = child.find('EmentaMateria').text
15
AttributeError: 'NoneType' object has no attribute 'text'
```
MY CODE:
```
url = 'http://legis.senado.leg.br/dadosabertos/senador/4988/autorias'
import requests
from xml.etree import ElementTree
response = requests.get(url, stream=True)
response.raw.decode_content = True
tree = ElementTree.parse(response.raw)
root = tree.getroot()
for child in root.iter('Materia'):
if child.find('EmentaMateria').text is not None:
ementa = child.find('EmentaMateria').text
for child_IdMateria in child.findall('IdentificacaoMateria'):
anoMateria = child_IdMateria.find('AnoMateria').text
materia = child_IdMateria.find('NumeroMateria').text
siglaMateria = child_IdMateria.find('SiglaSubtipoMateria').text
print('Ano = '+anoMateria+' | Numero Materia = '+materia+' | tipo = '+siglaMateria+' | '+ementa)
```
What am I overlooking here?
Thanks | 2018/08/03 | [
"https://Stackoverflow.com/questions/51664292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1706665/"
] | Instead of checking if `child.find('EmentaMateria').text` is not `None`, you should make sure that `child.find('EmentaMateria')` is not `None` first.
Also, you should store the returning value of `child.find('EmentaMateria')` to avoid calling it twice.
Lastly, you should assign `ementa` a default value if `child.find('EmentaMateria')` is `None`; otherwise your `print` function below will be referencing an un-initialized variable.
Change:
```
if child.find('EmentaMateria').text is not None:
ementa = child.find('EmentaMateria').text
```
to:
```
node = child.find('EmentaMateria')
if node is not None:
ementa = node.text
else:
ementa = None
```
Alternatively, you can use the built-in function `getattr` to do the same without a temporary variable:
```
ementa = getattr(child.find('EmentaMateria'), 'text', None)
``` | If you are using the code to parse an XML file, open the XML file with a text editor and inspect the tags. In my case there were some rogue tags at the end. Once I removed those, the code worked as expected. |
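Putting the `getattr` idea to work on the loop from the question, a hedged sketch that guards every `.find(...)` the same way (`root` as obtained above; element names are taken from the question's code):

```
for child in root.iter('Materia'):
    ementa = getattr(child.find('EmentaMateria'), 'text', None)
    if ementa is None:
        continue  # no abstract for this entry
    for ident in child.findall('IdentificacaoMateria'):
        ano = getattr(ident.find('AnoMateria'), 'text', None) or ''
        materia = getattr(ident.find('NumeroMateria'), 'text', None) or ''
        sigla = getattr(ident.find('SiglaSubtipoMateria'), 'text', None) or ''
        print('Ano = ' + ano + ' | Numero Materia = ' + materia +
              ' | tipo = ' + sigla + ' | ' + ementa)
```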
12,164,692 | So I am new in this field and am not sure how to do this!!But basically here is what i did.
I sshed to somehost.
```
ssh hostname
username: foo
password: bar
```
In one of the directories, there is a huge csv file.. abc.csv
Now, i dont want to copy that file to my local.. but read it from there.
When I asked the folks around, they said that I can write a unix script and get the data in my python program from there.
I am not sure what that means?
Any clues?
Also, i am using windows env.
Thanks | 2012/08/28 | [
"https://Stackoverflow.com/questions/12164692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | So, here are your options:
**1.** Declare your **base class** as `abstract` and some methods as well
This approach has two good points: you will be free to implement common methods in the base class (that is, not all of them need to be `abstract`), while any abstract method **must be** overridden in derived classes. There is one counterpoint (that you may be aware of): you can't instantiate it. That is, you can't do something like:
```
Base obj = new Base();
```
However, you will still be able to do this:
```
Base obj = new Child();
```
**2.** Use **interfaces**
You may declare some interfaces to force your classes to implement some methods. However, the semantics of inheritance and interfaces are quite different. You must decide which is best for you.
IMHO, you would be fine with the first option. | You need to specify an abstract method in Parent:
```
public abstract class Parent
{
public void DoSomething()
{
// Do something here...
}
public abstract void ForceChildToDoSomething();
}
```
This forces the child to implement it:
```
public class Child : Parent
{
public override void ForceChildToDoSomething()
{
// Do something...
}
}
```
You will, however, now have an abstract Parent. So if you want to use the functionality in Parent, you'll need to do something like this:
```
Parent parent = new Child();
parent.DoSomething();
parent.ForceChildToDoSomething();
``` |
12,164,692 | So I am new in this field and am not sure how to do this!!But basically here is what i did.
I sshed to somehost.
```
ssh hostname
username: foo
password: bar
```
In one of the directories, there is a huge csv file.. abc.csv
Now, i dont want to copy that file to my local.. but read it from there.
When I asked the folks around, they said that I can write a unix script and get the data in my python program from there.
I am not sure what that means?
Any clues?
Also, i am using windows env.
Thanks | 2012/08/28 | [
"https://Stackoverflow.com/questions/12164692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | So, here are your options:
**1.** Declare your **base class** as `abstract` and some methods as well
This approach has two good points: you will be free to implement common methods in the base class (that is, not all of them need to be `abstract`), while any abstract method **must be** overridden in derived classes. There is one counterpoint (that you may be aware of): you can't instantiate it. That is, you can't do something like:
```
Base obj = new Base();
```
However, you will still be able to do this:
```
Base obj = new Child();
```
**2.** Use **interfaces**
You may declare some interfaces to force your classes to implement some methods. However, the semantics of inheritance and interfaces are quite different. You must decide which is best for you.
IMHO, you would be fine with the first option. | Yes, abstract:
```
public abstract class Parent
{
protected abstract bool CancelCanExecute(object param);
//more stuff
}
```
It could also be `public`, but not `private`.
Now you can't have a derived class that doesn't either implement `CancelCanExecute` or is itself `abstract`, thereby forcing further derived classes to implement it. |
12,164,692 | So I am new in this field and am not sure how to do this!!But basically here is what i did.
I sshed to somehost.
```
ssh hostname
username: foo
password: bar
```
In one of the directories, there is a huge csv file.. abc.csv
Now, i dont want to copy that file to my local.. but read it from there.
When I asked the folks around, they said that I can write a unix script and get the data in my python program from there.
I am not sure what that means?
Any clues?
Also, i am using windows env.
Thanks | 2012/08/28 | [
"https://Stackoverflow.com/questions/12164692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | So, here are your options:
**1.** Declare your **base class** as `abstract` and some methods as well
This approach has two good points: you will be free to implement common methods in the base class (that is, not all of them need to be `abstract`), while any abstract method **must be** overridden in derived classes. There is one counterpoint (that you may be aware of): you can't instantiate it. That is, you can't do something like:
```
Base obj = new Base();
```
However, you will still be able to do this:
```
Base obj = new Child();
```
**2.** Use **interfaces**
You may declare some interfaces to force your classes to implement some methods. However, the semantics of inheritance and interfaces are quite different. You must decide which is best for you.
IMHO, you would be fine with the first option. | You should define an interface and then your code should accept only objects that implement that interface. While it is very tempting to use `abstract` and to create a common base class, this approach is (almost) wrong by definition (almost).
In C# and other languages that do not allow multiple inheritance, creating a base class just to define an empty 'skeleton' that others must fill, and forcing everyone to use it, is **very limiting**. Everyone will have to inherit from it; thus, for example, they will not be able to reuse their own existing class hierarchies, or will have to create tedious bridges or (...).
If your abstract base class has nothing but one/a few/a dozen/a hundred abstract methods, events, and properties - it should have been an interface, because it only DEFINES the 'common requirements'.
The only reasonable reason to create a new abstract base with common thingies is to actually **provide** the default implementations. Still, with such a base class, an interface should be defined too; the base should implement it and allow overriding, and maybe even mark something as actually abstract -- but still your code should refer to everything via the interface, not the base class. Such abstract bases should be a help/shortcut for implementors, not mandatory. If someone wants to do everything from scratch - he will implement the interface and ignore the base with example code, and your code will still beautifully work with that. Still, all the common code may be provided as a set of static helper classes operating on those very interfaces.
Abstract base classes are actually needed and cannot be supplanted with interfaces in some corner cases, for example when you have to force the derived classes to have a parameterful constructor, or where you yourself are bound to derive from something, e.g. WPF Visual or UIElement or DependencyObject --- the Microsoft design is therefore flawed here a bit, too. They enforced deriving from base classes and it hits the developer in many places (e.g. data model objects from EntityFramework not being DependencyObjects, etc). Still, I think they should have abstracted from that - looking at Visual and friends, there are not that many internal routines that could not be lifted to interfaces. I do not think they did it just 'because'; rather, I think it was about performance and cutting casts/method dispatches.
Please note that everything I said does not exactly fit what you have presented in the question. There, you have already assumed that "the parent/base class will handle XYZ". This means that you are resigning from the interface approach at the very beginning. With an interface, you will only define that click and cancel-click must exist, but you will not be able to enforce/provide the "base implementation". Such things you can do with base classes. Thus, I believe that you should take the open/mixed approach: define an interface, define static reusable handlers, use only the interface, and provide a base class for some "lazy coders":
```
public interface IClickable
{
ICommand CancelCommand { get; }
void CancelClick();
bool CanCancelClick();
}
public static class ClickableDefaultImpl
{
public static void DefaultCancelClick(IClickable obj)
{
        // ... do the common things on the obj
}
public static bool DefaultCanCancelClick(IClickable obj)
{
        // ... do the common things on the obj
        return true; // hypothetical placeholder so the sketch compiles
}
}
public abstract class Clickable : IClickable
{
    public abstract ICommand CancelCommand { get; } // left to the concrete class
    public void CancelClick() { ClickableDefaultImpl.DefaultCancelClick(this); }
    public bool CanCancelClick() { return ClickableDefaultImpl.DefaultCanCancelClick(this); }
}
```
This may seem very bloated, but it is quite open for customization. Though with the open-ness, there is almost no way to enforce that everyone must use the "ClickableDefaultImpl". There are some ways, but they would include even more bloat and time/memory overhead; I think they are not worth describing now.
Remember to estimate who, how, and how much will use this code in future. If it is for one/two/five uses, stick with abstract bases. But if you sense dozens or hundreds of child implementations, you'd better add the little bloat - it may save much time later. |
12,164,692 | So I am new in this field and am not sure how to do this!!But basically here is what i did.
I sshed to somehost.
```
ssh hostname
username: foo
password: bar
```
In one of the directories, there is a huge csv file.. abc.csv
Now, i dont want to copy that file to my local.. but read it from there.
When I asked the folks around, they said that I can write a unix script and get the data in my python program from there.
I am not sure what that means?
Any clues?
Also, i am using windows env.
Thanks | 2012/08/28 | [
"https://Stackoverflow.com/questions/12164692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | You need to specify an abstract method in Parent:
```
public abstract class Parent
{
public void DoSomething()
{
// Do something here...
}
public abstract void ForceChildToDoSomething();
}
```
This forces the child to implement it:
```
public class Child : Parent
{
public override void ForceChildToDoSomething()
{
// Do something...
}
}
```
You will, however, now have an abstract Parent. So if you want to use the functionality in Parent, you'll need to do something like this:
```
Parent parent = new Child();
parent.DoSomething();
parent.ForceChildToDoSomething();
``` | Yes, abstract:
```
public abstract class Parent
{
protected abstract bool CancelCanExecute(object param);
//more stuff
}
```
It could also be `public`, but not `private`.
Now you can't have a derived class that doesn't either implement `CancelCanExecute` or is itself `abstract`, thereby forcing further derived classes to implement it. |
12,164,692 | So I am new in this field and am not sure how to do this!!But basically here is what i did.
I sshed to somehost.
```
ssh hostname
username: foo
password: bar
```
In one of the directories, there is a huge csv file.. abc.csv
Now, i dont want to copy that file to my local.. but read it from there.
When I asked the folks around, they said that I can write a unix script and get the data in my python program from there.
I am not sure what that means?
Any clues?
Also, i am using windows env.
Thanks | 2012/08/28 | [
"https://Stackoverflow.com/questions/12164692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | You need to specify an abstract method in Parent:
```
public abstract class Parent
{
public void DoSomething()
{
// Do something here...
}
public abstract void ForceChildToDoSomething();
}
```
This forces the child to implement it:
```
public class Child : Parent
{
public override void ForceChildToDoSomething()
{
// Do something...
}
}
```
You will, however, now have an abstract Parent. So if you want to use the functionality in Parent, you'll need to do something like this:
```
Parent parent = new Child();
parent.DoSomething();
parent.ForceChildToDoSomething();
``` | You should define an interface and then your code should accept only objects that implement that interface. While it is very tempting to use `abstract` and to create a common base class, this approach is (almost) wrong by definition (almost).
In C# and other languages that do not allow multiple inheritance, creating a base class just to define an empty 'skeleton' that others must fill, and forcing everyone to use it, is **very limiting**. Everyone will have to inherit from it; thus, for example, they will not be able to reuse their own existing class hierarchies, or will have to create tedious bridges or (...).
If your abstract base class has nothing but one/a few/a dozen/a hundred abstract methods, events, and properties - it should have been an interface, because it only DEFINES the 'common requirements'.
The only reasonable reason to create a new abstract base with common thingies is to actually **provide** the default implementations. Still, with such a base class, an interface should be defined too; the base should implement it and allow overriding, and maybe even mark something as actually abstract -- but still your code should refer to everything via the interface, not the base class. Such abstract bases should be a help/shortcut for implementors, not mandatory. If someone wants to do everything from scratch - he will implement the interface and ignore the base with example code, and your code will still beautifully work with that. Still, all the common code may be provided as a set of static helper classes operating on those very interfaces.
Abstract base classes are actually needed and cannot be supplanted with interfaces in some corner cases, for example when you have to force the derived classes to have a parameterful constructor, or where you yourself are bound to derive from something, e.g. WPF Visual or UIElement or DependencyObject --- the Microsoft design is therefore flawed here a bit, too. They enforced deriving from base classes and it hits the developer in many places (e.g. data model objects from EntityFramework not being DependencyObjects, etc). Still, I think they should have abstracted from that - looking at Visual and friends, there are not that many internal routines that could not be lifted to interfaces. I do not think they did it just 'because'; rather, I think it was about performance and cutting casts/method dispatches.
Please note that everything I said does not exactly fit what you have presented in the question. There, you have already assumed that "the parent/base class will handle XYZ". This means that you are resigning from the interface approach at the very beginning. With an interface, you will only define that click and cancel-click must exist, but you will not be able to enforce/provide the "base implementation". Such things you can do with base classes. Thus, I believe that you should take the open/mixed approach: define an interface, define static reusable handlers, use only the interface, and provide a base class for some "lazy coders":
```
public interface IClickable
{
ICommand CancelCommand { get; }
void CancelClick();
bool CanCancelClick();
}
public static class ClickableDefaultImpl
{
public static void DefaultCancelClick(IClickable obj)
{
        // ... do the common things on the obj
}
public static bool DefaultCanCancelClick(IClickable obj)
{
        // ... do the common things on the obj
        return true; // hypothetical placeholder so the sketch compiles
}
}
public abstract class Clickable : IClickable
{
    public abstract ICommand CancelCommand { get; } // left to the concrete class
    public void CancelClick() { ClickableDefaultImpl.DefaultCancelClick(this); }
    public bool CanCancelClick() { return ClickableDefaultImpl.DefaultCanCancelClick(this); }
}
```
This may seem very bloated, but it is quite open for customization. Though with the open-ness, there is almost no way to enforce that everyone must use the "ClickableDefaultImpl". There are some ways, but they would include even more bloat and time/memory overhead; I think they are not worth describing now.
Remember to estimate who, how, and how much will use this code in future. If it is for one/two/five uses, stick with abstract bases. But if you sense dozens or hundreds of child implementations, you'd better add the little bloat - it may save much time later. |
28,454,359 | I need to process a large text file containing information on scientific publications, exported from the ScienceDirect search page. I want to store the data in an array of arrays, so that each paper is an array, and all papers are stored in a larger array.
The good part is that each line corresponds to the value I want to put in the array, and that there is an empty line between papers. The problem is that each paper has a different number of lines associated with it, ranging from 2 to 6. An example of the data would be:
```
[Authors, title, journal, date]
[(digital object identifier)]
[(link to ScienceDirect website)]
[Abstract: Abstract]
[It has been shown ...]
[Authors, title, journal, date]
[(digital object identifier)]
[(link to ScienceDirect website)]
[Abstract: Abstract]
[It has been shown ...]
[Keywords]
[Authors, title, journal, date]
[(digital object identifier)]
```
and so on. The desired data structure would be ArrayAllPapers [ Paper-1 , Paper-2 , ... ,
Paper-n ], where each paper is an array Paper-1 [ author-line , doi-line , etc ]
I am able to read the file into python line by line as an array, but then run up against the problem of slicing the list based on a list item (in this case '\n'). I have found solutions to this problem for datasets with equal line spacing for objects, most of them written for lists, but none that work for unequal distribution. Perhaps I need to write to the text file first to fill in 'missing' rows to create an equal distribution?
I am still learning to work with Python (some experience with MatLab), so please excuse me if there is an obvious solution for this. I have tried finding a solution but have come up empty.
Any help would be highly appreciated!
For reference, the code I use now to enter the text file into an array:
```
import re, numpy
with open("test-abstracts-short.txt", "r") as text:
array = []
for line in text:
array.append(line)
``` | 2015/02/11 | [
"https://Stackoverflow.com/questions/28454359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4554385/"
] | Since you want to skip blank lines, the easiest thing to do is to check if a line is blank.
```
x = []
with open('my_file.txt', 'r') as f:
    temp_list = []
    for line in f:
        if line.strip():  # line is not blank
            temp_list.append(line)
        else:  # line is blank, i.e., it contains only newlines and/or whitespace
            if temp_list:  # check if temp_list contains any items
                x.append(temp_list)
            temp_list = []
    if temp_list:  # flush the last paper if the file doesn't end with a blank line
        x.append(temp_list)
If the first lines are mandatory, you can try to parse them and for each article create a structure like `{'author': 'Name', 'digital_object_identifier': 'Value'}` and so on.
Then you can try to parse the most common keywords and append them as fields. So your article would look like this:
`{'author': 'Name', 'digital_object_identifier': 'Value', 'keyword1': 'Value', 'keyword2': 'Value', 'keyword3': 'Value'}`.
Then you can add all unparsed keywords to some specific field (so as not to lose data):
`{'author': 'Name', 'digital_object_identifier': 'Value', 'keyword1': 'Value', 'keyword2': 'Value', 'keyword3': 'Value', 'other_keys': {'key': 'value'}}`.
So, in other words, you can split your document into mandatory and non-mandatory fields. |
44,851,342 | How to convert a python dictionary `d = {1:10, 2:20, 3:30, 4:30}` to `{10: [1], 20: [2], 30: [3, 4]}`?
I need to reverse a dictionary the values should become the keys of another dictionary and the values should be key in a list i.e. also in the sorted matter. | 2017/06/30 | [
"https://Stackoverflow.com/questions/44851342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8238359/"
] | ```
d = {1:10, 2:20, 3:30, 4:30}
inv = {}
for key, val in d.iteritems():
inv[val] = inv.get(val, []) + [key]
```
Try this! | ```
o = {}
for k,v in d.iteritems():
if v in o:
o[v].append(k)
else:
o[v] = [k]
```
`o = {10: [1], 20: [2], 30: [3, 4]}` |
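Note that `iteritems()` exists only in Python 2; a Python 3 sketch of the same idea (same sample dict) simply swaps in `items()`:

```
d = {1: 10, 2: 20, 3: 30, 4: 30}
inv = {}
for key, val in d.items():  # Python 3: items() replaces iteritems()
    inv[val] = inv.get(val, []) + [key]
print(inv)  # {10: [1], 20: [2], 30: [3, 4]}
```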
44,851,342 | How to convert a python dictionary `d = {1:10, 2:20, 3:30, 4:30}` to `{10: [1], 20: [2], 30: [3, 4]}`?
I need to reverse a dictionary the values should become the keys of another dictionary and the values should be key in a list i.e. also in the sorted matter. | 2017/06/30 | [
"https://Stackoverflow.com/questions/44851342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8238359/"
] | This use case is easily handled by [*dict.setdefault()*](https://docs.python.org/3/library/stdtypes.html#dict.setdefault)
```
>>> d = {1:10, 2:20, 3:30, 4:30}
>>> e = {}
>>> for x, y in d.items():
e.setdefault(y, []).append(x)
>>> e
{10: [1], 20: [2], 30: [3, 4]}
```
An alternative is to use [collections.defaultdict](https://docs.python.org/3/library/collections.html#collections.defaultdict). This has a slightly more complex set-up, but the inner-loop access is simpler and faster than the *setdefault* approach. Also, it returns a dict subclass rather than a plain dict:
```
>>> e = defaultdict(list)
>>> for x, y in d.items():
e[y].append(x)
>>> e
defaultdict(<class 'list'>, {30: [3, 4], 10: [1], 20: [2]})
``` | ```
o = {}
for k,v in d.iteritems():
if v in o:
o[v].append(k)
else:
o[v] = [k]
```
`o = {10: [1], 20: [2], 30: [3, 4]}` |
44,851,342 | How to convert a python dictionary `d = {1:10, 2:20, 3:30, 4:30}` to `{10: [1], 20: [2], 30: [3, 4]}`?
I need to reverse a dictionary the values should become the keys of another dictionary and the values should be key in a list i.e. also in the sorted matter. | 2017/06/30 | [
"https://Stackoverflow.com/questions/44851342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8238359/"
] | Reversing keys and values in a python dict is a bit tricky. You should keep in mind that a python dict must have `unique` keys.
So, if you know that reversing the keys and values of your current dict will still yield unique keys, you can use a simple `dict comprehension` like this example:
```
{v:k for k,v in my_dict.items()}
```
However, you can use `groupby` from the `itertools` module; note that `groupby` only groups *consecutive* items, so the items are sorted by value first. For example:
```
from itertools import groupby
a = {1:10, 2:20, 3:30, 4:30}
b = {k: [j for j, _ in v] for k, v in groupby(sorted(a.items(), key=lambda x: x[1]), lambda x: x[1])}
print(b)
>>> {10: [1], 20: [2], 30: [3, 4]}
``` | ```
o = {}
for k,v in d.iteritems():
if v in o:
o[v].append(k)
else:
o[v] = [k]
```
`o = {10: [1], 20: [2], 30: [3, 4]}` |
44,851,342 | How to convert a python dictionary `d = {1:10, 2:20, 3:30, 4:30}` to `{10: [1], 20: [2], 30: [3, 4]}`?
I need to reverse a dictionary the values should become the keys of another dictionary and the values should be key in a list i.e. also in the sorted matter. | 2017/06/30 | [
"https://Stackoverflow.com/questions/44851342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8238359/"
] | This use case is easily handled by [*dict.setdefault()*](https://docs.python.org/3/library/stdtypes.html#dict.setdefault)
```
>>> d = {1:10, 2:20, 3:30, 4:30}
>>> e = {}
>>> for x, y in d.items():
e.setdefault(y, []).append(x)
>>> e
{10: [1], 20: [2], 30: [3, 4]}
```
An alternative is to use [collections.defaultdict](https://docs.python.org/3/library/collections.html#collections.defaultdict). This has a slightly more complex set-up, but the inner-loop access is simpler and faster than the *setdefault* approach. Also, it returns a dict subclass rather than a plain dict:
```
>>> e = defaultdict(list)
>>> for x, y in d.items():
e[y].append(x)
>>> e
defaultdict(<class 'list'>, {30: [3, 4], 10: [1], 20: [2]})
``` | ```
d = {1:10, 2:20, 3:30, 4:30}
inv = {}
for key, val in d.iteritems():
inv[val] = inv.get(val, []) + [key]
```
Try this! |
44,851,342 | How to convert a python dictionary `d = {1:10, 2:20, 3:30, 4:30}` to `{10: [1], 20: [2], 30: [3, 4]}`?
I need to reverse a dictionary the values should become the keys of another dictionary and the values should be key in a list i.e. also in the sorted matter. | 2017/06/30 | [
"https://Stackoverflow.com/questions/44851342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8238359/"
] | Reversing keys and values in a python dict is a bit tricky. You should keep in mind that a python dict must have `unique` keys.
So, if you know that reversing the keys and values of your current dict will still yield unique keys, you can use a simple `dict comprehension` like this example:
```
{v:k for k,v in my_dict.items()}
```
However, you can use `groupby` from the `itertools` module; note that `groupby` only groups *consecutive* items, so the items are sorted by value first. For example:
```
from itertools import groupby
a = {1:10, 2:20, 3:30, 4:30}
b = {k: [j for j, _ in v] for k, v in groupby(sorted(a.items(), key=lambda x: x[1]), lambda x: x[1])}
print(b)
>>> {10: [1], 20: [2], 30: [3, 4]}
``` | ```
d = {1:10, 2:20, 3:30, 4:30}
inv = {}
for key, val in d.iteritems():
inv[val] = inv.get(val, []) + [key]
```
Try this! |
29,385,340 | I'm trying to find all the divisors ("i" in my case) of a given number ("a" in my case) with no remainder (a % i == 0). I'm running a loop that goes trough all the vales of i starting from 1 up to the value of a. The problem is that only first 2 products of a % i == 0 are taken into account. The rest is left out. Why is that?
Here is the code in python3:
```
a = 999
i = 1
x = 0
d = []
while (i < a):
x = a / i
if(x % i == 0):
d.append(i)
i += 1
print (d)
```
The output of the code is:
```
[1, 3]
```
instead of listing all the divisors.
I have checked for different values of a and can't find the error. | 2015/04/01 | [
"https://Stackoverflow.com/questions/29385340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4262683/"
] | The behavior of the script is correct. See for yourself:

I think the problem is in your logic; what you are trying to achieve is:
```
a = 999
i = 1
d = []
while (i < a):
if(a % i == 0):
d.append(i)
i += 1
print (d)
```
Outputs:
```
[1, 3, 9, 27, ...]
``` | To complement Anton's answer, a more Pythonic way to loop would be:
```
a, d = 999, []
for i in range(1, a):
if a%i == 0:
d.append(i)
```
You can also take advantage of the fact that objects have a [Boolean value](https://docs.python.org/3.4/reference/datamodel.html#object.__bool__):
```
if not a%i:
```
Or you can use a list comprehension:
```
d = [i for i in range(1, a) if not a%i]
``` |
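If `a` gets large, trial division only needs to run up to the square root, since divisors come in pairs. A hedged sketch (assumes Python 3.8+ for `math.isqrt`; unlike the `while i < a` loop above it also includes `a` itself):

```
import math

def divisors(a):
    d = []
    for i in range(1, math.isqrt(a) + 1):
        if a % i == 0:       # i divides a ...
            d.append(i)
            if i != a // i:  # ... and so does the cofactor a // i
                d.append(a // i)
    return sorted(d)

print(divisors(999))  # [1, 3, 9, 27, 37, 111, 333, 999]
```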
41,690,010 | [](https://i.stack.imgur.com/FnX1O.png)In python selenium, how to create xpath for below code which needs only id and class:
```
<button type="button" id="ext-gen756" class=" x-btn-text">Save</button>
```
And I also need to select Global ID from below drop-down without clicking it.
```
<div class="x-combo-list-item">Global ID</div>
```
My below solution is not working-
```
//div[@class='x-combo-list-item']/div[contains(.,'Global ID')]
```
I do not want to mention `droplist` sequence number like-
```
//div[@class='x-combo-list-item']/div[1]
``` | 2017/01/17 | [
"https://Stackoverflow.com/questions/41690010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5907308/"
] | If you want to club `id` and `class` together in your `xpath` try like this-
```
driver.find_element_by_xpath('//button[@id="ext-gen756"][@class=" x-btn-text"]')
```
You can also try the same using `AND` -
```
driver.find_element_by_xpath('//button[@id="ext-gen756" and @class=" x-btn-text"]')
```
**EDITED**
Your `xpath` seems incorrect. Use the following -
```
driver.find_element_by_xpath('//div[@class="x-combo-list-item"][contains(.,"Global ID")]')
``` | Just answering my own question after finally having a look at this again. The question was posted when I was new to xpath.
```
<button type="button" id="ext-gen756" class=" x-btn-text">Save</button>
```
in terms of id and class:
```
driver.find_element_by_xpath("//button[@id='ext-gen756'][@class=' x-btn-text']")
```
Also, sometimes IDs are dynamic and change on every reload of the page; then you may try:
```
driver.find_element_by_xpath("//button[@type='Save'][contains(@id,'ext-gen')][@class=' x-btn-text']")
```
Here I have used @type and the contains() option for @id, since the prefix (ext-gen) usually remains the same for dynamic IDs. |
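For the drop-down item, a hedged sketch using the same (older) `find_element_by_xpath` API as above; matching the exact text avoids depending on the item's position in the list:

```
option = driver.find_element_by_xpath(
    "//div[@class='x-combo-list-item' and text()='Global ID']")
option.click()  # or just read option.text if you only need the value
```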
5,048,217 | i have some data stored in a .txt file in this format:
```
----------|||||||||||||||||||||||||-----------|||||||||||
1029450386abcdefghijklmnopqrstuvwxy0293847719184756301943
1020414646canBeFollowedBySpaces 3292532113435532419963
```
don't ask...
i have many lines of this, and i need a way to add more digits to the end of a particular line.
i've written code to find the line i want, but im stumped as to how to add 11 characters to the end of it. i've looked around, this site has been helpful with some other issues i've run into, but i can't seem to find what i need for this.
it is important that the line retain its position in the file, and its contents in their current order.
using python3.1, how would you turn this:
```
1020414646canBeFollowedBySpaces 3292532113435532419963
```
into
```
1020414646canBeFollowedBySpaces 329253211343553241996301846372998
``` | 2011/02/19 | [
"https://Stackoverflow.com/questions/5048217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/623985/"
] | As a general principle, there's no shortcut to "inserting" new data in the middle of a text file. You will need to make a copy of the entire original file in a new file, modifying your desired line(s) of text on the way.
For example:
```
with open("input.txt") as infile:
with open("output.txt", "w") as outfile:
for s in infile:
s = s.rstrip() # remove trailing newline
if "target" in s:
s += "0123456789"
print(s, file=outfile)
os.rename("input.txt", "input.txt.original")
os.rename("output.txt", "input.txt")
``` | Copy the file, line by line, to another file. When you get to the line that needs extra chars then add them before writing. |
5,048,217 | i have some data stored in a .txt file in this format:
```
----------|||||||||||||||||||||||||-----------|||||||||||
1029450386abcdefghijklmnopqrstuvwxy0293847719184756301943
1020414646canBeFollowedBySpaces 3292532113435532419963
```
don't ask...
i have many lines of this, and i need a way to add more digits to the end of a particular line.
i've written code to find the line i want, but im stumped as to how to add 11 characters to the end of it. i've looked around, this site has been helpful with some other issues i've run into, but i can't seem to find what i need for this.
it is important that the line retain its position in the file, and its contents in their current order.
using python3.1, how would you turn this:
```
1020414646canBeFollowedBySpaces 3292532113435532419963
```
into
```
1020414646canBeFollowedBySpaces 329253211343553241996301846372998
``` | 2011/02/19 | [
"https://Stackoverflow.com/questions/5048217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/623985/"
] | As a general principle, there's no shortcut to "inserting" new data in the middle of a text file. You will need to make a copy of the entire original file in a new file, modifying your desired line(s) of text on the way.
For example:
```
with open("input.txt") as infile:
with open("output.txt", "w") as outfile:
for s in infile:
s = s.rstrip() # remove trailing newline
if "target" in s:
s += "0123456789"
print(s, file=outfile)
os.rename("input.txt", "input.txt.original")
os.rename("output.txt", "input.txt")
] | Check out the [fileinput](http://docs.python.org/py3k/library/fileinput.html#module-fileinput) module, it can do sort of "inplace" edits with files, though I believe temporary files are still involved in the internal process.
```
import fileinput
for line in fileinput.input('input.txt', inplace=1, backup='.orig'):
if line.startswith('1020414646canBeFollowedBySpaces'):
        line = line.rstrip() + '01846372998\n'
print(line, end='')
```
The `print` now prints to the file instead of the console.
You might want to back up your original file before editing. |
5,048,217 | i have some data stored in a .txt file in this format:
```
----------|||||||||||||||||||||||||-----------|||||||||||
1029450386abcdefghijklmnopqrstuvwxy0293847719184756301943
1020414646canBeFollowedBySpaces 3292532113435532419963
```
don't ask...
i have many lines of this, and i need a way to add more digits to the end of a particular line.
i've written code to find the line i want, but im stumped as to how to add 11 characters to the end of it. i've looked around, this site has been helpful with some other issues i've run into, but i can't seem to find what i need for this.
it is important that the line retain its position in the file, and its contents in their current order.
using python3.1, how would you turn this:
```
1020414646canBeFollowedBySpaces 3292532113435532419963
```
into
```
1020414646canBeFollowedBySpaces 329253211343553241996301846372998
``` | 2011/02/19 | [
"https://Stackoverflow.com/questions/5048217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/623985/"
] | As a general principle, there's no shortcut to "inserting" new data in the middle of a text file. You will need to make a copy of the entire original file in a new file, modifying your desired line(s) of text on the way.
For example:
```
with open("input.txt") as infile:
with open("output.txt", "w") as outfile:
for s in infile:
s = s.rstrip() # remove trailing newline
if "target" in s:
s += "0123456789"
print(s, file=outfile)
os.rename("input.txt", "input.txt.original")
os.rename("output.txt", "input.txt")
``` | ```
target_chain = b'1020414646canBeFollowedBySpaces 3292532113435532419963'  # bytes literal: the file is opened in binary mode
to_add = b'01846372998'
with open('zaza.txt','rb+') as f:
ch = f.read()
x = ch.find(target_chain)
f.seek(x + len(target_chain),0)
f.write(to_add)
f.write(ch[x + len(target_chain):])
```
In this method it's absolutely obligatory to open the file in binary mode **'b'**, because text mode's universal-newline translation (enabled by default) would make the byte offsets used by seek() wrong; in Python 3 it also means the two strings above must be bytes literals.
The **'+'** in the mode is what allows the writing as well as the reading.
In this method, what is before the target\_chain in the file remains untouched. And what is after the target\_chain is shifted ahead. As said by Greg Hewgill, there is no possibility to move apart bits on a hard disk to insert new bits in the middle.
Evidently, if the file is very big, reading all of its content in **ch** could be too much memory consuming and the algorithm should then be changed: reading line after line until the line containing the target\_chain, and then reading the next line before inserting, and then continuing to do "reading the next line - re-writing on the current line" until the end of the file in order to shift progressively the content from the line concerned with addition.
You see what I mean... |
5,048,217 | i have some data stored in a .txt file in this format:
```
----------|||||||||||||||||||||||||-----------|||||||||||
1029450386abcdefghijklmnopqrstuvwxy0293847719184756301943
1020414646canBeFollowedBySpaces 3292532113435532419963
```
don't ask...
i have many lines of this, and i need a way to add more digits to the end of a particular line.
i've written code to find the line i want, but im stumped as to how to add 11 characters to the end of it. i've looked around, this site has been helpful with some other issues i've run into, but i can't seem to find what i need for this.
it is important that the line retain its position in the file, and its contents in their current order.
using python3.1, how would you turn this:
```
1020414646canBeFollowedBySpaces 3292532113435532419963
```
into
```
1020414646canBeFollowedBySpaces 329253211343553241996301846372998
``` | 2011/02/19 | [
"https://Stackoverflow.com/questions/5048217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/623985/"
] | Check out the [fileinput](http://docs.python.org/py3k/library/fileinput.html#module-fileinput) module, it can do sort of "inplace" edits with files, though I believe temporary files are still involved in the internal process.
```
import fileinput
for line in fileinput.input('input.txt', inplace=1, backup='.orig'):
if line.startswith('1020414646canBeFollowedBySpaces'):
        line = line.rstrip() + '01846372998\n'
print(line, end='')
```
The `print` now prints to the file instead of the console.
You might want to back up your original file before editing. | Copy the file, line by line, to another file. When you get to the line that needs extra chars then add them before writing. |
5,048,217 | i have some data stored in a .txt file in this format:
```
----------|||||||||||||||||||||||||-----------|||||||||||
1029450386abcdefghijklmnopqrstuvwxy0293847719184756301943
1020414646canBeFollowedBySpaces 3292532113435532419963
```
don't ask...
i have many lines of this, and i need a way to add more digits to the end of a particular line.
i've written code to find the line i want, but im stumped as to how to add 11 characters to the end of it. i've looked around, this site has been helpful with some other issues i've run into, but i can't seem to find what i need for this.
it is important that the line retain its position in the file, and its contents in their current order.
using python3.1, how would you turn this:
```
1020414646canBeFollowedBySpaces 3292532113435532419963
```
into
```
1020414646canBeFollowedBySpaces 329253211343553241996301846372998
``` | 2011/02/19 | [
"https://Stackoverflow.com/questions/5048217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/623985/"
] | ```
target_chain = b'1020414646canBeFollowedBySpaces 3292532113435532419963'  # bytes literal: the file is opened in binary mode
to_add = b'01846372998'
with open('zaza.txt','rb+') as f:
ch = f.read()
x = ch.find(target_chain)
f.seek(x + len(target_chain),0)
f.write(to_add)
f.write(ch[x + len(target_chain):])
```
In this method it's absolutely obligatory to open the file in binary mode **'b'**, because text mode's universal-newline translation (enabled by default) would make the byte offsets used by seek() wrong; in Python 3 it also means the two strings above must be bytes literals.
The **'+'** in the mode is what allows the writing as well as the reading.
In this method, what is before the target\_chain in the file remains untouched. And what is after the target\_chain is shifted ahead. As said by Greg Hewgill, there is no possibility to move apart bits on a hard disk to insert new bits in the middle.
Evidently, if the file is very big, reading all of its content in **ch** could be too much memory consuming and the algorithm should then be changed: reading line after line until the line containing the target\_chain, and then reading the next line before inserting, and then continuing to do "reading the next line - re-writing on the current line" until the end of the file in order to shift progressively the content from the line concerned with addition.
You see what I mean... | Copy the file, line by line, to another file. When you get to the line that needs extra chars then add them before writing. |
2,541,954 | I basically want to be able to:
* Write a few functions in python (with the minimum amount of extra meta data)
* Turn these functions into a web service (with the minimum of effort / boiler plate)
* Automatically generate some javascript functions / objects for rpc (this should prevent me from doing as many stupid things as possible like mistyping method names, forgetting the names of methods, passing the wrong number of arguments)
**Example**
python:
```
def hello_world():
return "Hello world"
```
javascript:
```
...
<!-- This file is automatically generated (either dynamically or statically) -->
<script src="http://myurl.com/webservice/client_side_javascript"> </script>
...
<script>
$('#button').click(function () {
    hello_world(function (data) { $('#label').text(data); });
});
</script>
```
A bit of research has shown me some approaches that come close to this:
* Automatic generation of json-rpc services from functions with a little boiler plate code in python and then using jquery and json to do the calls (still easy to make mistakes with method names - still need to be aware of urls when calling, very irritating to write these calls yourself in the firebug shell)
* Using a library like soaplib to generate wsdl from python (by adding copious type information). And then somehow convert this into javascript (not sure if there is even a library to do this)
But are there any approaches closer to what I want? | 2010/03/29 | [
"https://Stackoverflow.com/questions/2541954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/47741/"
] | Yes there is: [Pyjamas](http://pyjs.org/). Some people bill it as the "[GWT](http://code.google.com/webtoolkit/) for Python". | It looks like using a JavaScript XML-RPC client (there is a jQuery plugin for this) together with an XML-RPC server is a good way to go.
The jQuery plugin will introspect your RPC service and populate the method names, making it impossible to mistype a method name without getting an early warning. It will not, however, check the number of arguments that you pass, or their types.
There doesn't seem to be equivalent introspection support for JSON-RPC (or rather, there doesn't seem to be a consistent standard for it). This approach can also be used with Django.
I've put together some example code and uploaded it [here](http://tat.wright.name/xml-rpc) (I hope that linking to one's blog posts isn't considered terrible form - a brief search of the internet didn't seem to suggest it was)... |
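For reference, a minimal server-side sketch using the standard library's XML-RPC server with introspection enabled, which is what such clients rely on (Python 3 module path shown; in Python 2 the class lives in the `SimpleXMLRPCServer` module; the port is arbitrary):

```
from xmlrpc.server import SimpleXMLRPCServer

def hello_world():
    return "Hello world"

server = SimpleXMLRPCServer(('localhost', 8080))
server.register_introspection_functions()  # exposes system.listMethods etc.
server.register_function(hello_world)
server.serve_forever()
```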
36,510,431 | I am very new to python and programming in general and I want to print out the string "forward" whenever i press "w" on the keyboard. It is a test which I will transform into a remote control for a motorized vehicle.
```
while True:
    if raw_input("") == "w":
        print "forward"
```
Why does it just print out every key I type? | 2016/04/08 | [
"https://Stackoverflow.com/questions/36510431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4909346/"
] | In Python 2.x the raw\_input function will display all characters pressed, and return upon receiving a newline. If you want different behaviour you'll have to use a different function. Here's a portable version of getch for Python; it will return every key press:
```
# Copied from: stackoverflow.com/questions/510357/python-read-a-single-character-from-the-user
def _find_getch():
    try:
        import termios
    except ImportError:
        # Non-POSIX. Return msvcrt's (Windows') getch.
        import msvcrt
        return msvcrt.getch

    # POSIX system. Create and return a getch that manipulates the tty.
    import sys, tty
    def _getch():
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)
            ch = sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
        return ch

    return _getch
getch = _find_getch()
```
It can be used like so:
```
while True:
    if getch() == "w":
        print "forward"
``` | `raw_input` reads an entire line of input. The line you're inputting is made visible to you, and you can do things like type some text:
```
aiplanes
```
go left a few characters to fix your typo:
```
airplanes
```
go back to the end and delete a character because you didn't mean to make it plural:
```
airplane
```
and then hit `Enter`, and `raw_input` will return `"airplane"`. It doesn't just return immediately when you hit a keyboard key.
---
If you *want* to read individual keys, you'll need to use lower-level terminal control routines to take input. On Unix, the [`curses`](https://docs.python.org/2/library/curses.html) module would be an appropriate tool; I'm not sure what you'd use on Windows. I haven't done this before, but on Unix, I think you'd need to set the terminal to raw or cbreak mode and take input with `window.getkey()` or `window.getch()`. You might also have to turn off echoing with `curses.noecho()`; I'm not sure whether that's included in raw/cbreak mode. |
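An untested sketch of that curses approach on Unix; `curses.wrapper` takes care of entering and leaving cbreak/noecho mode:

```
import curses

def main(stdscr):
    stdscr.addstr("press w for forward, q to quit\n")
    while True:
        key = stdscr.getkey()  # returns a single keypress
        if key == 'w':
            stdscr.addstr("forward\n")
        elif key == 'q':
            break

curses.wrapper(main)
```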
74,134,047 | I need some help recursively searching a python dict that contains nested lists.
I have a structure like the below example. The value of key "c" is a list of one or more dicts. The structure can be nested multiple times (as you can see in the second item), but the pattern is the same. In all likelihood, the nesting will not be more than 5 levels deep.
My objective (in this example) is to find all occurrences of ref = 'hij789', **no matter where they occur** (however deep they are nested) and then add the missing 'b' = 'something' to each occurrence.
```
{
    'ref': 'abc123',
    'a': 'something',
    'b': 'something',
    'c': [{
        'ref': 'def456',
        'a': 'something',
        'b': 'something',
        'c': [{
            'ref': 'hij789',
            'a': 'something'
        }]
    },{
        'ref': 'klm012',
        'a': 'something',
        'b': 'something',
        'c': [{
            'ref': 'nop345',
            'a': 'something',
            'b': 'something',
            'c': [{
                'ref': 'hij789',
                'a': 'something'
            }]
        }]
    },{
        'ref': 'qrs678',
        'a': 'something',
        'b': 'something',
        'c': [{
            'ref': 'tuv901',
            'a': 'something'
        }]
    }]
}
```
I first tried something like this, but it of course does not search beyond the first nested dict:
```
l = next((n for n in mydict['c'] if n['ref'] == 'myref'), None)
l['b'] = 'somevalue'
```
I also tried a variation of this, but could not make it work:
[Recursive list inside dict? - python](https://stackoverflow.com/questions/63646525/recursive-list-inside-dict-python)
Is there a relatively straightforward way to achieve this?
Thanks. | 2022/10/20 | [
"https://Stackoverflow.com/questions/74134047",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20287604/"
] | (Just posting this here from Matt Ward's comment as I cannot mark the comment as the answer.)
As the comment suggests, Visual Studio for Mac seems to only use launchSettings.json for Asp.Net projects. I was working with a Console App.
Visual Studio for PC uses launchSettings.json for console applications too, but the Mac version does not. | Well, you can try to change the properties of the file and how Visual Studio treats it during build.
1. Right-click on `launchSettings.json` and choose `Properties`
2. Set the below properties as follows:
```
Build action -> Content
Copy to directory -> Copy if newer
```
See if this helps. |
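For the nested-dict search itself, a minimal recursive sketch (assuming, as in the example, that nesting only ever happens through the 'c' key):

```
def add_b(node, target_ref, value):
    # patch every dict whose 'ref' matches, however deeply nested
    if node.get('ref') == target_ref:
        node.setdefault('b', value)
    for child in node.get('c', []):
        add_b(child, target_ref, value)

add_b(mydict, 'hij789', 'something')
```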
74,134,047 | I need some help recursively searching a python dict that contains nested lists.
I have a structure like the below example. The value of key "c" is a list of one or more dicts. The structure can be nested multiple times (as you can see in the second item), but the pattern is the same. In all likelihood, the nesting will not be more than 5 levels deep.
My objective (in this example) is to find all occurrences of ref = 'hij789', **no matter where they occur** (however deep they are nested) and then add the missing 'b' = 'something' to each occurrence.
```
{
    'ref': 'abc123',
    'a': 'something',
    'b': 'something',
    'c': [{
        'ref': 'def456',
        'a': 'something',
        'b': 'something',
        'c': [{
            'ref': 'hij789',
            'a': 'something'
        }]
    },{
        'ref': 'klm012',
        'a': 'something',
        'b': 'something',
        'c': [{
            'ref': 'nop345',
            'a': 'something',
            'b': 'something',
            'c': [{
                'ref': 'hij789',
                'a': 'something'
            }]
        }]
    },{
        'ref': 'qrs678',
        'a': 'something',
        'b': 'something',
        'c': [{
            'ref': 'tuv901',
            'a': 'something'
        }]
    }]
}
```
I first tried something like this, but it of course does not search beyond the first nested dict:
```
l = next((n for n in mydict['c'] if n['ref'] == 'myref'), None)
l['b'] = 'somevalue'
```
I also tried a variation of this, but could not make it work:
[Recursive list inside dict? - python](https://stackoverflow.com/questions/63646525/recursive-list-inside-dict-python)
Is there a relatively straightforward way to achieve this?
Thanks. | 2022/10/20 | [
"https://Stackoverflow.com/questions/74134047",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20287604/"
] | (Just posting this here from Matt Ward's comment as I cannot mark the comment as the answer.)
As the comment suggests, Visual Studio for Mac seems to only use launchSettings.json for Asp.Net projects. I was working with a Console App.
Visual Studio for PC uses launchSettings.json for console applications too, but the Mac version does not. | Change the top line of your project file to the following:
`<Project Sdk="Microsoft.NET.Sdk.Web">`
(It was probably missing the `.Web` namespace) |
44,756,447 | I've got a lot of commands running in impala shell, in the middle of them I now have a need to run a python script. The script itself is fine when run from outside the impala shell.
When I run from within the impala shell using ! or "shell" (documentation found [here](https://www.cloudera.com/documentation/enterprise/5-9-x/topics/impala_shell_commands.html "here")) it changes the commands to be fully lower case.
The path to the script itself would be something like this: **/home/DOMAIN\_USERS/somemorefolders/python/script.py**
so in my impala shell I'm running: `!/home/DOMAIN_USERS/somemorefolders/python/script.py`
the error I get back is
>
> sh: /home/domain\_users/somemorefolders/python/script.py: No such file
> or directory
>
>
>
Is there any way to force it to not make it into lower case? I've tried putting both single & double quotes round the path but that makes no difference.
I guess if there's no way I'll have to come out of the impala shell, run the python bit, then go back in. It's just a bit more work when I figured the "shell" command in the impala shell is there for exactly this purpose. | 2017/06/26 | [
"https://Stackoverflow.com/questions/44756447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5065581/"
] | This is caused by a known bug [IMPALA-4664](https://issues.apache.org/jira/browse/IMPALA-4664).
A workaround is to leave a space after "!". Can you try this (note the space):
! /home/DOMAIN\_USERS/somemorefolders/python/script.py | Thanks to [@BoboDarph](https://stackoverflow.com/users/8085234/bobodarph "bobodarph") for help in getting there.
I was able to use `!~/somemorefolders/python/script.py` as I could get there from my home directory.
I still think it's a bit shortsighted of impala to force things into lower case but there you go. |
43,021,399 | Just creating a python program that creates a function named letterX, that ... well makes an X. The two lines must be 90 degrees from each other. The pointer ends at the initial position.
I solved this pretty easily, just wondering if you can put this into a loop or just simplify it. I don't know how, since I have to change directions differently rather than looping over the same code. Any help would be appreciated.
```
import turtle
t = turtle.Turtle()
s = turtle.Screen()
def letterX(t,length):
    t.down()
    t.right(45)
    t.forward(length/2)
    t.right(180)
    t.forward(length)
    t.right(180)
    t.forward(length/2)
    t.left(90)
    t.forward(length/2)
    t.right(180)
    t.forward(length)
    t.right(180)
    t.forward(length/2)
    t.right(45)
    t.up()
letterX(t,100)
``` | 2017/03/25 | [
"https://Stackoverflow.com/questions/43021399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7767472/"
] | Like francescalus commented, it looks like the problem is related to integer arithmetic in Fortran.
You may modify the first `if` statement in the Matlab implementation as follows:
```
if fix(k/2) ~= j/2
```
---
In your second part, there is a typo in the Matlab code.
You wrote `x2` instead of `x1`.
Correct code:
```
f1 = 100/(1+x1.^2); %Instead of f1 = 100/(1+x2.^2);
```
Minor flaw:
```
if abs(e)<=0.001 %Instead of if abs(e)<0.001
```
---
I know very basic Fortran, so I executed both Matlab and Fortran code versions side by side.
I executed the code step by step using the debugger.
I used some arbitrary input values.
The problem is related to the first Fortran `if` statement: `(k/2/=j/2.)`
When `k` is an integer `k/2` evaluates to `floor(k/2)`, and `j/2.` evaluates to floating point (assume `k` is positive).
(I used the [fix](https://www.mathworks.com/help/fixedpoint/ref/fix.html) Matlab function in case `k` can be negative).
Example:
```
integer j, k
j=3
k=3
print *, k/2
print *, j/2.
print *, k/2/=j/2.
```
Result:
```
1
1.500000
T
```
---
In Matlab, the default type is double.
```
j=3;
k=3;
disp(k/2)
disp(j/2)
disp(k/2 ~= j/2)
```
Result:
```
1.5000
1.5000
0
```
As you can see, in Fortran the condition evaluates to **true**, and in Matlab to **false**.
---
Complete Matlab code:
```
a = 0;
b = 5.+85;
par1 = 100*(atan(b)-atan(a));
fa = 100/(1+a.^2);
fb = 100/(1+b.^2);
fprintf('METHOD SIMPSON\n');
for n = 1:1000000
    h=(b-a)/n;
    sum1=0;
    sum2=0;
    x1 = a;
    x2 = a;
    for j = 1:n-1
        k = j;
        if fix(k/2) ~= j/2
            if j == 1
                x1 = x1+h;
            end
            if j > 1
                x1 = x1+2*h;
            end
            f1 = 100/(1+x1.^2);
            sum1 = sum1 + f1;
        else
            x2 = x2+2*h;
            f2 = 100/(1+x2.^2);
            sum2 = sum2 + f2;
        end
    end
    par2 = (h/3)*(fa+4*sum1+2*sum2+fb);
    e = par1 - par2;
    if abs(e)<=0.001
        break;
    end
end
y=n;
partitionS = zeros (n);
valueS= zeros (n);
errorS = zeros (n);
for n = 1:y
    h=(b-a)/n;
    sum1=0;
    sum2=0;
    x1=a;
    x2=a;
    for j = 1:n-1
        k = j;
        if fix(k/2) == j/2
            x2 = x2 + 2*h;
            f2 = 100/(1+x2.^2);
            sum2 = sum2 + f2;
        else
            if j == 1
                x1 = x1 + h;
            end
            if j > 1
                x1 = x1 + 2*h;
            end
            f1 = 100/(1+x1.^2); %f1 = 100/(1+x2.^2);
            sum1 = sum1 + f1;
        end
    end
    partitionS(n) = n;
    valueS(n)= (h/3)*(fa+4*sum1+2*sum2+fb);
    errorS(n)=par1-valueS(n);
end
fprintf('Below are the results\n');
fprintf('%.25f\n',partitionS(n));
fprintf('%.25f\n',valueS(n));
fprintf('%.25f\n',errorS(n));
```
---
Matlab output:
```
METHOD SIMPSON
Below are the results
332.0000000000000000000000000
155.9675968160148900000000000
0.0009704737140339148000000
``` | I made a small Fortran program based on your posts, then put it through my f2matlab Fortran-to-Matlab source converter (Matlab File Exchange). Here is the Fortran:
```
program kt_f
  implicit none
  integer j,n,k,f1,f2
  real x1,x2,h,sum1,sum2
  n=100
  k=50
  do j=1,n-1
    k=j
    if(k/2/=j/2.) then
      if(j==1) x1=x1+h
      if(j>1) x1=x1+2*h
      f1=100/(1+x1**2)
      sum1=sum1+f1
    else
      x2=x2+2*h
      f2=100/(1+x2**2)
      sum2=sum2+f2
    endif
  enddo
  print *,'sum1=',sum1
  print *,'sum2=',sum2
end program kt_f
```
When I compile and run this, the output is:
```
sum1= 5000.000
sum2= 4900.000
```
Here is the Matlab source produced. Note that in addition to the `fix` in the if statement, you need another `fix` in the line with `100/`, because that is an integer division as well. Here is the Matlab code:
```
function kt_f(varargin)
clear global; clear functions;
global GlobInArgs nargs
GlobInArgs={mfilename,varargin{:}}; nargs=nargin+1;
persistent f1 f2 h_fv j k n sum1 sum2 x1 x2 ;
if isempty(f1), f1=0; end;
if isempty(f2), f2=0; end;
if isempty(h_fv), h_fv=0; end;
if isempty(j), j=0; end;
if isempty(k), k=0; end;
if isempty(n), n=0; end;
if isempty(sum1), sum1=0; end;
if isempty(sum2), sum2=0; end;
if isempty(x1), x1=0; end;
if isempty(x2), x2=0; end;
n = 100;
k = 50;
for j = 1: n - 1;
    k = fix(j);
    if(fix(k./2) ~= (j./2.));
        if(j == 1);
            x1 = x1 + h_fv;
        end;
        if(j > 1);
            x1 = x1 + 2.*h_fv;
        end;
        f1 = fix(100./(1+x1.^2));
        sum1 = sum1 + f1;
    else;
        x2 = x2 + 2.*h_fv;
        f2 = fix(100./(1+x2.^2));
        sum2 = sum2 + f2;
    end;
end;
'sum1=',sum1
'sum2=',sum2
end %program kt_f
```
This gives the same output as the Fortran. Please check and see whether this solves your issue. |
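As for the turtle question this record opened with: the two strokes of the X differ only in the turn that links them, so the repeated block can be folded into a loop. An untested sketch (the final turn angle is chosen so the pointer ends at its initial position and heading):

```
import turtle

def letterX(t, length):
    t.down()
    t.right(45)
    for _ in range(2):      # one iteration per stroke of the X
        t.forward(length / 2)
        t.right(180)
        t.forward(length)
        t.right(180)
        t.forward(length / 2)
        t.left(90)          # turn toward the other stroke
    t.right(135)            # undo the net rotation from the loop
    t.up()

letterX(turtle.Turtle(), 100)
```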
6,738,987 | Is it possible to add extension methods to Python built-in types?
I know that I can add an extension method to a type I defined myself, simply by assigning a new method to it with the dot syntax, as follows:
```
class myClass:
    pass
myClass.myExtensionMethod = lambda self,x:x * 2
z = myClass()
print z.myExtensionMethod(10)
```
But is there any way to add an extension method to Python built-in types like list, dict, ...?
```
list.myExtension = lambda self,x:x * 2
list.myExtension(10)
``` | 2011/07/18 | [
"https://Stackoverflow.com/questions/6738987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614953/"
] | No. Types defined in C cannot be monkeypatched. | No, because I'm pretty sure all the built-in types are written in optimized C and thus can't be modified with Python. When I try it, I just get:
```
TypeError: can't set attributes of built-in/extension type 'list'
``` |
6,738,987 | Is it possible to add extension methods to Python built-in types?
I know that I can add an extension method to a type I defined myself, simply by assigning a new method to it with the dot syntax, as follows:
```
class myClass:
    pass
myClass.myExtensionMethod = lambda self,x:x * 2
z = myClass()
print z.myExtensionMethod(10)
```
But is there any way to add an extension method to Python built-in types like list, dict, ...?
```
list.myExtension = lambda self,x:x * 2
list.myExtension(10)
``` | 2011/07/18 | [
"https://Stackoverflow.com/questions/6738987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614953/"
] | Nope, you gotta subclass!
```
>>> import string
>>> class MyString(str):
...     def disemvowel(self):
...         return MyString(string.translate(self, None, "aeiou"))
...
>>> s = MyString("this is only a test")
>>> s.disemvowel()
'ths s nly tst'
```
---
Or more specific to your example
```
>>> class MyList(list):
...     pass
...
>>> MyList.myExtension = lambda self,x:x * 2
>>> l = MyList()
>>> l.myExtension(10)
20
``` | No, because I'm pretty sure all the built-in types are written in optimized C and thus can't be modified with Python. When I try it, I just get:
```
TypeError: can't set attributes of built-in/extension type 'list'
``` |
6,738,987 | Is it possible to add extension methods to Python built-in types?
I know that I can add an extension method to a type I defined myself, simply by assigning a new method to it with the dot syntax, as follows:
```
class myClass:
    pass
myClass.myExtensionMethod = lambda self,x:x * 2
z = myClass()
print z.myExtensionMethod(10)
```
But is there any way to add an extension method to Python built-in types like list, dict, ...?
```
list.myExtension = lambda self,x:x * 2
list.myExtension(10)
``` | 2011/07/18 | [
"https://Stackoverflow.com/questions/6738987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614953/"
] | It can be done in pure Python with this incredibly clever module:
<https://pypi.python.org/pypi/forbiddenfruit>
For example:
```
import functools
import ctypes
import __builtin__
import operator

class PyObject(ctypes.Structure):
    pass

Py_ssize_t = hasattr(ctypes.pythonapi, 'Py_InitModule4_64') and ctypes.c_int64 or ctypes.c_int

PyObject._fields_ = [
    ('ob_refcnt', Py_ssize_t),
    ('ob_type', ctypes.POINTER(PyObject)),
]

class SlotsPointer(PyObject):
    _fields_ = [('dict', ctypes.POINTER(PyObject))]

def proxy_builtin(klass):
    name = klass.__name__
    slots = getattr(klass, '__dict__', name)

    pointer = SlotsPointer.from_address(id(slots))
    namespace = {}

    ctypes.pythonapi.PyDict_SetItem(
        ctypes.py_object(namespace),
        ctypes.py_object(name),
        pointer.dict,
    )

    return namespace[name]

def die(message, cls=Exception):
    """
    Raise an exception, allows you to use logical shortcut operators to test for object existence succinctly.

    User.by_name('username') or die('Failed to find user')
    """
    raise cls(message)

def unguido(self, key):
    """
    Attempt to find methods which should really exist on the object instance.
    """
    return functools.partial((getattr(__builtin__, key, None) if hasattr(__builtin__, key) else getattr(operator, key, None)) or die(key, KeyError), self)

class mapper(object):
    def __init__(self, iterator, key):
        self.iterator = iterator
        self.key = key
        self.fn = lambda o: getattr(o, key)

    def __getattribute__(self, key):
        if key in ('iterator', 'fn', 'key'): return object.__getattribute__(self, key)
        return mapper(self, key)

    def __call__(self, *args, **kwargs):
        self.fn = lambda o: (getattr(o, self.key, None) or unguido(o, self.key))(*args, **kwargs)
        return self

    def __iter__(self):
        for value in self.iterator:
            yield self.fn(value)

class foreach(object):
    """
    Creates an output iterator which will apply any functions called on it to every element
    in the input iterator. A kind of chainable version of filter().

    E.g:

    foreach([1, 2, 3]).__add__(2).__str__().replace('3', 'a').upper()

    is equivalent to:

    (str(o + 2).replace('3', 'a').upper() for o in iterator)

    Obviously this is not 'Pythonic'.
    """
    def __init__(self, iterator):
        self.iterator = iterator

    def __getattribute__(self, key):
        if key in ('iterator',): return object.__getattribute__(self, key)
        return mapper(self.iterator, key)

    def __iter__(self):
        for value in self.iterator:
            yield value

proxy_builtin(list)['foreach'] = property(foreach)

import string
print string.join([1, 2, 3].foreach.add(2).str().add(' cookies').upper(), ', ')

>>> 3 COOKIES, 4 COOKIES, 5 COOKIES
```
There, doesn't that feel good? | No, because I'm pretty sure all the built-in types are written in optimized C and thus can't be modified with Python. When I try it, I just get:
```
TypeError: can't set attributes of built-in/extension type 'list'
``` |
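The forbiddenfruit module linked in that answer can also be used directly, without the ctypes machinery shown. A short sketch of its documented `curse()` helper (untested here; the method name is illustrative):

```
from forbiddenfruit import curse

curse(list, 'myExtension', lambda self, x: x * 2)
print [].myExtension(10)  # prints 20
```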
6,738,987 | Is it possible to add extension methods to Python built-in types?
I know that I can add an extension method to a type I defined myself, simply by assigning a new method to it with the dot syntax, as follows:
```
class myClass:
    pass
myClass.myExtensionMethod = lambda self,x:x * 2
z = myClass()
print z.myExtensionMethod(10)
```
But is there any way to add an extension method to Python built-in types like list, dict, ...?
```
list.myExtension = lambda self,x:x * 2
list.myExtension(10)
``` | 2011/07/18 | [
"https://Stackoverflow.com/questions/6738987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614953/"
] | No. Types defined in C cannot be monkeypatched. | The best you can do appears to be deriving a class from the built-in type. For example:
```
class mylist(list):
    def myfunc(self, x):
        self.append(x)
test = mylist([1,2,3,4])
test.myfunc(99)
```
(You could even name it "list" so as to get the same constructor, if you wanted.) However, you cannot directly modify a built-in type like the example in your question. |
6,738,987 | Is it possible to add extension methods to Python built-in types?
I know that I can add an extension method to a type I defined myself, simply by assigning a new method to it with the dot syntax, as follows:
```
class myClass:
    pass
myClass.myExtensionMethod = lambda self,x:x * 2
z = myClass()
print z.myExtensionMethod(10)
```
But is there any way to add an extension method to Python built-in types like list, dict, ...?
```
list.myExtension = lambda self,x:x * 2
list.myExtension(10)
``` | 2011/07/18 | [
"https://Stackoverflow.com/questions/6738987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614953/"
] | It can be done in pure Python with this incredibly clever module:
<https://pypi.python.org/pypi/forbiddenfruit>
For example:
```
import functools
import ctypes
import __builtin__
import operator

class PyObject(ctypes.Structure):
    pass

Py_ssize_t = hasattr(ctypes.pythonapi, 'Py_InitModule4_64') and ctypes.c_int64 or ctypes.c_int

PyObject._fields_ = [
    ('ob_refcnt', Py_ssize_t),
    ('ob_type', ctypes.POINTER(PyObject)),
]

class SlotsPointer(PyObject):
    _fields_ = [('dict', ctypes.POINTER(PyObject))]

def proxy_builtin(klass):
    name = klass.__name__
    slots = getattr(klass, '__dict__', name)

    pointer = SlotsPointer.from_address(id(slots))
    namespace = {}

    ctypes.pythonapi.PyDict_SetItem(
        ctypes.py_object(namespace),
        ctypes.py_object(name),
        pointer.dict,
    )

    return namespace[name]

def die(message, cls=Exception):
    """
    Raise an exception, allows you to use logical shortcut operators to test for object existence succinctly.

    User.by_name('username') or die('Failed to find user')
    """
    raise cls(message)

def unguido(self, key):
    """
    Attempt to find methods which should really exist on the object instance.
    """
    return functools.partial((getattr(__builtin__, key, None) if hasattr(__builtin__, key) else getattr(operator, key, None)) or die(key, KeyError), self)

class mapper(object):
    def __init__(self, iterator, key):
        self.iterator = iterator
        self.key = key
        self.fn = lambda o: getattr(o, key)

    def __getattribute__(self, key):
        if key in ('iterator', 'fn', 'key'): return object.__getattribute__(self, key)
        return mapper(self, key)

    def __call__(self, *args, **kwargs):
        self.fn = lambda o: (getattr(o, self.key, None) or unguido(o, self.key))(*args, **kwargs)
        return self

    def __iter__(self):
        for value in self.iterator:
            yield self.fn(value)

class foreach(object):
    """
    Creates an output iterator which will apply any functions called on it to every element
    in the input iterator. A kind of chainable version of filter().

    E.g:

    foreach([1, 2, 3]).__add__(2).__str__().replace('3', 'a').upper()

    is equivalent to:

    (str(o + 2).replace('3', 'a').upper() for o in iterator)

    Obviously this is not 'Pythonic'.
    """
    def __init__(self, iterator):
        self.iterator = iterator

    def __getattribute__(self, key):
        if key in ('iterator',): return object.__getattribute__(self, key)
        return mapper(self.iterator, key)

    def __iter__(self):
        for value in self.iterator:
            yield value

proxy_builtin(list)['foreach'] = property(foreach)

import string
print string.join([1, 2, 3].foreach.add(2).str().add(' cookies').upper(), ', ')

>>> 3 COOKIES, 4 COOKIES, 5 COOKIES
```
There, doesn't that feel good? | No. Types defined in C cannot be monkeypatched. |
6,738,987 | Is it possible to add extension methods to Python built-in types?
I know that I can add an extension method to a type I defined myself, simply by assigning a new method to it with the dot syntax, as follows:
```
class myClass:
    pass
myClass.myExtensionMethod = lambda self,x:x * 2
z = myClass()
print z.myExtensionMethod(10)
```
But is there any way to add an extension method to Python built-in types like list, dict, ...?
```
list.myExtension = lambda self,x:x * 2
list.myExtension(10)
``` | 2011/07/18 | [
"https://Stackoverflow.com/questions/6738987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614953/"
] | Nope, you gotta subclass!
```
>>> import string
>>> class MyString(str):
...     def disemvowel(self):
...         return MyString(string.translate(self, None, "aeiou"))
...
>>> s = MyString("this is only a test")
>>> s.disemvowel()
'ths s nly tst'
```
---
Or more specific to your example
```
>>> class MyList(list):
...     pass
...
>>> MyList.myExtension = lambda self,x:x * 2
>>> l = MyList()
>>> l.myExtension(10)
20
``` | The best you can do appears to be deriving a class from the built-in type. For example:
```
class mylist(list):
    def myfunc(self, x):
        self.append(x)
test = mylist([1,2,3,4])
test.myfunc(99)
```
(You could even name it "list" so as to get the same constructor, if you wanted.) However, you cannot directly modify a built-in type like the example in your question. |