Columns: qid (int64), question (string), date (string), metadata (sequence of 3 strings), response_j (string), response_k (string)
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
``` <input type="submit" onclick="this.disabled = true" value="Save"/> ``` or ref [this](https://stackoverflow.com/questions/2545641/how-to-submit-form-only-once-after-multiple-clicking-on-submit)
Using **jQuery**, add an onClick handler that returns false: ``` <input type="submit" value="Submit" onClick="$(this).click(function() {return false;});"/> ```
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
I found an alternative online. What I did is create a fake, disabled, hidden button. When the actual button is clicked, I hide it and show the fake disabled button. **actual button:** ``` $onclick = " var boolconfirm = confirm('$strconfirmattempt'); if(boolconfirm==true){ finishattempt.style.display='none'; finishattempt2.style.display='inline'; } return boolconfirm;"; ``` **fake button:** ``` echo "<input type=\"submit\" name=\"finishattempt\" value=\"submit\" onclick=\"$onclick\" />. <input type=\"submit\" id=\"finishattempt2\" name=\"finishattempt2\" value=\"submit\" style=\"display:none;\" DISABLED/>\n"; ```
Here's a method using onsubmit instead of onclick: This goes at the top: ``` <script type='text/javascript'> function disableButtons() { $('input[type="submit"]').attr('disabled', true); } </script> ``` Then your PHP (note that the `isset` check is NOT for the submit button, because we want to disable the submit button). ``` if (isset($_POST['dothis'])) { //CODE TO EXECUTE } ``` Then HTML. ``` <form method='post' action='' onsubmit='disableButtons()'> <input type='hidden' name='dothis' value=''> <input type='submit' value='Submit'></form> ``` The onsubmit goes in the `<form>` tag. Make sure your `isset` (the PHP part) checks an input that goes with your submit, but is not the submit button itself. You can see that it is the hidden value being checked for with the PHP, rather than the submit button, and the submit button is what gets disabled. By doing this, you can disable a submit button without disabling the PHP.
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
I solved it with simple jQuery. The code removes the button on click, then appends a fake button or some "loading..." text, and finally submits the form. **HTML:** ``` <div class="holder"><input type='submit' value='ACCEPT' class='button'></div> ``` **jQuery:** ``` $('.button').click(function() { $('.button').remove(); $('.holder').append("//fake input button or whatever you want."); $('.form').submit(); }); ``` Unlike other methods such as unload, the button changes the instant you click and the form is sent. With heavy forms I think this is the better practice.
Here's a method using onsubmit instead of onclick: This goes at the top: ``` <script type='text/javascript'> function disableButtons() { $('input[type="submit"]').attr('disabled', true); } </script> ``` Then your PHP (note that the `isset` check is NOT for the submit button, because we want to disable the submit button). ``` if (isset($_POST['dothis'])) { //CODE TO EXECUTE } ``` Then HTML. ``` <form method='post' action='' onsubmit='disableButtons()'> <input type='hidden' name='dothis' value=''> <input type='submit' value='Submit'></form> ``` The onsubmit goes in the `<form>` tag. Make sure your `isset` (the PHP part) checks an input that goes with your submit, but is not the submit button itself. You can see that it is the hidden value being checked for with the PHP, rather than the submit button, and the submit button is what gets disabled. By doing this, you can disable a submit button without disabling the PHP.
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
If you disable an input, then its value naturally won't be included in the form data. You'll need to disable the button *after* you submit. If you bind a callback to `onclick`, then it runs *before* the form submits. What you need is something like this: **jQuery:** ``` $(document).ready(function() { $(document).unload(function() { $('#submit-btn').attr('disabled', 'disabled'); }); }); ``` **Regular JS:** ``` document.onunload = disableSubmit; function disableSubmit() { /* disable the submit button here */ } ``` Basically, instead of binding to the submit button's `onclick` event, this binds the disabling code to the document's unload event (`document.onunload`), which gets fired once the form is submitted and you begin to leave the page.
You could use a hidden field which would hold the value of the button and pull that value out of your POST data: ``` <input type="hidden" id="hiddenField" value="default" /> <input type="button" id="myButton" onclick="buttonClick();"> function buttonClick() { document.myForm.myButton.disabled = true; document.myForm.hiddenField.value = "myButtonClicked"; } ``` My PHP is a little rusty, but then you can access the hidden field like so: ``` if ($_POST['hiddenField'] == "myButtonClicked") { // Click handling code here } ```
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
``` <input type="submit" onclick="this.disabled = true" value="Save"/> ``` or ref [this](https://stackoverflow.com/questions/2545641/how-to-submit-form-only-once-after-multiple-clicking-on-submit)
I found an alternative online. What I did is create a fake, disabled, hidden button. When the actual button is clicked, I hide it and show the fake disabled button. **actual button:** ``` $onclick = " var boolconfirm = confirm('$strconfirmattempt'); if(boolconfirm==true){ finishattempt.style.display='none'; finishattempt2.style.display='inline'; } return boolconfirm;"; ``` **fake button:** ``` echo "<input type=\"submit\" name=\"finishattempt\" value=\"submit\" onclick=\"$onclick\" />. <input type=\"submit\" id=\"finishattempt2\" name=\"finishattempt2\" value=\"submit\" style=\"display:none;\" DISABLED/>\n"; ```
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
Using **jQuery**, add an onClick handler that returns false: ``` <input type="submit" value="Submit" onClick="$(this).click(function() {return false;});"/> ```
You could use a hidden field which would hold the value of the button and pull that value out of your POST data: ``` <input type="hidden" id="hiddenField" value="default" /> <input type="button" id="myButton" onclick="buttonClick();"> function buttonClick() { document.myForm.myButton.disabled = true; document.myForm.hiddenField.value = "myButtonClicked"; } ``` My PHP is a little rusty, but then you can access the hidden field like so: ``` if ($_POST['hiddenField'] == "myButtonClicked") { // Click handling code here } ```
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
I solved it with simple jQuery. The code removes the button on click, then appends a fake button or some "loading..." text, and finally submits the form. **HTML:** ``` <div class="holder"><input type='submit' value='ACCEPT' class='button'></div> ``` **jQuery:** ``` $('.button').click(function() { $('.button').remove(); $('.holder').append("//fake input button or whatever you want."); $('.form').submit(); }); ``` Unlike other methods such as unload, the button changes the instant you click and the form is sent. With heavy forms I think this is the better practice.
You could use a hidden field which would hold the value of the button and pull that value out of your POST data: ``` <input type="hidden" id="hiddenField" value="default" /> <input type="button" id="myButton" onclick="buttonClick();"> function buttonClick() { document.myForm.myButton.disabled = true; document.myForm.hiddenField.value = "myButtonClicked"; } ``` My PHP is a little rusty, but then you can access the hidden field like so: ``` if ($_POST['hiddenField'] == "myButtonClicked") { // Click handling code here } ```
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
``` <input type="submit" onclick="this.disabled = true" value="Save"/> ``` or ref [this](https://stackoverflow.com/questions/2545641/how-to-submit-form-only-once-after-multiple-clicking-on-submit)
Here's a method using onsubmit instead of onclick: This goes at the top: ``` <script type='text/javascript'> function disableButtons() { $('input[type="submit"]').attr('disabled', true); } </script> ``` Then your PHP (note that the `isset` check is NOT for the submit button, because we want to disable the submit button). ``` if (isset($_POST['dothis'])) { //CODE TO EXECUTE } ``` Then HTML. ``` <form method='post' action='' onsubmit='disableButtons()'> <input type='hidden' name='dothis' value=''> <input type='submit' value='Submit'></form> ``` The onsubmit goes in the `<form>` tag. Make sure your `isset` (the PHP part) checks an input that goes with your submit, but is not the submit button itself. You can see that it is the hidden value being checked for with the PHP, rather than the submit button, and the submit button is what gets disabled. By doing this, you can disable a submit button without disabling the PHP.
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
If you disable an input, then its value naturally won't be included in the form data. You'll need to disable the button *after* you submit. If you bind a callback to `onclick`, then it runs *before* the form submits. What you need is something like this: **jQuery:** ``` $(document).ready(function() { $(document).unload(function() { $('#submit-btn').attr('disabled', 'disabled'); }); }); ``` **Regular JS:** ``` document.onunload = disableSubmit; function disableSubmit() { /* disable the submit button here */ } ``` Basically, instead of binding to the submit button's `onclick` event, this binds the disabling code to the document's unload event (`document.onunload`), which gets fired once the form is submitted and you begin to leave the page.
Here's a method using onsubmit instead of onclick: This goes at the top: ``` <script type='text/javascript'> function disableButtons() { $('input[type="submit"]').attr('disabled', true); } </script> ``` Then your PHP (note that the `isset` check is NOT for the submit button, because we want to disable the submit button). ``` if (isset($_POST['dothis'])) { //CODE TO EXECUTE } ``` Then HTML. ``` <form method='post' action='' onsubmit='disableButtons()'> <input type='hidden' name='dothis' value=''> <input type='submit' value='Submit'></form> ``` The onsubmit goes in the `<form>` tag. Make sure your `isset` (the PHP part) checks an input that goes with your submit, but is not the submit button itself. You can see that it is the hidden value being checked for with the PHP, rather than the submit button, and the submit button is what gets disabled. By doing this, you can disable a submit button without disabling the PHP.
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or the WSGI script to use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
If you disable an input, then its value naturally won't be included in the form data. You'll need to disable the button *after* you submit. If you bind a callback to `onclick`, then it runs *before* the form submits. What you need is something like this: **jQuery:** ``` $(document).ready(function() { $(document).unload(function() { $('#submit-btn').attr('disabled', 'disabled'); }); }); ``` **Regular JS:** ``` document.onunload = disableSubmit; function disableSubmit() { /* disable the submit button here */ } ``` Basically, instead of binding to the submit button's `onclick` event, this binds the disabling code to the document's unload event (`document.onunload`), which gets fired once the form is submitted and you begin to leave the page.
Why not create a disabled submit button that is hidden, and an active submit button, and onClick show the disabled and hide the active? I could do this in jQuery, but I'm kinda useless without it. Sad, eh?
2,335,762
I want to parse a CSS file and multiply each pixel value by (2/3). I was wondering if this was possible with sed? I know this is incorrect syntax, but I think it'll bring home the explanation of what I want to achieve: ``` sed -e "s|\([0-9]*\)px|int((\1 * 2)/3)|g" file.css ``` So basically I want to take \1, multiply it by (2/3) and cast to an int. Or maybe it's more possible with awk? I suppose I could write a Python script, but I'd like to know if it can be done by quicker means. Thanks
2010/02/25
[ "https://Stackoverflow.com/questions/2335762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/281403/" ]
use awk ``` $ cat file foo: 3px; bar: 6px $ awk '{for(i=1;i<=NF;i++){if($i~/^[0-9]+px/){o=$i;sub(/^[0-9]+/,"",o);$i=($i+0)*(2/3)o}}}1' file foo: 2px; bar: 4px ```
You can use perl to do it like this: ``` echo -e "100px;\n20px;" | perl -pe 's{ (\d*) (?=px) }{ $1*(2/3) }xe' ```
2,335,762
I want to parse a CSS file and multiply each pixel value by (2/3). I was wondering if this was possible with sed? I know this is incorrect syntax, but I think it'll bring home the explanation of what I want to achieve: ``` sed -e "s|\([0-9]*\)px|int((\1 * 2)/3)|g" file.css ``` So basically I want to take \1, multiply it by (2/3) and cast to an int. Or maybe it's more possible with awk? I suppose I could write a Python script, but I'd like to know if it can be done by quicker means. Thanks
2010/02/25
[ "https://Stackoverflow.com/questions/2335762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/281403/" ]
You can use perl to do it like this: ``` echo -e "100px;\n20px;" | perl -pe 's{ (\d*) (?=px) }{ $1*(2/3) }xe' ```
To answer the initial question: Yes, you can do this in sed, and No, you do NOT want to unless you have some bizarre set of constraints that prevent you from using any other tool. I say this because the unix dc command (a desk calculator) has been written in sed, but IIRC it works by performing math the same way you were probably taught math in school: string manipulations on digits, along with carries and such. You would actually have to write multiply and divide commands (or strip them out of dc.sed) to accomplish this, and the result would run a couple of orders of magnitude slower than any of the above suggestions. In case you're now curious about dc.sed, it's usually provided as one of the examples in any sed distribution.
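Since the question mentions falling back to a Python script anyway, the same `int((\1 * 2)/3)` substitution is short with `re.sub` and a replacement function. This is a sketch; it uses integer arithmetic (`* 2 // 3`) so the truncation matches the question's cast-to-int without floating-point rounding surprises:

```python
import re

def scale_px(css):
    # Replace every "<number>px" with the number scaled by 2/3, truncated to int
    return re.sub(r'(\d+)px', lambda m: '%dpx' % (int(m.group(1)) * 2 // 3), css)

print(scale_px("foo: 3px; bar: 6px"))  # foo: 2px; bar: 4px
```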
2,335,762
I want to parse a css file and multiply each pixel value by (2/3). I was wondering if this was possible with sed? I know this is incorrect syntax but i think it'll bring home the explanation of what i want to achieve: ``` sed -e "s|\([0-9]*\)px|int((\1 * 2)/3)|g" file.css ``` So basically I want to take \1, multiply it by (2/3) and cast to and int. Or maybe it's more possible with awk? Suppose I could write a python script, but would like to know if it can be done by quicker means. Thanks
2010/02/25
[ "https://Stackoverflow.com/questions/2335762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/281403/" ]
You can use perl to do it like this: ``` echo -e "100px;\n20px;" | perl -pe 's{ (\d*) (?=px) }{ $1*(2/3) }xe' ```
``` sed $( sed -rn ' :X /([0-9]+)px/ { s##\a\1\a# h s#.*\a(.*)\a.*#echo -e s/\1px/$((\1 * 2/3))px/g#ep x bX } ' cssfile ) cssfile ```
2,335,762
I want to parse a CSS file and multiply each pixel value by (2/3). I was wondering if this was possible with sed? I know this is incorrect syntax, but I think it'll bring home the explanation of what I want to achieve: ``` sed -e "s|\([0-9]*\)px|int((\1 * 2)/3)|g" file.css ``` So basically I want to take \1, multiply it by (2/3) and cast to an int. Or maybe it's more possible with awk? I suppose I could write a Python script, but I'd like to know if it can be done by quicker means. Thanks
2010/02/25
[ "https://Stackoverflow.com/questions/2335762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/281403/" ]
use awk ``` $ cat file foo: 3px; bar: 6px $ awk '{for(i=1;i<=NF;i++){if($i~/^[0-9]+px/){o=$i;sub(/^[0-9]+/,"",o);$i=($i+0)*(2/3)o}}}1' file foo: 2px; bar: 4px ```
To answer the initial question: Yes, you can do this in sed, and No, you do NOT want to unless you have some bizarre set of constraints that prevent you from using any other tool. I say this because the unix dc command (a desk calculator) has been written in sed, but IIRC it works by performing math the same way you were probably taught math in school: string manipulations on digits, along with carries and such. You would actually have to write multiply and divide commands (or strip them out of dc.sed) to accomplish this, and the result would run a couple of orders of magnitude slower than any of the above suggestions. In case you're now curious about dc.sed, it's usually provided as one of the examples in any sed distribution.
2,335,762
I want to parse a CSS file and multiply each pixel value by (2/3). I was wondering if this was possible with sed? I know this is incorrect syntax, but I think it'll bring home the explanation of what I want to achieve: ``` sed -e "s|\([0-9]*\)px|int((\1 * 2)/3)|g" file.css ``` So basically I want to take \1, multiply it by (2/3) and cast to an int. Or maybe it's more possible with awk? I suppose I could write a Python script, but I'd like to know if it can be done by quicker means. Thanks
2010/02/25
[ "https://Stackoverflow.com/questions/2335762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/281403/" ]
use awk ``` $ cat file foo: 3px; bar: 6px $ awk '{for(i=1;i<=NF;i++){if($i~/^[0-9]+px/){o=$i;sub(/^[0-9]+/,"",o);$i=($i+0)*(2/3)o}}}1' file foo: 2px; bar: 4px ```
``` sed $( sed -rn ' :X /([0-9]+)px/ { s##\a\1\a# h s#.*\a(.*)\a.*#echo -e s/\1px/$((\1 * 2/3))px/g#ep x bX } ' cssfile ) cssfile ```
70,581,125
I am supposed to write a program on Sololearn, using Python, that takes two integers as input and outputs their sum, but I don't seem to get what they want me to do.
2022/01/04
[ "https://Stackoverflow.com/questions/70581125", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17833033/" ]
``` num1 = int(input('Enter your first number: ')) num2 = int(input('Enter your second number: ')) print('Your sum is', num1 + num2) ``` I would suggest doing more research on the `input` function in python. Also, in the future, please provide the code that you have already written so that it is easier to answer your question.
Try with ```py a = int(input("A: ")) # Convert input string to int b = int(input("B: ")) print("A + B =", a+b) ```
70,581,125
I am supposed to write a program on Sololearn, using Python, that takes two integers as input and outputs their sum, but I don't seem to get what they want me to do.
2022/01/04
[ "https://Stackoverflow.com/questions/70581125", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17833033/" ]
I am supposed to write a program by **taking two integers as input** and **outputting their sum** on Sololearn using Python, but I don't seem to get what they want me to do This is what the lesson is asking you to do. * Step 1: Take two integers as input * Step 2: Output their sum (addition) Step 1 Using the `input()` function wrapped in the `int()` type allows a user to input a number from the command line. ``` num1 = int(input('Enter the 1st number: ')) num2 = int(input('Enter the 2nd number: ')) ``` Step 2 Add the two entered numbers using the `+` operator ``` sum_of_two_numbers = num1 + num2 print('The sum of two numbers is', sum_of_two_numbers) ``` I suggest wrapping the code in a `try`/`except` block. ``` import sys, os, traceback try: num1 = int(input('Enter the 1st number: ')) num2 = int(input('Enter the 2nd number: ')) sum_of_two_numbers = num1 + num2 print('The sum of two numbers is', sum_of_two_numbers) except Exception as ex: template = "An exception of type {0} occurred. Arguments:{1!r}" message = template.format(type(ex).__name__, ex.args) print( message ) exc_type, exc_obj, exc_tb = sys.exc_info() fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1] print(exc_type, fname, exc_tb.tb_lineno) print(traceback.format_exc()) ```
Try with ```py a = int(input("A: ")) # Convert input string to int b = int(input("B: ")) print("A + B =", a+b) ```
12,960,522
I am trying to create an enumeration in python. I have seen several solutions (The second answer [here](https://stackoverflow.com/questions/36932/whats-the-best-way-to-implement-an-enum-in-python) by @alec thomas intrigued me most), but I would like to make the enumeration immutable. I found a [python recipe](http://code.activestate.com/recipes/413486-first-class-enums-in-python/) that is immutable, but I want to have a dict-like key/value association. I was attempting to use duck-punching to add properties to the class that would throw an `AttributeError` if you tried to call `fset` or `fdel` on the property. I ran into trouble defining the `fget` function of the property. Here's the code I have so far: ``` def enum(*sequential, **named): # Build property dict enums = dict(zip(sequential, range(len(sequential))), **named) # Define an errorhandler function def err_func(*args, **kwargs): raise AttributeError('Enumeration is immutable!') # Create a base type t = type('enum', (object,), {}) # Add properties to class by duck-punching for attr, val in enums.iteritems(): setattr(t, attr, property(lambda attr: enums[attr], err_func, err_func)) # Return an instance of the new class return t() e = enum('OK', 'CANCEL', 'QUIT') print e print e.OK print e.CANCEL print e.QUIT # Immutable? e.OK = 'ASDF' # Does throw the correct exception print e.OK ``` The output from this is: ``` <__main__.enum object at 0x01FC8F70> Traceback (most recent call last): File "enum.py", line 24, in <module> print e.OK File "enum.py", line 17, in <lambda> setattr(t, attr, property(lambda attr: enums[attr], err_func, err_func)) KeyError: <__main__.enum object at 0x01FC8F70> ``` Perhaps this is not the best way to create an enumeration, but it's short and I wanted to explore more of this whole duck-punching/monkey-patching concept.
2012/10/18
[ "https://Stackoverflow.com/questions/12960522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1255482/" ]
Your immediate problem is that the `getter` of a `property` takes `self` as the only parameter, not `attr`. Therefore, you should use something like `lambda self: val` instead. However, that doesn't work because the `lambda` binds the *name* `val`, which changes from iteration to iteration. So you will need to wrap it somehow: ``` def getter(val): return lambda self: val for attr, val in enums.iteritems(): setattr(t, attr, property(getter(val), err_func, err_func)) ```
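The late-binding behaviour this answer describes is easy to see in isolation. A standalone sketch, separate from the enum code:

```python
# All three lambdas close over the same name `val`, which ends up at its final value
funcs = [lambda: val for val in range(3)]
print([f() for f in funcs])  # [2, 2, 2]

# A factory function (the "wrap it somehow" fix) binds the current value instead
def make_getter(v):
    return lambda: v

fixed = [make_getter(val) for val in range(3)]
print([f() for f in fixed])  # [0, 1, 2]
```

A default argument (`lambda val=val: val`) achieves the same binding without a named factory.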
The final implementation (thanks to @nneonneo). * Checks for duplicate enum keys * Checks if enum is empty * Does not allow deletion or modification of enum items ``` def enum(*sequential, **named): # Check for duplicate keys names = list(sequential) names.extend(named.keys()) if len(set(names)) != len(names): raise KeyError('Cannot create enumeration with duplicate keys!') # Build property dict enums = dict(zip(sequential, range(len(sequential))), **named) if not enums: raise KeyError('Cannot create empty enumeration') # Function to be called as fset/fdel def err_func(*args, **kwargs): raise AttributeError('Enumeration is immutable!') # function to be called as fget def getter(cls, val): return lambda cls: val # Create a base type t = type('enum', (object,), {}) # Add properties to class by duck-punching for attr, val in enums.iteritems(): setattr(t, attr, property(getter(t, val), err_func, err_func)) # Return an instance of the new class return t() ```
63,790,601
``` # Read an integer: a = input() #Now swap it... a[0] = a[1] a[1] = a[0] ``` As you can see I am trying to change the value and trying to swap it.. ``` print(a) ``` ...and then i print it out. But I am getting an error which is as follows: ``` Traceback (most recent call last): File "python", line 4, in <module> TypeError: 'str' object does not support item assignment ``` For example, if my input is `79` I want the result to be `97`. Can you tell me where my mistake is?
2020/09/08
[ "https://Stackoverflow.com/questions/63790601", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14235043/" ]
Try this: ``` a = input() a = str(a) result = int(a[-1: : -1]) print(result) ``` Output: ( a = 34 ) ``` 43 ```
Based on your question, here's a simple approach. As the error says, a string does not support item assignment; you need to convert it to a `list` to assign by index. For the swap you need a temporary variable, so I used `temp`. ``` a = list(input()) #Now swap it... print(a) temp = a[0] a[0] = a[1] a[1] = temp print(a) print("".join(a)) ```
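For what it's worth, Python can also swap without a temporary via tuple assignment, and slicing reverses strings of any length. A sketch, where the literal `s` stands in for the `input()` value:

```python
s = "79"  # stands in for input()

# Tuple assignment swaps in place, no temp variable needed
digits = list(s)
digits[0], digits[1] = digits[1], digits[0]
print("".join(digits))  # 97

# Slicing with a step of -1 reverses the whole string, for any length
print(s[::-1])  # 97
```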
48,272,939
In advance, thank you for looking at my issue community, My python test script will not execute from my Centos 7 Crontab. This script will execute manually if called either in the containing directory or from the root/any other directory with a full path. My Centos Python location is `/bin/python`. This is included at the top of my python script. Crontab itself seems to be working fine as: ``` * * * * * root date >> /home/test.log ``` redirects output to test.log without issue. Please find both my Crontab and test script. Crontab in /etc/crontab ``` SHELL=/bin/bash #PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PATH=/sbin:/bin:/usr/sbin:/usr/bin MAILTO=root * * * * * root python /scripts/test1.py ``` As for the python script in /scripts/test1.py: ``` #!/bin/python import os os.system('date >> testlog.txt') ``` Any input/advice you can offer I would greatly appreciate. Thank you kindly,
2018/01/16
[ "https://Stackoverflow.com/questions/48272939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6820299/" ]
I had the same problem. Pulling images was working but provisioning a container was not working. In this case the solution was to provide Docker with a configuration file named `~/.docker/config.json` with the following contents. ``` { "proxies": { "default": { "httpProxy": "http://proxy.server....com:8080", "httpsProxy": "https://proxy.server.....com:8080" } } } ``` I hope this will solve your problem.
I struggled to make it work but finally found a working solution on my side. I'm behind a corporate proxy and have CNTLM properly configured on Windows, linked in my Docker Desktop settings with address `127.0.0.1:3128`. My Docker runs under WSL2. The magic tip here is to point your containers' proxies at Docker's internal hostname `host.docker.internal`. I didn't find out why I wasn't able to connect with the localhost proxy (`127.0.0.1:3128`) like I did for the Docker Desktop config; I guess Docker runs an internal proxy as a gateway to access the Windows network.
33,981,803
Let's say I am trying to count the occurrences of different people's names. The user inputs names until they enter -1; once -1 is entered, the loop breaks. Once the names are entered, I am trying to tabulate the output something like this: names : John Max Joan No of occurrences : 4 1 2 % of occurrences : 20% 10% 30% ``` #!/usr/bin/python names = ["John","Max","Joan"] lst = [] while True: lst = raw_input("What is your name?") if lst == "-1": break input_list = lst.split() print "Names" '[%s]' % ' '.join(map(str, names)) ``` I have no idea how to increment the counts of the names by the number of times they are entered by the user. Let's say the user enters john, john, max, joan, joan, joan; then I would need to increment john twice, max once and joan three times. I know I can reference different parts of `names` using [0], for example, which is the first item, but I don't know how to increment all the relevant parts.
2015/11/29
[ "https://Stackoverflow.com/questions/33981803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5466007/" ]
You can use [`collections.Counter`](https://docs.python.org/2/library/collections.html#collections.Counter) to count and accumulate the occurrences of names in the given input: ``` import collections counter = collections.Counter() names = ["John", "Max", "Joan"] while True: lst = raw_input("What is your name?") if lst == "-1": break lst = [name for name in lst.strip().split() if name in names] counter.update(lst) print "names : {}".format(" ".join(counter)) print "No of occurrences : {}".format(" ".join(map(str, counter.values()))) ``` *Please note that I presumed names are separated by whitespace in the input.*
Counting word frequency in a multi-word string: ``` from collections import defaultdict WORDS = """Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.""" d = defaultdict(lambda: 0) for word in WORDS.split(): d[word] += 1 for key in ['in', 'et', 'ut', 'sed']: print(key, ':\t', str(d[key])) ``` output: ``` in : 0 et : 1 ut : 2 sed : 1 ``` Whether one normalizes the words to lowercase depends on the problem domain; in case the questioner wants to do that with the names he receives, that'd be `d[word.lower()] += 1` in the loop, and then he may re-capitalize the first letter on output if he wishes to do that.
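The same tally-with-default-zero behaviour is packaged in `collections.Counter`, which is essentially a `defaultdict(int)` specialised for counting. A sketch on a small made-up sample, not the questioner's exact data:

```python
from collections import Counter

words = "ut enim ad minim ut quis ut".split()
counts = Counter(words)
print(counts["ut"])  # 3
print(counts["in"])  # 0 -- missing keys count as zero, like the defaultdict
```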
45,176,779
I have a python object that looks like this. I am trying to parse this object and turn it to a human readable string which I need to put in the logs. How can I recursively loop through this considering the object could be nested dictionaries or nested lists or dictionaries inside lists inside dictionaries etc. ``` {"plugins": [ {"Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai"}, {"Chrome PDF Viewer": "internal-pdf-viewer"}, {"Native Client": "internal-nacl-plugin"}, {"Shockwave Flash": "PepperFlashPlayer.plugin"}, {"Widevine Content Decryption Module": "widevinecdmadapter.plugin"} ] } ``` I want to possibly serialize the above to look something like this ``` "plugins: Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai, Chrome PDF Viewer": "internal-pdf-viewer, Native Client": "internal-nacl-plugin, Shockwave Flash": "PepperFlashPlayer.plugin, Widevine Content Decryption Module": "widevinecdmadapter.plugin" ``` My code so far [this works for nested dictionaries but I am not sure how I can alter this to support lists in the above object]: ``` result_str = "" def dictionary_iterator(results): global result_str for key, value in results.items(): if isinstance(value, dict): result_str = result_str + key + ": \n \t" dictionary_iterator(value) else: result_str = result_str + key + ": " + str(value) + "\n" return result_str ``` I have looked over possible answers but could not find a solution.
2017/07/18
[ "https://Stackoverflow.com/questions/45176779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7044618/" ]
The formatting might be a bit off ``` def humanizer(input, result=''): if type(input) == dict: for k, v in input.items(): if type(v) == str: result += '%s:%s\n\t' % (str(k), str(v)) elif type(v) in (dict, list): result += '%s:\n\t' % str(k) result = humanizer(v, result) result += '\n\t' elif type(input) == list: for item in input: if type(item) == str: result += item continue result = humanizer(item, result) + '\n\t' else: result += input + '\n\t' return result ``` Result: ``` plugins: Chrome PDF Viewer:mhjfbmdgcfjbbpaeojofohoefgiehjai Chrome PDF Viewer:internal-pdf-viewer Native Client:internal-nacl-plugin Shockwave Flash:PepperFlashPlayer.plugin Widevine Content Decryption Module:widevinecdmadapter.plugin ```
Maybe the output of [pformat](https://docs.python.org/3/library/pprint.html#pprint.pformat "pformat") would suit you: ``` from pprint import pformat results_str = pformat(results) ```
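For the sample data from the question, that might look like the following (a sketch; the `width` argument is optional and only narrows the output):

```python
from pprint import pformat

results = {"plugins": [
    {"Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai"},
    {"Native Client": "internal-nacl-plugin"},
]}

results_str = pformat(results, width=60)
print(results_str)  # keys/values laid out line by line, nesting indented
```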
45,176,779
I have a python object that looks like this. I am trying to parse this object and turn it to a human readable string which I need to put in the logs. How can I recursively loop through this considering the object could be nested dictionaries or nested lists or dictionaries inside lists inside dictionaries etc. ``` {"plugins": [ {"Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai"}, {"Chrome PDF Viewer": "internal-pdf-viewer"}, {"Native Client": "internal-nacl-plugin"}, {"Shockwave Flash": "PepperFlashPlayer.plugin"}, {"Widevine Content Decryption Module": "widevinecdmadapter.plugin"} ] } ``` I want to possibly serialize the above to look something like this ``` "plugins: Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai, Chrome PDF Viewer": "internal-pdf-viewer, Native Client": "internal-nacl-plugin, Shockwave Flash": "PepperFlashPlayer.plugin, Widevine Content Decryption Module": "widevinecdmadapter.plugin" ``` My code so far [this works for nested dictionaries but I am not sure how I can alter this to support lists in the above object]: ``` result_str = "" def dictionary_iterator(results): global result_str for key, value in results.items(): if isinstance(value, dict): result_str = result_str + key + ": \n \t" dictionary_iterator(value) else: result_str = result_str + key + ": " + str(value) + "\n" return result_str ``` I have looked over possible answers but could not find a solution.
2017/07/18
[ "https://Stackoverflow.com/questions/45176779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7044618/" ]
Maybe the output of [pformat](https://docs.python.org/3/library/pprint.html#pprint.pformat "pformat") would suit you: ``` from pprint import pformat results_str = pformat(results) ```
You need an elif-condition in case of type == list and an indentation tracker:

```
data = {"plugins": [
    {"Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai"},
    {"Chrome PDF Viewer": "internal-pdf-viewer"},
    {"Native Client": "internal-nacl-plugin"},
    {"Shockwave Flash": "PepperFlashPlayer.plugin"},
    {"Widevine Content Decryption Module": "widevinecdmadapter.plugin"}
    ],
    "anotherLevel": {
        "sublevel": [
            {'item1': 'value1'}
        ]
    }
}

result_str = ""

def dictionary_iterator(indent, data):
    global result_str
    if isinstance(data, dict):
        for key, value in data.items():
            result_str += indent*'\t' + key + '\n'
            dictionary_iterator(indent + 1, value)
    elif isinstance(data, list):
        for item in data:
            if isinstance(item, dict) and len(list(item.keys())) == 1:
                key = list(item.keys())[0]
                value = item[key]
                result_str += indent*'\t' + key + ': ' + value + '\n'
            else:
                dictionary_iterator(indent + 1, item)
    return result_str

if __name__ == '__main__':
    print(dictionary_iterator(0, data))
```

That will print out:

```
plugins
	Chrome PDF Viewer: mhjfbmdgcfjbbpaeojofohoefgiehjai
	Chrome PDF Viewer: internal-pdf-viewer
	Native Client: internal-nacl-plugin
	Shockwave Flash: PepperFlashPlayer.plugin
	Widevine Content Decryption Module: widevinecdmadapter.plugin
anotherLevel
	sublevel
		item1: value1
```
45,176,779
I have a python object that looks like this. I am trying to parse this object and turn it to a human readable string which I need to put in the logs. How can I recursively loop through this considering the object could be nested dictionaries or nested lists or dictionaries inside lists inside dictionaries etc. ``` {"plugins": [ {"Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai"}, {"Chrome PDF Viewer": "internal-pdf-viewer"}, {"Native Client": "internal-nacl-plugin"}, {"Shockwave Flash": "PepperFlashPlayer.plugin"}, {"Widevine Content Decryption Module": "widevinecdmadapter.plugin"} ] } ``` I want to possibly serialize the above to look something like this ``` "plugins: Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai, Chrome PDF Viewer": "internal-pdf-viewer, Native Client": "internal-nacl-plugin, Shockwave Flash": "PepperFlashPlayer.plugin, Widevine Content Decryption Module": "widevinecdmadapter.plugin" ``` My code so far [this works for nested dictionaries but I am not sure how I can alter this to support lists in the above object]: ``` result_str = "" def dictionary_iterator(results): global result_str for key, value in results.items(): if isinstance(value, dict): result_str = result_str + key + ": \n \t" dictionary_iterator(value) else: result_str = result_str + key + ": " + str(value) + "\n" return result_str ``` I have looked over possible answers but could not find a solution.
2017/07/18
[ "https://Stackoverflow.com/questions/45176779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7044618/" ]
The formatting might be a bit off ``` def humanizer(input, result=''): if type(input) == dict: for k, v in input.items(): if type(v) == str: result += '%s:%s\n\t' % (str(k), str(v)) elif type(v) in (dict, list): result += '%s:\n\t' % str(k) result = humanizer(v, result) result += '\n\t' elif type(input) == list: for item in input: if type(item) == str: result += item continue result = humanizer(item, result) + '\n\t' else: result += input + '\n\t' return result ``` Result: ``` plugins: Chrome PDF Viewer:mhjfbmdgcfjbbpaeojofohoefgiehjai Chrome PDF Viewer:internal-pdf-viewer Native Client:internal-nacl-plugin Shockwave Flash:PepperFlashPlayer.plugin Widevine Content Decryption Module:widevinecdmadapter.plugin ```
You need an elif-condition in case of type == list and an indentation tracker:

```
data = {"plugins": [
    {"Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai"},
    {"Chrome PDF Viewer": "internal-pdf-viewer"},
    {"Native Client": "internal-nacl-plugin"},
    {"Shockwave Flash": "PepperFlashPlayer.plugin"},
    {"Widevine Content Decryption Module": "widevinecdmadapter.plugin"}
    ],
    "anotherLevel": {
        "sublevel": [
            {'item1': 'value1'}
        ]
    }
}

result_str = ""

def dictionary_iterator(indent, data):
    global result_str
    if isinstance(data, dict):
        for key, value in data.items():
            result_str += indent*'\t' + key + '\n'
            dictionary_iterator(indent + 1, value)
    elif isinstance(data, list):
        for item in data:
            if isinstance(item, dict) and len(list(item.keys())) == 1:
                key = list(item.keys())[0]
                value = item[key]
                result_str += indent*'\t' + key + ': ' + value + '\n'
            else:
                dictionary_iterator(indent + 1, item)
    return result_str

if __name__ == '__main__':
    print(dictionary_iterator(0, data))
```

That will print out:

```
plugins
	Chrome PDF Viewer: mhjfbmdgcfjbbpaeojofohoefgiehjai
	Chrome PDF Viewer: internal-pdf-viewer
	Native Client: internal-nacl-plugin
	Shockwave Flash: PepperFlashPlayer.plugin
	Widevine Content Decryption Module: widevinecdmadapter.plugin
anotherLevel
	sublevel
		item1: value1
```
10,226,551
I have a list of floating point numbers and I want to generate another list of period returns from my first list. This is a run of the mill implementation (not tested - and OBVIOUSLY no error checking/handling): ``` a = [100,105,100,95,100] def calc_period_returns(values, period): output = [] startpos, endpos = (period, len(values)-1) while True: current = values[startpos] previous = values[startpos-period] ret = 100*((current-previous)/(1.0*previous)) output.append(ret) startpos += period if startpos > endpos: break return output calc_period_returns(a,1) # Expected output: # [5.0, -4.7619047619047619, -5.0, 5.2631578947368416] ``` Is there a more pythonic way of doing this - perhaps using list comprehension and maps?
2012/04/19
[ "https://Stackoverflow.com/questions/10226551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962891/" ]
Here you go: ``` >>> [100.0 * a1 / a2 - 100 for a1, a2 in zip(a[1:], a)] [5.0, -4.7619047619047592, -5.0, 5.2631578947368354] ``` Since you want to compare neighbor elements of a list, you better create a list of pairs you are interested in, like this: ``` >>> a = range(5) >>> a [0, 1, 2, 3, 4] >>> zip(a, a[1:]) [(0, 1), (1, 2), (2, 3), (3, 4)] ``` After that it is just a simple math to extract a percentage change from a pair of numbers.
I don't know how large your list of numbers is going to be, but if you are going to process large amounts of numbers, you should have a look at numpy. The side effect is that calculations look a lot simpler. With numpy, you create an array for your data ``` >>> import numpy as np >>> a = np.array([100,105,100,95,100], dtype=float) ``` and work with arrays as if they were simple numbers ``` >>> np.diff(a) / a[:-1] * 100. [ 5. -4.76190476 -5. 5.26315789] ```
10,226,551
I have a list of floating point numbers and I want to generate another list of period returns from my first list. This is a run of the mill implementation (not tested - and OBVIOUSLY no error checking/handling): ``` a = [100,105,100,95,100] def calc_period_returns(values, period): output = [] startpos, endpos = (period, len(values)-1) while True: current = values[startpos] previous = values[startpos-period] ret = 100*((current-previous)/(1.0*previous)) output.append(ret) startpos += period if startpos > endpos: break return output calc_period_returns(a,1) # Expected output: # [5.0, -4.7619047619047619, -5.0, 5.2631578947368416] ``` Is there a more pythonic way of doing this - perhaps using list comprehension and maps?
2012/04/19
[ "https://Stackoverflow.com/questions/10226551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962891/" ]
Here you go: ``` >>> [100.0 * a1 / a2 - 100 for a1, a2 in zip(a[1:], a)] [5.0, -4.7619047619047592, -5.0, 5.2631578947368354] ``` Since you want to compare neighbor elements of a list, you better create a list of pairs you are interested in, like this: ``` >>> a = range(5) >>> a [0, 1, 2, 3, 4] >>> zip(a, a[1:]) [(0, 1), (1, 2), (2, 3), (3, 4)] ``` After that it is just a simple math to extract a percentage change from a pair of numbers.
Thanks for the answer guys! A function I implemented based on your answers if someone just wanna copy paste (like me): ``` def pct_change(nparray): pct=np.zeros_like(nparray) pct[1:]=np.diff(nparray) / np.abs(nparray[:-1]) #TODO zero divisionerror return pct ```
10,226,551
I have a list of floating point numbers and I want to generate another list of period returns from my first list. This is a run of the mill implementation (not tested - and OBVIOUSLY no error checking/handling): ``` a = [100,105,100,95,100] def calc_period_returns(values, period): output = [] startpos, endpos = (period, len(values)-1) while True: current = values[startpos] previous = values[startpos-period] ret = 100*((current-previous)/(1.0*previous)) output.append(ret) startpos += period if startpos > endpos: break return output calc_period_returns(a,1) # Expected output: # [5.0, -4.7619047619047619, -5.0, 5.2631578947368416] ``` Is there a more pythonic way of doing this - perhaps using list comprehension and maps?
2012/04/19
[ "https://Stackoverflow.com/questions/10226551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962891/" ]
Here you go: ``` >>> [100.0 * a1 / a2 - 100 for a1, a2 in zip(a[1:], a)] [5.0, -4.7619047619047592, -5.0, 5.2631578947368354] ``` Since you want to compare neighbor elements of a list, you better create a list of pairs you are interested in, like this: ``` >>> a = range(5) >>> a [0, 1, 2, 3, 4] >>> zip(a, a[1:]) [(0, 1), (1, 2), (2, 3), (3, 4)] ``` After that it is just a simple math to extract a percentage change from a pair of numbers.
You can also do this:

```
>>> a = [100,105,100,95,100]
>>> [(a[i]-a[i-1])/a[i-1] for i in range(1, len(a))]
[0.05, -0.047619047619047616, -0.05, 0.05263157894736842]
```
10,226,551
I have a list of floating point numbers and I want to generate another list of period returns from my first list. This is a run of the mill implementation (not tested - and OBVIOUSLY no error checking/handling): ``` a = [100,105,100,95,100] def calc_period_returns(values, period): output = [] startpos, endpos = (period, len(values)-1) while True: current = values[startpos] previous = values[startpos-period] ret = 100*((current-previous)/(1.0*previous)) output.append(ret) startpos += period if startpos > endpos: break return output calc_period_returns(a,1) # Expected output: # [5.0, -4.7619047619047619, -5.0, 5.2631578947368416] ``` Is there a more pythonic way of doing this - perhaps using list comprehension and maps?
2012/04/19
[ "https://Stackoverflow.com/questions/10226551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962891/" ]
I don't know how large your list of numbers is going to be, but if you are going to process large amounts of numbers, you should have a look at numpy. The side effect is that calculations look a lot simpler. With numpy, you create an array for your data ``` >>> import numpy as np >>> a = np.array([100,105,100,95,100], dtype=float) ``` and work with arrays as if they were simple numbers ``` >>> np.diff(a) / a[:-1] * 100. [ 5. -4.76190476 -5. 5.26315789] ```
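As an aside, none of the snippets above uses the `period` argument from the original function; a pure-Python generalization that keeps the question's non-overlapping stepping might look like this (a sketch):

```python
def period_returns(values, period=1):
    # compare each sampled value with the one `period` positions earlier,
    # stepping by `period` just like the while-loop in the question
    return [100.0 * values[i] / values[i - period] - 100
            for i in range(period, len(values), period)]

a = [100, 105, 100, 95, 100]
print(period_returns(a, 1))  # matches the expected output in the question
print(period_returns(a, 2))  # two non-overlapping 2-step returns
```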
Thanks for the answer guys! A function I implemented based on your answers if someone just wanna copy paste (like me): ``` def pct_change(nparray): pct=np.zeros_like(nparray) pct[1:]=np.diff(nparray) / np.abs(nparray[:-1]) #TODO zero divisionerror return pct ```
10,226,551
I have a list of floating point numbers and I want to generate another list of period returns from my first list. This is a run of the mill implementation (not tested - and OBVIOUSLY no error checking/handling): ``` a = [100,105,100,95,100] def calc_period_returns(values, period): output = [] startpos, endpos = (period, len(values)-1) while True: current = values[startpos] previous = values[startpos-period] ret = 100*((current-previous)/(1.0*previous)) output.append(ret) startpos += period if startpos > endpos: break return output calc_period_returns(a,1) # Expected output: # [5.0, -4.7619047619047619, -5.0, 5.2631578947368416] ``` Is there a more pythonic way of doing this - perhaps using list comprehension and maps?
2012/04/19
[ "https://Stackoverflow.com/questions/10226551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962891/" ]
I don't know how large your list of numbers is going to be, but if you are going to process large amounts of numbers, you should have a look at numpy. The side effect is that calculations look a lot simpler. With numpy, you create an array for your data ``` >>> import numpy as np >>> a = np.array([100,105,100,95,100], dtype=float) ``` and work with arrays as if they were simple numbers ``` >>> np.diff(a) / a[:-1] * 100. [ 5. -4.76190476 -5. 5.26315789] ```
You can also do this:

```
>>> a = [100,105,100,95,100]
>>> [(a[i]-a[i-1])/a[i-1] for i in range(1, len(a))]
[0.05, -0.047619047619047616, -0.05, 0.05263157894736842]
```
10,226,551
I have a list of floating point numbers and I want to generate another list of period returns from my first list. This is a run of the mill implementation (not tested - and OBVIOUSLY no error checking/handling): ``` a = [100,105,100,95,100] def calc_period_returns(values, period): output = [] startpos, endpos = (period, len(values)-1) while True: current = values[startpos] previous = values[startpos-period] ret = 100*((current-previous)/(1.0*previous)) output.append(ret) startpos += period if startpos > endpos: break return output calc_period_returns(a,1) # Expected output: # [5.0, -4.7619047619047619, -5.0, 5.2631578947368416] ``` Is there a more pythonic way of doing this - perhaps using list comprehension and maps?
2012/04/19
[ "https://Stackoverflow.com/questions/10226551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962891/" ]
Thanks for the answer guys! A function I implemented based on your answers if someone just wanna copy paste (like me): ``` def pct_change(nparray): pct=np.zeros_like(nparray) pct[1:]=np.diff(nparray) / np.abs(nparray[:-1]) #TODO zero divisionerror return pct ```
You can also do this:

```
>>> a = [100,105,100,95,100]
>>> [(a[i]-a[i-1])/a[i-1] for i in range(1, len(a))]
[0.05, -0.047619047619047616, -0.05, 0.05263157894736842]
```
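Note the comprehension above yields fractional changes; to match the percentage output expected in the question, one would scale by 100:

```python
a = [100, 105, 100, 95, 100]
# same pairwise comparison, scaled to percent
pct = [100.0 * (a[i] - a[i - 1]) / a[i - 1] for i in range(1, len(a))]
print(pct)
```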
17,239,077
Im trying to learn python and started with this, I keep getting a syntax error when i try to run it. the cursor jumps to the end of the close " at def start section. Im not sure where the syntax error is coming from as i speech mark all the print ``` #! python3 # J Presents: Rock, paper, Scissors: The Video Game import random import time rock = 1 paper = 2 scissors = 3 names = { rock: "Rock", paper: "Paper", scissors: "Scissors" } rules = {rock: scissors, paper: rock, scissors: paper} Player_score = 0 computer_score = 0 def start (): print "Let's play a game of Rock, Paper, Scissors." while game (): pass scores() def game (): player = move () computer = random.randint(1, 3) result(player, computer) return play_again() def move(): while True: print player = raw_input("Rock = 1\nPaper = 2\nScissors = 3\nMake a Move: ") try: player = int(player) if player in (1,2,3): return player except ValueError: pass Print "Oops! I didn't understand that. Please enter 1, 2 or 3." def result (player, computer): print "1..." time.sleep(1) print "2..." time.sleep(1) print "3!" time.sleep (0.5) print "Computer threw {0}!".format(names[computer]) global player_score, computer_score if player == computer: print "Tie Game." else: if rules[player} == computer: print "Your victory has been assured." player_score +=1 else: print "The computer laughs as you realise you have been defeated." def play_again(): answer = raw_input("Would you like to play again? y/n: ") if answer in ("y", "Y", "yes", "Yes", "Of course!"): return answer else: print "Thank you very much for playing our game. See you next time!" def scores(): global player_score, computer_score print " HIGH SCORES" print "Player: ", player_score print "Computer: ", computer_score if __name__ == '__main__': start() ```
2013/06/21
[ "https://Stackoverflow.com/questions/17239077", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401529/" ]
1) You have an indentation error here (note also that `Print` must be lowercase `print`) :

```
try:
    player = int(player)
    if player in (1,2,3):
        return player
  except ValueError:  #Try Except Block Statement
    pass
Print "Oops! I didn't understand that. Please enter 1, 2 or 3."
```

---

2) Also :

```
if rules[player} == computer: #Error in this line
```

Should be :

```
if rules[player] == computer:
```

---

3) Indentation error in if else.

```
if player == computer:
    print "Tie Game."
else:
if rules[player} == computer:
print "Your victory has been assured."
player_score +=1
else: #Incorrect indentation
print "The computer laughs as you realise you have been defeated."
```

This should be :

```
if rules[player] == computer:
    print "Your victory has been assured."
    player_score +=1
else:
    print "The computer laughs as you realise you have been defeated."
```
``` if rules[player} == computer: ``` The curly brace should be a bracket.
17,239,077
Im trying to learn python and started with this, I keep getting a syntax error when i try to run it. the cursor jumps to the end of the close " at def start section. Im not sure where the syntax error is coming from as i speech mark all the print ``` #! python3 # J Presents: Rock, paper, Scissors: The Video Game import random import time rock = 1 paper = 2 scissors = 3 names = { rock: "Rock", paper: "Paper", scissors: "Scissors" } rules = {rock: scissors, paper: rock, scissors: paper} Player_score = 0 computer_score = 0 def start (): print "Let's play a game of Rock, Paper, Scissors." while game (): pass scores() def game (): player = move () computer = random.randint(1, 3) result(player, computer) return play_again() def move(): while True: print player = raw_input("Rock = 1\nPaper = 2\nScissors = 3\nMake a Move: ") try: player = int(player) if player in (1,2,3): return player except ValueError: pass Print "Oops! I didn't understand that. Please enter 1, 2 or 3." def result (player, computer): print "1..." time.sleep(1) print "2..." time.sleep(1) print "3!" time.sleep (0.5) print "Computer threw {0}!".format(names[computer]) global player_score, computer_score if player == computer: print "Tie Game." else: if rules[player} == computer: print "Your victory has been assured." player_score +=1 else: print "The computer laughs as you realise you have been defeated." def play_again(): answer = raw_input("Would you like to play again? y/n: ") if answer in ("y", "Y", "yes", "Yes", "Of course!"): return answer else: print "Thank you very much for playing our game. See you next time!" def scores(): global player_score, computer_score print " HIGH SCORES" print "Player: ", player_score print "Computer: ", computer_score if __name__ == '__main__': start() ```
2013/06/21
[ "https://Stackoverflow.com/questions/17239077", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401529/" ]
1) You have an indentation error here (note also that `Print` must be lowercase `print`) :

```
try:
    player = int(player)
    if player in (1,2,3):
        return player
  except ValueError:  #Try Except Block Statement
    pass
Print "Oops! I didn't understand that. Please enter 1, 2 or 3."
```

---

2) Also :

```
if rules[player} == computer: #Error in this line
```

Should be :

```
if rules[player] == computer:
```

---

3) Indentation error in if else.

```
if player == computer:
    print "Tie Game."
else:
if rules[player} == computer:
print "Your victory has been assured."
player_score +=1
else: #Incorrect indentation
print "The computer laughs as you realise you have been defeated."
```

This should be :

```
if rules[player] == computer:
    print "Your victory has been assured."
    player_score +=1
else:
    print "The computer laughs as you realise you have been defeated."
```
```
if rules[player} == computer:
print "Your victory has been assured."
player_score +=1
```

First error: `rules[player}` => `rules[player]`

Second error: you need to indent the lines after the `if`.

```
else:
if rules[player} == computer:
print "Your victory has been assured."
player_score +=1
else:
print "The computer laughs as you realise you have been defeated."
```

Third error: as written (without indentation) there are two `else` branches for a single `if`; indenting the inner `if`/`else` as a nested block resolves it.
39,194,747
I'm coding some python files with sublime and I'd like to comment multiple selected lines which means putting the character '#' at the beginning of each selected line. Is it possible to create a such shortcut-key Binding on sublime to do that ? Thanks Vincent
2016/08/28
[ "https://Stackoverflow.com/questions/39194747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6767684/" ]
There are a few ways to do this. Primarily, *two*: (1) Use the CPU/processor stack. There are some variants, each with its own limitations. (2) Or, recode your function(s) to use a "stack frame" struct that simulates a "stack". The actual function ceases to be recursive. This can be virtually limitless up to whatever the heap will permit --- For (1) ... (A) If your system permits, you can issue a `syscall` to extend the process's stack size. There may be limits on how much you can do this and collisions with shared library addresses. (B) You can `malloc` a large area. With some [somewhat] intricate inline asm trickery, you can swap this area for the stack [and back again] and call your function with this `malloc` area as the stack. Doable, but not for the faint of heart ... (C) An easier way is to `malloc` a large area. Pass this area to `pthread_attr_setstack`. Then, run your recursive function as a thread using `pthread_create`. Note, you don't really care about multiple threads, it's just an easy way to avoid the "messy" asm trickery. With (A), *assuming* the stack extend syscall permits, the limit could be all of available memory permitted for stack [up to some system-wide or RLIMIT\_\* parameter]. With (B) and (C), you have to "guess" and make the `malloc` large enough before you start. After it has been done, the size is fixed and can *not* be extended further. Actually, that's not quite true. Using the asm trickery repeatedly [when needed], you could simulate a near infinite stack. But, IMO, the overhead of keeping track of these large malloc areas is high enough that I'd opt for (2) below. --- For (2) ... This can literally expand/contract as needed. One of the advantages is that you don't need to guess beforehand at how much memory you'll need. The [pseudo] stack can just keep growing as needed [until `malloc` returns `NULL` :-)]. 
Here is a sample recursive function [treat loosely as pseudo code]:

```
int
myfunc(int a,int b,int c,int d)
{
    int ret;

    // do some stuff ...

    if (must_recurse)
        ret = myfunc(a + 5,b + 7,c - 6,d + 8);
    else
        ret = 0;

    return ret;
}
```

Here is that function changed to use a `struct` as a stack frame [again, loose pseudo code]:

```
typedef struct stack_frame frame_t;
struct stack_frame {
    frame_t *prev;
    int a;
    int b;
    int c;
    int d;
};

frame_t *free_pool;

#define GROWCOUNT 1000

frame_t *
frame_push(frame_t *prev)
{
    frame_t *cur;

    // NOTE: we can maintain a free pool ...
    while (1) {
        cur = free_pool;
        if (cur != NULL) {
            free_pool = cur->prev;
            break;
        }

        // refill free pool from heap ...
        free_pool = calloc(GROWCOUNT,sizeof(frame_t));
        if (free_pool == NULL) {
            printf("frame_push: no memory\n");
            exit(1);
        }

        // link the fresh frames into a singly linked free list
        cur = free_pool;
        for (int count = GROWCOUNT; count > 1; --count, ++cur)
            cur->prev = cur + 1;
        cur->prev = NULL;
    }

    if (prev != NULL) {
        *cur = *prev;
        cur->prev = prev;

        cur->a += 5;
        cur->b += 7;
        cur->c -= 6;
        cur->d += 8;
    }
    else
        memset(cur,0,sizeof(frame_t));

    return cur;
}

frame_t *
frame_pop(frame_t *cur)
{
    frame_t *prev;

    prev = cur->prev;

    cur->prev = free_pool;
    free_pool = cur;

    return prev;
}

int
myfunc(void)
{
    int ret;
    frame_t *cur;

    cur = frame_push(NULL);
    // set initial conditions in cur...

    while (1) {
        // do stuff ...

        if (must_recurse) {
            cur = frame_push(cur);
            must_recurse = 0;
            continue;
        }

        // pop stack
        cur = frame_pop(cur);
        if (cur == NULL)
            break;
    }

    return ret;
}
```
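As an aside, CPython exposes a rough analogue of approach (C) without any asm trickery: give a worker thread a larger stack and run the recursion there. A sketch (the 64 MiB size and the depth are arbitrary, and `sys.setrecursionlimit` must be raised too, since the interpreter enforces its own recursion guard):

```python
import sys
import threading

def depth(n):
    # simple recursion that would hit the default limits for large n
    if n == 0:
        return 0
    return 1 + depth(n - 1)

def run_with_big_stack(func, *args, stack_bytes=64 * 1024 * 1024):
    """Run func(*args) in a worker thread that gets its own, larger stack."""
    threading.stack_size(stack_bytes)  # must be set before the thread starts
    sys.setrecursionlimit(200_000)     # lift the interpreter-level guard too
    result = {}
    t = threading.Thread(target=lambda: result.setdefault("value", func(*args)))
    t.start()
    t.join()
    return result["value"]

print(run_with_big_stack(depth, 50_000))  # 50000
```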
All functions, objects, variables, and user-defined structures use memory regions that are controlled by the OS and the compiler. So your self-defined stack still operates inside the general memory space the OS reserves for your process's stack. As a result, there is no big difference, but you can define an optimized, highly efficient structure that makes much better use of this general stack.
71,561,891
![This is what I want](https://i.stack.imgur.com/lexMT.png "example of what I want")How to make R side by side two column histogram (above) which I am able to do in python ([image taken from here](https://stackoverflow.com/questions/6871201/plot-two-histograms-on-single-chart-with-matplotlib)) and all the answers I have found for R get ![this result](https://i.stack.imgur.com/I11Cq.png "what I am getting")[image taken from here](https://stackoverflow.com/questions/3541713/how-to-plot-two-histograms-together-in-r) I tried searching for answers on stackoverflow and just google in general but no one seemed to be able to tell me how to make the histogram I want.
2022/03/21
[ "https://Stackoverflow.com/questions/71561891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18295411/" ]
We remove the `NA` with `na.omit` and get the `first` element - use `[1]` to coerce to `NA` if there are no non-NA elements present ``` library(dplyr) test %>% group_by(name) %>% summarise(across(everything(), ~ first(na.omit(.x))[1])) ``` -output ``` # A tibble: 2 × 4 name test_1 test_2 make_up_test <chr> <int> <int> <dbl> 1 C 2 4 1 2 J 1 3 NA ```
Here is an approach with pivoting: ``` library(tidyr) library(dplyr) test %>% pivot_longer(-name, names_to = "names") %>% drop_na() %>% pivot_wider(names_from = names, values_from = value) %>% relocate(test_2, .after = test_1) ``` ``` name test_1 test_2 make_up_test <chr> <dbl> <dbl> <dbl> 1 J 1 3 NA 2 C 2 4 1 ```
22,358,540
I know how to read bits inside an int in Python but not how to do so on a char. For an int, this elementary operation works: a & (2\*\*bit\_index) . But for a single character it gives the following error message: `unsupported operand type(s) for &: 'str' and 'int'` In case, this "subtlety' matters, I'm also reading my char from a string object using: ``` for my_char in my_string: ``` I'm stressing this point, because it could be that my\_char is actually a string of length one and not a char, just because I know really little about python handle of types. Thank you.
2014/03/12
[ "https://Stackoverflow.com/questions/22358540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3275464/" ]
You can use a `bytearray` instead of a string. The individual elements are integers, but you can still do basic string manipulation on the whole: ``` >>> arr = bytearray('foo') >>> type(arr[0]) <type 'int'> >>> arr.replace('o', 'u') bytearray(b'fuu') ```
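As a side note, on Python 3 indexing a `bytes` object already yields integers, so the bit test from the question needs no conversion at all (a sketch):

```python
s = b"foo"            # bytes: indexing gives ints on Python 3
bit_index = 1
print(s[0] & (1 << bit_index))  # 'f' is 0x66 = 0b01100110, so bit 1 is set -> 2
```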
Python doesn't really have a char type. You have a string of length one. You need to convert it to an int before you can apply those operators to it. Depending on what is in `my_string`, this might work: `int(my_char, 10)`
22,358,540
I know how to read bits inside an int in Python but not how to do so on a char. For an int, this elementary operation works: a & (2\*\*bit\_index) . But for a single character it gives the following error message: `unsupported operand type(s) for &: 'str' and 'int'` In case, this "subtlety' matters, I'm also reading my char from a string object using: ``` for my_char in my_string: ``` I'm stressing this point, because it could be that my\_char is actually a string of length one and not a char, just because I know really little about python handle of types. Thank you.
2014/03/12
[ "https://Stackoverflow.com/questions/22358540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3275464/" ]
Try this instead: `ord(a) & (2**bit_index)`. In python there is no separate character type, a character is simply a string of size one. So if you want to perform bit manipulations with string of size 1, you can use [`ord()`](http://docs.python.org/2.7/library/functions.html#ord) to convert it to `int`.
You can use a `bytearray` instead of a string. The individual elements are integers, but you can still do basic string manipulation on the whole: ``` >>> arr = bytearray('foo') >>> type(arr[0]) <type 'int'> >>> arr.replace('o', 'u') bytearray(b'fuu') ```
22,358,540
I know how to read bits inside an int in Python but not how to do so on a char. For an int, this elementary operation works: a & (2\*\*bit\_index) . But for a single character it gives the following error message: `unsupported operand type(s) for &: 'str' and 'int'` In case, this "subtlety' matters, I'm also reading my char from a string object using: ``` for my_char in my_string: ``` I'm stressing this point, because it could be that my\_char is actually a string of length one and not a char, just because I know really little about python handle of types. Thank you.
2014/03/12
[ "https://Stackoverflow.com/questions/22358540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3275464/" ]
Correct -- it is a string of length 1 and not a char. Convert your string to a list of integers: ``` >>> s = "hello world" >>> l = [ord(c) for c in s] ``` Then you can use bitwise operators on specific offsets: ``` >>> l[1] = l[1] << 1 >>> print "".join(chr(c) for c in l) h?llo world ```
Python doesn't really have a char type. You have a string of length one. You need to convert it to an int before you can apply those operators to it. Depending on what is in `my_string`, this might work: `int(my_char, 10)`
22,358,540
I know how to read bits inside an int in Python but not how to do so on a char. For an int, this elementary operation works: a & (2\*\*bit\_index) . But for a single character it gives the following error message: `unsupported operand type(s) for &: 'str' and 'int'` In case, this "subtlety' matters, I'm also reading my char from a string object using: ``` for my_char in my_string: ``` I'm stressing this point, because it could be that my\_char is actually a string of length one and not a char, just because I know really little about python handle of types. Thank you.
2014/03/12
[ "https://Stackoverflow.com/questions/22358540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3275464/" ]
Try this instead: `ord(a) & (2**bit_index)`. In python there is no separate character type, a character is simply a string of size one. So if you want to perform bit manipulations with string of size 1, you can use [`ord()`](http://docs.python.org/2.7/library/functions.html#ord) to convert it to `int`.
Python doesn't really have a char type. You have a string of length one. You need to convert it to an int before you can apply those operators to it. Depending on what is in `my_string`, this might work: `int(my_char, 10)`
22,358,540
I know how to read bits inside an int in Python but not how to do so on a char. For an int, this elementary operation works: a & (2\*\*bit\_index) . But for a single character it gives the following error message: `unsupported operand type(s) for &: 'str' and 'int'` In case, this "subtlety' matters, I'm also reading my char from a string object using: ``` for my_char in my_string: ``` I'm stressing this point, because it could be that my\_char is actually a string of length one and not a char, just because I know really little about python handle of types. Thank you.
2014/03/12
[ "https://Stackoverflow.com/questions/22358540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3275464/" ]
Try this instead: `ord(a) & (2**bit_index)`. In python there is no separate character type, a character is simply a string of size one. So if you want to perform bit manipulations with string of size 1, you can use [`ord()`](http://docs.python.org/2.7/library/functions.html#ord) to convert it to `int`.
Correct -- it is a string of length 1 and not a char. Convert your string to a list of integers: ``` >>> s = "hello world" >>> l = [ord(c) for c in s] ``` Then you can use bitwise operators on specific offsets: ``` >>> l[1] = l[1] << 1 >>> print "".join(chr(c) for c in l) h?llo world ```
50,182,833
I am running a Flask app on Google Cloud App Engine (flex). Running it locally works just fine, but once it deploys I get a `502 Bad Gateway error (nginx)`. Now I would like to figure out what causes this, but I am not able to find any option to view the console logs that my app creates. Since it works just fine on my local environment, my current workflow to solve this issue involves changing my code locally and deploying it to see if it works afterwards, but each deployment takes over 30min only to figure out it still does not work. There must be a way to do this more efficiently. Following the docs <https://cloud.google.com/appengine/docs/flexible/python/debugging-an-instance> I was able to SSH into my instance in debug-mode and launch the Flask app from the Cloud Shell, however it tells me to access it on <http://127.0.0.1:8080/> which I can't access from the cloud server. Hence I can't navigate the webpage in order to reproduce the 502 error and then see the output in the console. How can I figure out what causes the 502 error on the server?
2018/05/04
[ "https://Stackoverflow.com/questions/50182833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6341510/" ]
Had a similar issue. Found that App Engine looks for an `app` variable in the `main.py` file. My final `app.yaml` looks like below. app.yaml -------- ``` runtime: python env: flex entrypoint: gunicorn -b :$PORT main:app runtime_config: python_version: 3 ``` and I had a `requirements.txt`, which looks like below. requirements.txt ---------------- ``` Flask==1.1.1 gunicorn==20.0.4 ```
Here are my theories: * localhost (`127.0.0.1`) is being used; should use `0.0.0.0` * Flask internal WSGI server is being used; should use e.g. [Gunicorn](https://gunicorn.org/) **NB** You **may** develop and test these solutions using Cloud Shell. Cloud Shell (now) includes a [web preview](https://cloud.google.com/shell/docs/using-web-preview) feature that permits browsing endpoints (including `:8080`) for servers running on the Cloud Shell instance. Flask ----- Flask includes a development (WSGI) server and tutorials generally include: ```py if __name__ == '__main__': app.run(host='127.0.0.1', port=8080, debug=True) ``` Which, if run as `python somefile.py`, will use Flask's inbuilt (dev) server and expose it on localhost (`127.0.0.1`). This is inaccessible from other machines: ``` * Serving Flask app "main" (lazy loading) * Environment: production WARNING: Do not use the development server in a production environment. Use a production WSGI server instead. * Debug mode: on * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit) * Restarting with stat * Debugger is active! * Debugger PIN: 244-629-469 ``` If instead, `host='0.0.0.0'` is used, then this will work: ``` * Serving Flask app "main" (lazy loading) * Environment: production WARNING: Do not use the development server in a production environment. Use a production WSGI server instead. * Debug mode: on * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit) * Restarting with stat * Debugger is active! * Debugger PIN: 244-629-469 192.168.9.1 - - [08/May/2019 23:59:59] "GET / HTTP/1.1" 200 - 192.168.9.1 - - [08/May/2019 23:59:59] "GET /favicon.ico HTTP/1.1" 404 - ```
Gunicorn ------------- Flask's inbuilt server should not be used, and Flex's documentation describes how gunicorn (one of various alternatives) should be configured: <https://cloud.google.com/appengine/docs/flexible/python/runtime#application_startup> Which, if run as `gunicorn --bind=0.0.0.0:8080 main:app`, gives: ``` [INFO] Starting gunicorn 19.9.0 [INFO] Listening at: http://0.0.0.0:8080 (1) [INFO] Using worker: sync [INFO] Booting worker with pid: 7 ``` App Engine Flex --------------- Using the recommended configuration, app.yaml would include: ``` runtime: python env: flex entrypoint: gunicorn --bind :$PORT main:app ``` Dockerfiles ----------- You can test these locally with Dockerfiles and -- if you wish -- deploy these to Flex as custom runtimes (after revising `app.yaml`): ``` FROM python:3.7-alpine WORKDIR /app ADD . . RUN pip install -r requirements.txt ``` For Flask add: ``` ENTRYPOINT ["python","main.py"] ``` **NB** In the above, the configuration results from the `app.run(...)` call in somefile.py. And for gunicorn: ``` ENTRYPOINT ["gunicorn","--bind=0.0.0.0:8080","main:app"] ```
17,029,752
I am scraping 23770 webpages with a pretty simple web scraper using `scrapy`. I am quite new to scrapy and even python, but managed to write a spider that does the job. It is, however, really slow (it takes approx. 28 hours to crawl the 23770 pages). I have looked on the `scrapy` webpage and the mailing lists and `stackoverflow`, but I can't seem to find generic recommendations for writing fast crawlers understandable for beginners. Maybe my problem is not the spider itself, but the way i run it. All suggestions welcome! I have listed my code below, if it's needed. ``` from scrapy.spider import BaseSpider from scrapy.selector import HtmlXPathSelector from scrapy.item import Item, Field import re class Sale(Item): Adresse = Field() Pris = Field() Salgsdato = Field() SalgsType = Field() KvmPris = Field() Rum = Field() Postnummer = Field() Boligtype = Field() Kvm = Field() Bygget = Field() class HouseSpider(BaseSpider): name = 'House' allowed_domains = ["http://boliga.dk/"] start_urls = ['http://www.boliga.dk/salg/resultater?so=1&type=Villa&type=Ejerlejlighed&type=R%%C3%%A6kkehus&kom=&amt=&fraPostnr=&tilPostnr=&iPostnr=&gade=&min=&max=&byggetMin=&byggetMax=&minRooms=&maxRooms=&minSize=&maxSize=&minsaledate=1992&maxsaledate=today&kode=&p=%d' %n for n in xrange(1, 23770, 1)] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select("id('searchresult')/tr") items = [] for site in sites: item = Sale() item['Adresse'] = site.select("td[1]/a[1]/text()").extract() item['Pris'] = site.select("td[2]/text()").extract() item['Salgsdato'] = site.select("td[3]/text()").extract() Temp = site.select("td[4]/text()").extract() Temp = Temp[0] m = re.search('\r\n\t\t\t\t\t(.+?)\r\n\t\t\t\t', Temp) if m: found = m.group(1) item['SalgsType'] = found else: item['SalgsType'] = Temp item['KvmPris'] = site.select("td[5]/text()").extract() item['Rum'] = site.select("td[6]/text()").extract() item['Postnummer'] = site.select("td[7]/text()").extract() item['Boligtype'] = 
site.select("td[8]/text()").extract() item['Kvm'] = site.select("td[9]/text()").extract() item['Bygget'] = site.select("td[10]/text()").extract() items.append(item) return items ``` Thanks!
2013/06/10
[ "https://Stackoverflow.com/questions/17029752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2183630/" ]
Here's a collection of things to try: * use latest scrapy version (if not using already) * check if non-standard middlewares are used * try to increase `CONCURRENT_REQUESTS_PER_DOMAIN`, `CONCURRENT_REQUESTS` settings ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#concurrent-requests)) * turn off logging `LOG_ENABLED = False` ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#log-enabled)) * try `yield`ing an item in a loop instead of collecting items into the `items` list and returning them * use local cache DNS (see [this thread](https://stackoverflow.com/questions/12427451/how-do-i-improve-scrapys-download-speed)) * check if this site is using download threshold and limits your download speed (see [this thread](https://stackoverflow.com/questions/13505194/scrapy-crawling-speed-is-slow-60-pages-min#comment18491083_13505194)) * log cpu and memory usage during the spider run - see if there are any problems there * try run the same spider under [scrapyd](http://scrapyd.readthedocs.org/en/latest/) service * see if [grequests](https://github.com/kennethreitz/grequests) + [lxml](http://lxml.de/) will perform better (ask if you need any help with implementing this solution) * try running `Scrapy` on `pypy`, see [Running Scrapy on PyPy](https://stackoverflow.com/questions/31029362/running-scrapy-on-pypy) Hope that helps.
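To make a couple of these tips concrete, here is a hedged sketch: a few of the settings named above (the values are illustrative, not tuned recommendations), plus the "yield each item" pattern with stand-in parse logic rather than the real spider:

```python
# Illustrative values for the concurrency/logging settings mentioned above
CONCURRENT_REQUESTS = 32
CONCURRENT_REQUESTS_PER_DOMAIN = 16
LOG_ENABLED = False

# Yielding items one by one (instead of building an `items` list) lets the
# engine hand each item to the pipeline as soon as it is scraped.
# The row data here is a stand-in, not the real selectors.
def parse(rows):
    for row in rows:
        yield {"Adresse": row["address"], "Pris": row["price"]}

scraped = list(parse([{"address": "A", "price": 1}, {"address": "B", "price": 2}]))
print(scraped)
```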
Looking at your code, I'd say most of that time is spent in network requests rather than processing the responses. All of the tips @alecxe provides in his answer apply, but I'd suggest the `HTTPCACHE_ENABLED` setting, since it caches the requests and avoids doing it a second time. It would help on following crawls and even offline development. See more info in the docs: <http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.contrib.downloadermiddleware.httpcache>
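As a sketch, enabling the HTTP cache in `settings.py` might look like the following; the expiration and directory values are illustrative assumptions:

```python
# Cache every response on disk so re-crawls (and offline development)
# can skip the network entirely.
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0   # 0 means cached responses never expire
HTTPCACHE_DIR = "httpcache"     # stored under the project's .scrapy directory
```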
17,029,752
I am scraping 23770 webpages with a pretty simple web scraper using `scrapy`. I am quite new to scrapy and even python, but managed to write a spider that does the job. It is, however, really slow (it takes approx. 28 hours to crawl the 23770 pages). I have looked on the `scrapy` webpage and the mailing lists and `stackoverflow`, but I can't seem to find generic recommendations for writing fast crawlers understandable for beginners. Maybe my problem is not the spider itself, but the way i run it. All suggestions welcome! I have listed my code below, if it's needed. ``` from scrapy.spider import BaseSpider from scrapy.selector import HtmlXPathSelector from scrapy.item import Item, Field import re class Sale(Item): Adresse = Field() Pris = Field() Salgsdato = Field() SalgsType = Field() KvmPris = Field() Rum = Field() Postnummer = Field() Boligtype = Field() Kvm = Field() Bygget = Field() class HouseSpider(BaseSpider): name = 'House' allowed_domains = ["http://boliga.dk/"] start_urls = ['http://www.boliga.dk/salg/resultater?so=1&type=Villa&type=Ejerlejlighed&type=R%%C3%%A6kkehus&kom=&amt=&fraPostnr=&tilPostnr=&iPostnr=&gade=&min=&max=&byggetMin=&byggetMax=&minRooms=&maxRooms=&minSize=&maxSize=&minsaledate=1992&maxsaledate=today&kode=&p=%d' %n for n in xrange(1, 23770, 1)] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select("id('searchresult')/tr") items = [] for site in sites: item = Sale() item['Adresse'] = site.select("td[1]/a[1]/text()").extract() item['Pris'] = site.select("td[2]/text()").extract() item['Salgsdato'] = site.select("td[3]/text()").extract() Temp = site.select("td[4]/text()").extract() Temp = Temp[0] m = re.search('\r\n\t\t\t\t\t(.+?)\r\n\t\t\t\t', Temp) if m: found = m.group(1) item['SalgsType'] = found else: item['SalgsType'] = Temp item['KvmPris'] = site.select("td[5]/text()").extract() item['Rum'] = site.select("td[6]/text()").extract() item['Postnummer'] = site.select("td[7]/text()").extract() item['Boligtype'] = 
site.select("td[8]/text()").extract() item['Kvm'] = site.select("td[9]/text()").extract() item['Bygget'] = site.select("td[10]/text()").extract() items.append(item) return items ``` Thanks!
2013/06/10
[ "https://Stackoverflow.com/questions/17029752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2183630/" ]
Here's a collection of things to try: * use latest scrapy version (if not using already) * check if non-standard middlewares are used * try to increase `CONCURRENT_REQUESTS_PER_DOMAIN`, `CONCURRENT_REQUESTS` settings ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#concurrent-requests)) * turn off logging `LOG_ENABLED = False` ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#log-enabled)) * try `yield`ing an item in a loop instead of collecting items into the `items` list and returning them * use local cache DNS (see [this thread](https://stackoverflow.com/questions/12427451/how-do-i-improve-scrapys-download-speed)) * check if this site is using download threshold and limits your download speed (see [this thread](https://stackoverflow.com/questions/13505194/scrapy-crawling-speed-is-slow-60-pages-min#comment18491083_13505194)) * log cpu and memory usage during the spider run - see if there are any problems there * try run the same spider under [scrapyd](http://scrapyd.readthedocs.org/en/latest/) service * see if [grequests](https://github.com/kennethreitz/grequests) + [lxml](http://lxml.de/) will perform better (ask if you need any help with implementing this solution) * try running `Scrapy` on `pypy`, see [Running Scrapy on PyPy](https://stackoverflow.com/questions/31029362/running-scrapy-on-pypy) Hope that helps.
I also work on web scraping, using optimized C#, and it ends up CPU bound, so I am switching to C. Parsing HTML blows the CPU data cache, and your CPU is almost certainly not using SSE 4.2 at all, as you can only access that feature from C/C++. If you do the math, you are quickly compute bound, not memory bound.
17,029,752
I am scraping 23770 webpages with a pretty simple web scraper using `scrapy`. I am quite new to scrapy and even python, but managed to write a spider that does the job. It is, however, really slow (it takes approx. 28 hours to crawl the 23770 pages). I have looked on the `scrapy` webpage and the mailing lists and `stackoverflow`, but I can't seem to find generic recommendations for writing fast crawlers understandable for beginners. Maybe my problem is not the spider itself, but the way i run it. All suggestions welcome! I have listed my code below, if it's needed. ``` from scrapy.spider import BaseSpider from scrapy.selector import HtmlXPathSelector from scrapy.item import Item, Field import re class Sale(Item): Adresse = Field() Pris = Field() Salgsdato = Field() SalgsType = Field() KvmPris = Field() Rum = Field() Postnummer = Field() Boligtype = Field() Kvm = Field() Bygget = Field() class HouseSpider(BaseSpider): name = 'House' allowed_domains = ["http://boliga.dk/"] start_urls = ['http://www.boliga.dk/salg/resultater?so=1&type=Villa&type=Ejerlejlighed&type=R%%C3%%A6kkehus&kom=&amt=&fraPostnr=&tilPostnr=&iPostnr=&gade=&min=&max=&byggetMin=&byggetMax=&minRooms=&maxRooms=&minSize=&maxSize=&minsaledate=1992&maxsaledate=today&kode=&p=%d' %n for n in xrange(1, 23770, 1)] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select("id('searchresult')/tr") items = [] for site in sites: item = Sale() item['Adresse'] = site.select("td[1]/a[1]/text()").extract() item['Pris'] = site.select("td[2]/text()").extract() item['Salgsdato'] = site.select("td[3]/text()").extract() Temp = site.select("td[4]/text()").extract() Temp = Temp[0] m = re.search('\r\n\t\t\t\t\t(.+?)\r\n\t\t\t\t', Temp) if m: found = m.group(1) item['SalgsType'] = found else: item['SalgsType'] = Temp item['KvmPris'] = site.select("td[5]/text()").extract() item['Rum'] = site.select("td[6]/text()").extract() item['Postnummer'] = site.select("td[7]/text()").extract() item['Boligtype'] = 
site.select("td[8]/text()").extract() item['Kvm'] = site.select("td[9]/text()").extract() item['Bygget'] = site.select("td[10]/text()").extract() items.append(item) return items ``` Thanks!
2013/06/10
[ "https://Stackoverflow.com/questions/17029752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2183630/" ]
Here's a collection of things to try: * use latest scrapy version (if not using already) * check if non-standard middlewares are used * try to increase `CONCURRENT_REQUESTS_PER_DOMAIN`, `CONCURRENT_REQUESTS` settings ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#concurrent-requests)) * turn off logging `LOG_ENABLED = False` ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#log-enabled)) * try `yield`ing an item in a loop instead of collecting items into the `items` list and returning them * use local cache DNS (see [this thread](https://stackoverflow.com/questions/12427451/how-do-i-improve-scrapys-download-speed)) * check if this site is using download threshold and limits your download speed (see [this thread](https://stackoverflow.com/questions/13505194/scrapy-crawling-speed-is-slow-60-pages-min#comment18491083_13505194)) * log cpu and memory usage during the spider run - see if there are any problems there * try run the same spider under [scrapyd](http://scrapyd.readthedocs.org/en/latest/) service * see if [grequests](https://github.com/kennethreitz/grequests) + [lxml](http://lxml.de/) will perform better (ask if you need any help with implementing this solution) * try running `Scrapy` on `pypy`, see [Running Scrapy on PyPy](https://stackoverflow.com/questions/31029362/running-scrapy-on-pypy) Hope that helps.
One workaround to speed up your scrapy crawl is to configure your `start_urls` appropriately. For example, if our target data is in `http://apps.webofknowledge.com/doc=1`, where the doc numbers range from `1` to `1000`, you can configure your `start_urls` as follows: ``` start_urls = [ "http://apps.webofknowledge.com/doc=250", "http://apps.webofknowledge.com/doc=750", ] ``` In this way, requests will fan out from `250` to `251`, `249`, and so on, and from `750` to `751`, `749`, and so on, simultaneously, so you will get roughly 4 times the speed compared to `start_urls = ["http://apps.webofknowledge.com/doc=1"]`.
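The partitioning above can be generated rather than hard-coded; here is a small sketch (the helper name is hypothetical, and the URL pattern mirrors the example):

```python
def seed_urls(base, total_docs, num_seeds):
    # place num_seeds start URLs evenly across the doc-number range so the
    # crawl fans out from several points at once
    step = total_docs // num_seeds
    return ["%s%d" % (base, step // 2 + i * step) for i in range(num_seeds)]

print(seed_urls("http://apps.webofknowledge.com/doc=", 1000, 2))
# -> ['http://apps.webofknowledge.com/doc=250', 'http://apps.webofknowledge.com/doc=750']
```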
17,029,752
I am scraping 23770 webpages with a pretty simple web scraper using `scrapy`. I am quite new to scrapy and even python, but managed to write a spider that does the job. It is, however, really slow (it takes approx. 28 hours to crawl the 23770 pages). I have looked on the `scrapy` webpage and the mailing lists and `stackoverflow`, but I can't seem to find generic recommendations for writing fast crawlers understandable for beginners. Maybe my problem is not the spider itself, but the way i run it. All suggestions welcome! I have listed my code below, if it's needed. ``` from scrapy.spider import BaseSpider from scrapy.selector import HtmlXPathSelector from scrapy.item import Item, Field import re class Sale(Item): Adresse = Field() Pris = Field() Salgsdato = Field() SalgsType = Field() KvmPris = Field() Rum = Field() Postnummer = Field() Boligtype = Field() Kvm = Field() Bygget = Field() class HouseSpider(BaseSpider): name = 'House' allowed_domains = ["http://boliga.dk/"] start_urls = ['http://www.boliga.dk/salg/resultater?so=1&type=Villa&type=Ejerlejlighed&type=R%%C3%%A6kkehus&kom=&amt=&fraPostnr=&tilPostnr=&iPostnr=&gade=&min=&max=&byggetMin=&byggetMax=&minRooms=&maxRooms=&minSize=&maxSize=&minsaledate=1992&maxsaledate=today&kode=&p=%d' %n for n in xrange(1, 23770, 1)] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select("id('searchresult')/tr") items = [] for site in sites: item = Sale() item['Adresse'] = site.select("td[1]/a[1]/text()").extract() item['Pris'] = site.select("td[2]/text()").extract() item['Salgsdato'] = site.select("td[3]/text()").extract() Temp = site.select("td[4]/text()").extract() Temp = Temp[0] m = re.search('\r\n\t\t\t\t\t(.+?)\r\n\t\t\t\t', Temp) if m: found = m.group(1) item['SalgsType'] = found else: item['SalgsType'] = Temp item['KvmPris'] = site.select("td[5]/text()").extract() item['Rum'] = site.select("td[6]/text()").extract() item['Postnummer'] = site.select("td[7]/text()").extract() item['Boligtype'] = 
site.select("td[8]/text()").extract() item['Kvm'] = site.select("td[9]/text()").extract() item['Bygget'] = site.select("td[10]/text()").extract() items.append(item) return items ``` Thanks!
2013/06/10
[ "https://Stackoverflow.com/questions/17029752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2183630/" ]
Looking at your code, I'd say most of that time is spent in network requests rather than processing the responses. All of the tips @alecxe provides in his answer apply, but I'd suggest the `HTTPCACHE_ENABLED` setting, since it caches the requests and avoids doing it a second time. It would help on following crawls and even offline development. See more info in the docs: <http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.contrib.downloadermiddleware.httpcache>
I also work on web scraping, using optimized C#, and it ends up CPU bound, so I am switching to C. Parsing HTML blows the CPU data cache, and your CPU is almost certainly not using SSE 4.2 at all, as you can only access that feature from C/C++. If you do the math, you are quickly compute bound, not memory bound.
17,029,752
I am scraping 23770 webpages with a pretty simple web scraper using `scrapy`. I am quite new to scrapy and even python, but managed to write a spider that does the job. It is, however, really slow (it takes approx. 28 hours to crawl the 23770 pages). I have looked on the `scrapy` webpage and the mailing lists and `stackoverflow`, but I can't seem to find generic recommendations for writing fast crawlers understandable for beginners. Maybe my problem is not the spider itself, but the way i run it. All suggestions welcome! I have listed my code below, if it's needed. ``` from scrapy.spider import BaseSpider from scrapy.selector import HtmlXPathSelector from scrapy.item import Item, Field import re class Sale(Item): Adresse = Field() Pris = Field() Salgsdato = Field() SalgsType = Field() KvmPris = Field() Rum = Field() Postnummer = Field() Boligtype = Field() Kvm = Field() Bygget = Field() class HouseSpider(BaseSpider): name = 'House' allowed_domains = ["http://boliga.dk/"] start_urls = ['http://www.boliga.dk/salg/resultater?so=1&type=Villa&type=Ejerlejlighed&type=R%%C3%%A6kkehus&kom=&amt=&fraPostnr=&tilPostnr=&iPostnr=&gade=&min=&max=&byggetMin=&byggetMax=&minRooms=&maxRooms=&minSize=&maxSize=&minsaledate=1992&maxsaledate=today&kode=&p=%d' %n for n in xrange(1, 23770, 1)] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select("id('searchresult')/tr") items = [] for site in sites: item = Sale() item['Adresse'] = site.select("td[1]/a[1]/text()").extract() item['Pris'] = site.select("td[2]/text()").extract() item['Salgsdato'] = site.select("td[3]/text()").extract() Temp = site.select("td[4]/text()").extract() Temp = Temp[0] m = re.search('\r\n\t\t\t\t\t(.+?)\r\n\t\t\t\t', Temp) if m: found = m.group(1) item['SalgsType'] = found else: item['SalgsType'] = Temp item['KvmPris'] = site.select("td[5]/text()").extract() item['Rum'] = site.select("td[6]/text()").extract() item['Postnummer'] = site.select("td[7]/text()").extract() item['Boligtype'] = 
site.select("td[8]/text()").extract() item['Kvm'] = site.select("td[9]/text()").extract() item['Bygget'] = site.select("td[10]/text()").extract() items.append(item) return items ``` Thanks!
2013/06/10
[ "https://Stackoverflow.com/questions/17029752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2183630/" ]
Looking at your code, I'd say most of that time is spent in network requests rather than processing the responses. All of the tips @alecxe provides in his answer apply, but I'd suggest the `HTTPCACHE_ENABLED` setting, since it caches the requests and avoids doing it a second time. It would help on following crawls and even offline development. See more info in the docs: <http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.contrib.downloadermiddleware.httpcache>
One workaround to speed up your scrapy crawl is to configure your `start_urls` appropriately. For example, if our target data is in `http://apps.webofknowledge.com/doc=1`, where the doc numbers range from `1` to `1000`, you can configure your `start_urls` as follows: ``` start_urls = [ "http://apps.webofknowledge.com/doc=250", "http://apps.webofknowledge.com/doc=750", ] ``` In this way, requests will fan out from `250` to `251`, `249`, and so on, and from `750` to `751`, `749`, and so on, simultaneously, so you will get roughly 4 times the speed compared to `start_urls = ["http://apps.webofknowledge.com/doc=1"]`.
29,449,659
I have a server application written in python/django (REST api) for accepting a file upload from the client application. I want this uploaded file to be stored in AWS S3. I also want the file to be uploaded from client as multipart form / data . How can i achieve this. Any sample code application will help me to understand the way it should be done. Please assist. ``` class FileUploadView(APIView): parser_classes = (FileUploadParser,) def put(self, request, filename, format=None): file_obj = request.data['file'] self.handle_uploaded_file(file_obj) return self.get_response("", True, "", {}) def handle_uploaded_file(self, f): destination = open('<path>', 'wb+') for chunk in f.chunks(): destination.write(chunk) destination.close() ``` Thanks in advance
2015/04/04
[ "https://Stackoverflow.com/questions/29449659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/780805/" ]
If you want your uploads to go directly to AWS S3, you can use `django-storages` and set your Django file storage backend to use AWS S3. * [django-storages](https://bitbucket.org/david/django-storages) * [django-storages documentation](http://django-storages.readthedocs.org/en/latest/index.html) This will allow your Django project to handle storage transparently to S3 without your having to manually re-upload your uploaded files to S3. **Storage Settings** You will need to add at least these configurations to your Django settings: ``` # default remote file storage DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage' # aws access keys AWS_ACCESS_KEY_ID = 'YOUR-ACCESS-KEY' AWS_SECRET_ACCESS_KEY = 'YOUR-SECRET-ACCESS-KEY' AWS_BUCKET_NAME = 'your-bucket-name' AWS_STORAGE_BUCKET_NAME = AWS_BUCKET_NAME ``` **Example Code to Store Upload to Remote Storage** This is a modified version of your view with the `handle_uploaded_file` method using Django's storage backend to save the uploaded file to the remote destination (using django-storages). Note: Be sure to define the `DEFAULT_FILE_STORAGE` and AWS keys in your `settings` so `django-storages` can access your bucket. ``` from django.core.files.storage import default_storage from django.core.files import File # set file i/o chunk size to maximize throughput FILE_IO_CHUNK_SIZE = 128 * 2**10 class FileUploadView(APIView): parser_classes = (FileUploadParser,) def put(self, request, filename, format=None): file_obj = request.data['file'] self.handle_uploaded_file(file_obj) return self.get_response("", True, "", {}) def handle_uploaded_file(self, f): """ Write uploaded file to destination using default storage. 
""" # set storage object to use Django's default storage storage = default_storage # set the relative path inside your bucket where you want the upload # to end up fkey = 'sub-path-in-your-bucket-to-store-the-file' # determine mime type -- you may want to parse the upload header # to find out the exact MIME type of the upload file. content_type = 'image/jpeg' # write file to remote server # * "file" is a File storage object that will use your # storage backend (in this case, remote storage to AWS S3) # * "media" is a File object created with your upload file file = storage.open(fkey, 'w') storage.headers.update({"Content-Type": content_type}) f = open(path, 'rb') media = File(f) for chunk in media.chunks(chunk_size=FILE_IO_CHUNK_SIZE): file.write(chunk) file.close() media.close() f.close() ``` See more explanation and examples on how to access the remote storage here: * [django-storages: Amazon S3](http://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html)
Take a look at the `boto` package, which provides AWS APIs: ``` from boto.s3.connection import S3Connection s3 = S3Connection(access_key, secret_key) b = s3.get_bucket('<bucket>') mp = b.initiate_multipart_upload('<object>') for i in range(1, <parts>+1): io = <receive-image-part> # E.g. StringIO mp.upload_part_from_file(io, part_num=i) mp.complete_upload() ```
48,364,573
New to python and deep learning. I was trying to build an RNN with some data and I don't know where am I going wrong. This is my code: ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline raw = pd.read_excel('Online Retail.xlsx',index_col='InvoiceDate') sales = raw.drop(['InvoiceNo','StockCode','Country','Description'],axis=1) sales.head() sales.index = pd.to_datetime(sales.index) sales.info() train_set = sales.head(50000) test_set = sales.tail(41909) from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() training = np.nan_to_num(train_set) testing = np.nan_to_num(test_set) train_scaled = scaler.fit_transform(training) test_scaled = scaler.fit_transform(testing) def next_batch(training_data,batch_size,steps): rand_start = np.random.randint(0,len(training_data)-steps) y_batch = np.array(training_data[rand_start:rand_start+steps+1].reshape(26,steps+1)) return y_batch[:,:-1].reshape(-1,steps,1),y_batch[:,1:].reshape(-1,steps,1) import tensorflow as tf num_inputs = 1 num_time_steps = 10 num_neurons = 100 num_outputs = 1 learning_rate = 0.03 num_train_iterations = 4000 batch_size = 1 X = tf.placeholder(tf.float32,[None,num_time_steps,num_inputs]) y = tf.placeholder(tf.float32,[None,num_time_steps,num_outputs]) cell = tf.contrib.rnn.OutputProjectionWrapper( tf.contrib.rnn.BasicLSTMCell(num_units=num_neurons,activation=tf.nn.relu),output_size=num_outputs) outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32) loss = tf.reduce_mean(tf.square(outputs - y)) # MSE optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) train = optimizer.minimize(loss) init = tf.global_variables_initializer() saver = tf.train.Saver() with tf.Session(config=tf.ConfigProto()) as sess: sess.run(init) for iteration in range(num_train_iterations): X_batch, y_batch = next_batch(train_scaled,batch_size,num_time_steps) sess.run(train, feed_dict={X: X_batch, y: y_batch}) if iteration % 100 == 0: mse = loss.eval(feed_dict={X: 
X_batch, y: y_batch}) print(iteration, "\tMSE:", mse) # Save Model for Later saver.save(sess, "./ex_time_series_model") ``` The output: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-36-f2f7c66a33df> in <module>() 4 for iteration in range(num_train_iterations): 5 ----> 6 X_batch, y_batch = next_batch(train_scaled,batch_size,num_time_steps) 7 sess.run(train, feed_dict={X: X_batch, y: y_batch}) 8 <ipython-input-26-f673a469c67d> in next_batch(training_data, batch_size, steps) 1 def next_batch(training_data,batch_size,steps): 2 rand_start = np.random.randint(0,len(training_data)-steps) ----> 3 y_batch = np.array(training_data[rand_start:rand_start+steps+1].reshape(26,steps+1)) 4 return y_batch[:,:-1].reshape(-1,steps,1),y_batch[:,1:].reshape(-1,steps,1) ValueError: cannot reshape array of size 33 into shape (26,11) ```
2018/01/21
[ "https://Stackoverflow.com/questions/48364573", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8766071/" ]
I'm not sure where the number `26` came from, but it doesn't match your data dimensions. After you dropped four columns, the `training_data` array is `(50000, 3)`, from which you take `(11, 3)` slices. This array obviously can't be reshaped to `(26, 11)`. What you probably meant is this (in the `next_batch` function): ``` y_batch = np.array(training_data[rand_start:rand_start+steps+1].reshape(3,steps+1)) ```
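For illustration, here is a minimal runnable sketch of that corrected batching logic (the random array standing in for the scaled retail data, and the hard-coded `3` for the three remaining columns, are assumptions for this sketch):

```python
import numpy as np

def next_batch(training_data, steps):
    # Grab a random window of steps+1 consecutive rows: shape (steps+1, 3).
    rand_start = np.random.randint(0, len(training_data) - steps)
    window = training_data[rand_start:rand_start + steps + 1]
    # Reshape to 3 rows (one per column/series), steps+1 time steps each.
    y_batch = np.array(window.reshape(3, steps + 1))
    # Inputs are the window minus its last step, targets minus the first,
    # both shaped (batch, steps, 1) as dynamic_rnn expects.
    return (y_batch[:, :-1].reshape(-1, steps, 1),
            y_batch[:, 1:].reshape(-1, steps, 1))

train_scaled = np.random.rand(50000, 3)  # stand-in for the scaled data
X_batch, y_batch = next_batch(train_scaled, steps=10)
print(X_batch.shape, y_batch.shape)  # (3, 10, 1) (3, 10, 1)
```

With three columns, the 11-row window has 33 elements, so `reshape(3, 11)` succeeds where `reshape(26, 11)` could not.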
The error says that you are trying to reshape a tensor of size `33` into shape `26x11`, which is impossible. To reshape into `26x11` you would need a tensor of size `286`. Try debugging the `next_batch` function by printing the array's shape at each step with `print(y_batch.shape)` (it is a NumPy array, so use `.shape` rather than the TensorFlow `get_shape()`), and check whether it really has `286` elements. I also didn't catch why you fetch each batch randomly instead of reading the input data sequentially. It would help if you fixed the indentation when posting your code; it is hard to follow.
7,008,175
I wrote the following code to get the timezone (taking DST into account) for a specific epoch time: ``` def getTimeZoneFromEpoch(epoch): if time.daylight and time.gmtime(epoch).tm_isdst==1: return -time.altzone/3600.0 else: return -time.timezone/3600.0 ``` But I'm not sure it's correct; in fact, at the moment it is off by 1 hour. Maybe I should swap altzone and timezone in this code, but that's not what I understood from Python's help for the time module: ``` timezone -- difference in seconds between UTC and local standard time altzone -- difference in seconds between UTC and local DST time tm_isdst 1 if summer time is in effect, 0 if not, and -1 if unknown ``` Have I misunderstood something?
2011/08/10
[ "https://Stackoverflow.com/questions/7008175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/495838/" ]
I've tested this code to obtain the VM's local UTC offset, which, by the way, is only really valid at the moment it is measured. I'm not sure whether your code is equivalent or not. ``` from datetime import datetime from time import time def local_ephemeral_UTC_offset(epoch_time=None): u"Returns a datetime.timedelta object representing the local time offset from UTC at the moment" if epoch_time is None: epoch_time = time() return datetime.fromtimestamp(epoch_time) - datetime.utcfromtimestamp(epoch_time) ```
In short, use `time.localtime()` instead of `time.gmtime()`. --- The problem is that you use `gmtime()` , as the result of the following program shows. ``` from time import * def getTimeZoneFromEpoch(epoch): if daylight and gmtime(epoch).tm_isdst==1: return -altzone/3600.0 else: return -timezone/3600.0 print " tm_isdst of tm_isdst of time zone's\n" + \ ' epoch gmtime(epoch) localtime(epoch) offset' for d in ('13/03/2011', # DST start date in USA '14/03/2011', '', '06/11/2011', # DST end date in USA '07/11/2011', '', '27/03/2011', # DST start date in Europe '28/03/2011', '', '30/10/2011', # DST end date in Europe '31/10/2011'): if d: ds = strptime(d,'%d/%m/%Y') epoch = mktime(ds) lt = localtime(epoch) gt = gmtime(epoch) print '%s %s %12s %11s %7s %17s' % (d,ds.tm_isdst,epoch,gt.tm_isdst,lt.tm_isdst,getTimeZoneFromEpoch(epoch)) else: print ``` With my clock set to the "UTC-07:00 Rocky Mountains" time zone, where the DST starts on March 13th 2011 and ends on November 06th 2011 , the result is: ``` tm_isdst of tm_isdst of time zone's epoch gmtime(epoch) localtime(epoch) offset 13/03/2011 -1 1299999600.0 0 0 -7.0 14/03/2011 -1 1300082400.0 0 1 -7.0 06/11/2011 -1 1320559200.0 0 1 -7.0 07/11/2011 -1 1320649200.0 0 0 -7.0 27/03/2011 -1 1301205600.0 0 1 -7.0 28/03/2011 -1 1301292000.0 0 1 -7.0 30/10/2011 -1 1319954400.0 0 1 -7.0 31/10/2011 -1 1320040800.0 0 1 -7.0 ``` With my clock set to the "UTC+01:00 West Continental Europe" time zone, where the DST starts on March 27th 2011 and ends on October 30th 2011 , the result is: ``` tm_isdst of tm_isdst of time zone's epoch gmtime(epoch) localtime(epoch) offset 13/03/2011 -1 1299970800.0 0 0 1.0 14/03/2011 -1 1300057200.0 0 0 1.0 06/11/2011 -1 1320534000.0 0 0 1.0 07/11/2011 -1 1320620400.0 0 0 1.0 27/03/2011 -1 1301180400.0 0 0 1.0 28/03/2011 -1 1301263200.0 0 1 1.0 30/10/2011 -1 1319925600.0 0 1 1.0 31/10/2011 -1 1320015600.0 0 0 1.0 ```
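To make the fix concrete, here is a minimal sketch of the corrected function (the function name is changed slightly for this sketch, and the printed value depends on the machine's configured time zone):

```python
import time

def get_utc_offset_hours(epoch):
    # Use localtime(), not gmtime(): gmtime() always reports tm_isdst == 0,
    # so with gmtime() the DST branch would never be taken.
    if time.daylight and time.localtime(epoch).tm_isdst == 1:
        return -time.altzone / 3600.0
    return -time.timezone / 3600.0

# e.g. 0.0 on a UTC machine, -6.0 in US Mountain time during DST
print(get_utc_offset_hours(time.time()))
```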
7,008,175
I wrote the following code to get the timezone (taking DST into account) for a specific epoch time: ``` def getTimeZoneFromEpoch(epoch): if time.daylight and time.gmtime(epoch).tm_isdst==1: return -time.altzone/3600.0 else: return -time.timezone/3600.0 ``` But I'm not sure it's correct; in fact, at the moment it is off by 1 hour. Maybe I should swap altzone and timezone in this code, but that's not what I understood from Python's help for the time module: ``` timezone -- difference in seconds between UTC and local standard time altzone -- difference in seconds between UTC and local DST time tm_isdst 1 if summer time is in effect, 0 if not, and -1 if unknown ``` Have I misunderstood something?
2011/08/10
[ "https://Stackoverflow.com/questions/7008175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/495838/" ]
I've tested this code to obtain the VM's local UTC offset, which, by the way, is only really valid at the moment it is measured. I'm not sure whether your code is equivalent or not. ``` from datetime import datetime from time import time def local_ephemeral_UTC_offset(epoch_time=None): u"Returns a datetime.timedelta object representing the local time offset from UTC at the moment" if epoch_time is None: epoch_time = time() return datetime.fromtimestamp(epoch_time) - datetime.utcfromtimestamp(epoch_time) ```
Thanks for the help. Besides the two methods that you suggested, I also found a more flexible (and maybe more compatible) version that can also take a timezone object (or just use the local zone) and return the UTC offset. There was just this AmbiguousTimeError part that confused me, but I did something about it to make it (kind of) work in all cases. ``` from datetime import datetime import pytz from tzlocal import get_localzone def getUtcOffsetByEpoch(epoch, tz=None): if not tz: tz = get_localzone() delta = 0 while True: try: return tz.utcoffset(datetime.fromtimestamp(epoch + delta)).total_seconds() except pytz.exceptions.AmbiguousTimeError:## FIXME #d = datetime.fromtimestamp(epoch+3600) #print('AmbiguousTimeError', d.year, d.month, d.day, d.hour, d.minute, d.second) delta += 3600 print('delta = %s'%delta) except ( ValueError, OverflowError, ): return tz._utcoffset.total_seconds() ```
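If pulling in `pytz`/`tzlocal` is not an option, a dependency-free sketch of the same idea using only the standard library's fixed-offset `timezone` objects (no DST rules, hence no `AmbiguousTimeError` to work around) might look like this:

```python
from datetime import datetime, timedelta, timezone

def utc_offset_seconds(epoch, tz):
    # Attach the tzinfo while converting, then ask the aware datetime
    # for its offset from UTC.
    return datetime.fromtimestamp(epoch, tz).utcoffset().total_seconds()

half_past = timezone(timedelta(hours=3, minutes=30))  # a fixed +03:30 zone
print(utc_offset_seconds(0, half_past))     # 12600.0
print(utc_offset_seconds(0, timezone.utc))  # 0.0
```

The trade-off is that a fixed-offset zone never changes with DST, which is exactly why the ambiguity handling becomes unnecessary.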
7,008,175
I wrote the following code to get the timezone (taking DST into account) for a specific epoch time: ``` def getTimeZoneFromEpoch(epoch): if time.daylight and time.gmtime(epoch).tm_isdst==1: return -time.altzone/3600.0 else: return -time.timezone/3600.0 ``` But I'm not sure it's correct; in fact, at the moment it is off by 1 hour. Maybe I should swap altzone and timezone in this code, but that's not what I understood from Python's help for the time module: ``` timezone -- difference in seconds between UTC and local standard time altzone -- difference in seconds between UTC and local DST time tm_isdst 1 if summer time is in effect, 0 if not, and -1 if unknown ``` Have I misunderstood something?
2011/08/10
[ "https://Stackoverflow.com/questions/7008175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/495838/" ]
In short, use `time.localtime()` instead of `time.gmtime()`. --- The problem is that you use `gmtime()` , as the result of the following program shows. ``` from time import * def getTimeZoneFromEpoch(epoch): if daylight and gmtime(epoch).tm_isdst==1: return -altzone/3600.0 else: return -timezone/3600.0 print " tm_isdst of tm_isdst of time zone's\n" + \ ' epoch gmtime(epoch) localtime(epoch) offset' for d in ('13/03/2011', # DST start date in USA '14/03/2011', '', '06/11/2011', # DST end date in USA '07/11/2011', '', '27/03/2011', # DST start date in Europe '28/03/2011', '', '30/10/2011', # DST end date in Europe '31/10/2011'): if d: ds = strptime(d,'%d/%m/%Y') epoch = mktime(ds) lt = localtime(epoch) gt = gmtime(epoch) print '%s %s %12s %11s %7s %17s' % (d,ds.tm_isdst,epoch,gt.tm_isdst,lt.tm_isdst,getTimeZoneFromEpoch(epoch)) else: print ``` With my clock set to the "UTC-07:00 Rocky Mountains" time zone, where the DST starts on March 13th 2011 and ends on November 06th 2011 , the result is: ``` tm_isdst of tm_isdst of time zone's epoch gmtime(epoch) localtime(epoch) offset 13/03/2011 -1 1299999600.0 0 0 -7.0 14/03/2011 -1 1300082400.0 0 1 -7.0 06/11/2011 -1 1320559200.0 0 1 -7.0 07/11/2011 -1 1320649200.0 0 0 -7.0 27/03/2011 -1 1301205600.0 0 1 -7.0 28/03/2011 -1 1301292000.0 0 1 -7.0 30/10/2011 -1 1319954400.0 0 1 -7.0 31/10/2011 -1 1320040800.0 0 1 -7.0 ``` With my clock set to the "UTC+01:00 West Continental Europe" time zone, where the DST starts on March 27th 2011 and ends on October 30th 2011 , the result is: ``` tm_isdst of tm_isdst of time zone's epoch gmtime(epoch) localtime(epoch) offset 13/03/2011 -1 1299970800.0 0 0 1.0 14/03/2011 -1 1300057200.0 0 0 1.0 06/11/2011 -1 1320534000.0 0 0 1.0 07/11/2011 -1 1320620400.0 0 0 1.0 27/03/2011 -1 1301180400.0 0 0 1.0 28/03/2011 -1 1301263200.0 0 1 1.0 30/10/2011 -1 1319925600.0 0 1 1.0 31/10/2011 -1 1320015600.0 0 0 1.0 ```
Thanks for the help. Besides the two methods that you suggested, I also found a more flexible (and maybe more compatible) version that can also take a timezone object (or just use the local zone) and return the UTC offset. There was just this AmbiguousTimeError part that confused me, but I did something about it to make it (kind of) work in all cases. ``` from datetime import datetime import pytz from tzlocal import get_localzone def getUtcOffsetByEpoch(epoch, tz=None): if not tz: tz = get_localzone() delta = 0 while True: try: return tz.utcoffset(datetime.fromtimestamp(epoch + delta)).total_seconds() except pytz.exceptions.AmbiguousTimeError:## FIXME #d = datetime.fromtimestamp(epoch+3600) #print('AmbiguousTimeError', d.year, d.month, d.day, d.hour, d.minute, d.second) delta += 3600 print('delta = %s'%delta) except ( ValueError, OverflowError, ): return tz._utcoffset.total_seconds() ```
30,540,825
I have an OS X system where I need to install a module for python 2.6. Both `pip` and `easy_install-2.6` are failing: ``` # /usr/bin/easy_install-2.6 pip Searching for pip Reading http://pypi.python.org/simple/pip/ Download error: unknown url type: https -- Some packages may not be found! Couldn't find index page for 'pip' (maybe misspelled?) Scanning index of all packages (this may take a while) Reading http://pypi.python.org/simple/ Download error: unknown url type: https -- Some packages may not be found! No local packages or download links found for pip error: Could not find suitable distribution for Requirement.parse('pip') ``` Downloading [get\_pip.py](http://pip.readthedocs.org/en/latest/installing.html "get_pip.py") and running it with the stock OS X-supplied python 2.6: ``` # python2.6 ./get_pip.py Traceback (most recent call last): File "./get_pip.py", line 17868, in <module> main() File "./get_pip.py", line 162, in main bootstrap(tmpdir=tmpdir) File "./get_pip.py", line 82, in bootstrap import pip File "/tmp/tmpVJBvaW/pip.zip/pip/__init__.py", line 15, in <module> File "/tmp/tmpVJBvaW/pip.zip/pip/vcs/subversion.py", line 9, in <module> File "/tmp/tmpVJBvaW/pip.zip/pip/index.py", line 30, in <module> File "/tmp/tmpVJBvaW/pip.zip/pip/wheel.py", line 34, in <module> File "/tmp/tmpVJBvaW/pip.zip/pip/_vendor/__init__.py", line 92, in load_module ImportError: No module named 'pip._vendor.distlib.scripts' $ python2.6 --version Python 2.6.9 ``` With python2.7, either method works fine.
2015/05/30
[ "https://Stackoverflow.com/questions/30540825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4146877/" ]
Download the source file [here](https://pypi.python.org/packages/source/p/pip/pip-7.0.1.tar.gz#md5=5627bb807cf3d898a2eba276685537aa). Then do ``` >> cd ~/Downloads >> tar -xzvf pip-7.0.1.tar.gz ``` (replacing `~/Downloads` if necessary). Then ``` >> cd pip-7.0.1 >> sudo python2.6 setup.py install >> cd ``` (the last `cd` is used to leave the build directory). Now you should be able to run ``` >> python2.6 -c 'import pip;print pip.__version__' 7.0.1 ``` By default, pip (when installed from source) should be installed into `/usr/local/bin`. To check: ``` >> /usr/local/bin/pip --version pip 7.0.1 from /Library/Python/2.6/site-packages/pip-7.0.1-py2.6.egg (python 2.6) ``` Now you can install your favorite packages using ``` >> /usr/local/bin/pip install package >> python2.6 -c 'import package' ``` If you have conflicting versions of `pip` in `/usr/local/bin` you can try this ridiculous one liner: ``` >> python -c 'import os;dir="/usr/local/bin";[ os.system("echo %s/%s: && %s/%s --version"%(dir,s,dir,s)) for s in os.listdir("/usr/local/bin") if s.startswith("pip") ]' /usr/local/bin/pip: pip 7.0.1 from /Library/Python/2.6/site-packages/pip-7.0.1-py2.6.egg (python 2.6) /usr/local/bin/pip2: pip 7.0.1 from /Library/Python/2.6/site-packages/pip-7.0.1-py2.6.egg (python 2.6) /usr/local/bin/pip2.6: pip 7.0.1 from /Library/Python/2.6/site-packages/pip-7.0.1-py2.6.egg (python 2.6) ``` to find the one linked to py2.6. (in my case they are all the same)
By default [Homebrew](http://brew.sh/) provides the `pip` command via `brew install python`, so try installing Python using Homebrew. Try not to use `sudo` when working with `brew`. To verify which files are installed with your Python package, try: ``` $ brew list python /usr/local/Cellar/python/2.7.9/bin/pip /usr/local/Cellar/python/2.7.9/bin/pip2 /usr/local/Cellar/python/2.7.9/bin/pip2.7 ... ``` which should include `pip`. After installation you should symlink your formula's installed files by: ``` brew link python ``` which should create the right symbolic links (such as `/usr/local/bin/pip` pointing to your `Cellar/python/2.?.?/bin/pip`). If you have a permission issue, you may fix it by: ``` sudo chgrp -R admin /usr/local /Library/Caches/Homebrew sudo chmod -R g+w /usr/local /Library/Caches/Homebrew ``` and make sure your user is in the admin group (`id -Gn $USER`). Then re-link it again: ``` brew unlink python && brew link python ``` To test a dry run, unlink and run `brew link -n python` to see the links of files which `brew` would create. After linking is successful, make sure that your PATH system variable includes `/usr/local`; if not, add: ``` export PATH=/usr/local/sbin:/usr/local/bin:$PATH ``` to your `~/.bashrc` file. If successful, your `pip` should work now. --- If you don't want to use Homebrew, or you have two Pythons installed on your Mac, you can alternatively install it via: ``` sudo easy_install pip ``` --- Your error: > > Download error: unknown url type: https > > > means that your Python can't handle the HTTPS protocol without SSL support, so try installing the `openssl` package (on Linux, either `libssl-dev` or `openssl-devel`).
70,714,374
How do I loop over multi-variable data like this in Python? I have latitude and longitude data, and I want to pass all these values and run the loop 5 times. e.g. **round 1** lat = 13.29, longitude = 100.34, city = 'ABC' **round 2** lat = 94.09834, longitude = 103.34, city = 'XYZ', ..., ..., round 5. I'm very new to the Python world. Thank you for every kind comment and suggestion :)
2022/01/14
[ "https://Stackoverflow.com/questions/70714374", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17859678/" ]
I was getting permission issues because I was running a SLES-based Docker container on a CentOS-based host machine. When I used a SLES-based host machine, I could run the container without any permission issues.
In my case I fixed it by upgrading Docker to the latest version. [reference link.](https://travis-ci.community/t/unable-to-access-file-structure-of-docker-container-when-running-in-travis/11229)
28,848,098
I'm trying to make a recursive function that finds all the combinations of a Python list. I want to input ['a','b','c'] into my function, and as the function runs I want the trace to look like this: ```none ['a','b','c'] [['a','a'],['b','a'],['c','a']] [['a','a','b'],['b','a','b'],['c','a','b']] [['a','a','b','c'],['b','a','b','c'],['c','a','b','c']] ``` My recursive function looks like this: ``` def combo(lst,new_lst = []): for item in lst: new_lst.append([lst[0],item]) print([lst[0],item]) return combo(new_lst,lst[1:]) ```
2015/03/04
[ "https://Stackoverflow.com/questions/28848098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1995933/" ]
The right answer is that you should use `itertools.combinations`. But if for some reason you don't want to, and want to write a recursive function, you can use the following piece of code. It is an adaptation of the Erlang way of generating combinations, so it may seem a bit weird at first: ``` def combinations(N, iterable): if not N: return [[]] if not iterable: return [] head = [iterable[0]] tail = iterable[1:] new_comb = [ head + list_ for list_ in combinations(N - 1, tail) ] return new_comb + combinations(N, tail) ``` This is a very elegant way of thinking of combinations of size `N`: you take the first element of the iterable (*head*) and combine it with smaller (`N-1`) combinations of the rest of the iterable (*tail*). Then you add same-size (`N`) combinations of the *tail* to that. That's how you get all possible combinations. If you need all combinations of all lengths, you would do: ``` for n in range(1, len(iterable) + 1): print(combinations(n, iterable)) ```
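As a quick sanity check, the recursive function can be exercised like this (restated here so the snippet is self-contained):

```python
def combinations(N, iterable):
    # N == 0: exactly one combination, the empty one.
    if not N:
        return [[]]
    # Nothing left to choose from: no combinations of positive size.
    if not iterable:
        return []
    head = [iterable[0]]
    tail = iterable[1:]
    # Combinations that include the head...
    new_comb = [head + list_ for list_ in combinations(N - 1, tail)]
    # ...plus combinations drawn entirely from the tail.
    return new_comb + combinations(N, tail)

print(combinations(2, ['a', 'b', 'c']))
# [['a', 'b'], ['a', 'c'], ['b', 'c']]
print(combinations(3, ['a', 'b', 'c']))
# [['a', 'b', 'c']]
```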
Seems that you want all the products of a list; you can use [`itertools.product`](https://docs.python.org/2/library/itertools.html#itertools.product) within the following function to return a list of generators (note that the variable used inside the function must match the parameter name -- the original snippet mixed `li` and `l`): ``` >>> from itertools import product >>> l = ['a','b','c'] >>> def pro(l): ... return [product(l,repeat=i) for i in range(2,len(l)+1)] ... >>> for i in pro(l): ... print list(i) ... [('a', 'a'), ('a', 'b'), ('a', 'c'), ('b', 'a'), ('b', 'b'), ('b', 'c'), ('c', 'a'), ('c', 'b'), ('c', 'c')] [('a', 'a', 'a'), ('a', 'a', 'b'), ('a', 'a', 'c'), ('a', 'b', 'a'), ('a', 'b', 'b'), ('a', 'b', 'c'), ('a', 'c', 'a'), ('a', 'c', 'b'), ('a', 'c', 'c'), ('b', 'a', 'a'), ('b', 'a', 'b'), ('b', 'a', 'c'), ('b', 'b', 'a'), ('b', 'b', 'b'), ('b', 'b', 'c'), ('b', 'c', 'a'), ('b', 'c', 'b'), ('b', 'c', 'c'), ('c', 'a', 'a'), ('c', 'a', 'b'), ('c', 'a', 'c'), ('c', 'b', 'a'), ('c', 'b', 'b'), ('c', 'b', 'c'), ('c', 'c', 'a'), ('c', 'c', 'b'), ('c', 'c', 'c')] ```
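The same approach in Python 3 syntax, as a self-contained sketch (the function body is unchanged apart from consistent naming):

```python
from itertools import product

def pro(seq):
    # Lazy generators of all self-products of length 2 .. len(seq).
    return [product(seq, repeat=i) for i in range(2, len(seq) + 1)]

l = ['a', 'b', 'c']
pairs = list(pro(l)[0])   # materialize the length-2 products
print(len(pairs))         # 9, i.e. 3**2 ordered pairs
print(pairs[:3])          # [('a', 'a'), ('a', 'b'), ('a', 'c')]
```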
11,372,033
I'm getting an error when testing a python script which is installed on my Android Emulator running SDK 2.2 I have installed "Python\_for\_android\_r1.apk" and "sl4a\_r5.apk" in my emulator. It seems that my code is trying to import the following: ``` from urllib import urlencode from urllib2 import urlopen ``` And from what I can tell urllib2 is not found based on the error below. ``` ( FILE "/home/manuel/A;tanaStudio3Workspace/python-for-android/python-build/output/usr/lib/python2.6/urllib2.py, line 124 in urlopen ) ``` Any ideas how I can fix this problem??
2012/07/07
[ "https://Stackoverflow.com/questions/11372033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/953507/" ]
Your urllib module seems to be found. If the module were not found, Python would raise an error at the import. Looking at the error, it appears that you are having problems with `urlopen`. Is the URL you are trying to open valid? Line 124 in urllib2 refers to the opener that you are using to get your response.
`A;tanaStudio3Workspace` is weird. You have no problem importing the module, but the path looks really wrong. I would assume that if you fix the path it will be all right, but for further investigation you need to provide the full traceback.
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
Sometimes if you see "OperationalError: (2006, 'MySQL server has gone away')", it is because you are issuing a query that is too large. This can happen, for instance, if you're storing your sessions in MySQL and you're trying to put something really big in the session. To fix the problem, you need to increase the value of the max\_allowed\_packet setting in MySQL. The default value is 1048576. To see the current value, run the following SQL: ``` select @@max_allowed_packet; ``` To temporarily set a new value, run the following SQL: ``` set global max_allowed_packet=10485760; ``` To fix the problem more permanently, create a /etc/my.cnf file with at least the following: ``` [mysqld] max_allowed_packet = 16M ``` After editing /etc/my.cnf, you'll need to restart MySQL, or restart your machine if you don't know how.
SQLAlchemy now has a great write-up on how you can use pinging to be pessimistic about your connection's freshness: <http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic> From there, ``` from sqlalchemy import exc from sqlalchemy import event from sqlalchemy.pool import Pool @event.listens_for(Pool, "checkout") def ping_connection(dbapi_connection, connection_record, connection_proxy): cursor = dbapi_connection.cursor() try: cursor.execute("SELECT 1") except: # optional - dispose the whole pool # instead of invalidating one at a time # connection_proxy._pool.dispose() # raise DisconnectionError - pool will try # connecting again up to three times before raising. raise exc.DisconnectionError() cursor.close() ``` And a test to make sure the above works: ``` from sqlalchemy import create_engine e = create_engine("mysql://scott:tiger@localhost/test", echo_pool=True) c1 = e.connect() c2 = e.connect() c3 = e.connect() c1.close() c2.close() c3.close() # pool size is now three. print "Restart the server" raw_input() for i in xrange(10): c = e.connect() print c.execute("select 1").fetchall() c.close() ```
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
I've been struggling with this issue too. I don't like the idea of increasing the timeout on the MySQL server. Autoreconnect with `CONNECTION_MAX_AGE` doesn't work either, as was mentioned. Unfortunately I ended up wrapping every method that queries the database like this: ``` def do_db( callback, *arg, **args): try: return callback(*arg, **args) except (OperationalError, InterfaceError) as e: # Connection has gone away; filter by message or error code if you want to catch other errors here connection.close() return callback(*arg, **args) do_db(User.objects.get, id=123) # instead of User.objects.get(id=123) ``` As you can see, I rather prefer catching the exception over pinging the database every time before querying it, because the exception is a rare case. I would expect Django to reconnect automatically, but they seem to have [refused](https://code.djangoproject.com/ticket/21597) to address that issue.
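The retry-on-failure pattern itself can be demonstrated without Django or MySQL at all -- a self-contained sketch with a stand-in exception and a fake connection (all names here are hypothetical, chosen for this sketch):

```python
class FakeOperationalError(Exception):
    """Stand-in for MySQLdb.OperationalError in this sketch."""

class FlakyDB:
    """Fails on the first call, succeeds on the retry -- mimics a
    connection that has 'gone away' but works after reconnecting."""
    def __init__(self):
        self.calls = 0

    def query(self):
        self.calls += 1
        if self.calls == 1:
            raise FakeOperationalError("MySQL server has gone away")
        return "ok"

def do_db(callback, *args, **kwargs):
    try:
        return callback(*args, **kwargs)
    except FakeOperationalError:
        # A real app would call connection.close() here so that the
        # retry opens a fresh connection; we simply retry once.
        return callback(*args, **kwargs)

db = FlakyDB()
print(do_db(db.query))  # ok
```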
SQLAlchemy now has a great write-up on how you can use pinging to be pessimistic about your connection's freshness: <http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic> From there, ``` from sqlalchemy import exc from sqlalchemy import event from sqlalchemy.pool import Pool @event.listens_for(Pool, "checkout") def ping_connection(dbapi_connection, connection_record, connection_proxy): cursor = dbapi_connection.cursor() try: cursor.execute("SELECT 1") except: # optional - dispose the whole pool # instead of invalidating one at a time # connection_proxy._pool.dispose() # raise DisconnectionError - pool will try # connecting again up to three times before raising. raise exc.DisconnectionError() cursor.close() ``` And a test to make sure the above works: ``` from sqlalchemy import create_engine e = create_engine("mysql://scott:tiger@localhost/test", echo_pool=True) c1 = e.connect() c2 = e.connect() c3 = e.connect() c1.close() c2.close() c3.close() # pool size is now three. print "Restart the server" raw_input() for i in xrange(10): c = e.connect() print c.execute("select 1").fetchall() c.close() ```
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
The most common issue regarding such warning, is the fact that your application has reached the `wait_timeout` value of MySQL. I had the same problem with a Flask app. Here's how I solved: ``` $ grep timeout /etc/mysql/mysql.conf.d/mysqld.cnf # https://support.rackspace.com/how-to/how-to-change-the-mysql-timeout-on-a-server/ # wait = timeout for application session (tdm) # inteactive = timeout for keyboard session (terminal) # 7 days = 604800s / 4 hours = 14400s wait_timeout = 604800 interactive_timeout = 14400 ``` **Observation:** if you search for the variables via MySQL batch mode, the values will appear as it is. But If you perform `SHOW VARIABLES LIKE 'wait%';` or `SHOW VARIABLES LIKE 'interactive%';`, the value configured for `interactive_timeout`, will appear to both variables, and I don't know why, but the fact is, that the values configured for each variable at '/etc/mysql/mysql.conf.d/mysqld.cnf', will be respected by MySQL.
How old is this code? Django has had databases defined in settings since at least .96. Only other thing I can think of is multi-db support, which changed things a bit, but even that was 1.1 or 1.2. Even if you need a special DB for certain views, I think you'd probably be better off defining it in settings.
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
For me this was happening in debug mode. So I tried persistent connections in debug mode; check out the link: [Django - Documentation - Databases - Persistent connections](https://docs.djangoproject.com/en/2.2/ref/databases/). In settings: ``` 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'dbname', 'USER': 'root', 'PASSWORD': 'root', 'HOST': 'localhost', 'PORT': '3306', 'CONN_MAX_AGE': None }, ```
Check whether you are allowed to create a MySQL connection object in one thread and then use it in another. If that's forbidden, use threading.local for per-thread connections: ``` class Db(threading.local): """ thread-local db object """ con = None def __init__(self, ...options...): super(Db, self).__init__() self.con = MySQLdb.connect(...options...) db1 = Db(...) def test(): """safe to run from any thread""" cursor = db1.con.cursor() cursor.execute(...) ```
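The per-thread behavior is easy to verify without a database. Below is a runnable sketch of the same pattern, with a plain sentinel object standing in for the `MySQLdb.connect(...)` call (the `PerThreadDB` name and the sentinel are illustrative, not part of any library):

```python
import threading

class PerThreadDB(threading.local):
    """Each thread that touches this object gets its own `con`.

    threading.local re-runs __init__ the first time the instance is
    used in a new thread, so every thread builds its own connection.
    """
    def __init__(self):
        super().__init__()
        # Real code would do: self.con = MySQLdb.connect(...)
        self.con = object()  # unique stand-in "connection" per thread

db = PerThreadDB()
seen = {}

def worker(name):
    # Record which connection object this thread observes
    seen[name] = db.con

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()

assert seen["t1"] is not seen["t2"]  # each thread built its own connection
```

Because `__init__` fires once per thread, no connection is ever shared across threads, which avoids the thread-safety problem entirely.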
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
As per the [MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/gone-away.html), your error message is raised when the client can't send a question to the server, most likely because the server itself has closed the connection. In the most common case the server will close an idle connection after a default of 8 hours. This is configurable on the server side. The [MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/gone-away.html) gives a number of other possible causes which might be worth looking into to see if they fit your situation. An alternative to calling `connect()` in every function (which might end up needlessly creating new connections) would be to investigate using the `ping()` method on the connection object; this tests the connection, with the option of attempting an automatic reconnect. I struggled to find some [decent documentation](http://www.neotitans.com/resources/python/mysql-python-connection-error-2006.html) for the `ping()` method online, but the answer to [this question](https://stackoverflow.com/questions/12564828/sleeping-processes-in-mysql) might help. Note that automatically reconnecting can be dangerous when handling transactions, as it appears the reconnect causes an implicit rollback (which appears to be the main reason autoreconnect is not a feature of the MySQLdb implementation).
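The ping-and-reconnect idea can be sketched without a live server. In the snippet below, `FakeConnection` and `ConnectionGone` are stand-ins for a real MySQLdb connection and its `OperationalError` (with MySQLdb you would call `conn.ping()` inside the same `try/except`); the wrapper class and all names are illustrative, not part of any library:

```python
class ConnectionGone(Exception):
    """Stand-in for MySQLdb.OperationalError (2006, 'MySQL server has gone away')."""

class FakeConnection:
    """Pretend connection whose ping() fails once the server 'goes away'."""
    def __init__(self):
        self.alive = True
    def ping(self):
        if not self.alive:
            raise ConnectionGone("MySQL server has gone away")
    def cursor(self):
        return "cursor"

class ReconnectingDB:
    """Hand out cursors, reopening the connection whenever ping() fails."""
    def __init__(self, connect):
        self._connect = connect          # zero-arg factory, e.g. lambda: MySQLdb.connect(...)
        self.conn = self._connect()
    def cursor(self):
        try:
            self.conn.ping()             # cheap liveness check before handing out a cursor
        except ConnectionGone:
            self.conn = self._connect()  # server dropped us: open a fresh connection
        return self.conn.cursor()

db = ReconnectingDB(FakeConnection)
db.conn.alive = False                    # simulate the idle timeout closing the connection
cur = db.cursor()                        # transparently reconnects
assert cur == "cursor" and db.conn.alive
```

The transaction caveat above still applies: a reconnect discards any open transaction, so only ping at well-defined boundaries.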
Firstly, you should check the MySQL session & global values of the `wait_timeout` and `interactive_timeout` variables. Secondly, your client should try to reconnect to the server within those timeout values.
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
Sometimes if you see "OperationalError: (2006, 'MySQL server has gone away')", it is because you are issuing a query that is too large. This can happen, for instance, if you're storing your sessions in MySQL, and you're trying to put something really big in the session. To fix the problem, you need to increase the value of the max\_allowed\_packet setting in MySQL. The default value is 1048576. To see the current value, run the following SQL: ``` select @@max_allowed_packet; ``` To temporarily set a new value, run the following SQL: ``` set global max_allowed_packet=10485760; ``` To fix the problem more permanently, create a /etc/my.cnf file with at least the following: ``` [mysqld] max_allowed_packet = 16M ``` After editing /etc/my.cnf, you'll need to restart MySQL, or restart your machine if you don't know how.
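Since the symptom only shows up once a statement crosses the limit, a cheap client-side guard can be run before executing big inserts. The helper below is a hypothetical convenience, not part of MySQLdb; it just compares the encoded statement size against the default limit quoted above:

```python
DEFAULT_MAX_ALLOWED_PACKET = 1048576  # MySQL's default max_allowed_packet, in bytes

def fits_in_packet(statement, limit=DEFAULT_MAX_ALLOWED_PACKET):
    """Return True if the statement's UTF-8 byte length is within the server limit."""
    return len(statement.encode("utf-8")) < limit

small = "SELECT 1"
huge = "INSERT INTO sessions VALUES ('%s')" % ("x" * 2_000_000)  # ~2 MB literal

assert fits_in_packet(small)
assert not fits_in_packet(huge)  # would trip the 1 MiB default and drop the connection
```

For oversized payloads you can then raise the server limit as shown above, or split the data across several smaller statements.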
I had this problem and did not have the option to change my configuration. I finally figured out that the problem was occurring 49,500 records into my 50,000-record loop, because that was about the time I was trying again (after not having touched it for a long time) to hit my second database. So I changed my code so that every few thousand records I touched the second database again (with a count() of a very small table), and that fixed it. No doubt "ping" or some other means of touching the database would work as well.
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
This error is mysterious because MySQL doesn't report why it disconnects; it just goes away. It seems there are many causes of this kind of disconnection. One I just found is that if the query string is too large, the server will disconnect. This probably relates to the `max_allowed_packet` setting.
Firstly, you should check the MySQL session & global values of the `wait_timeout` and `interactive_timeout` variables. Secondly, your client should try to reconnect to the server within those timeout values.
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
As per the [MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/gone-away.html), your error message is raised when the client can't send a question to the server, most likely because the server itself has closed the connection. In the most common case the server will close an idle connection after a default of 8 hours. This is configurable on the server side. The [MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/gone-away.html) gives a number of other possible causes which might be worth looking into to see if they fit your situation. An alternative to calling `connect()` in every function (which might end up needlessly creating new connections) would be to investigate using the `ping()` method on the connection object; this tests the connection, with the option of attempting an automatic reconnect. I struggled to find some [decent documentation](http://www.neotitans.com/resources/python/mysql-python-connection-error-2006.html) for the `ping()` method online, but the answer to [this question](https://stackoverflow.com/questions/12564828/sleeping-processes-in-mysql) might help. Note that automatically reconnecting can be dangerous when handling transactions, as it appears the reconnect causes an implicit rollback (which appears to be the main reason autoreconnect is not a feature of the MySQLdb implementation).
I had this problem and did not have the option to change my configuration. I finally figured out that the problem was occurring 49,500 records into my 50,000-record loop, because that was about the time I was trying again (after not having touched it for a long time) to hit my second database. So I changed my code so that every few thousand records I touched the second database again (with a count() of a very small table), and that fixed it. No doubt "ping" or some other means of touching the database would work as well.
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
As per the [MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/gone-away.html), your error message is raised when the client can't send a question to the server, most likely because the server itself has closed the connection. In the most common case the server will close an idle connection after a default of 8 hours. This is configurable on the server side. The [MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/gone-away.html) gives a number of other possible causes which might be worth looking into to see if they fit your situation. An alternative to calling `connect()` in every function (which might end up needlessly creating new connections) would be to investigate using the `ping()` method on the connection object; this tests the connection, with the option of attempting an automatic reconnect. I struggled to find some [decent documentation](http://www.neotitans.com/resources/python/mysql-python-connection-error-2006.html) for the `ping()` method online, but the answer to [this question](https://stackoverflow.com/questions/12564828/sleeping-processes-in-mysql) might help. Note that automatically reconnecting can be dangerous when handling transactions, as it appears the reconnect causes an implicit rollback (which appears to be the main reason autoreconnect is not a feature of the MySQLdb implementation).
How old is this code? Django has had databases defined in settings since at least .96. Only other thing I can think of is multi-db support, which changed things a bit, but even that was 1.1 or 1.2. Even if you need a special DB for certain views, I think you'd probably be better off defining it in settings.
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. 
Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
The most common cause of such a warning is that your application has reached the `wait_timeout` value of MySQL. I had the same problem with a Flask app. Here's how I solved it: ``` $ grep timeout /etc/mysql/mysql.conf.d/mysqld.cnf # https://support.rackspace.com/how-to/how-to-change-the-mysql-timeout-on-a-server/ # wait = timeout for application session (tdm) # interactive = timeout for keyboard session (terminal) # 7 days = 604800s / 4 hours = 14400s wait_timeout = 604800 interactive_timeout = 14400 ``` **Observation:** if you look up the variables via MySQL batch mode, the values appear as configured. But if you run `SHOW VARIABLES LIKE 'wait%';` or `SHOW VARIABLES LIKE 'interactive%';`, the value configured for `interactive_timeout` is shown for both variables. I don't know why, but the fact is that the values configured for each variable in '/etc/mysql/mysql.conf.d/mysqld.cnf' are the ones MySQL respects.
I had this problem and did not have the option to change my configuration. I finally figured out that the problem was occurring 49,500 records into my 50,000-record loop, because that was about the time I was trying again (after not having touched it for a long time) to hit my second database. So I changed my code so that every few thousand records I touched the second database again (with a count() of a very small table), and that fixed it. No doubt "ping" or some other means of touching the database would work as well.
7,629,753
I have been doing a lot of studying of BaseHTTPServer and found that it's not that good for multiple requests. I went through this article <http://metachris.org/2011/01/scaling-python-servers-with-worker-processes-and-socket-duplication/#python> and I wanted to know the best way to build an HTTP server for multiple requests -> My requirements for the HTTP server are simple - - support multiple requests (where each request may run a LONG Python script) So far I have the following options -> - BaseHTTPServer (with threads is not good) - Mod\_Python (Apache integration) - CherryPy? - Any other?
2011/10/02
[ "https://Stackoverflow.com/questions/7629753", "https://Stackoverflow.com", "https://Stackoverflow.com/users/558397/" ]
I have had very good luck with the CherryPy web server, one of the oldest and most solid of the pure-Python web servers. Just write your application as a WSGI callable and it should be easy to run under CherryPy's multi-threaded server. <http://www.cherrypy.org/>
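For reference, a "WSGI callable" is nothing more than a function taking `(environ, start_response)`. A minimal, hypothetical app looks like this, and because it is a plain callable it can be exercised directly, no server required:

```python
def app(environ, start_response):
    """Smallest useful WSGI application: one plain-text response."""
    body = b"Hello, world\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# WSGI apps are plain callables, so they can be tested without a server:
status_seen = {}

def fake_start_response(status, headers):
    # Capture what the app reported so we can assert on it
    status_seen["status"] = status

result = app({"REQUEST_METHOD": "GET"}, fake_start_response)
assert b"".join(result) == b"Hello, world\n"
assert status_seen["status"] == "200 OK"
```

The same callable can then be mounted in CherryPy (e.g. via its WSGI graft mechanism), under mod_wsgi, or in any other WSGI container without changes.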
Indeed, the HTTP servers provided with the standard Python library are meant only for light-duty use; for moderate scaling (hundreds of concurrent connections), `mod_wsgi` in Apache is a great choice. If your needs are greater than that (tens of thousands of concurrent connections), you'll want to look at an asynchronous framework, such as Twisted or Tornado. The general structure of an asynchronous application is quite different, so if you think you're likely to need to go down that route, you should definitely start your project in one of those frameworks from the start.
7,629,753
I have been doing a lot of studying of BaseHTTPServer and found that it's not that good for multiple requests. I went through this article <http://metachris.org/2011/01/scaling-python-servers-with-worker-processes-and-socket-duplication/#python> and I wanted to know the best way to build an HTTP server for multiple requests -> My requirements for the HTTP server are simple - - support multiple requests (where each request may run a LONG Python script) So far I have the following options -> - BaseHTTPServer (with threads is not good) - Mod\_Python (Apache integration) - CherryPy? - Any other?
2011/10/02
[ "https://Stackoverflow.com/questions/7629753", "https://Stackoverflow.com", "https://Stackoverflow.com/users/558397/" ]
I have had very good luck with the CherryPy web server, one of the oldest and most solid of the pure-Python web servers. Just write your application as a WSGI callable and it should be easy to run under CherryPy's multi-threaded server. <http://www.cherrypy.org/>
[Tornado](http://www.tornadoweb.org/en/stable/) is a really good and easy-to-use asynchronous event-loop / webserver developed by FriendFeed/Facebook. I've personally had very good experiences with it. You can use the HTTP classes as in the example below, or only the io-loop to multiplex plain TCP connections. ``` import tornado.ioloop import tornado.web class MainHandler(tornado.web.RequestHandler): def get(self): self.write("Hello, world") application = tornado.web.Application([ (r"/", MainHandler), ]) if __name__ == "__main__": application.listen(8888) tornado.ioloop.IOLoop.current().start() ```
18,267,454
The SQL expression: ```sql select * from order where status=0 and adddate(created_time, interval 1 day)>now(); ``` Python code: ```python from sqlalchemy.sql.expression import func, text from datetime import datetime closed_orders = DBSession.query(Order).filter(func.dateadd(Order.create_time, text('interval 1 day'))>datetime.now()).all() ``` but it's wrong. How do I do it correctly? Thanks. REF: [Using DATEADD in sqlalchemy](https://stackoverflow.com/questions/15572292/using-dateadd-in-sqlalchemy)
2013/08/16
[ "https://Stackoverflow.com/questions/18267454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2688431/" ]
Try this: ``` from sqlalchemy import func import datetime DBSession.query(Order)\ .filter(func.ADDDATE(Order.create_time,1)>datetime.datetime.now()) ```
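An alternative worth noting: the condition `ADDDATE(create_time, 1) > NOW()` is algebraically the same as `create_time > NOW() - 1 day`, so the cutoff can be computed in Python with `timedelta` and compared with a plain column filter, sidestepping database-specific date functions entirely (the `Order`/`DBSession` names are the ones from the question):

```python
from datetime import datetime, timedelta

one_day = timedelta(days=1)
cutoff = datetime.now() - one_day

# Portable SQLAlchemy form (sketch; requires the question's DBSession/Order):
#   DBSession.query(Order).filter(Order.create_time > cutoff).all()

# The equivalence itself, checked on plain datetimes:
now = datetime(2013, 8, 17, 6, 0)
for create_time in (datetime(2013, 8, 16, 12, 0),   # within the last day
                    datetime(2013, 8, 15, 12, 0)):  # older than a day
    assert (create_time + one_day > now) == (create_time > now - one_day)
```

This also keeps the comparison on the raw column, which lets the database use an index on `create_time`, something wrapping the column in a function call usually prevents.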
presto: ``` extract('hour', cast(t_table.open_time, TIMESTAMP)) - 5 == 12 extract('dow', cast(cast(t_table.open_time, TIMESTAMP) - 5, TIMESTAMP)) == 3 ```
10,618,956
I want to implement a symbol type, which keeps track of the symbols we already have (saved in `_sym_table`), and return them if they exist, or create new ones otherwise. The code: ``` # -*- coding: utf-8 -*- _sym_table = {} class Symbol(object): def __new__(cls, sym): if sym not in _sym_table: return super().__new__(cls) else: return _sym_table[sym] def __init__(self, sym): self.sym = sym _sym_table[sym] = self def __str__(self): return self.sym def __cmp__(self, other): return self is other def __hash__(self): return self.sym.__hash__() ``` But when I call `copy.deepcopy` on a list of such `Symbol` instances, an exception is raised: ``` a = Symbol('a') b = Symbol('b') s = [a, b] t = copy.deepcopy(s) ``` Error messages: ``` Traceback (most recent call last): File "xxx.py", line 7, in <module> t = copy.deepcopy(s) File "/usr/lib/python3.2/copy.py", line 147, in deepcopy y = copier(x, memo) File "/usr/lib/python3.2/copy.py", line 209, in _deepcopy_list y.append(deepcopy(a, memo)) File "/usr/lib/python3.2/copy.py", line 174, in deepcopy y = _reconstruct(x, rv, 1, memo) File "/usr/lib/python3.2/copy.py", line 285, in _reconstruct y = callable(*args) File "/usr/lib/python3.2/copyreg.py", line 88, in __newobj__ return cls.__new__(cls, *args) TypeError: __new__() takes exactly 2 arguments (1 given) ``` So my questions are: * How can I make a deep copy of these objects with self-defined `__new__` methods? * And any suggestions about when and how to use `copy.deepcopy`? Thanks a lot!
2012/05/16
[ "https://Stackoverflow.com/questions/10618956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/403367/" ]
One problem is that `deepcopy` and `copy` have no way of knowing which arguments to pass to `__new__`; therefore they only work with classes that don't require constructor arguments. The reason you can have `__init__` arguments is that `__init__` isn't called when copying an object, but `__new__` must be called to create the new object. So if you want to control copying, you'll have to define the special `__copy__` and `__deepcopy__` methods: ``` def __copy__(self): return self def __deepcopy__(self, memo): return self ``` By the way, [singletons](http://tech.puredanger.com/2007/07/03/pattern-hate-singleton/) are [evil](http://blogs.msdn.com/b/scottdensmore/archive/2004/05/25/140827.aspx) and not really needed in Python.
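Putting it together, here is a runnable sketch of the interned-symbol class with `__copy__`/`__deepcopy__` defined. As a small change from the question's code, the interning is moved into `__new__` so that `__init__` does not need to re-register instances:

```python
import copy

_sym_table = {}

class Symbol:
    def __new__(cls, sym):
        if sym in _sym_table:          # return the interned instance if it exists
            return _sym_table[sym]
        self = super().__new__(cls)
        self.sym = sym
        _sym_table[sym] = self
        return self

    def __copy__(self):
        return self                    # interned symbols are never really copied

    def __deepcopy__(self, memo):
        return self

a = Symbol('a')
t = copy.deepcopy([a, Symbol('b')])
assert t[0] is a                       # the "copy" is the same interned object
assert Symbol('a') is a                # construction also returns the interned one
```

With `__deepcopy__` defined, `copy.deepcopy` never falls back to the `_reconstruct`/`__new__` path from the traceback, so the `TypeError` disappears.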
Seems to me you want the Symbol instances to be singletons. Deepcopy, however, is supposed to be used when you want an exact copy of an instance, i.e. a different instance that is equal to the original. So the usage here kinda contradicts the purpose of deepcopy. If you want to make it work anyhow, you can define the [\_\_deepcopy\_\_](http://docs.python.org/py3k/library/copy.html#copy.deepcopy) method on *Symbol*.
10,618,956
I want to implement a symbol type, which keeps track of the symbols we already have(saved in `_sym_table`), and return them if they exist, or create new ones otherwise. The code: ``` # -*- coding: utf-8 -*- _sym_table = {} class Symbol(object): def __new__(cls, sym): if sym not in _sym_table: return super().__new__(cls) else: return _sym_table[sym] def __init__(self, sym): self.sym = sym _sym_table[sym] = self def __str__(self): return self.sym def __cmp__(self, other): return self is other def __hash__(self): return self.sym.__hash__() ``` But when I call `copy.deepcopy` on a list of such `Symbol` instances, exception is raised: ``` a = Symbol('a') b = Symbol('b') s = [a, b] t = copy.deepcopy(s) ``` Error messages: ``` Traceback (most recent call last): File "xxx.py", line 7, in <module> t = copy.deepcopy(s) File "/usr/lib/python3.2/copy.py", line 147, in deepcopy y = copier(x, memo) File "/usr/lib/python3.2/copy.py", line 209, in _deepcopy_list y.append(deepcopy(a, memo)) File "/usr/lib/python3.2/copy.py", line 174, in deepcopy y = _reconstruct(x, rv, 1, memo) File "/usr/lib/python3.2/copy.py", line 285, in _reconstruct y = callable(*args) File "/usr/lib/python3.2/copyreg.py", line 88, in __newobj__ return cls.__new__(cls, *args) TypeError: __new__() takes exactly 2 arguments (1 given) ``` So my questions are: * How can I make a deep copy on these objects with self-defined `__new__` methods? * And any suggestions about when and how to use `copy.deepcopy`? Thanks a lot!
2012/05/16
[ "https://Stackoverflow.com/questions/10618956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/403367/" ]
Seems to me you want the Symbol instances to be singletons. Deepcopy, however is supposed to be used when you want an exact copy of an instance, i.e. a different instance that is equal to the original. So the usage here kinda contradicts the purpose of deepcopy. If you want to make it work anyhow, you can define the [\_\_deepcopy\_\_](http://docs.python.org/py3k/library/copy.html#copy.deepcopy) method on *Symbol*.
Define `__getnewargs__` — that way you will not only be able to `copy` and `deepcopy`, but you'll also be able to `pickle`.
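A hedged sketch of that approach (class simplified from the question): `__getnewargs__` tells `copy` and `pickle` which arguments to pass back into `__new__` during reconstruction, so reconstruction goes through the interning table again instead of failing with the `TypeError` from the traceback:

```python
import copy
import pickle

_sym_table = {}

class Symbol:
    def __new__(cls, sym):
        if sym not in _sym_table:
            obj = super().__new__(cls)
            obj.sym = sym
            _sym_table[sym] = obj
        return _sym_table[sym]

    def __getnewargs__(self):
        # Arguments that copy/pickle should pass back into __new__.
        return (self.sym,)

a = Symbol('a')
assert copy.deepcopy(a) is a
assert pickle.loads(pickle.dumps(a)) is a
```

Within one process, both round trips hand back the interned instance, because `__new__` consults `_sym_table` again.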
10,618,956
I want to implement a symbol type, which keeps track of the symbols we already have(saved in `_sym_table`), and return them if they exist, or create new ones otherwise. The code: ``` # -*- coding: utf-8 -*- _sym_table = {} class Symbol(object): def __new__(cls, sym): if sym not in _sym_table: return super().__new__(cls) else: return _sym_table[sym] def __init__(self, sym): self.sym = sym _sym_table[sym] = self def __str__(self): return self.sym def __cmp__(self, other): return self is other def __hash__(self): return self.sym.__hash__() ``` But when I call `copy.deepcopy` on a list of such `Symbol` instances, exception is raised: ``` a = Symbol('a') b = Symbol('b') s = [a, b] t = copy.deepcopy(s) ``` Error messages: ``` Traceback (most recent call last): File "xxx.py", line 7, in <module> t = copy.deepcopy(s) File "/usr/lib/python3.2/copy.py", line 147, in deepcopy y = copier(x, memo) File "/usr/lib/python3.2/copy.py", line 209, in _deepcopy_list y.append(deepcopy(a, memo)) File "/usr/lib/python3.2/copy.py", line 174, in deepcopy y = _reconstruct(x, rv, 1, memo) File "/usr/lib/python3.2/copy.py", line 285, in _reconstruct y = callable(*args) File "/usr/lib/python3.2/copyreg.py", line 88, in __newobj__ return cls.__new__(cls, *args) TypeError: __new__() takes exactly 2 arguments (1 given) ``` So my questions are: * How can I make a deep copy on these objects with self-defined `__new__` methods? * And any suggestions about when and how to use `copy.deepcopy`? Thanks a lot!
2012/05/16
[ "https://Stackoverflow.com/questions/10618956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/403367/" ]
One problem is that `deepcopy` and `copy` have no way of knowing which arguments to pass to `__new__`; therefore they only work with classes that don't require constructor arguments. The reason you can have `__init__` arguments is that `__init__` isn't called when copying an object, but `__new__` must be called to create the new object. So if you want to control copying, you'll have to define the special `__copy__` and `__deepcopy__` methods: ``` def __copy__(self): return self def __deepcopy__(self, memo): return self ``` By the way, [singletons](http://tech.puredanger.com/2007/07/03/pattern-hate-singleton/) are [evil](http://blogs.msdn.com/b/scottdensmore/archive/2004/05/25/140827.aspx) and not really needed in Python.
Define `__getnewargs__` — that way you will not only be able to `copy` and `deepcopy`, but you'll also be able to `pickle`.
70,351,208
I am trying to fit some `experimental data (x and y)` with a `custom function (Srt)` and using `scipy.optimize.curve_fit()`: Reading the data and defining the function, using dummy values (10,10) for Km and Vmax (which are to be determined using the curve fit) works fine, as long as I use `np.asarray()`: ``` from scipy.special import lambertw from scipy.optimize import curve_fit import numpy as np import scipy def Srt(t,s,Km,Vmax): print("t",type(t)) print("t",t) print("last element of t:",t[-1]) print("s",type(s)) print("s",s) print("last element of s:",s[-1]) Smax = s[-1] # Substrate concentration at end of reaction t0 = t[0] # time=0 (beginning of reaction) s0 = s[0] # Substrate concentration at time = 0 (beginning of reaction) E = np.exp(((Smax - s0) - Vmax*(t+t0))/Km) L = lambertw(((Smax - s0)/Km)*E) y = Smax - Km*L return y x=[2.780000e-03,2.778000e-02,5.278000e-02,7.778000e-02,1.027800e-01 ,1.277800e-01,1.527800e-01,1.777800e-01,2.027800e-01,2.277800e-01 ,2.527800e-01,2.777800e-01,3.027800e-01,3.277800e-01,3.527800e-01] y=[0.44236,0.4308,0.42299,0.41427,0.40548,0.39908,0.39039,0.3845,0.37882 ,0.37411,0.36759,0.36434,0.35864,0.35508,0.35138] xdata = np.asarray(x) ydata = np.asarray(y) Srt(xdata, ydata,10,10) ``` If I do not use `np.asarray`, I get a "Type Error": ``` Srt(x, y,10,10) ``` [![TypeError](https://i.stack.imgur.com/SWE3X.png)](https://i.stack.imgur.com/SWE3X.png) When I continue to use curve\_fit to make the fit for Vmax and Km with: ``` parameters, covariance = scipy.optimize.curve_fit(Srt, xdata, ydata) ``` I get into trouble: If I understand the error message correctly, for some reason, the array `ydata` is not an array anymore when it is read in as `s` ?!? [![ydata_S_Error](https://i.stack.imgur.com/3Z6rG.png)](https://i.stack.imgur.com/3Z6rG.png) What do I have to change in my code so that I can work with my function `Srt` and `curve_fit` ? 
**EDIT: Full output of code:** ``` t <class 'numpy.ndarray'> t [0.00278 0.02778 0.05278 0.07778 0.10278 0.12778 0.15278 0.17778 0.20278 0.22778 0.25278 0.27778 0.30278 0.32778 0.35278] last element of t: 0.35278 s <class 'numpy.ndarray'> s [0.44236 0.4308 0.42299 0.41427 0.40548 0.39908 0.39039 0.3845 0.37882 0.37411 0.36759 0.36434 0.35864 0.35508 0.35138] last element of s: 0.35138 t <class 'numpy.ndarray'> t [0.00278 0.02778 0.05278 0.07778 0.10278 0.12778 0.15278 0.17778 0.20278 0.22778 0.25278 0.27778 0.30278 0.32778 0.35278] last element of t: 0.35278 s <class 'numpy.float64'> s 1.0 --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-23-5ce34d06b849> in <module> 33 #then the problems start 34 ---> 35 parameters, covariance = scipy.optimize.curve_fit(Srt, xdata, ydata) ~\anaconda3\lib\site-packages\scipy\optimize\minpack.py in curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma, check_finite, bounds, method, jac, **kwargs) 761 # Remove full_output from kwargs, otherwise we're passing it in twice. 
762 return_full = kwargs.pop('full_output', False) --> 763 res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs) 764 popt, pcov, infodict, errmsg, ier = res 765 ysize = len(infodict['fvec']) ~\anaconda3\lib\site-packages\scipy\optimize\minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag) 386 if not isinstance(args, tuple): 387 args = (args,) --> 388 shape, dtype = _check_func('leastsq', 'func', func, x0, args, n) 389 m = shape[0] 390 ~\anaconda3\lib\site-packages\scipy\optimize\minpack.py in _check_func(checker, argname, thefunc, x0, args, numinputs, output_shape) 24 def _check_func(checker, argname, thefunc, x0, args, numinputs, 25 output_shape=None): ---> 26 res = atleast_1d(thefunc(*((x0[:numinputs],) + args))) 27 if (output_shape is not None) and (shape(res) != output_shape): 28 if (output_shape[0] != 1): ~\anaconda3\lib\site-packages\scipy\optimize\minpack.py in func_wrapped(params) 461 if transform is None: 462 def func_wrapped(params): --> 463 return func(xdata, *params) - ydata 464 elif transform.ndim == 1: 465 def func_wrapped(params): <ipython-input-23-5ce34d06b849> in Srt(t, s, Km, Vmax) 10 print("s",type(s)) 11 print("s",s) ---> 12 print("last element of s:",s[-1]) 13 Smax = s[-1] # Substrate concentration at end of reaction 14 t0 = t[0] # time=0 (beginning of reaction) IndexError: invalid index to scalar variable. ``` **EDIT 2** FULLY Functional code, thanks to Jonathan Weine. 
Fit is suboptimal due to "bad" experimental data, I am playing around with my full dataset now :D ``` from scipy.special import lambertw from scipy.optimize import curve_fit import numpy as np import scipy def Srt(t, Smax: float, s0: float, Km: float, Vmax: float): t0 = t[0] # time=0 (beginning of reaction) E = np.exp(((Smax - s0) - Vmax*(t+t0))/Km) # L = lambertw(((Smax - s0)/Km)*E) # this apparently can be complex which causes another Error L = np.abs(lambertw(((Smax - s0)/Km)*E)) y = Smax + Km*L return y y = [0.44236,0.4308,0.42299,0.41427,0.40548,0.39908,0.39039,0.3845,0.37882 ,0.37411,0.36759,0.36434,0.35864,0.35508,0.35138,0.34748,0.34437,0.34143 ,0.3391,0.3348,0.33345,0.31404,0.30212,0.29043,0.28026,0.27331,0.26672 ,0.26187,0.25645,0.25208,0.24736,0.244,0.24056,0.23798,0.23359,0.23138 ,0.22845,0.22566,0.22384,0.22112,0.21894,0.21672,0.21466,0.21316,0.21209 ,0.20941,0.20823,0.20687,0.2056,0.20324,0.20266,0.20095,0.19935,0.19895 ,0.19746,0.19616,0.19486,0.19419,0.19382,0.19301,0.19085,0.19108,0.19024 ,0.18933,0.18839,0.18706,0.18643,0.18623,0.18569,0.18469,0.18381,0.18341 ,0.18331,0.18324,0.18222,0.18106,0.18039,0.18022,0.17906,0.17935,0.17842 ,0.17834,0.1781,0.17731,0.17704,0.1766,0.17654,0.1761,0.17568,0.1744 ,0.17453,0.17393,0.17325,0.17329,0.17302,0.17347,0.17344,0.17233,0.17228 ,0.17208,0.17177,0.1712,0.17076,0.171,0.17043,0.17057,0.17003,0.16965 ,0.16923,0.16944,0.16898,0.16879,0.16809,0.16821,0.16794,0.16831,0.16779 ,0.16805,0.16765,0.16762,0.16695,0.16694,0.1669,0.16642,0.16583,0.166 ,0.16625,0.16575,0.1658,0.16553,0.16565,0.1654,0.16419,0.16487,0.16467 ,0.16452,0.16433,0.16468,0.16423,0.16427,0.16372,0.16388,0.16388,0.16394 ,0.16382,0.1631,0.16353,0.1638,0.16304,0.163,0.16296,0.16295,0.16284 ,0.16275,0.16214,0.16243,0.16211,0.16207,0.16185,0.16187,0.16176,0.16168 ,0.16195,0.16138,0.16177,0.16121,0.16163,0.16121,0.161,0.16114,0.16122 ,0.16096,0.16105,0.16102,0.16068,0.16031,0.16028,0.16051,0.16045,0.16017 
,0.15977,0.15927,0.16007,0.15953,0.15933,0.1596,0.15911,0.15903,0.15884 ,0.15856,0.15889,0.15888,0.15861,0.15849,0.158,0.15822,0.15776,0.15759 ,0.15734,0.15757,0.15718,0.15699,0.15747,0.15692,0.15701,0.15715,0.15675 ,0.15732,0.15687,0.15659,0.15664,0.15635,0.15633,0.15591] #csvFile.iloc[0:500,9] x = [2.780000e-03,2.778000e-02,5.278000e-02,7.778000e-02,1.027800e-01 ,1.277800e-01,1.527800e-01,1.777800e-01,2.027800e-01,2.277800e-01 ,2.527800e-01,2.777800e-01,3.027800e-01,3.277800e-01,3.527800e-01 ,3.777800e-01,4.027800e-01,4.277800e-01,4.527800e-01,4.777800e-01 ,5.027800e-01,7.538900e-01,1.003890e+00,1.253890e+00,1.503890e+00 ,1.753890e+00,2.003890e+00,2.253890e+00,2.503890e+00,2.753890e+00 ,3.003890e+00,3.253890e+00,3.503890e+00,3.753890e+00,4.003890e+00 ,4.253890e+00,4.503890e+00,4.753890e+00,5.003890e+00,5.253890e+00 ,5.503890e+00,5.753890e+00,6.003890e+00,6.253890e+00,6.503890e+00 ,6.753890e+00,7.003890e+00,7.253890e+00,7.503890e+00,7.753890e+00 ,8.003890e+00,8.253890e+00,8.503890e+00,8.753890e+00,9.003890e+00 ,9.253890e+00,9.503890e+00,9.753890e+00,1.000389e+01,1.025389e+01 ,1.050389e+01,1.075389e+01,1.100389e+01,1.125389e+01,1.150389e+01 ,1.175389e+01,1.200389e+01,1.225389e+01,1.250389e+01,1.275389e+01 ,1.300389e+01,1.325389e+01,1.350389e+01,1.375389e+01,1.400389e+01 ,1.425389e+01,1.450389e+01,1.475389e+01,1.500389e+01,1.525389e+01 ,1.550389e+01,1.575389e+01,1.600389e+01,1.625389e+01,1.650389e+01 ,1.675389e+01,1.700389e+01,1.725389e+01,1.750389e+01,1.775389e+01 ,1.800389e+01,1.825389e+01,1.850389e+01,1.875389e+01,1.900389e+01 ,1.925389e+01,1.950389e+01,1.975389e+01,2.000389e+01,2.025389e+01 ,2.050389e+01,2.075389e+01,2.100389e+01,2.125389e+01,2.150389e+01 ,2.175389e+01,2.200389e+01,2.225389e+01,2.250389e+01,2.275389e+01 ,2.300389e+01,2.325389e+01,2.350389e+01,2.375389e+01,2.400389e+01 ,2.425389e+01,2.450389e+01,2.475389e+01,2.500389e+01,2.525389e+01 ,2.550389e+01,2.575389e+01,2.600389e+01,2.625389e+01,2.650389e+01 
,2.675389e+01,2.700389e+01,2.725389e+01,2.750389e+01,2.775389e+01 ,2.800389e+01,2.825389e+01,2.850389e+01,2.875389e+01,2.900389e+01 ,2.925389e+01,2.950389e+01,2.975389e+01,3.000389e+01,3.025389e+01 ,3.050389e+01,3.075389e+01,3.100389e+01,3.125389e+01,3.150389e+01 ,3.175389e+01,3.200389e+01,3.225389e+01,3.250389e+01,3.275389e+01 ,3.300389e+01,3.325389e+01,3.350389e+01,3.375389e+01,3.400389e+01 ,3.425389e+01,3.450389e+01,3.475389e+01,3.500389e+01,3.525389e+01 ,3.550389e+01,3.575389e+01,3.600389e+01,3.625389e+01,3.650389e+01 ,3.675389e+01,3.700389e+01,3.725389e+01,3.750389e+01,3.775389e+01 ,3.800389e+01,3.825389e+01,3.850389e+01,3.875389e+01,3.900389e+01 ,3.925389e+01,3.950389e+01,3.975389e+01,4.000389e+01,4.025389e+01 ,4.050389e+01,4.075389e+01,4.100389e+01,4.125389e+01,4.150389e+01 ,4.175389e+01,4.200389e+01,4.225389e+01,4.250389e+01,4.275389e+01 ,4.300389e+01,4.325389e+01,4.350389e+01,4.375389e+01,4.400389e+01 ,4.425389e+01,4.450389e+01,4.475389e+01,4.500389e+01,4.525389e+01 ,4.550389e+01,4.575389e+01,4.600389e+01,4.625389e+01,4.650389e+01 ,4.675389e+01,4.700389e+01,4.725389e+01,4.750389e+01,4.775389e+01 ,4.800389e+01,4.825389e+01,4.850389e+01,4.875389e+01] # csvFile.iloc[0:500,0] xdata = np.array(x) ydata = np.array(y) parameters, covariance = scipy.optimize.curve_fit(Srt, xdata, ydata) Km = parameters[1] Vmax = parameters[0] fit_y = Srt(xdata, ydata[-1],ydata[0], Km, Vmax) print("Km: ", parameters[1], "Vmax: ", parameters[0]) plt.plot(xdata, fit_y, '-', color="green",linewidth=1) plt.plot(xdata, ydata, 'o', color="red") ```
2021/12/14
[ "https://Stackoverflow.com/questions/70351208", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5466187/" ]
Please have a closer look at the [documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) of the `curve_fit` function, which states that `ydata` must nominally be the result of `func(xdata, ...)`. So the `ydata` that you hand to `curve_fit` is never passed as an argument of the call of `Srt` the way you indicated in the manual call. Furthermore, the parameters to be estimated must be scalar-valued, which means that you have to define `Smax` and `s0` as float inputs. I modified your example such that it actually runs: ```py from scipy.special import lambertw from scipy.optimize import curve_fit import numpy as np import scipy def Srt(t, Smax: float, s0: float, Km: float, Vmax: float): t0 = t[0] # time=0 (beginning of reaction) E = np.exp(((Smax - s0) - Vmax*(t+t0))/Km) # L = lambertw(((Smax - s0)/Km)*E) # this apparently can be complex which causes another Error L = np.abs(lambertw(((Smax - s0)/Km)*E)) y = Smax - Km*L return y x=[2.780000e-03,2.778000e-02,5.278000e-02,7.778000e-02,1.027800e-01 ,1.277800e-01,1.527800e-01,1.777800e-01,2.027800e-01,2.277800e-01 ,2.527800e-01,2.777800e-01,3.027800e-01,3.277800e-01,3.527800e-01] y=[0.44236,0.4308,0.42299,0.41427,0.40548,0.39908,0.39039,0.3845,0.37882 ,0.37411,0.36759,0.36434,0.35864,0.35508,0.35138] xdata = np.array(x) ydata = np.array(y) parameters, covariance = scipy.optimize.curve_fit(Srt, xdata, ydata) ``` **NOTE**: The `np.abs` inside the function does not really make sense, but the result of `lambertw` can apparently be complex. In that case an error is raised, as there is no safe casting rule, causing `curve_fit` to abort.
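As a general `curve_fit` tip (not part of the original answer): when the default starting point of all ones makes a model like the one with `exp`/`lambertw` misbehave, an explicit initial guess via `p0` often helps convergence. A tiny illustration with a simpler, made-up model:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b):
    # illustrative exponential-decay model, not the Srt model from the post
    return a * np.exp(-b * t)

t = np.linspace(0.0, 1.0, 20)
y = model(t, 2.0, 3.0)  # synthetic, noise-free data

# p0 seeds the optimizer near a sensible region of parameter space
params, _ = curve_fit(model, t, y, p0=[1.0, 1.0])
print(params)  # approximately [2. 3.]
```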
Your first error is produced by the `t+t0` expression. If `t` is a list (like `x`), `+` is list concatenation, which is fine for `[1,2,3]+[4,5]` but not for `[1,2,3]+5`. That's why `x` and `y` have to be arrays. In the second error, what did the ``` print("s",type(s)) print("s",s) ``` show? Apparently `s` is not an array, or even a list.
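The difference is easy to demonstrate in isolation:

```python
import numpy as np

x = [1, 2, 3]

# list + list concatenates
assert x + [4, 5] == [1, 2, 3, 4, 5]

# list + scalar raises TypeError
try:
    x + 5
except TypeError as e:
    print("list + int raises:", e)

# array + scalar broadcasts element-wise
assert (np.asarray(x) + 5).tolist() == [6, 7, 8]
```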
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
Don't type `python test.py` from inside the Python interpreter. Type it at the command prompt, like so: ![cmd.exe](https://i.stack.imgur.com/gc2Q1.png) ![python test.py](https://i.imgur.com/TFUBm.png)
Running from the command line means running from the terminal or DOS shell. You are running it from Python itself.
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
Don't type `python test.py` from inside the Python interpreter. Type it at the command prompt, like so: ![cmd.exe](https://i.stack.imgur.com/gc2Q1.png) ![python test.py](https://i.imgur.com/TFUBm.png)
Come out of the Python interpreter, then: 1. Check that your PATH variable includes C:\Python27. 2. `cd` to your file's location. 3. Now type `python yourfilename.py`. I hope this works.
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
Looks like your problem is that you are trying to run `python test.py` *from within the Python interpreter*, which is why you're seeing that traceback. Make sure you're out of the interpreter, then run the `python test.py` command from bash or command prompt or whatever.
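For illustration, the OS-shell command `python test.py` just means "start a fresh interpreter process on the script file", which can even be mimicked from Python itself (the throwaway script written below stands in for the poster's test.py):

```python
import pathlib
import subprocess
import sys
import tempfile

# Write a throwaway test.py and run it the way the OS shell would.
script = pathlib.Path(tempfile.mkdtemp()) / "test.py"
script.write_text('print("Hello.")\n')

result = subprocess.run([sys.executable, str(script)],
                        capture_output=True, text=True)
print(result.stdout)  # Hello.
```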
In order to run scripts, you should type the `python test.py` command in the command prompt, not within the Python shell. Also, the test.py file should be in the directory you run from in the CLI.
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
You can simply type `exit()` in the Python terminal to exit the Python interpreter. Then when you run the code, there will be no more errors.
In order to run scripts, you should type the `python test.py` command in the command prompt, not within the Python shell. Also, the test.py file should be in the directory you run from in the CLI.
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
Don't type `python test.py` from inside the Python interpreter. Type it at the command prompt, like so: ![cmd.exe](https://i.stack.imgur.com/gc2Q1.png) ![python test.py](https://i.imgur.com/TFUBm.png)
You can simply type `exit()` in the Python terminal to exit the Python interpreter. Then when you run the code, there will be no more errors.
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
Looks like your problem is that you are trying to run `python test.py` *from within the Python interpreter*, which is why you're seeing that traceback. Make sure you're out of the interpreter, then run the `python test.py` command from bash or command prompt or whatever.
I faced a similar problem on my Windows computer; please check that you have set the environment variables correctly. To check that the environment variable is set correctly: 1. Open cmd.exe 2. Type `python` and press return 3. (a) If it outputs the version of Python, then the environment variables are set correctly. (b) If it outputs "no such program or file name", then your environment variables are not set correctly. To set the environment variable: 1. Go to Computer -> System Properties -> Advanced System Settings -> Set Environment Variables 2. Go to `Path` in the system variables and append `;C:\Python27` at the end. If you already have the correct variables set, then you are calling the file inside the Python interpreter.
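As a sanity check for step 2: appending `;C:\Python27` just adds one more `;`-separated entry to the `PATH` value (Windows separates `PATH` entries with `;`; the directories below are made up):

```python
# Simulate appending ;C:\Python27 to an existing Windows PATH value.
path = r"C:\Windows;C:\Windows\System32"
new_path = path + ";" + r"C:\Python27"

entries = new_path.split(";")
print(entries)
assert r"C:\Python27" in entries
```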
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
You can simply type `exit()` in the Python terminal to exit the Python interpreter. Then when you run the code, there will be no more errors.
Come out of the Python interpreter, then: 1. Check that your PATH variable includes C:\Python27. 2. `cd` to your file's location. 3. Now type `python yourfilename.py`. I hope this works.
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
I faced a similar problem on my Windows computer; please check that you have set the environment variables correctly. To check that the environment variable is set correctly: 1. Open cmd.exe 2. Type `python` and press return 3. (a) If it outputs the version of Python, then the environment variables are set correctly. (b) If it outputs "no such program or file name", then your environment variables are not set correctly. To set the environment variable: 1. Go to Computer -> System Properties -> Advanced System Settings -> Set Environment Variables 2. Go to `Path` in the system variables and append `;C:\Python27` at the end. If you already have the correct variables set, then you are calling the file inside the Python interpreter.
In order to run scripts, you should type the `python test.py` command in the command prompt, not within the Python shell. Also, the test.py file should be in the directory you run from in the CLI.
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
You can simply type `exit()` in the Python terminal to exit the Python interpreter. Then when you run the code, there will be no more errors.
Running from the command line means running from the terminal or DOS shell. You are running it from Python itself.
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
Looks like your problem is that you are trying to run `python test.py` *from within the Python interpreter*, which is why you're seeing that traceback. Make sure you're out of the interpreter, then run the `python test.py` command from bash or command prompt or whatever.
Running from the command line means running from the terminal or DOS shell. You are running it from Python itself.
61,648,271
**Piece of Code** ``` def wishListCount(): wishlist_count = len(session['Wishlist']) if len(session['Wishlist']) <= 0: return 0 else: return wishlist_count @app.route('/wishlist', methods=['GET', 'POST', 'DELETE']) def wishlist(): if request.method == 'POST': product_id = int(request.form['product_id']) ListItems = [product_id] if 'Wishlist' in session: if product_id in session['Wishlist']: print("This product is already in wishList!") else: session['Wishlist'] = mergeDict(session['Wishlist'], ListItems) else: session['Wishlist'] = ListItems wishlist_count = wishListCount() ``` **Heroku logs** ``` State changed from starting to up 2020-05-07T00:37:33.000000+00:00 app[api]: Build succeeded 2020-05-07T00:37:39.445026+00:00 heroku[router]: at=info method=GET path="/" host=intelli-supermart.herokuapp.com request_id=bc70627f-fbff-4722-8b7e-f97c18e7e2d5 fwd="203.128.16.105" dyno=web.1 connect=1ms service=102ms status=500 bytes=470 protocol=https 2020-05-07T00:37:39.441994+00:00 app[web.1]: [2020-05-07 00:37:39,440] ERROR in app: Exception on / [GET] 2020-05-07T00:37:39.442004+00:00 app[web.1]: Traceback (most recent call last): 2020-05-07T00:37:39.442005+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app 2020-05-07T00:37:39.442005+00:00 app[web.1]: response = self.full_dispatch_request() 2020-05-07T00:37:39.442006+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request 2020-05-07T00:37:39.442006+00:00 app[web.1]: rv = self.handle_user_exception(e) 2020-05-07T00:37:39.442007+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception 2020-05-07T00:37:39.442007+00:00 app[web.1]: reraise(exc_type, exc_value, tb) 2020-05-07T00:37:39.442007+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise 2020-05-07T00:37:39.442008+00:00 app[web.1]: raise value 2020-05-07T00:37:39.442009+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request 2020-05-07T00:37:39.442009+00:00 app[web.1]: rv = self.dispatch_request() 2020-05-07T00:37:39.442009+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request 2020-05-07T00:37:39.442010+00:00 app[web.1]: return self.view_functions[rule.endpoint](**req.view_args) 2020-05-07T00:37:39.442010+00:00 app[web.1]: File "/app/app.py", line 117, in index 2020-05-07T00:37:39.442011+00:00 app[web.1]: wishlist_count = wishListCount() 2020-05-07T00:37:39.442011+00:00 app[web.1]: File "/app/app.py", line 79, in wishListCount 2020-05-07T00:37:39.442011+00:00 app[web.1]: wishlist_count = len(session['Wishlist']) 2020-05-07T00:37:39.442012+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/werkzeug/local.py", line 377, in <lambda> 2020-05-07T00:37:39.442012+00:00 app[web.1]: __getitem__ = lambda x, i: x._get_current_object()[i] 2020-05-07T00:37:39.442012+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/sessions.py", line 84, in __getitem__ 2020-05-07T00:37:39.442013+00:00 app[web.1]: return super(SecureCookieSession, self).__getitem__(key) 2020-05-07T00:37:39.442019+00:00 app[web.1]: KeyError: 'Wishlist' 2020-05-07T00:37:39.445086+00:00 app[web.1]: 10.11.150.203 - - [07/May/2020:00:37:39 +0000] "GET / HTTP/1.1" 500 290 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.122 Safari/537.36" ```
2020/05/07
[ "https://Stackoverflow.com/questions/61648271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13487131/" ]
To answer my own question: it was happening because of the Spring Boot version, which was not ready to handle multi-stage builds. After upgrading the service to 2.3.x I can build.
I think it is because the JAR file is not in a supported form; that's why jarmode can't process it. Jarmode is a special mode used for extracting layered JARs. You can check out <https://spring.io/blog/2020/01/27/creating-docker-images-with-spring-boot-2-3-0-m1> for more detail.
11,121,352
I deleted python .pyc files from my local repo, and what I thought I did was delete them from the remote GitHub repo as well. I pushed all changes. The files are still on the repo but not on my local machine. How do I remove files from the GitHub repo? I tried the following:

```
git rm classes/file.pyc
git add .
git
```

and even:

```
git rm --cached classes/file.pyc
```

Then when I try to check out the files I get this error:

```
error: pathspec 'classes/redis_ha.pyc' did not match any file(s) known to git.
```

I now don't know what else to do. As of now I have a totally corrupted git repo.
2012/06/20
[ "https://Stackoverflow.com/questions/11121352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1203556/" ]
You should not do `git add`. That's all:

```
git rm classes/file.pyc
git commit -m "bla bla bla"
git push
```
```
git commit -am "A file was deleted"
git push
```
39,545,452
I have a php script that should (I think) run a python script to control the energenie radio controlled plug sockets depending on which button is selected. It seems to work in that it echoes back the correct message when the button is pressed, but the python scripts don't appear to run. I have added the line `www-data ALL=NOPASSWD: /usr/bin/python /home/pi/lampon.py` which should give the apache user privileges to run the python script, at least for turning on the power socket, but it doesn't work. The script itself does work when run via the pi command line itself. Any suggestions? (the code for the php is below)

```
<html>
<head>
<meta name="viewport" content="width=device-width" />
<title>LED Control</title>
</head>
 <body>
LED Control:
<form method="get" action="energenie.php">
<input type="submit" value="ON" name="on">
<input type="submit" value="OFF" name="off">
</form>
<?php
if(isset($_GET['on'])){
shell_exec("python /home/pi/lampon.py");
echo "LED is on";
}
else if(isset($_GET['off'])){
shell_exec("python /home/pi/lampoff.py");
echo "LED is off";
}
?>
</body>
</html>
```
2016/09/17
[ "https://Stackoverflow.com/questions/39545452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6842362/" ]
[Parallel.Invoke](https://msdn.microsoft.com/en-us/library/dd992634(v=vs.110).aspx) method: ``` Parallel.Invoke( () => method1(), () => method2(), () => method3(), () => method4() ) ``` Add namespace `System.Threading.Tasks`
You can create a list of `Action` delegate where each delegate is a call to a given method: ``` List<Action> actions = new List<Action> { method1, method2, method3 }; ``` And then use [`Parallel.ForEach`](https://msdn.microsoft.com/en-us/library/dd992001(v=vs.110).aspx) to call them in parallel: ``` Parallel.ForEach(actions, action => action()); ```
57,854,621
I couldn't find any question related to this subject. But does Python execute a function only after the previously called function has finished, or is there any parallel execution? **For example:**

```
def a():
    print('a')

def b():
    print('b')

a()
b()
```

So in this example I would like to know if I can always be sure that `function b` is called **after** `function a` is finished, even if `function a` is a very long script? And what is the definition of this, so I can look up documentation regarding this matter. Thanks!!
2019/09/09
[ "https://Stackoverflow.com/questions/57854621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9247792/" ]
Defining a function doesn't mean executing it. Since you defined `a` first, the function object for `a` is created first, and the same goes for their calls. You can take it as an execution timeline running from top to bottom.
There is no parallel execution of functions in plain Python code like this. The functions above will be executed in the same sequence in which they were called, regardless of the amount of computational work in either of them.
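A quick way to convince yourself of this (a minimal sketch, with event labels of my own choosing, not code from the question) is to record the call order in a list:

```python
order = []

def a():
    order.append('a-start')
    total = 0
    for i in range(100000):  # stand-in for a long-running function body
        total += i
    order.append('a-end')

def b():
    order.append('b')

a()
b()
print(order)  # ['a-start', 'a-end', 'b']
```

However long `a` takes, `b` is never entered until `a` has returned; the interpreter executes one statement at a time per thread.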
57,854,621
I couldn't find any question related to this subject. But does Python execute a function only after the previously called function has finished, or is there any parallel execution? **For example:**

```
def a():
    print('a')

def b():
    print('b')

a()
b()
```

So in this example I would like to know if I can always be sure that `function b` is called **after** `function a` is finished, even if `function a` is a very long script? And what is the definition of this, so I can look up documentation regarding this matter. Thanks!!
2019/09/09
[ "https://Stackoverflow.com/questions/57854621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9247792/" ]
TLDR: `b` will only ever run after `a` is exited. Each Python thread will only ever execute one thing at a time and respect ordering of expressions and statements. For the most part, this means executing "top-to-bottom", though function definitions, control flow and other elements can affect execution order. Ordering is preserved in any case, however. --- Strictly speaking, the Python language only defines the [execution order of expressions](https://docs.python.org/3/reference/expressions.html#evaluation-order). > > Python evaluates expressions from left to right. Notice that while > evaluating an assignment, the right-hand side is evaluated before the > left-hand side. > > > Neither [simple statements](https://docs.python.org/3/reference/simple_stmts.html) nor [compound statements](https://docs.python.org/3/reference/compound_stmts.html) define an evaluation order. However, Python is defined based on a [byte code interpreting virtual machine](https://docs.python.org/3/glossary.html#term-virtual-machine), and the reference implementation is based on a [stackbased bytecode](https://docs.python.org/3/library/dis.html#bytecodes) [evaluation loop](https://github.com/python/cpython/blob/master/Python/ceval.c). All major implementations of Python preserve the observable behaviour of executing one statement after the other.
Defining a function doesn't mean executing it. Since you defined `a` first, the function object for `a` is created first, and the same goes for their calls. You can take it as an execution timeline running from top to bottom.
57,854,621
I couldn't find any question related to this subject. But does Python execute a function only after the previously called function has finished, or is there any parallel execution? **For example:**

```
def a():
    print('a')

def b():
    print('b')

a()
b()
```

So in this example I would like to know if I can always be sure that `function b` is called **after** `function a` is finished, even if `function a` is a very long script? And what is the definition of this, so I can look up documentation regarding this matter. Thanks!!
2019/09/09
[ "https://Stackoverflow.com/questions/57854621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9247792/" ]
Defining a function doesn't mean executing it. Since you defined `a` first, the function object for `a` is created first, and the same goes for their calls. You can take it as an execution timeline running from top to bottom.
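To make the definition-versus-execution point concrete, here is a small illustrative sketch (the event strings are just hypothetical labels):

```python
events = []

def a():
    events.append('a body ran')

events.append('after def a')  # the def above created a function object; its body did not run

a()
events.append('after calling a')

print(events)  # ['after def a', 'a body ran', 'after calling a']
```

The `def` statement only binds the name `a`; the body runs at call time, in call order.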
In Python, functions are executed in the order in which they are called, not the order in which they are defined. If you call them in a different order, they will execute as such. So in your example:

```
def a():
    print('a')

def b():
    print('b')

b()
a()
```

`b()` will execute before `a()`.
57,854,621
I couldn't find any question related to this subject. But does Python execute a function only after the previously called function has finished, or is there any parallel execution? **For example:**

```
def a():
    print('a')

def b():
    print('b')

a()
b()
```

So in this example I would like to know if I can always be sure that `function b` is called **after** `function a` is finished, even if `function a` is a very long script? And what is the definition of this, so I can look up documentation regarding this matter. Thanks!!
2019/09/09
[ "https://Stackoverflow.com/questions/57854621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9247792/" ]
TLDR: `b` will only ever run after `a` is exited. Each Python thread will only ever execute one thing at a time and respect ordering of expressions and statements. For the most part, this means executing "top-to-bottom", though function definitions, control flow and other elements can affect execution order. Ordering is preserved in any case, however. --- Strictly speaking, the Python language only defines the [execution order of expressions](https://docs.python.org/3/reference/expressions.html#evaluation-order). > > Python evaluates expressions from left to right. Notice that while > evaluating an assignment, the right-hand side is evaluated before the > left-hand side. > > > Neither [simple statements](https://docs.python.org/3/reference/simple_stmts.html) nor [compound statements](https://docs.python.org/3/reference/compound_stmts.html) define an evaluation order. However, Python is defined based on a [byte code interpreting virtual machine](https://docs.python.org/3/glossary.html#term-virtual-machine), and the reference implementation is based on a [stackbased bytecode](https://docs.python.org/3/library/dis.html#bytecodes) [evaluation loop](https://github.com/python/cpython/blob/master/Python/ceval.c). All major implementations of Python preserve the observable behaviour of executing one statement after the other.
There is no parallel execution of functions in plain Python code like this. The functions above will be executed in the same sequence in which they were called, regardless of the amount of computational work in either of them.
57,854,621
I couldn't find any question related to this subject. But does Python execute a function only after the previously called function has finished, or is there any parallel execution? **For example:**

```
def a():
    print('a')

def b():
    print('b')

a()
b()
```

So in this example I would like to know if I can always be sure that `function b` is called **after** `function a` is finished, even if `function a` is a very long script? And what is the definition of this, so I can look up documentation regarding this matter. Thanks!!
2019/09/09
[ "https://Stackoverflow.com/questions/57854621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9247792/" ]
TLDR: `b` will only ever run after `a` is exited. Each Python thread will only ever execute one thing at a time and respect ordering of expressions and statements. For the most part, this means executing "top-to-bottom", though function definitions, control flow and other elements can affect execution order. Ordering is preserved in any case, however. --- Strictly speaking, the Python language only defines the [execution order of expressions](https://docs.python.org/3/reference/expressions.html#evaluation-order). > > Python evaluates expressions from left to right. Notice that while > evaluating an assignment, the right-hand side is evaluated before the > left-hand side. > > > Neither [simple statements](https://docs.python.org/3/reference/simple_stmts.html) nor [compound statements](https://docs.python.org/3/reference/compound_stmts.html) define an evaluation order. However, Python is defined based on a [byte code interpreting virtual machine](https://docs.python.org/3/glossary.html#term-virtual-machine), and the reference implementation is based on a [stackbased bytecode](https://docs.python.org/3/library/dis.html#bytecodes) [evaluation loop](https://github.com/python/cpython/blob/master/Python/ceval.c). All major implementations of Python preserve the observable behaviour of executing one statement after the other.
In Python, functions are executed in the order in which they are called, not the order in which they are defined. If you call them in a different order, they will execute as such. So in your example:

```
def a():
    print('a')

def b():
    print('b')

b()
a()
```

`b()` will execute before `a()`.
69,165,968
I'm trying to run a legacy React app locally for the first time. I'm on a new Mac M1 with Big Sur 11.5.2. My node version is 16.9.0, and I made python3 the default (although the app seems to be looking for python2). I also upgraded CommandLineTools to the latest version. But when I do a simple `npm install`, I get lots of warnings, and finally this error: ```none npm ERR! /Users/cd/.node-gyp/16.9.0/include/node/v8-internal.h:489:38: error: no template named 'remove_cv_t' in namespace 'std'; did you mean 'remove_cv'? npm ERR! !std::is_same<Data, std::remove_cv_t<T>>::value>::Perform(data); npm ERR! ~~~~~^~~~~~~~~~~ npm ERR! remove_cv npm ERR! /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1/type_traits:776:50: note: 'remove_cv' declared here npm ERR! template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_cv npm ERR! ^ npm ERR! 1 error generated. npm ERR! make: *** [Release/obj.target/binding/src/binding.o] Error 1 npm ERR! gyp ERR! build error npm ERR! gyp ERR! stack Error: `make` failed with exit code: 2 npm ERR! gyp ERR! stack at ChildProcess.onExit (/Users/cd/develop/storybook/webclient/node_modules/node-gyp/lib/build.js:262:23) npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:394:28) npm ERR! gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12) npm ERR! gyp ERR! System Darwin 20.6.0 npm ERR! gyp ERR! command "/opt/homebrew/Cellar/node/16.9.0/bin/node" "/Users/cd/develop/storybook/webclient/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library=" npm ERR! gyp ERR! cwd /Users/cd/develop/storybook/webclient/node_modules/node-sass npm ERR! gyp ERR! node -v v16.9.0 npm ERR! gyp ERR! node-gyp -v v3.8.0 npm ERR! gyp ERR! not ok npm ERR! Build failed with error code: 1 ``` I'm not proficient enough with node to understand exactly how to troubleshoot this, beyond the steps I've already taken. 
Clearly something is wrong with node-sass, so I installed it separately with: ``` sudo npm install --unsafe-perm -g node-sass ``` That works, but when I re-run `npm install` to get all the dependencies, I get the same error.
2021/09/13
[ "https://Stackoverflow.com/questions/69165968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1660256/" ]
Upgrading `node-sass` to a version higher than 6.0.1 (mine was 4.0.0) solves this issue. Ref: [error: no template named 'remove\_cv\_t' in namespace 'std'; did you mean 'remove\_cv'?](https://stackoverflow.com/questions/67241196/error-no-template-named-remove-cv-t-in-namespace-std-did-you-mean-remove)
Try this:

```
rm -rf node_modules package-lock.json
npm install --save-dev node-sass
npm install
```
13,295,064
As part of my course at university I am learning python. A task I have been trying to complete is to write a program that will print out random letters and their corresponding positions in "antidisestablishmentarianism". It will then print the remaining letters on a single line. I have been trying to do this in probably a crazy weird roundabout way - populating a list with the chosen values and removing these characters from the original. I realize my program is probably all wrong and broken; I only started learning lists today! ``` import random word = "antidisestablishmentarianism" wordList =["antidisestablishmentarianism"] print("The Word is:",word,"\n") lengthWord = len(word) usedValues=[] for i in range(5): position = random.randrange(0,lengthWord) print("word[",position, "]\t", word [position]) usedValues=[position] for ch in wordList: wordList.remove([usedValues]) print("The remaining letters are",WordList, sep='') ```
2012/11/08
[ "https://Stackoverflow.com/questions/13295064", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1809406/" ]
I think part of the problem is that you're creating and manipulating your `wordList` and `usedValues` lists incorrectly. To create a list of characters as `wordList`, use `list(word)`. To add a used index to `usedValues`, use `usedValues.append(position)`. There's also an issue with how you remove the used values from the word list. Here's your code with those errors fixed:

```
import random

word = "antidisestablishmentarianism"
wordList = list(word)
print("The Word is:",word,"\n")
lengthWord = len(word)
usedValues=[]

for i in range(5):
    position = random.randrange(0,lengthWord)
    print("word[",position, "]\t", word[position])
    usedValues.append(position)

for index in usedValues:
    wordList.pop(index)

print("The remaining letters are ", "".join(wordList), sep='')
```

This will mostly work. However, there's still a logic error. If you get the same random position twice in the first loop, you'll report the same character each time. However, when you remove them from the list later, you'll end up popping two different letters. Similarly, if you remove a letter from near the start of the word, the indexes you remove later on will be incorrect. You can even get an `IndexError` if one of the last positions selected was near the end of the word. One fix would be to remove the selected values from the list immediately, within the first loop. You'll need to explicitly call `len` each cycle then (since it changes each time through) but other than that everything should work correctly. Or here's how I'd solve the problem. Instead of picking five specific indexes and removing them from the list, I'd `random.shuffle` a list of all the indexes and take the first five. The rest can then be either printed out in their random order, or sorted first to give the impression of the letters being removed from the original word.
``` import random word = "antidisestablishmentarianism" indexes = list(range(len(word))) random.shuffle(indexes) for i in indexes[:5]: print("word[%d] is '%s'" % (i, word[i])) rest = sorted(indexes[5:]) # or just use indexes[5:] to keep random order print("The remaining letters are '%s'" % "".join(word[i] for i in rest)) ```
There are a few problems with your code as it stands. Firstly, this line: ``` wordList =["antidisestablishmentarianism"] ``` doesn't do what you think - it actually creates a list containing the single item `"antidisestablishmentarianism"`. To convert a string into a list of characters, you can use `list()` - and since you've already have the variable `word`, there's no need to type the word in again. On a side note, `wordList` isn't a very good variable name. Apart from the fact that it uses camelCase rather than the more pythonic underscore\_separated style, what you actually want here is a list of the *characters* in the word. So, that line can be replaced with: ``` characters = list(word) ``` Moving on ... this line: ``` lengthWord = len(word) ``` is redundant - you only reference `lengthWord` once in your code, so you might as well just replace that reference with `len(word)` where you use it. Your line: ``` usedValues=[position] ``` also isn't doing what you think: it's *replacing* `usedValues` entirely, with a list containing only the latest position in your loop. To append a value to a list, use `list.append()`: ``` used_positions.append(position) ``` (I've given the variable a more accurate name). Your next problem is this block: ``` for ch in wordList: wordList.remove([usedValues]) ``` First of all, you really want to check each of the positions you've previously stored, not each of the characters in the word. 
Your use of `list.remove()` is also wrong: you can't give a list of values to remove like that, but anyway `list.remove()` will remove the first instance of a value from a list, and what you want to do is remove the item at a particular position, which is what `list.pop()` is for: ``` for position in sorted(used_positions, reverse=True): characters.pop(position) ``` We're using a copy of `used_positions` sorted in reverse so that when we remove an item, the remaining positions in `used_positions` don't slide out of alignment with what's left of `characters`[\*]. Your final problem is the last line: ``` print("The remaining letters are",WordList, sep='') ``` If you want to print the contents of a list separated by `''`, this isn't the way to do it. Instead, you need `str.join()`: ``` print("The remaining letters are", "".join(characters)) ``` Putting all of those changes into practice, and tidying up a little, we end up with: ``` import random word = "antidisestablishmentarianism" characters = list(word) print("The Word is:", word, "\n") used_positions = [] for i in range(5): position = random.randrange(0, len(word)) print("word[",position, "]\t", word[position]) used_positions.append(position) for position in sorted(used_positions, reverse=True): characters.pop(position) print("The remaining letters are", "".join(characters)) ``` [\*] In fact, this throws up another problem: what if your code chooses the same position twice? I'll leave you to think about that one.
45,415,081
I have Eclipse with Pydev and RSE installed on my local Windows machine. I want to remote debug a Python application (Odoo 9.0) that is hosted on an Ubuntu 16.04 VPS. I have Pydev installed on the remote machine. I have been able to connect to the remote machine via SSH using a key for authentication, and I can browse the remote file system. Referring to the documentation here: <http://www.pydev.org/manual_adv_remote_debugger.html> and reading the comments in the file located at `/usr/local/lib/python2.7/dist-packages/pydevd_file_utils.py`, it would seem that I need to map the remote file system to the local one. To me this implies that the code must exist on both the remote and local (Eclipse) machines. If this is the case, how do I keep them in sync? I want to be able to develop with my code base on the remote machine. Do I need to copy every change to my local machine? It feels like I'm missing part of the puzzle and the documentation that I've found is not detailed enough to be able to implement. Please let me know what steps remain outstanding to implement remote debugging and any implications for my workflow (such as having to copy all changes to both file systems).
2017/07/31
[ "https://Stackoverflow.com/questions/45415081", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6489912/" ]
If you want to develop the code all remotely (instead of locally), my suggestion is using sshfs (so, you'd do all the changes there directly). You should even be able to create a shell script to a remote interpreter in that case too (i.e.: the interpreter may be any script, so, you could chroot it or even run some python through ssh). Note I haven't actually tested this, but in theory it should work ;)
I found a way to get remote editing and remote debug going with eclipse and pydev from my mac to a Debian linux server (bitnami setup). To set up remote editing and debugging - Read these first <https://www.pydev.org/manual_adv_remote_debugger.html> <https://sites.google.com/site/programmersnotebook/remote-development-of-python-scripts-on-raspberry-pi-with-eclipse> Notes on my install * Installed pydevd in server python environment (did not need to copy pysrc as in raspy example above instructions). See links above for install steps. * Created remote project using RSE. (Eclipse Remote Shell extensions) Using RSE "Remote shell" window you can right click on source directory and create a local Eclipse project that points at the server files. See links above for install steps. * Edited `pydevd_file_utils.py` in server pydevd install directory. For me this was `/opt/python/lib/python3.7/site-packages`. If you're not sure where this is enter the following in your server python environment `import pydevd; print(pydevd.__file__)`. Added `PATHS_FROM_ECLIPSE_TO_PYTHON = [('/Users/<myusername>/dev/test/RemoteSystemsTempFiles/<server ref in RSE>/opt/bitnami/apps/odoo/data/addons/13.0/test/test.py','/opt/bitnami/apps/odoo/data/addons/13.0/test/test.py')]`. Read the comments and place it near the example lower down. * could add the following instead `PATHS_FROM_ECLIPSE_TO_PYTHON = [(r'/Users/andrewlemay/esp/test/RemoteSystemsTempFiles/34.253.144.28/',r'/')]` which means it would work for all RSE projects on the server. * Note the RemoteSystemTempFiles dir is part of Eclipse RSE path on your local machine * Add SSH remote port forwarding tunnel. This forwards the data from the server on port 5678 to client localhost:5678 to allow the server remote debugger to send info to the listening client debugger - see command below. With this I did not need IP address in settrace() command or to configure my router to port forward to my local machine. 
* INFO on SSH tunnels here <https://www.ssh.com/ssh/tunneling/example>

**To run**

* Set up a secure SSH tunnel to the server with remote port forwarding on 5678
* Run the python script on the server via a console or the RSE Remote Shell (Eclipse > Window > Show View > Other > Remote Systems > Remote Shell)

**Run commands**

**Client**

I'm using a private shared key and I enter the following in a local terminal: `ssh -t -i "/Users/<username>/keys/<serverkeyname>.pem" <serverusername>@<serverIP> -R 5678:localhost:5678 -C -N` The process will block the terminal. End the process with CTRL-C when debugging is done to close the tunnel. If you don't need a private shared key you can drop the `-t -i "/Users/<username>/keys/<serverkeyname>.pem"` part. Start the Pydev server in Eclipse by clicking the `PyDev: start the pydev server` button (you have to be in the debug perspective). [PyDev: start the pydev server](https://i.stack.imgur.com/QLdzC.png) You should then get a message in the Console saying `Debug Server at port: 5678`.

**Server**

You can use the server terminal or the Eclipse RSE Remote Shell window: `python3 test.py` The local Eclipse debug server should burst into life and allow debugging, breakpoints, etc.

**Test code - test.py**

```
import os
import sys
import pydevd

pydevd.settrace()

i = 3
p = 'Hello!' * i
print(p)

if __name__ == '__main__':
    pass
print("Hello world 4")
for k, v in os.environ.items():
    print(f'{k}={v}')
```

Hope this is useful to someone...
49,168,556
For my project I need to extract the CSS selectors for a given element that I will find through parsing. What I do is navigate to a page with selenium, and then with python-beautiful soup I parse the page and find whether there are any elements that I need the CSS selector of. For example, I may try to find any input tags with id "print": `soup.find_all('input', {'id': 'print'})` If I manage to find such an element, I want to extract its CSS selector, something like "input#print". I don't just find elements using ids but also using a combination of classes and regular expressions. Is there any way to achieve this?
2018/03/08
[ "https://Stackoverflow.com/questions/49168556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7445289/" ]
Try this. ``` from scrapy.selector import Selector from selenium import webdriver link = "https://example.com" xpath_desire = "normalize-space(//input[@id = 'print'])" path1 = "./chromedriver" driver = webdriver.Chrome(executable_path=path1) driver.get(link) temp_test = driver.find_element_by_css_selector("body") elem = temp_test.get_attribute('innerHTML') value = Selector(text=elem).xpath(xpath_desire).extract()[0] print(value) ```
Ok, I am totally new to Python so I am sure that there is a better answer for this, but here's my two cents :)

```
import requests
from bs4 import BeautifulSoup

url = "https://stackoverflow.com/questions/49168556/extract-css-selector-for-an-element-with-selenium"
element = 'a'
idName = 'nav-questions'

page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
tags = soup.find_all(element, id = idName)

if tags:
    for tag in tags:
        getClassNames = tag.get('class')
        classNames = ''.join(str('.' + x) for x in getClassNames)
        print element + '#' + idName + classNames
else:
    print ':('
```

This would print something like:

```
a#nav-questions.-link.js-gps-track
```
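The selector-building step can also be factored into a small helper. Here is a rough Python sketch of the same idea, operating on a plain tag name and attribute dict (hypothetical inputs, not tied to either page above):

```python
def css_selector(name, attrs):
    """Build a simple CSS selector such as input#print.btn.primary
    from a tag name and a dict of its attributes."""
    selector = name
    if attrs.get('id'):
        selector += '#' + attrs['id']
    for cls in attrs.get('class', '').split():
        selector += '.' + cls
    return selector

print(css_selector('a', {'id': 'nav-questions', 'class': '-link js-gps-track'}))
# a#nav-questions.-link.js-gps-track
```

With BeautifulSoup you would pass `tag.name` and adapt for the fact that `tag.get('class')` returns a list of class names rather than a space-separated string.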
51,160,368
Since the start and end times of DST in a timezone can change every year, how does Python tell whether DST is in effect or not?
2018/07/03
[ "https://Stackoverflow.com/questions/51160368", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10028264/" ]
So the answer was to use the `constant()` function for the lookup: `$handler = new StreamHandler('/var/log/php/php.log', constant("Monolog\Logger::" . $level));`
``` <?php class Logger { const MY = 1; } $lookingfor = 'MY'; // approach 1 $value1 = (new ReflectionClass('Logger'))->getConstants()[$lookingfor]; // approach 2 $value2 = constant("Logger::" . $lookingfor); echo "$value1|$value2"; ?> ``` Result: "1|1"
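For comparison, the same dynamic-lookup idea in Python (a sketch, not part of the answers above): `getattr` resolves a constant's name, for example a log-level string read from config, to its value at runtime, much like PHP's `constant()`:

```python
import logging

level_name = 'WARNING'                 # e.g. read from a config file
level = getattr(logging, level_name)   # dynamic constant lookup, like PHP's constant()
print(level)  # 30, the value of logging.WARNING
```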
65,814,036
When I want to import a package in python, I can alias it:

```
import package_with_a_very_long_nameeeeee as pl
```

After that statement, I can refer to the package by its alias:

```
pl.foo()
```

julia allows me to do:

```
using PackageWithAVeryLongName
pl = PackageWithAVeryLongName
pl.foo()
```

But it feels like an ugly hack, possibly with implications I do not understand. Is there an idiomatic way to alias imported packages in julia?
2021/01/20
[ "https://Stackoverflow.com/questions/65814036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11926170/" ]
This is now possible on the upcoming Julia 1.6 using the exact same syntax as Python uses: ``` julia> import LinearAlgebra as LA julia> typeof(LA) Module help?> LA.svd svd(A; full::Bool = false, alg::Algorithm = default_svd_alg(A)) -> SVD ``` On prior versions, you can do what [@Bill suggests](https://stackoverflow.com/a/65814933/176071) — but I'd *strongly* recommend doing so as a `const` assignment alongside an `import`: ``` julia> import SparseArrays julia> const SA = SparseArrays SparseArrays ```
python:

```
>>> import matplotlib as plt
>>> type(plt)
<class 'module'>
>>>
```

julia:

```
julia> using Plots
[ Info: Precompiling Plots [91a5bcdd-55d7-5caf-9e0b-520d859cae80]

julia> const plt = Plots
Plots

julia> typeof(plt)
Module
```

So the effect is pretty much identical between the two languages. What may make this usage seem less than ideal, and thus "ugly", in Julia is that multiple dispatch usually allows function names to be exported from multiple modules without conflict, since their arguments usually differ. So having to precede a function name with a module alias might suggest that something should have been exported with the module but was not. The exceptions ought to be rare.
245,465
How do you connect to a remote server via IP address, in the manner that TOAD and SqlDeveloper are able to connect to databases with just the IP address, username, SID and password? Whenever I try to specify an IP address, it seems to be taking it locally. In other words, how should the string for cx_Oracle.connect() be formatted for a non-local database? There was a previous post which listed as an answer connecting to Oracle via the cx_Oracle module with the following code:

```
#!/usr/bin/python
import cx_Oracle
connstr = 'scott/tiger'
conn = cx_Oracle.connect(connstr)
curs = conn.cursor()
curs.execute('select * from emp')
print curs.description
for row in curs:
    print row
conn.close()
```
2008/10/29
[ "https://Stackoverflow.com/questions/245465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
```py import cx_Oracle dsn = cx_Oracle.makedsn(host='127.0.0.1', port=1521, sid='your_sid') conn = cx_Oracle.connect(user='your_username', password='your_password', dsn=dsn) conn.close() ```
``` import cx_Oracle ip = '172.30.1.234' port = 1524 SID = 'dev3' dsn_tns = cx_Oracle.makedsn(ip, port, SID) conn = cx_Oracle.connect('dbmylike', 'pass', dsn_tns) print conn.version conn.close() ```
245,465
How do you connect to a remote server via IP address, in the manner that TOAD and SqlDeveloper are able to connect to databases with just the IP address, username, SID and password? Whenever I try to specify an IP address, it seems to be taking it locally. In other words, how should the string for cx_Oracle.connect() be formatted for a non-local database? There was a previous post which listed as an answer connecting to Oracle via the cx_Oracle module with the following code:

```
#!/usr/bin/python
import cx_Oracle
connstr = 'scott/tiger'
conn = cx_Oracle.connect(connstr)
curs = conn.cursor()
curs.execute('select * from emp')
print curs.description
for row in curs:
    print row
conn.close()
```
2008/10/29
[ "https://Stackoverflow.com/questions/245465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
``` import cx_Oracle CONN_INFO = { 'host': 'xxx.xx.xxx.x', 'port': 12345, 'user': 'user_name', 'psw': 'your_password', 'service': 'abc.xyz.com', } CONN_STR = '{user}/{psw}@{host}:{port}/{service}'.format(**CONN_INFO) connection = cx_Oracle.connect(CONN_STR) ```
Instead of specifying the SID, you can create a dsn and [connect via service\_name](http://cx-oracle.readthedocs.io/en/latest/module.html#cx_Oracle.makedsn) like: ``` import cx_Oracle ip = '192.168.0.1' port = 1521 service_name = 'my_service' dsn = cx_Oracle.makedsn(ip, port, service_name=service_name) db = cx_Oracle.connect('user', 'password', dsn) ``` The benefit of using the service name instead of the specific instance identifier (SID) is that it will work in a RAC environment as well (using a SID won't). This parameter is available as of [cx\_Oracle version 5.1.1](https://bitbucket.org/anthony_tuininga/cx_oracle/commits/b81bd10992eb6a4fb2a3e618cfdf865c72148859) (Aug 28, 2011).
245,465
How do you connect to a remote server via IP address in the manner that TOAD and SqlDeveloper are able to connect to databases with just the IP address, username, SID and password? Whenever I try to specify an IP address, it seems to be interpreted as local. In other words, how should the string for cx\_Oracle.connect() be formatted for a non-local database? There was a previous post which listed as an answer connecting to Oracle via the cx\_Oracle module with the following code: ``` #!/usr/bin/python import cx_Oracle connstr='scott/tiger' conn = cx_Oracle.connect(connstr) curs = conn.cursor() curs.execute('select * from emp') print curs.description for row in curs: print row conn.close() ```
2008/10/29
[ "https://Stackoverflow.com/questions/245465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can specify the server in the connection string, e.g.: ``` import cx_Oracle connstr = 'scott/tiger@server:1521/orcl' conn = cx_Oracle.connect(connstr) ``` * "server" is the server, or the IP address if you want. * "1521" is the port that the database is listening on. * "orcl" is the name of the instance (or database service).
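To make the components in that list concrete, the connection string can be assembled from its parts before being handed to cx\_Oracle.connect(). This is just a formatting sketch; the host, port, service name, and credentials below are hypothetical placeholder values:

```python
# Assemble an EZConnect-style connection string from its components.
# All of these values are hypothetical placeholders.
user, password = 'scott', 'tiger'
host = '192.168.0.5'   # server hostname or IP address
port = 1521            # port the database listener is on
service = 'orcl'       # name of the instance (or database service)

connstr = '%s/%s@%s:%d/%s' % (user, password, host, port, service)
print(connstr)  # scott/tiger@192.168.0.5:1521/orcl
```

The resulting string would then be passed to cx\_Oracle.connect(connstr) exactly as in the answer above.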
``` import cx_Oracle ip = '172.30.1.234' port = 1524 SID = 'dev3' dsn_tns = cx_Oracle.makedsn(ip, port, SID) conn = cx_Oracle.connect('dbmylike', 'pass', dsn_tns) print conn.version conn.close() ```
245,465
How do you connect to a remote server via IP address in the manner that TOAD and SqlDeveloper are able to connect to databases with just the IP address, username, SID and password? Whenever I try to specify an IP address, it seems to be interpreted as local. In other words, how should the string for cx\_Oracle.connect() be formatted for a non-local database? There was a previous post which listed as an answer connecting to Oracle via the cx\_Oracle module with the following code: ``` #!/usr/bin/python import cx_Oracle connstr='scott/tiger' conn = cx_Oracle.connect(connstr) curs = conn.cursor() curs.execute('select * from emp') print curs.description for row in curs: print row conn.close() ```
2008/10/29
[ "https://Stackoverflow.com/questions/245465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Instead of specifying the SID, you can create a dsn and [connect via service\_name](http://cx-oracle.readthedocs.io/en/latest/module.html#cx_Oracle.makedsn) like: ``` import cx_Oracle ip = '192.168.0.1' port = 1521 service_name = 'my_service' dsn = cx_Oracle.makedsn(ip, port, service_name=service_name) db = cx_Oracle.connect('user', 'password', dsn) ``` The benefit of using the service name instead of the specific instance identifier (SID) is that it will work in a RAC environment as well (using a SID won't). This parameter is available as of [cx\_Oracle version 5.1.1](https://bitbucket.org/anthony_tuininga/cx_oracle/commits/b81bd10992eb6a4fb2a3e618cfdf865c72148859) (Aug 28, 2011).
```py import cx_Oracle dsn = cx_Oracle.makedsn(host='127.0.0.1', port=1521, sid='your_sid') conn = cx_Oracle.connect(user='your_username', password='your_password', dsn=dsn) conn.close() ```
245,465
How do you connect to a remote server via IP address in the manner that TOAD and SqlDeveloper are able to connect to databases with just the IP address, username, SID and password? Whenever I try to specify an IP address, it seems to be interpreted as local. In other words, how should the string for cx\_Oracle.connect() be formatted for a non-local database? There was a previous post which listed as an answer connecting to Oracle via the cx\_Oracle module with the following code: ``` #!/usr/bin/python import cx_Oracle connstr='scott/tiger' conn = cx_Oracle.connect(connstr) curs = conn.cursor() curs.execute('select * from emp') print curs.description for row in curs: print row conn.close() ```
2008/10/29
[ "https://Stackoverflow.com/questions/245465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
``` import cx_Oracle CONN_INFO = { 'host': 'xxx.xx.xxx.x', 'port': 12345, 'user': 'user_name', 'psw': 'your_password', 'service': 'abc.xyz.com', } CONN_STR = '{user}/{psw}@{host}:{port}/{service}'.format(**CONN_INFO) connection = cx_Oracle.connect(CONN_STR) ```
```py import cx_Oracle dsn = cx_Oracle.makedsn(host='127.0.0.1', port=1521, sid='your_sid') conn = cx_Oracle.connect(user='your_username', password='your_password', dsn=dsn) conn.close() ```