Title | A_Id | Users Score | Q_Score | ViewCount | Database and SQL | Tags | Answer | GUI and Desktop Applications | System Administration and DevOps | Networking and APIs | Other | CreationDate | AnswerCount | Score | is_accepted | Q_Id | Python Basics and Environment | Data Science and Machine Learning | Web Development | Available Count | Question
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to keep a Python script output window open? | 60,842,143 | 1 | 214 | 389,804 | 0 | python,windows | I found that, in my Python 3 environment on Windows 10, the solution is simply to run cmd or PowerShell as Administrator: the output then stays in the same console window. Running the python command as any other type of user causes Python to open a new console window. | 0 | 1 | 0 | 0 | 2009-06-16T11:31:00.000 | 25 | 0.008 | false | 1,000,900 | 1 | 0 | 0 | 6 | I have just started with Python. When I execute a python script file on Windows, the output window appears but instantaneously goes away. I need it to stay there so I can analyze my output. How can I keep it open?
How to keep a Python script output window open? | 56,816,573 | -2 | 214 | 389,804 | 0 | python,windows | You can open PowerShell and type "python".
Once Python has started, you can copy and paste the source code from your favourite text editor to run the code.
The window won't close. | 0 | 1 | 0 | 0 | 2009-06-16T11:31:00.000 | 25 | -0.015999 | false | 1,000,900 | 1 | 0 | 0 | 6 | I have just started with Python. When I execute a python script file on Windows, the output window appears but instantaneously goes away. I need it to stay there so I can analyze my output. How can I keep it open? |
How to keep a Python script output window open? | 69,880,499 | 0 | 214 | 389,804 | 0 | python,windows | You can launch python with the -i option or set the environment variable PYTHONINSPECT=x. From the docs:
inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
So when your script crashes or finishes, you'll get a python prompt and your window will not close. | 0 | 1 | 0 | 0 | 2009-06-16T11:31:00.000 | 25 | 0 | false | 1,000,900 | 1 | 0 | 0 | 6 | I have just started with Python. When I execute a python script file on Windows, the output window appears but instantaneously goes away. I need it to stay there so I can analyze my output. How can I keep it open? |
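A quick way to see the -i behaviour from a script (a sketch: the interactive session is fed from a pipe here so the example terminates on its own; in real use you would just run `python -i yourscript.py` and type at the prompt):

```python
import subprocess
import sys

# Run a tiny "script" with -i: after it finishes, the interpreter drops
# into an interactive prompt instead of closing the window. We feed the
# prompt the expression "x" via stdin; EOF then ends the session.
proc = subprocess.run(
    [sys.executable, "-i", "-c", "x = 6 * 7; print('script done')"],
    input="x\n",
    capture_output=True,
    text=True,
)
print(proc.stdout)  # contains "script done" and the echoed value 42
```

The prompts themselves (`>>> `) go to stderr, so stdout holds only the script output and the evaluated result.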
How to keep a Python script output window open? | 30,281,797 | 2 | 214 | 389,804 | 0 | python,windows | Apart from input and raw_input, you could also use an infinite while loop, like this:
while True: pass (Python 2.5+/3) or while 1: pass (all versions of Python 2/3). This busy-waits, though, so it will keep a CPU core occupied.
You could also run the program from the command line. Type python into the command line (Mac OS X Terminal) and it should say Python 3.?.? (your Python version). If it does not show your Python version, or says python: command not found, look into changing your PATH value (an environment variable, mentioned above) or type C:\(Python folder)\python.exe. If that is successful, type python or C:\(Python installation)\python.exe followed by the full path of your program. | 0 | 1 | 0 | 0 | 2009-06-16T11:31:00.000 | 25 | 0.015999 | false | 1,000,900 | 1 | 0 | 0 | 6 | I have just started with Python. When I execute a python script file on Windows, the output window appears but instantaneously goes away. I need it to stay there so I can analyze my output. How can I keep it open?
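The input/raw_input idea above can be wrapped so it only pauses when a real console is attached (a minimal sketch; the helper name is made up):

```python
import sys

def pause_before_exit(prompt="Press Enter to exit..."):
    """Keep the console window open until the user presses Enter.

    Only pauses when attached to a real terminal, so the same script
    still runs unattended from a pipeline or scheduler.
    """
    if sys.stdin.isatty():
        input(prompt)  # on Python 2, use raw_input(prompt) instead

print("output you want to read")
pause_before_exit()
```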
How To Reversibly Store Password With Python On Linux? | 1,001,833 | 5 | 7 | 2,912 | 0 | python,linux,encryption,passwords | Encrypting the passwords doesn't really buy you a whole lot more protection than storing in plaintext. Anyone capable of accessing the database probably also has full access to your webserver machines.
However, if the loss of security is acceptable, and you really need this, I'd generate a new keyfile (from a good source of random data) as part of the installation process and use this. Obviously store this key as securely as possible (locked-down file permissions etc). Using a single key embedded in the source is not a good idea - there's no reason why separate installations should have the same keys. | 0 | 1 | 0 | 1 | 2009-06-16T14:09:00.000 | 3 | 1.2 | true | 1,001,744 | 0 | 0 | 0 | 1 | First, my question is not about password hashing, but password encryption. I'm building a desktop application that needs to authenticate the user to a third party service. To speed up the login process, I want to give the user the option to save his credentials. Since I need the password to authenticate him to the service, it can't be hashed.
I thought of using the pyCrypto module and its Blowfish or AES implementation to encrypt the credentials. The problem is where to store the key. I know some applications store the key directly in the source code, but since I am coding an open source application, this doesn't seem like a very efficient solution.
So I was wondering how, on Linux, you would implement user specific or system specific keys to increase password storing security.
If you have a better solution to this problem than using pyCrypto and system/user specific keys, don't hesitate to share it. As I said before, hashing is not a solution and I know password encryption is vulnerable, but I want to give the option to the user. Using Gnome-Keyring is not an option either, since a lot of people (including myself) don't use it. |
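The per-installation keyfile idea above can be sketched like this (hypothetical helper names; permissions are POSIX-style):

```python
import os
import tempfile

def create_keyfile(path, size=32):
    """Generate a fresh random key for this installation and store it
    with owner-only permissions (0600). O_EXCL makes the call fail
    rather than silently overwrite an existing key."""
    key = os.urandom(size)  # cryptographically strong random bytes
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, key)
    finally:
        os.close(fd)
    return key

def load_keyfile(path):
    with open(path, "rb") as f:
        return f.read()

# demo: create a key in a throwaway directory
demo_dir = tempfile.mkdtemp()
demo_path = os.path.join(demo_dir, "app.key")
demo_key = create_keyfile(demo_path)
```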
Automate SSH login under windows | 11,246,704 | 0 | 1 | 3,061 | 0 | c#,python,windows,ssh | pexpect can't import on Windows. So, I use plink.exe with a Python subprocess to connect to the ssh server. | 0 | 1 | 0 | 1 | 2009-06-16T16:34:00.000 | 7 | 0 | false | 1,002,627 | 0 | 0 | 0 | 1 | I want to be able to execute openssh with some custom arguments and then be able to automatically login to the server. I want that my script will enter the password if needed and inject 'yes' if I'm prompted to add the fingerprint to the known hosts.
I've found SharpSsh for C# that does that, but I also need to use the -D parameter and use a ProxyCommand that I define in SSH, and the library is quite lacking for that usage.
Another thing I've found was pexpect for Python, which should do the trick, but I couldn't find where to download it; on the official page I'm being redirected from SourceForge to some broken link.
Any help would be appreciated,
Bill. |
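A sketch of the plink approach from the answer above (plink's -ssh, -batch and -pw switches are real; passing a password on the command line is insecure, so prefer key-based auth where possible):

```python
import subprocess

def plink_argv(host, user, password, command):
    """Build the argument list for PuTTY's command-line SSH client.
    -batch disables interactive prompts, -pw supplies the password."""
    return ["plink.exe", "-ssh", "-batch", "-pw", password,
            "%s@%s" % (user, host), command]

def run_remote(host, user, password, command):
    # Returns the remote command's exit status.
    return subprocess.call(plink_argv(host, user, password, command))
```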
What is the easiest way to see if a process with a given pid exists in Python? | 1,006,030 | 1 | 7 | 2,206 | 0 | python,posix | Look at /proc/pid. This exists only if the process is running, and contains lots of information. | 0 | 1 | 0 | 1 | 2009-06-17T09:12:00.000 | 6 | 0.033321 | false | 1,005,972 | 0 | 0 | 0 | 1 | In a POSIX system, I want to see if a given process (PID 4356, for example) is running. It would be even better if I could get metadata about that process.
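On Linux the /proc check looks like this (a sketch for modern Python; /proc is Linux-specific, so a portable POSIX fallback using signal 0 is included):

```python
import os

def pid_exists(pid):
    """Return True if a process with the given PID exists."""
    if os.path.isdir("/proc"):             # Linux: just test /proc/<pid>
        return os.path.isdir("/proc/%d" % pid)
    try:
        os.kill(pid, 0)                    # signal 0: existence check only
    except ProcessLookupError:
        return False
    except PermissionError:
        return True                        # exists, owned by someone else
    return True
```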
Name of file I'm editing | 1,008,586 | 9 | 2 | 191 | 0 | python,vim | Try: !python % | 0 | 1 | 0 | 0 | 2009-06-17T17:45:00.000 | 2 | 1.2 | true | 1,008,557 | 0 | 0 | 0 | 1 | I'm editing a file in ~/Documents. However, my working directory is somewhere else, say ~/Desktop.
The file I'm editing is a Python script. I'm interested in doing a command like...
:!python
without needing to do
:!python ~/Documents/script.py
Is that possible? If so, what would be the command?
Thank you. |
Sandboxing in Linux | 1,029,301 | 0 | 16 | 7,104 | 0 | python,c,linux,security,sandbox | I think your solution must concentrate on analyzing the source code. I don't know any tools, and I think this would be pretty hard with C, but, for example, a Pascal program which doesn't include any modules would be pretty harmless in my opinion. | 0 | 1 | 0 | 1 | 2009-06-19T19:35:00.000 | 12 | 0 | false | 1,019,707 | 0 | 0 | 1 | 3 | I want to create a Web app which would allow the user to upload some C code, and see the results of its execution (the code would be compiled on the server). The users are untrusted, which obviously has some huge security implications.
So I need to create some kind of sandbox for the apps. At the most basic level, I'd like to restrict access to the file system to some specified directories. I cannot use chroot jails directly, since the web app is not running as a privileged user. I guess a suid executable which sets up the jail would be an option.
The uploaded programs would be rather small, so they should execute quickly (a couple of seconds at most). Hence, I can kill the process after a preset timeout, but how do I ensure that it doesn't spawn new processes? Or if I can't, is killing the entire pgid a reliable method?
What would be the best way to go about this - other than "don't do it at all"? :) What other glaring security problems have I missed?
FWIW, the web app will be written in Python. |
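On the "killing the entire pgid" point from the question: if the untrusted program is started in its own session (and therefore its own process group), killing that group also takes down any children it spawned, as long as none of them escaped into a new session of their own. A POSIX sketch:

```python
import os
import signal
import subprocess

# Start the untrusted program in its own session/process group so the
# whole group can be killed when the preset timeout expires.
proc = subprocess.Popen(["sleep", "60"], start_new_session=True)
try:
    proc.wait(timeout=1)                  # the preset timeout
except subprocess.TimeoutExpired:
    # Kill the whole process group, not just the direct child.
    os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
    proc.wait()

print(proc.returncode)  # -9: terminated by SIGKILL
```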
Sandboxing in Linux | 1,019,986 | -2 | 16 | 7,104 | 0 | python,c,linux,security,sandbox | About the only chance you have is running a virtual machine, and even those can have vulnerabilities. If you want your machine hacked in the short term, just use permissions and make a special user with access to maybe one directory. If you want to postpone the hacking to some point in the future, then run a webserver inside a virtual machine and port-forward to that. You'll want to keep a backup of that, because you'll probably have it hacked in under an hour and want to restart a fresh copy every few hours. You'll also want to keep an image of the whole machine to reimage the whole thing once a week or so in order to overcome the weekly hackings. Don't let that machine talk to any other machine on your network. Blacklist it everywhere - I'm talking about both the virtual machine's and the physical machine's IP addresses. Do regular security audits on the other machines on your network. Please rename the machines IHaveBeenHacked1 and IHaveBeenHacked2, and prevent access to those in your hosts lists and firewalls.
This way you might stave off your level of hackage for a while. | 0 | 1 | 0 | 1 | 2009-06-19T19:35:00.000 | 12 | -0.033321 | false | 1,019,707 | 0 | 0 | 1 | 3 | I want to create a Web app which would allow the user to upload some C code, and see the results of its execution (the code would be compiled on the server). The users are untrusted, which obviously has some huge security implications.
So I need to create some kind of sandbox for the apps. At the most basic level, I'd like to restrict access to the file system to some specified directories. I cannot use chroot jails directly, since the web app is not running as a privileged user. I guess a suid executable which sets up the jail would be an option.
The uploaded programs would be rather small, so they should execute quickly (a couple of seconds at most). Hence, I can kill the process after a preset timeout, but how do I ensure that it doesn't spawn new processes? Or if I can't, is killing the entire pgid a reliable method?
What would be the best way to go about this - other than "don't do it at all"? :) What other glaring security problems have I missed?
FWIW, the web app will be written in Python. |
Sandboxing in Linux | 15,609,095 | 0 | 16 | 7,104 | 0 | python,c,linux,security,sandbox | Spawning a new VM under KVM or qemu to compile and run the code looks like the way to go. Running the code under jail/LXC can compromise the machine if it exploits unsecured parts of the OS, like networking code. The advantages of running under a VM are obvious: one can only hack the VM, not the machine itself. But the side effect is that you need lots of resources (CPU and memory) to spawn a VM for each request. | 0 | 1 | 0 | 1 | 2009-06-19T19:35:00.000 | 12 | 0 | false | 1,019,707 | 0 | 0 | 1 | 3 | I want to create a Web app which would allow the user to upload some C code, and see the results of its execution (the code would be compiled on the server). The users are untrusted, which obviously has some huge security implications.
So I need to create some kind of sandbox for the apps. At the most basic level, I'd like to restrict access to the file system to some specified directories. I cannot use chroot jails directly, since the web app is not running as a privileged user. I guess a suid executable which sets up the jail would be an option.
The uploaded programs would be rather small, so they should execute quickly (a couple of seconds at most). Hence, I can kill the process after a preset timeout, but how do I ensure that it doesn't spawn new processes? Or if I can't, is killing the entire pgid a reliable method?
What would be the best way to go about this - other than "don't do it at all"? :) What other glaring security problems have I missed?
FWIW, the web app will be written in Python. |
Cross-platform way to check admin rights in a Python script under Windows? | 1,038,617 | 1 | 25 | 18,741 | 0 | python,privileges,admin-rights | Administrator group membership (Domain/Local/Enterprise) is one thing;
tailoring your application not to require blanket privileges, and setting fine-grained rights instead, is a better option, especially if the app is used interactively.
Testing for particular named privileges (SeShutdown, SeRestore, etc.) and file rights is a better bet and easier to diagnose. | 0 | 1 | 0 | 0 | 2009-06-22T10:20:00.000 | 5 | 0.039979 | false | 1,026,431 | 0 | 0 | 0 | 3 | Is there any cross-platform way to check that my Python script is executed with admin rights? Unfortunately, os.getuid() is UNIX-only and is not available under Windows.
Cross-platform way to check admin rights in a Python script under Windows? | 1,026,442 | 3 | 25 | 18,741 | 0 | python,privileges,admin-rights | Try doing whatever you need admin rights for, and check for failure.
This will only work for some things though, what are you trying to do? | 0 | 1 | 0 | 0 | 2009-06-22T10:20:00.000 | 5 | 0.119427 | false | 1,026,431 | 0 | 0 | 0 | 3 | Is there any cross-platform way to check that my Python script is executed with admin rights? Unfortunately, os.getuid() is UNIX-only and is not available under Windows. |
Cross-platform way to check admin rights in a Python script under Windows? | 1,026,516 | 3 | 25 | 18,741 | 0 | python,privileges,admin-rights | It's better if you check which platform your script is running (using sys.platform) and do a test based on that, e.g. import some hasAdminRights function from another, platform-specific module.
On Windows you could check whether Windows\System32 is writable using os.access, but remember to try to retrieve system's actual "Windows" folder path, probably using pywin32. Don't hardcode one. | 0 | 1 | 0 | 0 | 2009-06-22T10:20:00.000 | 5 | 0.119427 | false | 1,026,431 | 0 | 0 | 0 | 3 | Is there any cross-platform way to check that my Python script is executed with admin rights? Unfortunately, os.getuid() is UNIX-only and is not available under Windows. |
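A sketch combining the two platform branches (the Windows half uses the writability heuristic described above; it is a rough proxy for admin rights, not a definitive check):

```python
import os
import sys

def has_admin_rights():
    """Best-effort cross-platform privilege check."""
    if sys.platform.startswith("win"):
        # Heuristic: normally only administrators can write to System32.
        windir = os.environ.get("SystemRoot", r"C:\Windows")
        return os.access(os.path.join(windir, "System32"), os.W_OK)
    return os.geteuid() == 0  # POSIX: effective UID 0 means root
```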
PHP desktop applications | 1,029,459 | 0 | 8 | 1,720 | 0 | php,python,gtk,desktop,pygtk | Why would you want to develop a desktop app in PHP?
Get yourself a decent programming environment (C, Java, C#, ...) instead of abusing PHP.
With C# and Java especially, you get very nice results quickly, and both are cross-platform (although Java is easier for cross-platform work).
C(++) in combination with Qt or GTK is also possible, but there the results come more slowly | 0 | 1 | 0 | 1 | 2009-06-22T21:15:00.000 | 4 | 0 | false | 1,029,435 | 0 | 0 | 0 | 2 | I have quite a few years experience of developing PHP web applications, and have recently started to delve into Python as well. Recently I've been interested in getting into desktop applications as well, but have absolutely no experience in that area. I've seen very little written about PHP-gtk and wonder whether it's really a good area to get stuck in to.
What I'm really looking for is something that will allow me to quite quickly develop some decent small/medium sized apps, and be able to deploy them in Linux and Windows. Something in Python or PHP would be great (but I'd be happy to learn something else if it has big advantages).
What do you guys recommend?
Thanks |
PHP desktop applications | 1,029,486 | 2 | 8 | 1,720 | 0 | php,python,gtk,desktop,pygtk | Python and Java are both excellent for working on both Linux and Windows environment. They are generally hassle-free as long as you're not doing any OS specific type of work. Python for creating desktop apps is fairly simple and easy to learn as well if you're coming from a PHP background, especially if you're used to doing object oriented PHP. | 0 | 1 | 0 | 1 | 2009-06-22T21:15:00.000 | 4 | 0.099668 | false | 1,029,435 | 0 | 0 | 0 | 2 | I have quite a few years experience of developing PHP web applications, and have recently started to delve into Python as well. Recently I've been interested in getting into desktop applications as well, but have absolutely no experience in that area. I've seen very little written about PHP-gtk and wonder whether it's really a good area to get stuck in to.
What I'm really looking for is something that will allow me to quite quickly develop some decent small/medium sized apps, and be able to deploy them in Linux and Windows. Something in Python or PHP would be great (but I'd be happy to learn something else if it has big advantages).
What do you guys recommend?
Thanks |
Simple User management example for Google App Engine? | 1,030,362 | 1 | 17 | 7,295 | 1 | php,python,google-app-engine | You don't write user management and registration and all that, because you use Google's own authentication services. This is all included in the App Engine documentation. | 0 | 1 | 0 | 0 | 2009-06-23T01:58:00.000 | 3 | 0.066568 | false | 1,030,293 | 0 | 0 | 1 | 1 | I am a newbie in Google App Engine. While I was going through the tutorial, I found that several things we do in PHP/MySQL are not available in GAE. For example, the datastore has no auto-increment feature. I am also confused about session management in GAE. Overall I am confused and cannot visualize the whole thing.
Please advise me on a simple user management system with user registration, user login, user logout, and sessions (create, manage, destroy) with the datastore. Also please advise me where I can find simple but effective examples.
Thanks in advance. |
BeanStalkd on Solaris doesnt return anything when called from the python library | 1,048,086 | 1 | 2 | 394 | 0 | python,solaris,yaml,beanstalkd | After looking in the code (beanstalkc):
your client has sent its 'list-tubes' message and is waiting for an answer
(until you kill it);
your server doesn't answer, or can't send the answer to the client
(or the answer is shorter than the client expects).
Is there a network admin at your side (or site)? :-) | 0 | 1 | 0 | 1 | 2009-06-25T15:06:00.000 | 3 | 0.066568 | false | 1,044,473 | 0 | 0 | 0 | 2 | i am using Solaris 10 OS(x86). i installed beanstalkd and it starts fine by using command "beanstalkd -d -l hostip -p 11300".
i have Python 2.4.4 on my system i installed YAML and beanstalkc python libraries to connect beanstalkd with python my problem is when i try to write some code:
import beanstalkc
beanstalk = beanstalkc.Connection(host='hostip', port=11300)
no error so far but when i try to do someting on beanstalk like say listing queues. nothing happens.
beanstalk.tubes()
it just hangs and nothing returns. if i cancel the operation(using ctr+c on python env.) or stop the server i immediately see an output:
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 134, in tubes
return self._interact_yaml('list-tubes\r\n', ['OK'])
File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 83, in _interact_yaml
size, = self._interact(command, expected_ok, expected_err)
File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 57, in _interact
status, results = self._read_response()
File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 66, in _read_response
response = self.socket_file.readline().split()
File "/usr/lib/python2.4/socket.py", line 332, in readline
data = self._sock.recv(self._rbufsize)
any idea whats going? i am an Unix newbie so i have no idea what i did setup wrong to cause this.
edit: seems like the problem lies within BeanStalkd itself, anyone have used this on Solaris 10? if so which version did you use? The v1.3 labeled one doesnt compile on Solaris while the latest from git code repository compiles it causes the above problem(or perhaps there is some configuration to do on Solaris?).
edit2: i installed and compiled same components with beanstalkd, PyYAML, pythonbeanstalc and libevent to an UBUNTU machine and it works fine. problems seems to be about compilation of beanstalkd on solaris, i have yet to produce or read any solution. |
BeanStalkd on Solaris doesnt return anything when called from the python library | 1,093,128 | 1 | 2 | 394 | 0 | python,solaris,yaml,beanstalkd | I might know what is wrong: don't start it in daemon (-d) mode. I have experienced the same and by accident I found out what is wrong.
Or rather, I don't know what is wrong, but it works without running it in daemon mode.
./beanstalkd -p 9977 &
as an alternative. | 0 | 1 | 0 | 1 | 2009-06-25T15:06:00.000 | 3 | 1.2 | true | 1,044,473 | 0 | 0 | 0 | 2 | i am using Solaris 10 OS(x86). i installed beanstalkd and it starts fine by using command "beanstalkd -d -l hostip -p 11300".
i have Python 2.4.4 on my system i installed YAML and beanstalkc python libraries to connect beanstalkd with python my problem is when i try to write some code:
import beanstalkc
beanstalk = beanstalkc.Connection(host='hostip', port=11300)
no error so far but when i try to do someting on beanstalk like say listing queues. nothing happens.
beanstalk.tubes()
it just hangs and nothing returns. if i cancel the operation(using ctr+c on python env.) or stop the server i immediately see an output:
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 134, in tubes
return self._interact_yaml('list-tubes\r\n', ['OK'])
File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 83, in _interact_yaml
size, = self._interact(command, expected_ok, expected_err)
File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 57, in _interact
status, results = self._read_response()
File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 66, in _read_response
response = self.socket_file.readline().split()
File "/usr/lib/python2.4/socket.py", line 332, in readline
data = self._sock.recv(self._rbufsize)
any idea whats going? i am an Unix newbie so i have no idea what i did setup wrong to cause this.
edit: seems like the problem lies within BeanStalkd itself, anyone have used this on Solaris 10? if so which version did you use? The v1.3 labeled one doesnt compile on Solaris while the latest from git code repository compiles it causes the above problem(or perhaps there is some configuration to do on Solaris?).
edit2: i installed and compiled same components with beanstalkd, PyYAML, pythonbeanstalc and libevent to an UBUNTU machine and it works fine. problems seems to be about compilation of beanstalkd on solaris, i have yet to produce or read any solution. |
Chat comet site using python and twisted | 1,047,755 | 1 | 5 | 4,166 | 0 | python,twisted,orbited | I'd suggest you use Twisted. ;) It has both chat clients and chat servers. Then you also need a web framework. I'd use either Grok or BFG, but there are many Python web frameworks around, and few of them are really bad. | 0 | 1 | 0 | 0 | 2009-06-26T04:19:00.000 | 5 | 0.039979 | false | 1,047,306 | 0 | 0 | 0 | 2 | I want to build a site similar to www.omegle.com. Can anyone suggest some ideas?
I think it's built using Twisted and the Orbited comet server.
Chat comet site using python and twisted | 1,047,805 | 3 | 5 | 4,166 | 0 | python,twisted,orbited | Twisted is a good choice. I used it a few years ago to build a server for a browser-based online game I wrote - it kept track of clients, served them replies to Ajax requests, and used HTML5 Server-Sent DOM Events as well. Worked rather painlessly thanks to Twisted's good HTTP library.
For a Python web framework, I personally favor Django. It's quick to get going with it, and it has a lot of functionality out of the box ("batteries included", as it says on their site I think). Pylons is another popular choice. | 0 | 1 | 0 | 0 | 2009-06-26T04:19:00.000 | 5 | 1.2 | true | 1,047,306 | 0 | 0 | 0 | 2 | I want to build a site similar to www.omegle.com. Can anyone suggest some ideas?
I think it's built using Twisted and the Orbited comet server.
Problem deploying Python program (packaged with py2exe) | 1,124,797 | 1 | 1 | 2,683 | 0 | python,deployment,wxpython,multiprocessing,py2exe | When you run py2exe, look closely at the final messages when it's completed. It gives you a list of DLLs that it says are needed by the program, but that py2exe doesn't automatically bundle.
Many in the list are reliably available on any Windows install, but there will be a few that you should manually bundle into your Inno Setup installation. Some are only needed if you want to deploy on older Windows installs e.g. Win 2000 or earlier. | 0 | 1 | 0 | 0 | 2009-06-26T11:36:00.000 | 4 | 0.049958 | false | 1,048,651 | 1 | 0 | 0 | 2 | I have a problem: I used py2exe for my program, and it worked on my computer. I packaged it with Inno Setup (still worked on my computer), but when I sent it to a different computer, I got the following error when trying to run the application: "CreateProcess failed; code 14001." The app won't run.
(Note: I am using wxPython and the multiprocessing module in my program.)
I googled for it a bit and found that the user should install some MS redistributable something, but I don't want to make life complicated for my users. Is there a solution?
Versions:
Python 2.6.2c1,
py2exe 0.6.9,
Windows XP Pro |
Problem deploying Python program (packaged with py2exe) | 1,048,732 | 1 | 1 | 2,683 | 0 | python,deployment,wxpython,multiprocessing,py2exe | You should be able to install that MS redistributable thingy as a part of your InnoSetup setup exe. | 0 | 1 | 0 | 0 | 2009-06-26T11:36:00.000 | 4 | 0.049958 | false | 1,048,651 | 1 | 0 | 0 | 2 | I have a problem: I used py2exe for my program, and it worked on my computer. I packaged it with Inno Setup (still worked on my computer), but when I sent it to a different computer, I got the following error when trying to run the application: "CreateProcess failed; code 14001." The app won't run.
(Note: I am using wxPython and the multiprocessing module in my program.)
I googled for it a bit and found that the user should install some MS redistributable something, but I don't want to make life complicated for my users. Is there a solution?
Versions:
Python 2.6.2c1,
py2exe 0.6.9,
Windows XP Pro |
subprocess module: using the call method with tempfile objects | 1,049,684 | 1 | 3 | 819 | 0 | python,subprocess | Are you using the shell=True option for subprocess? | 0 | 1 | 0 | 0 | 2009-06-26T15:10:00.000 | 2 | 1.2 | true | 1,049,648 | 0 | 0 | 0 | 2 | I have created temporary named files with the tempfile library's NamedTemporaryFile method.
I have written to them and flushed the buffers, and I have not closed them (or else they might go away).
I am trying to use the subprocess module to call some shell commands using these generated files.
subprocess.call('cat %s' % f.name) always fails saying that the named temporary file does not exist.
os.path.exists(f.name) always returns true.
I can run the cat command on the file directly from the shell.
Is there some reason the subprocess module will not work with temporary files?
Is there any way to make it work?
Thanks in advance. |
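The hint above is the whole story: without shell=True, the string "cat /tmp/xxxx" is taken as one executable name, so subprocess reports that "the file" does not exist; it means the program, not the temp file. A sketch of both working forms (assuming a POSIX system with `cat` available):

```python
import subprocess
import tempfile

f = tempfile.NamedTemporaryFile()  # stays on disk while the handle is open
f.write(b"hello\n")
f.flush()  # make sure the data is on disk before the child reads it

# Wrong: subprocess.call('cat /tmp/xxxx') looks for a program whose *name*
# is the whole string. Either split the command into a list...
rc_list = subprocess.call(["cat", f.name])
# ...or hand the string to a shell:
rc_shell = subprocess.call("cat %s" % f.name, shell=True)
```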
subprocess module: using the call method with tempfile objects | 1,049,697 | 3 | 3 | 819 | 0 | python,subprocess | Why don't you make your NamedTemporaryFiles with the optional parameter delete=False? That way you can safely close them knowing they won't disappear, use them normally afterwards, and explicitly unlink them when you're done. This way everything will work cross-platform, too. | 0 | 1 | 0 | 0 | 2009-06-26T15:10:00.000 | 2 | 0.291313 | false | 1,049,648 | 0 | 0 | 0 | 2 | I have created temporary named files with the tempfile library's NamedTemporaryFile method.
I have written to them and flushed the buffers, and I have not closed them (or else they might go away).
I am trying to use the subprocess module to call some shell commands using these generated files.
subprocess.call('cat %s' % f.name) always fails saying that the named temporary file does not exist.
os.path.exists(f.name) always returns true.
I can run the cat command on the file directly from the shell.
Is there some reason the subprocess module will not work with temporary files?
Is there any way to make it work?
Thanks in advance. |
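A sketch of the delete=False pattern suggested above (the parameter exists from Python 2.6 on): close the file so every byte is flushed, let the child process use it, then unlink it explicitly.

```python
import os
import subprocess
import tempfile

f = tempfile.NamedTemporaryFile(delete=False)
path = f.name
f.write(b"hello\n")
f.close()  # safe: with delete=False the file stays on disk

rc = subprocess.call(["cat", path])  # the child can open it freely
os.unlink(path)                      # explicit cleanup when done
```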
Is registered atexit handler inherited by spawned child processes? | 1,053,223 | 4 | 4 | 2,133 | 0 | python,multiprocessing,atexit | When you fork to make a child process, that child is an exact copy of the parent -- including, of course, registered exit functions as well as all other code and data structures. I believe that's the issue you're observing -- of course it's not mentioned in each and every module, because it necessarily applies to every single one. | 0 | 1 | 0 | 0 | 2009-06-27T11:54:00.000 | 3 | 0.26052 | false | 1,052,716 | 0 | 0 | 0 | 1 | I am writing a daemon program using python 2.5. In the main process an exit handler is registered with the atexit module; it seems that the handler gets called when each child process ends, which is not what I expected.
I noticed this behavior isn't mentioned in python atexit doc, anybody knows the issue? If this is how it should behave, how can I unregister the exit handler in children processes? There is a atexit.unregister in version 3.0, but I am using 2.5. |
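The usual workaround on Python 2.5 (which lacks atexit.unregister) is to end child processes with os._exit(), which bypasses atexit handlers entirely. A POSIX sketch that demonstrates the bypass:

```python
import atexit
import os

def child_exit_skips_handlers():
    """Fork a child with a registered atexit handler, then exit it with
    os._exit(0). Because os._exit() skips atexit processing (and stdio
    flushing), the handler never writes to the pipe."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child
        atexit.register(lambda: os.write(w, b"H"))
        os._exit(0)  # immediate exit: the handler above does NOT run
    os.close(w)            # parent: drop its copy of the write end
    os.waitpid(pid, 0)     # reap the child
    data = os.read(r, 1)   # EOF (b""): nothing was written
    os.close(r)
    return data
```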
psycopg2 on OSX: do I have to install PostgreSQL too? | 1,052,990 | 3 | 2 | 4,829 | 1 | python,macos,postgresql | macports tells me that the psycopg2 package has a dependency on the postgres client and libraries (but not the db server). If you successfully installed psycopg, then you should be good to go.
If you haven't installed yet, consider using macports or fink to deal with dependency resolution for you. In most cases, this will make things easier (occasionally build problems erupt). | 0 | 1 | 0 | 0 | 2009-06-27T14:52:00.000 | 3 | 1.2 | true | 1,052,957 | 0 | 0 | 0 | 1 | I want to access a postgreSQL database that's running on a remote machine, from Python in OS/X. Do I have to install postgres on the mac as well? Or will psycopg2 work on its own.
Any hints for a good installation guide for psycopg2 for os/x? |
How do I find out what Python libraries are installed on my Mac? | 1,055,554 | 3 | 13 | 30,575 | 0 | python | Just run the Python interpreter and type the command
import lib_name
(no quotes around the name). If it raises an ImportError, you don't have the lib installed... else you are good to go. | 0 | 1 | 0 | 0 | 2009-06-28T18:27:00.000 | 7 | 0.085505 | false | 1,055,443 | 1 | 0 | 0 | 4 | I'm just starting out with Python, and have found out that I can import various libraries. How do I find out what libraries exist on my Mac that I can import? How do I find out what functions they include?
I seem to remember using some web server type thing to browse through local help files, but I may have imagined that! |
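The try-an-import check can be wrapped so it doesn't crash your session (a small sketch; the helper name is made up):

```python
def have_module(name):
    """Return True if `name` can be imported, without letting the
    ImportError escape."""
    try:
        __import__(name)
        return True
    except ImportError:
        return False

print(have_module("os"), have_module("no_such_library"))  # True False
```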
How do I find out what Python libraries are installed on my Mac? | 1,055,453 | 39 | 13 | 30,575 | 0 | python | From the Python REPL (the command-line interpreter / Read-Eval-Print-Loop), type help("modules") to see a list of all your available libs.
Then to see functions within a module, do help("posix"), for example. If you haven't imported the library yet, you have to put quotes around the library's name. | 0 | 1 | 0 | 0 | 2009-06-28T18:27:00.000 | 7 | 1 | false | 1,055,443 | 1 | 0 | 0 | 4 | I'm just starting out with Python, and have found out that I can import various libraries. How do I find out what libraries exist on my Mac that I can import? How do I find out what functions they include?
I seem to remember using some web server type thing to browse through local help files, but I may have imagined that! |
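Besides help("modules"), the standard pkgutil module can enumerate importable top-level modules programmatically; a small sketch (the exact output varies by machine):

```python
import pkgutil

# Each entry from iter_modules() is (finder, name, ispkg) for a module
# or package found on sys.path.
names = sorted(info[1] for info in pkgutil.iter_modules())
print(len(names))        # how many importable top-level modules were found
print("json" in names)   # stdlib packages show up too
```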
How do I find out what Python libraries are installed on my Mac? | 1,055,474 | 2 | 13 | 30,575 | 0 | python | On Leopard, depending on the python package you're using and the version number, the modules can be found in /Library/Python:
/Library/Python/2.5/site-packages
or in /Library/Frameworks
/Library/Frameworks/Python.framework/Versions/Current/lib/python2.6/site-packages
(it could also be 3.0 or whatever version)...
I guess it is quite the same with Tiger | 0 | 1 | 0 | 0 | 2009-06-28T18:27:00.000 | 7 | 0.057081 | false | 1,055,443 | 1 | 0 | 0 | 4 | I'm just starting out with Python, and have found out that I can import various libraries. How do I find out what libraries exist on my Mac that I can import? How do I find out what functions they include?
I seem to remember using some web server type thing to browse through local help files, but I may have imagined that! |
How do I find out what Python libraries are installed on my Mac? | 1,055,520 | 3 | 13 | 30,575 | 0 | python | You can install another library: yolk.
yolk is a tool for querying installed Python packages and will show you everything you have added via PyPI. But it will also show you site-packages added through whatever local package manager you run. | 0 | 1 | 0 | 0 | 2009-06-28T18:27:00.000 | 7 | 0.085505 | false | 1,055,443 | 1 | 0 | 0 | 4 | I'm just starting out with Python, and have found out that I can import various libraries. How do I find out what libraries exist on my Mac that I can import? How do I find out what functions they include?
I seem to remember using some web server type thing to browse through local help files, but I may have imagined that! |
How do I install with distutils to a specific Python installation? | 1,085,582 | 0 | 1 | 327 | 0 | python,installation,distutils | Invoking "python2.3" can pick up the wrong interpreter if another (default) installation has put itself first on PATH.
The task can be solved by:
finding the full path to the desired Python interpreter; for a default ActivePython installation of Python 2.6 it is C:\Python26
building the full path to the binary (in this case C:\Python26\python.exe)
executing the module install command from the unpacked module directory, using the full path to the interpreter: C:\Python26\python.exe setup.py install | 0 | 1 | 0 | 0 | 2009-06-29T17:56:00.000 | 2 | 0 | false | 1,059,594 | 1 | 0 | 0 | 1 | I have a Windows machine with Python 2.3, 2.6 and 3.0 installed and 2.5 installed with Cygwin. I've downloaded the pexpect package but when I run "python setup.py install" it installs to the 2.6 installation.
How could I have it install to the Cygwin Python installation, or any other installation? |
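A quick way to confirm which installation a given python command maps to, and therefore where setup.py install will put things, is to run a snippet like this with that exact interpreter. (A sketch: sysconfig exists in modern Pythons; on 2.x you would use distutils.sysconfig instead.)

```python
import sys
import sysconfig

# Run this with the same interpreter you plan to use for `setup.py install`;
# it shows which binary is executing and where pure-Python packages land.
print(sys.executable)                    # full path to the interpreter
print(tuple(sys.version_info[:3]))       # its version
print(sysconfig.get_paths()["purelib"])  # its site-packages directory
```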
How to debug SCons scripts using eclipse and pydev? | 1,075,694 | 3 | 4 | 2,414 | 0 | python,eclipse,pydev,scons | I'm not an Eclipse expert, but since you didn't get any other answer...
If you make the SCons source a part of the Eclipse project, and run the whole command from within Eclipse it should work like any Eclipse debugging. SCons is written in Python, there is no reason it shouldn't be debuggable in Eclipse just like anything else. | 0 | 1 | 0 | 1 | 2009-07-02T16:16:00.000 | 6 | 1.2 | true | 1,075,304 | 0 | 0 | 1 | 5 | I'm a newbie to SCons and also using pydev. Can someone help me with instructions on how to debug scons scripts using Eclipse and pydev? Is it even possible considering the fact that SCons is a seperate app and not an extension to python? |
How to debug SCons scripts using eclipse and pydev? | 45,216,082 | 0 | 4 | 2,414 | 0 | python,eclipse,pydev,scons | As an addendum: on Windows, I had to copy the scons-installed files to reside under C:\Python27\Lib\site-packages\scons in order for this to work. Adding the original installed location, qualified with the version number, to the PYTHONPATH, did not work. | 0 | 1 | 0 | 1 | 2009-07-02T16:16:00.000 | 6 | 0 | false | 1,075,304 | 0 | 0 | 1 | 5 | I'm a newbie to SCons and also using pydev. Can someone help me with instructions on how to debug scons scripts using Eclipse and pydev? Is it even possible considering the fact that SCons is a seperate app and not an extension to python? |
How to debug SCons scripts using eclipse and pydev? | 15,386,322 | 1 | 4 | 2,414 | 0 | python,eclipse,pydev,scons | On MAC to debug scons through pydev follow Lennart's answer but with one simply addition.
Using Finder (or terminal) browse to where scons is installed. You can find this with the "which" command.
e.g. which scons
-> /usr/local/bin/scons
Make a copy of the scons file and call it scons.py.
Now when you create the Debug Configuration in Eclipse use scons.py as the "Main Module".
PS: To add a scons project to Eclipse I found it easier to use a "Linked Folder" pointing at /usr/local/bin/, because I was getting a read-only error when trying to add the directory itself. | 0 | 1 | 0 | 1 | 2009-07-02T16:16:00.000 | 6 | 0.033321 | false | 1,075,304 | 0 | 0 | 1 | 5 | I'm a newbie to SCons and also using pydev. Can someone help me with instructions on how to debug scons scripts using Eclipse and pydev? Is it even possible considering the fact that SCons is a seperate app and not an extension to python?
How to debug SCons scripts using eclipse and pydev? | 1,077,102 | 6 | 4 | 2,414 | 0 | python,eclipse,pydev,scons | You are right. Since the SCons is python based, the SCons scripts are debuggable via EClipse PyDev. For this, you need to do the following in the debug configuration...
1. Under the main tab, set the main module to the SCons file which will be available under the python/scripts directory if you have installed SCons. If you have not run the install of SCons you can point to this file under the SCons directory.
2. Under the arguments tab, set the working directory to the root of your project.
Now set the breakpoint either on SConstruct or SConcript and run in debug mode. That's all!!
With this approach you can not only debug your product code but also the build scripts that build your product :-) Happy Debugging!!!! | 0 | 1 | 0 | 1 | 2009-07-02T16:16:00.000 | 6 | 1 | false | 1,075,304 | 0 | 0 | 1 | 5 | I'm a newbie to SCons and also using pydev. Can someone help me with instructions on how to debug scons scripts using Eclipse and pydev? Is it even possible considering the fact that SCons is a seperate app and not an extension to python?
How to debug SCons scripts using eclipse and pydev? | 32,887,089 | 0 | 4 | 2,414 | 0 | python,eclipse,pydev,scons | I've since gain more experience with SCons / Python and I'd recommend using python's pdb module. To use it simply add the following code to your SCons/Python files.
import pdb; pdb.set_trace()
When the file is run from the command line a breakpoint will be hit at this line. I also moved away from Eclipse. A lightweight editor will be just as good for Python development. I use Sublime. | 0 | 1 | 0 | 1 | 2009-07-02T16:16:00.000 | 6 | 0 | false | 1,075,304 | 0 | 0 | 1 | 5 | I'm a newbie to SCons and also using pydev. Can someone help me with instructions on how to debug scons scripts using Eclipse and pydev? Is it even possible considering the fact that SCons is a seperate app and not an extension to python? |
Does python have hooks into EXT3 | 1,075,459 | 7 | 4 | 640 | 0 | python,linux,ext3 | no need for ext3-specific hooks; just check lsof, or more exactly, /proc/<pid>/fd/* and /proc/<pid>/fdinfo/* (that's where lsof gets its info, AFAICT). There you can check if the file is open, if it's writeable, and the 'cursor' position.
That's not the whole picture, but anything more happens in process space via the stdlib of the writing process: most writes are buffered and the kernel only sees bigger chunks of data, so any 'ext3-aware' monitor wouldn't see that either. | 0 | 1 | 0 | 1 | 2009-07-02T16:32:00.000 | 2 | 1.2 | true | 1,075,391 | 0 | 0 | 0 | 1 | We have several cron jobs that ftp proxy logs to a centralized server. These files can be rather large and take some time to transfer. Part of the requirement of this project is to provide a logging mechanism in which we log the success or failure of these transfers. This is simple enough.
My question is, is there a way to check if a file is currently being written to? My first solution was to just check the file size twice within a given timeframe and check the file size. But a co-worker said that there may be able to hook into the EXT3 file system via python and check the attributes to see if the file is currently being appended to. My Google-Fu came up empty.
Is there a module for EXT3 or something else that would allow me to check the state of a file? The server is running Fedora Core 9 with EXT3 file system. |
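The /proc-based check suggested in the answer can be sketched like this (Linux-only; the helper name is mine). It lists the file descriptors a process has open and reads fdinfo, which contains the pos: (cursor position) and flags: (open mode) fields:

```python
import os

def fdinfo_for(path, pid):
    """Return /proc/<pid>/fdinfo contents for fds pointing at path (Linux only)."""
    target = os.path.realpath(path)
    results = []
    fd_dir = "/proc/%d/fd" % pid
    for fd in os.listdir(fd_dir):
        try:
            link = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # the fd may have closed while we were scanning
        if link == target:
            with open("/proc/%d/fdinfo/%s" % (pid, fd)) as info:
                results.append(info.read())  # contains pos: and flags: lines
    return results
```

Run against the ftp process's PID, this tells you whether the log file is still open (and where its write cursor is) without polling the file size.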
Accessing Python Objects in a Core Dump | 1,080,869 | 4 | 2 | 1,101 | 0 | python,python-c-api,postmortem-debugging | It's lots of work, but of course it can be done, especially if you have all the symbols. Look at the header files for the specific version of Python (and compilation options in use to build it): they define PyObject as a struct which includes, first and foremost, a pointer to a type. Lots of macros are used, so you may want to run the compile of that Python from sources again, with exactly the same flags but in addition a -E to stop after preprocessing, so you can refer to the specific C code that made the bits you're seeing in the core dump.
A type object has, among many other things, a string (array of char) that's its name, and from it you can infer what exactly objects of that type contain -- be it content directly, or maybe some content (such as a length, i.e. number of items) and a pointer to the actual data.
I've done such super-advanced post-mortem debugging a couple of times (starting with VERY precise knowledge of the Python versions involved and all the prepared preprocessed sources &c) and each time it took me a day or two (were I still a freelancer charging by the hour, if I had to bid on such a task I'd say at least 20 hours -- at my not-cheap hourly rates!-).
IOW, it's worth it only if it's really truly the only way out of some very costly pickle. On the plus side, it WILL teach you more about Python's internals than you ever thought was there, even after memorizing every line of the sources. Good luck, you'll need some!!! | 0 | 1 | 0 | 0 | 2009-07-03T21:07:00.000 | 1 | 1.2 | true | 1,080,832 | 1 | 0 | 0 | 1 | Is there anyway to discover the python value of a PyObject* from a corefile in gdb |
Transition from Python2.4 to Python2.6 on CentOS, module migration problem | 1,085,044 | 0 | 3 | 6,767 | 0 | python,linux,upgrade,centos,python-2.6 | easy_install is a good one, but there is a lower-level way of installing a module; just:
unpack module source to some directory
type "python setup.py install"
Of course you should do this with required installed python interpreter version; for checking it type:
python -V | 0 | 1 | 0 | 1 | 2009-07-04T06:59:00.000 | 4 | 0 | false | 1,081,698 | 0 | 0 | 0 | 3 | I have a problem of upgrading python from 2.4 to 2.6:
I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum").
So what is the right way to migrate/install modules to python2.6? |
Transition from Python2.4 to Python2.6 on CentOS, module migration problem | 1,081,705 | 0 | 3 | 6,767 | 0 | python,linux,upgrade,centos,python-2.6 | There are a couple of options...
If the modules will run under Python 2.6, you can simply create symbolic links to them from the 2.6 site-packages directory to the 2.4 site-packages directory.
If they will not run under 2.6, then you may need to re-compile them against 2.6, or install up-to-date versions of them. Just make sure you are using 2.6 when calling "python setup.py"
...
You may want to post this on serverfault.com, if you run into additional challenges. | 0 | 1 | 0 | 1 | 2009-07-04T06:59:00.000 | 4 | 0 | false | 1,081,698 | 0 | 0 | 0 | 3 | I have a problem of upgrading python from 2.4 to 2.6:
I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum").
So what is the right way to migrate/install modules to python2.6? |
Transition from Python2.4 to Python2.6 on CentOS, module migration problem | 1,082,045 | 0 | 3 | 6,767 | 0 | python,linux,upgrade,centos,python-2.6 | Some Python libs may still be inaccessible where site-packages has been changed to dist-packages (as on some distributions).
The only way in that case is to move all the stuff generated in site-packages (e.g. by make install) to dist-packages and create a symlink. | 0 | 1 | 0 | 1 | 2009-07-04T06:59:00.000 | 4 | 0 | false | 1,081,698 | 0 | 0 | 0 | 3 | I have a problem of upgrading python from 2.4 to 2.6:
I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum").
So what is the right way to migrate/install modules to python2.6? |
Running both python 2.6 and 3.1 on the same machine | 1,103,562 | 0 | 2 | 5,310 | 0 | python,linux,python-3.x | Why do you need to use make install at all? After having done make to compile python 3.x, just move the python folder somewhere, and create a symlink to the python executable in your ~/bin directory. Add that directory to your path if it isn't already, and you'll have a working python development version ready to be used. As long as the symlink itself is not named python (I've named mine py), you'll never experience any clashes.
An added benefit is that if you want to change to a new release of python 3.x, for example if you're following the beta releases, you simply download, compile and replace the folder with the new one.
It's slightly messy, but the messiness is confined to one directory, and I find it much more convenient than thinking about altinstalls and the like. | 0 | 1 | 0 | 0 | 2009-07-04T18:02:00.000 | 4 | 0 | false | 1,082,692 | 1 | 0 | 0 | 2 | I'm currently toying with python at home and I'm planning to switch to python 3.1. The fact is that I have some scripts that use python 2.6 and I can't convert them since they use some modules that aren't available for python 3.1 atm. So I'm considering installing python 3.1 along with my python 2.6. I only found people on the internet that achieve that by compiling python from the source and use make altinstall instead of the classic make install. Anyway, I think compiling from the source is a bit complicated. I thought running two different versions of a program is easy on Linux (I run fedora 11 for the record). Any hint?
Thanks for reading. |
Running both python 2.6 and 3.1 on the same machine | 1,082,698 | 1 | 2 | 5,310 | 0 | python,linux,python-3.x | You're not supposed to need to run them together.
2.6 already has all of the 3.0 features. You can enable those features with from __future__ import statements.
It's much simpler to run 2.6 (with some from __future__ imports) until everything you need is in 3.x, then switch. | 0 | 1 | 0 | 0 | 2009-07-04T18:02:00.000 | 4 | 0.049958 | false | 1,082,692 | 1 | 0 | 0 | 2 | I'm currently toying with python at home and I'm planning to switch to python 3.1. The fact is that I have some scripts that use python 2.6 and I can't convert them since they use some modules that aren't available for python 3.1 atm. So I'm considering installing python 3.1 along with my python 2.6. I only found people on the internet that achieve that by compiling python from the source and use make altinstall instead of the classic make install. Anyway, I think compiling from the source is a bit complicated. I thought running two different versions of a program is easy on Linux (I run fedora 11 for the record). Any hint?
Thanks for reading. |
How do I store desktop application data in a cross platform way for python? | 1,084,700 | 0 | 42 | 8,167 | 0 | python,desktop-application,application-settings | Well, for Windows APPDATA (environmental variable) points to a user's "Application Data" folder. Not sure about OSX, though.
The correct way, in my opinion, is to do it on a per-platform basis. | 0 | 1 | 0 | 0 | 2009-07-05T19:45:00.000 | 4 | 0 | false | 1,084,697 | 0 | 0 | 0 | 1 | I have a python desktop application that needs to store user data. On Windows, this is usually in %USERPROFILE%\Application Data\AppName\, on OSX it's usually ~/Library/Application Support/AppName/, and on other *nixes it's usually ~/.appname/.
There exists a function in the standard library, os.path.expanduser that will get me a user's home directory, but I know that on Windows, at least, "Application Data" is localized into the user's language. That might be true for OSX as well.
What is the correct way to get this location?
UPDATE:
Some further research indicates that the correct way to get this on OSX is by using the function NSSearchPathDirectory, but that's Cocoa, so it means calling the PyObjC bridge... |
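A hedged per-platform sketch (the helper name user_data_dir is my own; real apps may prefer a dedicated library or, on OS X, the NSSearchPathDirectory call mentioned above). On Windows the APPDATA environment variable already points at the possibly-localized Application Data folder, which sidesteps the localization problem:

```python
import os
import sys

def user_data_dir(appname):
    """Best-guess per-user data directory for appname (a sketch, not exhaustive)."""
    if sys.platform.startswith("win"):
        # APPDATA resolves to the localized "Application Data" directory
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
        return os.path.join(base, appname)
    if sys.platform == "darwin":
        return os.path.join(
            os.path.expanduser("~/Library/Application Support"), appname)
    # generic *nix fallback: a dot-directory in $HOME
    return os.path.join(os.path.expanduser("~"), "." + appname.lower())
```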
How to check available Python libraries on Google App Engine & add more | 1,085,550 | 1 | 1 | 1,050 | 0 | python,google-app-engine,sqlite | Afaik, you can only use the GAE specific database. | 0 | 1 | 0 | 0 | 2009-07-06T05:23:00.000 | 2 | 0.099668 | false | 1,085,538 | 0 | 0 | 1 | 1 | How to check available Python libraries on Google App Engine & add more?
Is SQLite available or we must use GQL with their database system only?
Thank you in advance. |
Finding a python process running on Windows with TaskManager | 38,227,531 | 0 | 1 | 12,819 | 0 | python,windows,taskmanager | When I right-clicked python in Task Manager and clicked "open file location" (Win 10), it opened the Plex installation folder, so for me it was Plex using python.exe to slow my computer down. A shame, as I use Plex all the time for my Roku. I'll just have to put up with a slow computer and get another one. | 0 | 1 | 0 | 0 | 2009-07-06T13:49:00.000 | 2 | 0 | false | 1,087,087 | 1 | 0 | 0 | 1 | I have several python.exe processes running on my Vista machine and I would like to kill one process using the Windows task manager.
What is the best way to find which process should be killed? I've added the 'command line' column in Task Manager. It can help, but not in all cases.
IPython OS X: Up arrow gives "^[[A" | 1,088,164 | 2 | 10 | 1,800 | 0 | macos,ipython | Solved by completely wiping all of site-packages.
I then re-installed Framework Python, re-installed setuptools, and easy_installed ipython FTW. | 0 | 1 | 0 | 0 | 2009-07-06T16:40:00.000 | 2 | 0.197375 | false | 1,087,975 | 1 | 0 | 0 | 1 | Whenever I hit the up arrow in IPython, instead of getting history, I get this set of characters "^[[A" (not including the quotes).
Hitting the down arrow gives "^[[B", and tab completion doesn't work (just enters a tab).
How can I fix this? It happens in both Terminal and iTerm.
Running OS X 10.5, Framework Python 2.5.4. Error occurs in both ipython 0.8.3 and ipython 0.9.1. pyreadline-2.5.1 egg is installed in both cases.
(edit: SSH-ing to another linux machine and using IPython there works fine. So does running the normal "python" command on the OS X machine.)
Cheers,
- Dan |
want to get mac address of remote PC | 1,092,392 | 1 | 2 | 7,015 | 0 | python,python-3.x | All you can access is what the user sends to you.
MAC address is not part of that data. | 0 | 1 | 1 | 0 | 2009-07-07T13:35:00.000 | 4 | 0.049958 | false | 1,092,379 | 0 | 0 | 0 | 1 | I have my web page in python, I am able to get the IP address of the user, who will be accessing our web page, we want to get the mac address of the user's PC, is it possible in python, we are using Linux PC, we want to get it on Linux. |
How do I check what version of Python is running my script? | 49,765,349 | 1 | 1,402 | 1,741,234 | 0 | python,version | Check Python version: python -V or python --version or apt-cache policy python
you can also run whereis python to see how many versions are installed. | 0 | 1 | 0 | 0 | 2009-07-07T16:17:00.000 | 24 | 0.008333 | false | 1,093,322 | 1 | 0 | 0 | 2 | How can I check what version of the Python Interpreter is interpreting my script? |
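From inside a script, the interpreter's own sys module is more robust than parsing command output; a small sketch:

```python
import sys

# sys.version_info is a tuple (major, minor, micro, releaselevel, serial),
# so scripts can compare against it directly instead of parsing `python -V`.
print(sys.version)
if sys.version_info < (2, 5):
    raise SystemExit("this script needs Python 2.5 or newer")
```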
How do I check what version of Python is running my script? | 17,672,432 | -1 | 1,402 | 1,741,234 | 0 | python,version | If you are working on Linux, just run the python command; the output will look like this:
Python 2.4.3 (#1, Jun 11 2009, 14:09:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more
information. | 0 | 1 | 0 | 0 | 2009-07-07T16:17:00.000 | 24 | -0.008333 | false | 1,093,322 | 1 | 0 | 0 | 2 | How can I check what version of the Python Interpreter is interpreting my script? |
Can I send SIGINT to a Python subprocess on Windows? | 1,095,597 | 1 | 5 | 5,776 | 0 | python,windows,subprocess,sigint,signal-handling | Windows doesn't have the unix signals IPC mechanism.
I would look at sending a CTRL-C to the gdb process. | 0 | 1 | 0 | 0 | 2009-07-08T00:34:00.000 | 1 | 0.197375 | false | 1,095,549 | 0 | 0 | 0 | 1 | I've got a Python script managing a gdb process on Windows, and I need to be able to send a SIGINT to the spawned process in order to halt the target process (managed by gdb)
It appears that there is only SIGTERM available in Win32, but clearly if I run gdb from the console and Ctrl+C, it thinks it's receiving a SIGINT. Is there a way I can fake this such that the functionality is available on all platforms?
(I am using the subprocess module, and python 2.5/2.6) |
Is there a way to plan and diagram an architecture for dynamic scripting languages like groovy or python? | 1,102,219 | 3 | 1 | 359 | 0 | python,architecture,groovy,uml,dynamic-languages | UML isn't too well equipped to handle such things, but you can still use it to communicate your design if you are willing to do some mental mapping. You can find an isomorphism between most dynamic concepts and UMLs static object-model.
For example you can think of a closure as an object implementing a one method interface. It's probably useful to model such interfaces as something a bit more specific than interface Callable { call(args[0..*]: Object) : Object }.
Duck typing can similarly be thought of as an interface. If you have a method that takes something that can quack, model it as taking an object that is a specialization of the interface Quackable { quack() }.
You can use your imagination for other concepts. Keep in mind that the purpose of design diagrams is to communicate ideas. So don't get overly pedantic about modeling everything 100%: think about what you want your diagrams to say, make sure that they say it, and eliminate any extraneous detail that would dilute the message. And if you use some concepts that aren't obvious to your target audience, explain them.
Also, if UML really can't handle what you want to say, try other ways to visualize your message. UML is only a good choice because it gives you a common vocabulary so you don't have to explain every concept on your diagram. | 0 | 1 | 0 | 0 | 2009-07-09T06:23:00.000 | 2 | 0.291313 | false | 1,102,134 | 0 | 0 | 1 | 1 | Say I want to write a large application in groovy, and take advantage of closures, categories and other concepts (that I regularly use to separate concerns). Is there a way to diagram or otherwise communicate in a simple way the architecture of some of this stuff? How do you detail (without verbose documentation) the things that a map of closures might do, for example? I understand that dynamic language features aren't usually recommended on a larger scale because they are seen as complex but does that have to be the case? |
python long running daemon job processor | 1,108,019 | 0 | 2 | 3,378 | 0 | python,web-services,scheduling,long-running-processes | The usual design pattern for a scheduler would be:
Maintain a list of scheduled jobs, sorted by next-run-time (as Date-Time value);
When woken up, compare the first job in the list with the current time. If it's due or overdue, remove it from the list and run it. Continue working your way through the list this way until the first job is not due yet, then go to sleep for (next_job_due_date - current_time);
When a job finishes running, re-schedule it if appropriate;
After adding a job to the schedule, wake up the scheduler process.
Tweak as appropriate for your situation (e.g. sometimes you might want to schedule a job's next run relative to when it starts rather than when it finishes). | 0 | 1 | 0 | 0 | 2009-07-10T05:16:00.000 | 5 | 0 | false | 1,107,826 | 0 | 0 | 0 | 1 | I want to write a long running process (linux daemon) that serves two purposes:
responds to REST web requests
executes jobs which can be scheduled
I originally had it working as a simple program that would run through the jobs and do the updates, which I then cron’d, but now I have the added REST requirement, and would also like to change the frequency of some jobs, but not others (let’s say all jobs have different frequencies).
I have 0 experience writing long running processes, especially ones that do things on their own, rather than responding to requests.
My basic plan is to run the REST part in a separate thread/process, and figured I’d run the jobs part separately.
I’m wondering if there exists any patterns, specifically python, (I’ve looked and haven’t really found any examples of what I want to do) or if anyone has any suggestions on where to begin with transitioning my project to meet these new requirements.
I’ve seen a few projects that touch on scheduling, but I’m really looking for real world user experience / suggestions here. What works / doesn’t work for you? |
Python subprocess question | 1,110,847 | 0 | 4 | 1,227 | 0 | python,subprocess | The short answer is that there is no such thing as a good cross-platform system for process management without designing that concept into your system. This is especially true of the standard libraries. Even the various Unix versions have their own compatibility issues.
Your best bet is to instrument all the processes with the proper event handling to notice events that come in from whatever IPC system works best on whatever platform. Named pipes will be the general route for the problem you describe, but there will be implementation differences on each platform. | 0 | 1 | 0 | 0 | 2009-07-10T17:16:00.000 | 4 | 0 | false | 1,110,804 | 0 | 0 | 0 | 1 | I would like to be able to spawn a process in python and have two way communication. Of course, Pexpect does this and is indeed a way I might go. However, it is not quite ideal.
My ideal situation would be to have a cross platform generic technique that involved only the standard python libraries. Subprocess gets pretty close, but the fact that I have to wait for the process to terminate before safely interacting with it is not desirable.
Looking at the documentation, it does say there are stdin, stdout and stderr file descriptors that I can directly manipulate, but there is a big fat warning that says "Don't Do This". Unfortunately it's not entirely clear why this warning exists, but what I gather from Google is that it is related to OS buffering, and it is possible to write code that unexpectedly deadlocks when those internal buffers fail (as a side note, any examples that show the wrong way and right way would be appreciated).
So, risking potential deadlocks in my code, I thought it might be interesting to use poll or select to interactively read from the running process without killing it. Although I lose (I think) the cross-platform ability, I like the fact that it requires no additional libraries. But more importantly, I would like to know if this is a good idea. I have yet to try this approach, but I am concerned about gotchas that could potentially devastate my program. Can it work? What should I test for?
In my specific case I am not really concerned about being able to write to the process, just repeatedly reading from it. Also, I don't expect my processes to dump huge amounts of text, so I hope to avoid the deadlocking issue, however I would like to know exactly what those limits are and be able to write some tests to see where it breaks down. |
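A sketch of the select-based approach the question describes (Unix-only: on Windows, select works only on sockets). It reads from the child's stdout as data arrives instead of waiting for termination; the function name and timeout are my own choices:

```python
import os
import select
import subprocess
import sys

def read_incrementally(cmd, timeout=5.0):
    """Collect a child process's stdout as it appears (Unix-only sketch)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    fd = proc.stdout.fileno()
    chunks = []
    while True:
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            break                 # nothing arrived within the timeout
        data = os.read(fd, 4096)  # os.read avoids stdio buffering surprises
        if not data:
            break                 # EOF: the child closed its stdout
        chunks.append(data)
    proc.wait()
    return b"".join(chunks)
```

Using os.read on the raw descriptor sidesteps the readline-style blocking that causes the deadlocks the documentation warns about, as long as you never wait for more bytes than select said were ready.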
List of installed fonts OS X / C | 1,113,055 | 1 | 5 | 5,834 | 0 | python,c,macos,fonts | Do you want to write a program to do it, or do you want to use a program to do it? There are many programs that list fonts, xlsfonts comes to mind. | 0 | 1 | 0 | 1 | 2009-07-11T05:45:00.000 | 5 | 0.039979 | false | 1,113,040 | 0 | 0 | 0 | 1 | I'm trying to programatically get a list of installed fonts in C or Python. I need to be able to do this on OS X, does anyone know how? |
How to keep an App Engine/Java app running with deaf requests from a Java/Python web cron? | 4,707,002 | 1 | 3 | 1,731 | 0 | java,python,google-app-engine,httpwebrequest,keep-alive | App Engine also has a new paid feature where you can have it "always-on". Costs about $0.30 USD a day. Just go into your billing settings and enable it if you don't mind paying for the feature. I believe it guarantees you at least 3 instances always running.
(I didn't realize hitting a /ping url which caused an instance to spin up would cause it to exceed the 30 sec limit!) | 0 | 1 | 0 | 0 | 2009-07-11T06:02:00.000 | 4 | 0.049958 | false | 1,113,066 | 0 | 0 | 1 | 1 | App Engine allows you 30 seconds to load your application
My application takes around 30 seconds - sometimes more, sometimes less. I don't know how to fix this.
If the app is idle (does not receive a request for a while), it needs to be re-loaded.
So, to avoid the app needing to be reloaded, I want to simulate user activity by pinging the app every so often.
But there's a catch . . .
If I ping the app and it has already been unloaded by App Engine, my web request will be the first request to the app and the app will try to reload. This could take longer than 30 seconds and exceed the loading time limit.
So my idea is to ping the app but not wait for the response. I have simulated this manually by going to the site from a browser, making the request and immediately closing the browser - it seems to keep the app alive.
Any suggestions for a good way to do this in a Python or Java web cron (I'm assuming a Python solution will be simpler)? |
How do I check if a process is alive in Python on Linux? | 1,114,435 | 14 | 3 | 15,728 | 0 | python,process | os.kill does not kill processes, it sends them signals (it's poorly named).
If you send signal 0, you can determine whether you are allowed to send other signals. An error code will indicate whether it's a permission problem or a missing process.
See man 2 kill for more info.
Also, if the process is your child, you can get a SIGCHLD when it dies, and you can use one of the wait calls to deal with it. | 0 | 1 | 0 | 0 | 2009-07-11T18:11:00.000 | 3 | 1 | false | 1,114,312 | 1 | 0 | 0 | 1 | I have a process id in Python. I know I can kill it with os.kill(), but how do I check if it is alive ? Is there a built-in function or do I have to go to the shell? |
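Putting the answer together as code: signal 0 probes for existence without delivering anything, and EPERM still means the process exists. (A sketch; the helper name is mine.)

```python
import errno
import os

def pid_alive(pid):
    """True if a process with this pid exists (signal 0 probes, never kills)."""
    try:
        os.kill(pid, 0)
    except OSError as err:
        if err.errno == errno.EPERM:
            return True   # the process exists, we just may not signal it
        return False      # ESRCH: no such process
    return True
```

Note the usual caveat: PIDs are recycled, so a "True" result may refer to a different process than the one you originally started, unless it is your own child.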
How do I tell a Python script (cygwin) to work in current (or relative) directories? | 1,117,427 | 0 | 1 | 2,062 | 0 | python,filesystems,cygwin,path,utilities | What happens when you type "ls"? Do you see "infile.txt" listed there? | 0 | 1 | 0 | 0 | 2009-07-13T01:29:00.000 | 3 | 0 | false | 1,117,414 | 0 | 0 | 0 | 1 | I have lots of directories with text files written using (g)vim, and I have written a handful of utilities that I find useful in Python. I start off the utilities with a pound-bang-/usr/bin/env python line in order to use the Python that is installed under cygwin. I would like to type commands like this:
%cd ~/SomeBook
%which pythonUtil
/usr/local/bin/pythonUtil
%pythonUtil ./infile.txt ./outfile.txt
(or % pythonUtil someRelPath/infile.txt somePossiblyDifferentRelPath/outfile.txt)
pythonUtil: Found infile.txt; Writing outfile.txt; Done (or some such, if anything)
However, my pythonUtil programs keep telling me that they can't find infile.txt. If I copy the utility into the current working directory, all is well, but then I have copies of my utilities littering the landscape. What should I be doing?
Yet Another Edit: To summarize --- what I wanted was os.path.abspath('filename'). That returns the absolute pathname as a string, and then all ambiguity has been removed.
BUT: IF the Python being used is the one installed under cygwin, THEN the absolute pathname will be a CYGWIN-relative pathname, like /home/someUser/someDir/someFile.txt. HOWEVER, IF the Python has been installed under Windows (and is here being called from a cygwin terminal commandline), THEN the absolute pathname will be the complete Windows path, from 'drive' on down, like D:\cygwin\home\someUser\someDir\someFile.txt.
Moral: Don't expect the cygwin Python to generate a Windows-complete absolute pathname for a file not rooted at /; it's beyond its event horizon. However, you can reach out to any file on a WinXP system with the cygwin-python if you specify the file's path using the "/cygdrive/driveLetter" leadin convention.
Remark: Don't use '\'s for separators in the WinXP path on the cygwin commandline; use '/'s and trust the snake. No idea why, but some separators may be dropped and the path may be modified to include extra levels, such as "Documents and Settings\someUser" and other Windows nonsense.
Thanks to the responders for shoving me in the right direction. |
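A minimal sketch of the os.path.abspath() approach summarized above:

```python
import os

# Turn relative paths typed on the command line into unambiguous
# absolute paths before opening anything.  Under a cygwin-installed
# Python the result is a POSIX-style path; under a Windows-installed
# Python it is a drive-letter path.
infile = os.path.abspath('./infile.txt')
outfile = os.path.abspath('somePath/../outfile.txt')  # abspath also normalizes the ..
```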
How Do I Use Raw Socket in Python? | 1,186,810 | 2 | 46 | 109,072 | 0 | python,sockets,raw-sockets | Eventually the best solution for this case was to write the entire thing in C, because it's not a big application, so it would've incurred greater penalty to write such a small thing in more than 1 language.
After much toying with both the C and Python raw sockets, I eventually preferred the C raw sockets. Raw sockets require bit-level manipulation of fields smaller than 8 bits when writing the packet headers, sometimes only 4 bits or fewer. Python provides no assistance for this, whereas Linux C has a full API for it.
But I definitely believe that if only this little bit of header initialization was handled conveniently in python, I would've never used C here. | 0 | 1 | 1 | 0 | 2009-07-13T06:36:00.000 | 8 | 0.049958 | false | 1,117,958 | 0 | 0 | 0 | 1 | I am writing an application to test a network driver for handling corrupted data. And I thought of sending this data using raw socket, so it will not be corrected by the sending machine's TCP-IP stack.
I am writing this application solely on Linux. I have code examples of using raw sockets in system-calls, but I would really like to keep my test as dynamic as possible, and write most if not all of it in Python.
I have googled the web a bit for explanations and examples of the usage of raw sockets in Python, but haven't found anything really enlightening, just a very old code example that demonstrates the idea but by no means works.
From what I gathered, raw socket usage in Python is nearly identical in semantics to UNIX's raw sockets, but without the structs that define the packet structure.
I was wondering if it would even be better not to write the raw socket part of the test in Python, but in C with system-calls, and call it from the main Python code? |
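To illustrate the sub-byte header packing discussed above: struct handles whole bytes and 16/32-bit words, but fields like IPv4's 4-bit version and 4-bit IHL share one byte and must be combined by hand with shifts. A rough sketch (field layout per RFC 791; the checksum is left at zero here, which the Linux kernel reportedly fills in for raw sockets opened with IP_HDRINCL):

```python
import socket
import struct

def ipv4_header(src_ip, dst_ip, payload_len, proto=socket.IPPROTO_TCP, ttl=64):
    """Pack a minimal 20-byte IPv4 header for use with a raw socket."""
    version_ihl = (4 << 4) | 5        # version=4, IHL=5 words: sub-byte packing
    total_len = 20 + payload_len
    return struct.pack('!BBHHHBBH4s4s',
                       version_ihl, 0, total_len,
                       0, 0,                      # identification, flags/fragment
                       ttl, proto, 0,             # checksum 0 (kernel fills it in)
                       socket.inet_aton(src_ip),
                       socket.inet_aton(dst_ip))
```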
Migrating Django Application to Google App Engine? | 1,118,790 | 1 | 9 | 1,845 | 1 | python,django,google-app-engine | There are a few things that you can't do on App Engine that you can do on your own server, like uploading of files. On App Engine you instead have to accept the upload and store it in the datastore, which can cause a few problems.
Other than that, it should be fine from the presentation side. There are a number of other little things that are better on your own dedicated server, but I think eventually a lot of those things will be in App Engine.
I have a basic understanding of Google's data store, so please assume I will choose a column based database for my "stand-alone" Django application rather than a relational database, so that the schema could remain mostly the same and will not be a major factor.
Also, please assume my application does not maintain a huge amount of data, so that migration of tens of gigabytes is not required. I'm mainly interested in the effects on the code and software architecture.
Thanks |
Migrating Django Application to Google App Engine? | 1,119,377 | 8 | 9 | 1,845 | 1 | python,django,google-app-engine | Most (all?) of Django is available in GAE, so your main task is to avoid basing your designs around a reliance on anything from Django or the Python standard libraries which is not available on GAE.
You've identified the glaring difference, which is the database, so I'll assume you're on top of that. Another difference is the tie-in to Google Accounts and hence that if you want, you can do a fair amount of access control through the app.yaml file rather than in code. You don't have to use any of that, though, so if you don't envisage switching to Google Accounts when you switch to GAE, no problem.
I think the differences in the standard libraries can mostly be deduced from the fact that GAE has no I/O and no C-accelerated libraries unless explicitly stated, and my experience so far is that things I've expected to be there, have been there. I don't know Django and haven't used it on GAE (apart from templates), so I can't comment on that.
Personally I probably wouldn't target LAMP (where P = Django) with the intention of migrating to GAE later. I'd develop for both together, and try to ensure if possible that the differences are kept to the very top (configuration) and the very bottom (data model). The GAE version doesn't necessarily have to be perfect, as long as you know how to make it perfect should you need it.
It's not guaranteed that this is faster than writing and then porting, but my guess is it normally will be. The easiest way to spot any differences is to run the code, rather than relying on not missing anything in the GAE docs, so you'll likely save some mistakes that need to be unpicked. The Python SDK is a fairly good approximation to the real App Engine, so all or most of your tests can be run locally most of the time.
Of course if you eventually decide not to port then you've done unnecessary work, so you have to think about the probability of that happening, and whether you'd consider the GAE development to be a waste of your time if it's not needed. | 0 | 1 | 0 | 0 | 2009-07-13T10:40:00.000 | 4 | 1.2 | true | 1,118,761 | 0 | 0 | 1 | 2 | I'm developing a web application and considering Django, Google App Engine, and several other options. I wondered what kind of "penalty" I will incur if I develop a complete Django application assuming it runs on a dedicated server, and then later want to migrate it to Google App Engine.
I have a basic understanding of Google's data store, so please assume I will choose a column based database for my "stand-alone" Django application rather than a relational database, so that the schema could remain mostly the same and will not be a major factor.
Also, please assume my application does not maintain a huge amount of data, so that migration of tens of gigabytes is not required. I'm mainly interested in the effects on the code and software architecture.
Thanks |
Platform for developing all things google? | 1,121,377 | 0 | 4 | 263 | 0 | java,python,android,platform | I'd throw down another vote for Eclipse. I've been using it on the mac and I find it to be very buggy. Not sure if that's just the nature of the beast... My experiences with it on XP have been more stable. Haven't had time to check it out on Ubuntu. | 0 | 1 | 0 | 0 | 2009-07-13T15:48:00.000 | 3 | 0 | false | 1,120,297 | 0 | 0 | 1 | 2 | I am interested in developing things for google apps and android using python and java. I am new to both and was wondering if a environment set in windows or linux would be more productive for these tasks? |
Platform for developing all things google? | 1,122,211 | 0 | 4 | 263 | 0 | java,python,android,platform | Internally, I believe Google uses Eclipse running on Ubuntu for Android development, so that'd be your best bet if you're completely paranoid about avoiding all potential issues. Of course, this is impossible, and really you should just use whatever you're comfortable in. | 0 | 1 | 0 | 0 | 2009-07-13T15:48:00.000 | 3 | 0 | false | 1,120,297 | 0 | 0 | 1 | 2 | I am interested in developing things for google apps and android using python and java. I am new to both and was wondering if a environment set in windows or linux would be more productive for these tasks? |
bash/cygwin/$PATH: Do I really have to reboot to alter $PATH? | 1,123,139 | 1 | 5 | 9,783 | 0 | python,bash,path,cygwin,reboot | A couple of things to try and rule out at least:
Do you get the same behavior as the following from the shell? (Pasted from my cygwin, which works as expected.)
$ echo $PATH
/usr/local/bin:/usr/bin:/bin
$ export PATH=$PATH:/cygdrive/c/python/bin
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/cygdrive/c/python/bin
Is your bashrc setting the PATH in a similar way to the above? (i.e. the second command).
Does your bashrc contain a "source" or "." command anywhere? (Maybe it's sourcing another file which overwrites your PATH variable.) | 0 | 1 | 0 | 0 | 2009-07-14T00:45:00.000 | 4 | 0.049958 | false | 1,122,924 | 1 | 0 | 0 | 2 | I wanted to use the Python installed under cygwin rather than one installed under WinXP directly, so I edited ~/.bashrc and sourced it. Nothing changed. I tried other things, but nothing I did changed $PATH in any way. So I rebooted. Aha; now $PATH has changed to what I wanted.
But, can anyone explain WHY this happened? When do changes to the environment (and its variables) made via cygwin (and bash) take effect only after a reboot?
(Is this any way to run a railroad?) (This question is unlikely to win any points, but I'm curious, and I'm also tired of wading through docs which don't help on this point.) |
bash/cygwin/$PATH: Do I really have to reboot to alter $PATH? | 1,123,086 | 2 | 5 | 9,783 | 0 | python,bash,path,cygwin,reboot | If you want your changes to be permanent, you should modify the proper file (.bashrc in this case) and perform ONE of the following actions:
Restart the cygwin window
source .bashrc (This should work, even if it is not working for you)
. .bashrc (that is dot <space> <filename>)
However, .bashrc is used by default when using a BASH shell, so if you are using another shell (csh, ksh, zsh, etc) then your changes will not be reflected by modifying .bashrc. | 0 | 1 | 0 | 0 | 2009-07-14T00:45:00.000 | 4 | 0.099668 | false | 1,122,924 | 1 | 0 | 0 | 2 | I wanted to use the Python installed under cygwin rather than one installed under WinXP directly, so I edited ~/.bashrc and sourced it. Nothing changed. I tried other things, but nothing I did changed $PATH in any way. So I rebooted. Aha; now $PATH has changed to what I wanted.
But, can anyone explain WHY this happened? When do changes to the environment (and its variables) made via cygwin (and bash) take effect only after a reboot?
(Is this any way to run a railroad?) (This question is unlikely to win any points, but I'm curious, and I'm also tired of wading through docs which don't help on this point.) |
How can I find path to given file? | 1,124,862 | 0 | 11 | 91,787 | 0 | python | Uh... This question is a bit unclear.
What do you mean "have"? Do you have the name of the file? Have you opened it? Is it a file object? Is it a file descriptor? What???
If it's a name, what do you mean with "find"? Do you want to search for the file in a bunch of directories? Or do you know which directory it's in?
If it is a file object, then you must have opened it, reasonably, and then you know the path already, although you can get the filename from fileob.name too. | 0 | 1 | 0 | 0 | 2009-07-14T11:30:00.000 | 6 | 0 | false | 1,124,810 | 1 | 0 | 0 | 1 | I have a file, for example "something.exe" and I want to find path to this file
How can I do this in python? |
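If "find" means searching a list of directories (such as those on PATH), a sketch might look like the following; note the standard library's shutil.which() does essentially this for executables:

```python
import os

def find_file(name, search_dirs=None):
    """Return the full path of `name`, scanning `search_dirs`
    (defaulting to the directories on PATH); None if not found."""
    if search_dirs is None:
        search_dirs = os.environ.get('PATH', '').split(os.pathsep)
    for d in search_dirs:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            return candidate
    return None
```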
Python: Persistent shell variables in subprocess | 1,126,137 | 13 | 8 | 3,185 | 0 | python,shell,variables,subprocess,persistent | subprocess.Popen takes an optional named argument env that's a dictionary to use as the subprocess's environment (what you're describing as "shell variables"). Prepare a dict as you need it (you may start with a copy of os.environ and alter that as you need) and pass it to all the subprocess.Popen calls you perform. | 0 | 1 | 0 | 0 | 2009-07-14T15:19:00.000 | 2 | 1.2 | true | 1,126,116 | 0 | 0 | 0 | 1 | I'm trying to execute a series of commands using Python's subprocess module; however, I need to set shell variables with export before running them. Of course the shell doesn't seem to be persistent, so when I run a command later those shell variables are lost.
Is there any way to go about this? I could create a /bin/sh process, but how would I get the exit codes of the commands run under that? |
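The accepted answer's approach, sketched out (MY_EXPORTED_VAR is just an illustrative name, and subprocess.run is used here for brevity):

```python
import os
import subprocess

# Build the environment once: copy the current one, add the would-be
# `export`s, and hand the same dict to every subsequent call.
env = os.environ.copy()
env['MY_EXPORTED_VAR'] = 'hello'   # equivalent of: export MY_EXPORTED_VAR=hello

first = subprocess.run(['sh', '-c', 'echo "$MY_EXPORTED_VAR"'],
                       env=env, capture_output=True, text=True)
second = subprocess.run(['sh', '-c', 'echo "$MY_EXPORTED_VAR again"'],
                        env=env, capture_output=True, text=True)
```

Exit codes of each command are available as the returncode attribute of the objects subprocess.run returns.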
Google App engine template unicode decoding problem | 1,140,751 | 2 | 2 | 2,383 | 0 | python,django,google-app-engine,unicode | Are you using Django 0.96 or Django 1.0? You can check by looking at your main.py and seeing if it contains the following:
from google.appengine.dist import use_library
use_library('django', '1.0')
If you're using Django 1.0, both FILE_CHARSET and DEFAULT_CHARSET should default to 'utf-8'. If your template is saved under a different encoding, just set FILE_CHARSET to whatever that is.
If you're using Django 0.96, you might want to try directly reading the template from the disk and then manually handling the encoding.
e.g., replace
template.render( templatepath , template_values)
with
Template(unicode(template_fh.read(), 'utf-8')).render(template_values) | 0 | 1 | 0 | 0 | 2009-07-16T17:40:00.000 | 3 | 0.132549 | false | 1,139,151 | 0 | 0 | 1 | 3 | When trying to render a Django template file in Google App Engine
from google.appengine.ext.webapp import template
templatepath = os.path.join(os.path.dirname(file), 'template.html')
self.response.out.write (template.render( templatepath , template_values))
I come across the following error:
<type
'exceptions.UnicodeDecodeError'>:
'ascii' codec can't decode byte 0xe2
in position 17692: ordinal not in
range(128)
args = ('ascii', '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
--> ', 17692, 17693, 'ordinal not in range(128)')
encoding = 'ascii'
end = 17693
message = ''
object = '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
-->
reason = 'ordinal not in range(128)'
start = 17692
It seems that the underlying django template engine has assumed the "ascii" encoding, which should have been "utf-8".
Anyone who knows what might have caused the trouble and how to solve it?
Thanks. |
Google App engine template unicode decoding problem | 1,139,534 | 1 | 2 | 2,383 | 0 | python,django,google-app-engine,unicode | Did you check in your text editor that the template is encoded in utf-8? | 0 | 1 | 0 | 0 | 2009-07-16T17:40:00.000 | 3 | 0.066568 | false | 1,139,151 | 0 | 0 | 1 | 3 | When trying to render a Django template file in Google App Engine
from google.appengine.ext.webapp import template
templatepath = os.path.join(os.path.dirname(file), 'template.html')
self.response.out.write (template.render( templatepath , template_values))
I come across the following error:
<type
'exceptions.UnicodeDecodeError'>:
'ascii' codec can't decode byte 0xe2
in position 17692: ordinal not in
range(128)
args = ('ascii', '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
--> ', 17692, 17693, 'ordinal not in range(128)')
encoding = 'ascii'
end = 17693
message = ''
object = '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
-->
reason = 'ordinal not in range(128)'
start = 17692
It seems that the underlying django template engine has assumed the "ascii" encoding, which should have been "utf-8".
Anyone who knows what might have caused the trouble and how to solve it?
Thanks. |
Google App engine template unicode decoding problem | 1,141,420 | 6 | 2 | 2,383 | 0 | python,django,google-app-engine,unicode | Well, turns out the rendered results returned by the template needs to be decoded first:
self.response.out.write (template.render( templatepath , template_values).decode('utf-8') )
A silly mistake, but thanks for everyone's answers anyway. :) | 0 | 1 | 0 | 0 | 2009-07-16T17:40:00.000 | 3 | 1.2 | true | 1,139,151 | 0 | 0 | 1 | 3 | When trying to render a Django template file in Google App Engine
from google.appengine.ext.webapp import template
templatepath = os.path.join(os.path.dirname(file), 'template.html')
self.response.out.write (template.render( templatepath , template_values))
I come across the following error:
<type
'exceptions.UnicodeDecodeError'>:
'ascii' codec can't decode byte 0xe2
in position 17692: ordinal not in
range(128)
args = ('ascii', '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
--> ', 17692, 17693, 'ordinal not in range(128)')
encoding = 'ascii'
end = 17693
message = ''
object = '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
-->
reason = 'ordinal not in range(128)'
start = 17692
It seems that the underlying django template engine has assumed the "ascii" encoding, which should have been "utf-8".
Anyone who knows what might have caused the trouble and how to solve it?
Thanks. |
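The failure mode above, reduced to a minimal example: byte strings holding UTF-8 data (0xe2 begins en dashes, curly quotes, and the like) blow up when implicitly decoded as ASCII, and decoding explicitly with the real encoding, as in the accepted answer, is the fix:

```python
# Bytes containing characters outside ASCII (the en dash encodes as
# 0xe2 0x80 0x93 in UTF-8).
raw = 'a beautiful – method'.encode('utf-8')

try:
    raw.decode('ascii')          # what the template engine effectively did
    ascii_failed = False
except UnicodeDecodeError:
    ascii_failed = True

text = raw.decode('utf-8')       # explicit decode with the real encoding
```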
Python fails to execute firefox webbrowser from a root executed script with privileges drop | 1,140,199 | 1 | 0 | 903 | 0 | python,browser,debian,uid | This could be your environment. Changing the permissions will still leave environment variables like $HOME pointing at the root user's directory, which will be inaccessible. It may be worth trying altering these variables by changing os.environ before launching the browser. There may also be other variables worth checking. | 0 | 1 | 1 | 0 | 2009-07-16T19:38:00.000 | 1 | 1.2 | true | 1,139,835 | 0 | 0 | 0 | 1 | I can't run firefox from a sudoed python script that drops privileges to normal user. If i write
$ sudo python
>>> import os
>>> import pwd, grp
>>> uid = pwd.getpwnam('norby')[2]
>>> gid = grp.getgrnam('norby')[2]
>>> os.setegid(gid)
>>> os.seteuid(uid)
>>> import webbrowser
>>> webbrowser.get('firefox').open('www.google.it')
True
>>> # It returns true but doesn't work
>>> from subprocess import Popen,PIPE
>>> p = Popen('firefox www.google.it', shell=True,stdout=PIPE,stderr=PIPE)
>>> # Doesn't execute the command
>>> You shouldn't really run Iceweasel through sudo WITHOUT the -H option.
Continuing as if you used the -H option.
No protocol specified
Error: cannot open display: :0
I think this is not a Python problem but a firefox/iceweasel/Debian configuration problem. Maybe Firefox reads only the UID and not the EUID, and doesn't execute the process because the UID equals 0. What do you think?
MPI signal handling | 1,149,142 | 1 | 5 | 2,988 | 0 | python,signals,mpi | If you use mpirun --nw, then mpirun itself should terminate as soon as it's started the subprocesses, instead of waiting for their termination; if that's acceptable then I believe your processes would be able to catch their own signals. | 0 | 1 | 0 | 1 | 2009-07-17T21:18:00.000 | 3 | 0.066568 | false | 1,145,741 | 0 | 0 | 0 | 1 | When using mpirun, is it possible to catch signals (for example, the SIGINT generated by ^C) in the code being run?
For example, I'm running a parallelized python code. I can except KeyboardInterrupt to catch those errors when running python blah.py by itself, but I can't when doing mpirun -np 1 python blah.py.
Does anyone have a suggestion? Even finding how to catch signals in a C or C++ compiled program would be a helpful start.
If I send a signal to the spawned Python processes, they can handle the signals properly; however, signals sent to the parent orterun process (i.e. from exceeding wall time on a cluster, or pressing control-C in a terminal) will kill everything immediately. |
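For reference, a plain (non-MPI) Python signal handler looks like the sketch below; as the question notes, the catch under mpirun is that signals sent to the parent orterun process may never reach a handler like this, which only fires for signals delivered to the Python process itself:

```python
import os
import signal

caught = []

def handler(signum, frame):
    # record what arrived instead of letting the default action run
    caught.append(signum)

# Install the handler, then deliver the signal to ourselves to show it fires.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)
```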
How to install Python 3rd party libgmail-0.1.11.tar.tar into Python in Windows XP home? | 1,148,018 | 2 | 1 | 3,071 | 0 | python | Extract the archive to a temporary directory, and type "python setup.py install". | 0 | 1 | 0 | 0 | 2009-07-18T14:45:00.000 | 5 | 0.07983 | false | 1,147,713 | 1 | 0 | 0 | 1 | I do not know Python, I have installed it only and downloaded the libgmail package. So, please give me verbatim steps in installing the libgmail library. My python directory is c:\python26, so please do not skip any steps in the answer.
Thanks! |
Using Task Queues to schedule the fetching/parsing of a number of feeds in App Engine (Python) | 1,148,720 | 2 | 0 | 365 | 0 | python,google-app-engine,feed | 2 fetches per task? 3? | 0 | 1 | 0 | 0 | 2009-07-18T22:04:00.000 | 3 | 0.132549 | false | 1,148,709 | 0 | 0 | 1 | 2 | Say I had over 10,000 feeds that I wanted to periodically fetch/parse.
If the period were say 1h that would be 24x10000 = 240,000 fetches.
The current 10k limit of the labs Task Queue API would preclude one from
setting up one task per fetch. How then would one do this?
Update: Re: fetching n urls per task: given the 30-second timeout per request, at some point this would hit a ceiling. Is there any way to parallelize it, so that each task initiates a bunch of async parallel fetches, each of which would take less than 30 seconds to finish, but the lot together may take more than that?
Using Task Queues to schedule the fetching/parsing of a number of feeds in App Engine (Python) | 1,148,729 | 0 | 0 | 365 | 0 | python,google-app-engine,feed | Group up the fetches, so instead of queuing 1 fetch you queue up, say, a work unit that does 10 fetches. | 0 | 1 | 0 | 0 | 2009-07-18T22:04:00.000 | 3 | 0 | false | 1,148,709 | 0 | 0 | 1 | 2 | Say I had over 10,000 feeds that I wanted to periodically fetch/parse.
If the period were say 1h that would be 24x10000 = 240,000 fetches.
The current 10k limit of the labs Task Queue API would preclude one from
setting up one task per fetch. How then would one do this?
Update: Re: fetching n urls per task: given the 30-second timeout per request, at some point this would hit a ceiling. Is there any way to parallelize it, so that each task initiates a bunch of async parallel fetches, each of which would take less than 30 seconds to finish, but the lot together may take more than that?
How can I perform a ping or traceroute using native python? | 2,974,474 | 0 | 10 | 34,815 | 0 | python,ping,traceroute | ICMP Ping is standard as part of the ICMP protocol.
Traceroute uses features of ICMP and IP to determine a path via Time To Live (TTL) values. Using TTL values, you can do traceroutes over a variety of protocols as long as IP/ICMP work, because it is the ICMP TTL Exceeded messages that tell you about each hop in the path.
If you attempt to access a port where no listener is available, by ICMP protocol rules, the host is supposed to send an ICMP Port Unreachable message. | 0 | 1 | 0 | 1 | 2009-07-20T05:01:00.000 | 7 | 0 | false | 1,151,771 | 0 | 0 | 0 | 1 | I would like to be able to perform a ping and traceroute from within Python without having to execute the corresponding shell commands so I'd prefer a native python solution. |
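Doing ping natively means assembling the ICMP echo request, including the RFC 1071 internet checksum, yourself; the raw socket itself (which requires root) is omitted in this sketch:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b'\x00'                      # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b'ping') -> bytes:
    """Pack an ICMP echo request (type 8, code 0) with a valid checksum."""
    header = struct.pack('!BBHHH', 8, 0, 0, ident, seq)   # checksum field = 0
    cksum = inet_checksum(header + payload)
    return struct.pack('!BBHHH', 8, 0, cksum, ident, seq) + payload
```

A packet carrying a correct checksum verifies to 0 when the checksum is recomputed over the whole packet, which makes the builder easy to self-check.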
how do I read everything currently in a subprocess.stdout pipe and then return? | 1,161,599 | 2 | 1 | 818 | 0 | python,linux | You should loop using read() against a set number of characters. | 0 | 1 | 0 | 0 | 2009-07-21T20:28:00.000 | 2 | 0.197375 | false | 1,161,580 | 1 | 0 | 0 | 1 | I'm using Python's subprocess module to interact with a program via the stdin and stdout pipes. If I call the subprocess's readline() on stdout, it hangs because it is waiting for a newline.
How can I do a read of all the characters in the stdout pipe of a subprocess instance? If it matters, I'm running in Linux. |
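One way to flesh out "loop using read()" on Linux: use select() to check whether bytes are ready, then os.read() on the pipe's file descriptor, which returns whatever is available instead of blocking for a newline. A sketch:

```python
import os
import select

def read_available(pipe, timeout=0.1):
    """Read whatever bytes are currently available on `pipe` without
    waiting for a newline; returns b'' if nothing is ready."""
    chunks = []
    fd = pipe.fileno()
    while True:
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            break                       # nothing more to read right now
        chunk = os.read(fd, 4096)       # returns immediately with what's there
        if not chunk:
            break                       # EOF: the other end closed the pipe
        chunks.append(chunk)
        timeout = 0                     # drain without further waiting
    return b''.join(chunks)
```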
Change current process environment's LD_LIBRARY_PATH | 1,178,878 | 0 | 26 | 27,232 | 0 | python,shared-libraries,environment-variables | In my experience trying to change the way the loader works for a running Python is very tricky; probably OS/version dependent; may not work. One work-around that might help in some circumstances is to launch a sub-process that changes the environment parameter using a shell script and then launch a new Python using the shell. | 0 | 1 | 0 | 0 | 2009-07-24T14:33:00.000 | 4 | 0 | false | 1,178,094 | 1 | 0 | 0 | 1 | Is it possible to change environment variables of current process?
More specifically, in a Python script I want to change LD_LIBRARY_PATH so that when a module 'x' that depends on some xyz.so is imported, xyz.so is taken from my given path in LD_LIBRARY_PATH.
Is there any other way to dynamically change the path from which the library is loaded?
Edit: I think I need to mention that I have already tried things like
os.environ["LD_LIBRARY_PATH"] = mypath
os.putenv('LD_LIBRARY_PATH', mypath)
but these modify the environment for spawned sub-processes, not the current process, and module loading doesn't consider the new LD_LIBRARY_PATH
Edit 2: So the question is, can we change the environment or something else so that the library loader sees it and loads from there?
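Because the dynamic loader reads LD_LIBRARY_PATH only at process startup, a common workaround (sketched below; /opt/mylibs is just a placeholder path) is to set the variable and re-exec the interpreter so the replacement process, and anything it loads, sees it:

```python
import os
import sys

def ensure_ld_library_path(path):
    """Re-exec the current interpreter with LD_LIBRARY_PATH set.

    Mutating os.environ alone cannot affect libraries loaded later in
    this process, because the loader consulted the variable at startup;
    execv replaces the process with one that starts with the right
    environment (os.environ mutations are inherited across execv).
    """
    if path in os.environ.get('LD_LIBRARY_PATH', '').split(os.pathsep):
        return                          # already set: nothing to do
    os.environ['LD_LIBRARY_PATH'] = (
        path + os.pathsep + os.environ.get('LD_LIBRARY_PATH', '')
    ).rstrip(os.pathsep)
    os.execv(sys.executable, [sys.executable] + sys.argv)
```

Call this at the very top of the script, before importing the module that depends on xyz.so.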
How to set LANG variable in Windows? | 1,180,593 | 6 | 4 | 16,763 | 0 | python,windows,locale | Windows locale support doesn't rely on LANG variable (or, indeed, any other environmental variable). It is whatever the user set it to in Control Panel. | 0 | 1 | 0 | 0 | 2009-07-24T23:01:00.000 | 2 | 1.2 | true | 1,180,590 | 1 | 0 | 0 | 1 | I'm making an application that supports multi language. And I am using gettext and locale to solve this issue.
How to set LANG variable in Windows? In Linux and Unix-like systems it's just as simple as
$ LANG=en_US python appname.py
And it will automatically set the locale to that particular language. But in Windows, the
C:\>SET LANG=en_US python appname.py
or
C:\>SET LANG=en_US
C:\>python appname.py
doesn't work. |
How can I launch a background process in Pylons? | 1,182,609 | 1 | 3 | 1,069 | 0 | python,background,pylons | I think this has little to do with pylons. I would do it (in whatever framework) in these steps:
generate some ID for the new job, and add a record in the database.
create a new process, e.g. through the subprocess module, and pass the ID on the command line (*).
have the process write its output to /tmp/project/ID
in pylons, implement URLs of the form /job/ID or /job?id=ID. That will check in the database whether the job is completed or not, and merge the temporary output into the page.
(*) It might be better for the subprocess to create another process immediately, and have the pylons process wait for the first child, so that there will be no zombie processes. | 0 | 1 | 0 | 0 | 2009-07-25T17:36:00.000 | 2 | 0.099668 | false | 1,182,587 | 0 | 0 | 1 | 1 | I am trying to write an application that will allow a user to launch a fairly long-running process (5-30 seconds). It should then allow the user to check the output of the process as it is generated. The output will only be needed for the user's current session so nothing needs to be stored long-term. I have two questions regarding how to accomplish this while taking advantage of the Pylons framework:
What is the best way to launch a background process such as this with a Pylons controller?
What is the best way to get the output of the background process back to the user? (Should I store the output in a database, in session data, etc.?)
Edit:
The problem is if I launch a command using subprocess in a controller, the controller waits for the subprocess to finish before continuing, showing the user a blank page that is just loading until the process is complete. I want to be able to redirect the user to a status page immediately after starting the subprocess, allowing it to complete on its own. |
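Steps 2 and 3 of the answer above might look roughly like this (names are illustrative; a real app would key the output file by the job ID from step 1):

```python
import os
import subprocess
import tempfile

def launch_job(cmd):
    """Start cmd in the background with its output going to a temp file.

    The controller can return immediately after calling this; a later
    status request just reads the file to show output generated so far.
    """
    fd, out_path = tempfile.mkstemp(prefix='job-', suffix='.log')
    proc = subprocess.Popen(cmd, stdout=fd, stderr=subprocess.STDOUT)
    os.close(fd)                    # the child holds its own copy of the fd
    return proc, out_path

def job_output(out_path):
    """Return whatever the job has written so far."""
    with open(out_path, 'rb') as f:
        return f.read()
```

Because Popen does not wait for the child, the controller is free to redirect the user to a status page immediately.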
How to associate py extension with python launcher on Mac OS X? | 1,185,893 | 3 | 5 | 14,435 | 0 | python,macos | The file associations are done with the "Get Info". You select your .PY file, select the File menu; Get Info menu item.
Mid-way down the Get Info page is "Open With".
You can pick the Python Launcher. There's a Change All.. button that changes the association for all .py files. | 0 | 1 | 0 | 0 | 2009-07-26T23:01:00.000 | 4 | 0.148885 | false | 1,185,817 | 1 | 0 | 0 | 3 | Does anyone know how to associate the py extension with the python interpreter on Mac OS X 10.5.7? I have gotten as far as selecting the application with which to associate it (/System/Library/Frameworks/Python.framework/Versions/2.5/bin/python), but the python executable appears as a non-selectable grayed-out item. Any ideas? |
How to associate py extension with python launcher on Mac OS X? | 28,869,258 | 0 | 5 | 14,435 | 0 | python,macos | The default python installation (atleast on 10.6.8) includes the Python Launcher.app in /System/Library/Frameworks/Python.framework/Resources/, which is aliased to the latest/current version of Python installed on the system. This application launches terminal and sets the right environment to run the script. | 0 | 1 | 0 | 0 | 2009-07-26T23:01:00.000 | 4 | 0 | false | 1,185,817 | 1 | 0 | 0 | 3 | Does anyone know how to associate the py extension with the python interpreter on Mac OS X 10.5.7? I have gotten as far as selecting the application with which to associate it (/System/Library/Frameworks/Python.framework/Versions/2.5/bin/python), but the python executable appears as a non-selectable grayed-out item. Any ideas? |
How to associate py extension with python launcher on Mac OS X? | 1,185,899 | 6 | 5 | 14,435 | 0 | python,macos | The python.org OS X Python installers include an application called "Python Launcher.app" which does exactly what you want. It gets installed into /Applications /Python n.n/ for n.n > 2.6 or /Applications/MacPython n.n/ for 2.5 and earlier. In its preference panel, you can specify which Python executable to launch; it can be any command-line path, including the Apple-installed one at /usr/bin/python2.5. You will also need to ensure that .py is associated with "Python Launcher"; you can use the Finder's Get Info command to do that as described elsewhere. Be aware, though, that this could be a security risk if downloaded .py scripts are automatically launched by your browser(s). (Note, the Apple-supplied Python in 10.5 does not include "Python Launcher.app"). | 0 | 1 | 0 | 0 | 2009-07-26T23:01:00.000 | 4 | 1 | false | 1,185,817 | 1 | 0 | 0 | 3 | Does anyone know how to associate the py extension with the python interpreter on Mac OS X 10.5.7? I have gotten as far as selecting the application with which to associate it (/System/Library/Frameworks/Python.framework/Versions/2.5/bin/python), but the python executable appears as a non-selectable grayed-out item. Any ideas? |
Parallel SSH in Python | 1,185,871 | 1 | 3 | 7,048 | 0 | python,ssh,parallel-processing | You can simply use subprocess.Popen for that purpose, without any problems.
However, you might want to simply install cronjobs on the remote machines. :-) | 0 | 1 | 1 | 1 | 2009-07-26T23:19:00.000 | 6 | 0.033321 | false | 1,185,855 | 0 | 0 | 0 | 4 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connection.
Thanks. |
Parallel SSH in Python | 1,185,880 | 1 | 3 | 7,048 | 0 | python,ssh,parallel-processing | Reading the paramiko API docs, it looks like it is possible to open one ssh connection, and multiplex as many ssh tunnels on top of that as are wished. Common ssh clients (openssh) often do things like this automatically behind the scene if there is already a connection open. | 0 | 1 | 1 | 1 | 2009-07-26T23:19:00.000 | 6 | 0.033321 | false | 1,185,855 | 0 | 0 | 0 | 4 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connection.
Thanks. |
Parallel SSH in Python | 1,188,586 | 3 | 3 | 7,048 | 0 | python,ssh,parallel-processing | Yes, you can do this with paramiko.
If you're connecting to one server, you can run multiple channels through a single connection. If you're connecting to multiple servers, you can start multiple connections in separate threads. No need to manage multiple processes, although you could substitute the multiprocessing module for the threading module and have the same effect.
I haven't looked into twisted conch in a while, but it looks like it is getting updates again, which is nice. I couldn't give you a good feature comparison between the two, but I find paramiko is easier to get going. It takes a little more effort to get into twisted, but it could be well worth it if you're doing other network programming. | 0 | 1 | 1 | 1 | 2009-07-26T23:19:00.000 | 6 | 0.099668 | false | 1,185,855 | 0 | 0 | 0 | 4 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connection.
Thanks. |
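The threads-per-connection pattern from the answer above can be sketched independently of the SSH library itself. In this sketch, run_on_host is a hypothetical placeholder standing in for the real SSH call (e.g. paramiko's SSHClient.connect plus exec_command), so only the fan-out structure is shown:

```python
import threading

def run_on_host(host, command, results):
    # Placeholder for the real per-host work: with paramiko this would
    # open a connection and call exec_command(command). Here we just
    # record what would have been run so the pattern is runnable as-is.
    results[host] = "ran %r on %s" % (command, host)

def fan_out(hosts, command):
    # One thread per host; each thread writes its result into a shared
    # dict keyed by hostname.
    results = {}
    threads = [threading.Thread(target=run_on_host, args=(h, command, results))
               for h in hosts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(fan_out(["web1", "web2"], "uptime"))
```

Swapping the body of run_on_host for a real paramiko call (one client per thread) keeps the connections isolated, which is the simple way to avoid sharing a client across threads.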
Parallel SSH in Python | 1,516,547 | -1 | 3 | 7,048 | 0 | python,ssh,parallel-processing | This might not be relevant to your question. But there are tools like pssh, clusterssh, etc. that can spawn connections in parallel. You can couple Expect with pssh to control them too. | 0 | 1 | 1 | 1 | 2009-07-26T23:19:00.000 | 6 | -0.033321 | false | 1,185,855 | 0 | 0 | 0 | 4 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connection.
Thanks. |
Appengine and GWT - feeding the python some java | 1,391,971 | 0 | 3 | 3,162 | 0 | java,python,google-app-engine,gwt | I agree with your evaluation of Python's text processing and GWT's quality. Have you considered using Jython? Googling "pyparsing jython" gives some mixed reviews, but it seems there has been some success with recent versions of Jython. | 0 | 1 | 0 | 0 | 2009-07-27T02:16:00.000 | 4 | 0 | false | 1,186,155 | 0 | 0 | 1 | 1 | I realize this is a dated question since appengine now comes in java, but I have a python appengine app that I want to access via GWT. Python is just better for server-side text processing (using pyparsing of course!). I have tried to interpret GWT's client-side RPC and that is convoluted since there is no python counterpart (python-gwt-rpc is out of date). I just tried using JSON and RequestBuilder, but that fails when using SSL. Does anyone have a good solution for putting a GWT frontend on a python appengine app? |
How to insert indent/tab in os x terminal for IronPython (ipy.exe)? | 1,195,079 | 0 | 0 | 1,938 | 0 | macos,mono,ironpython | Look at the Terminal menu, Preferences... menu item.
In the Preferences dialog box, click on the Settings selection.
Within the settings, click on the Keyboard tab.
You have probably modified your tab key to not work correctly. It should not be mentioned as a special key and should generate an ordinary tab character.
Also, run a stty -a command in your Terminal tool. Someone may have changed your oxtabs setting or mapped tab (^I) to something unexpected. | 0 | 1 | 0 | 0 | 2009-07-28T15:12:00.000 | 2 | 0 | false | 1,194,802 | 1 | 0 | 0 | 1 | I am running IronPython 2.0.2 interactive console with Mono 2.4 on OSX Terminal.app . How do I insert indent/tab in the Terminal.app ? I want to do this so I can indent my code.
For example, I want to input print "hello tab", but what I see is print "hellotab" despite pressing the tab key many times. When the command gets executed it prints hellotab. Another weird behavior is that after pressing tab a bunch of times and then pressing delete, the tabs show up but cannot be removed, and ipy still prints hellotab.
I tried inserting tabs with IronRuby (ir.exe) and I don't see the tab showing up when I press it, but it is displayed when the command is executed. |
How to start a background process in Python? | 71,059,019 | 0 | 363 | 549,636 | 0 | python,process,daemon | I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't have a console, so in theory the script should not show a window and should work like a background process. | 0 | 1 | 0 | 0 | 2009-07-28T18:56:00.000 | 9 | 0 | false | 1,196,074 | 1 | 0 | 0 | 1 | I'm trying to port a shell script to the much more readable python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in python? I'd like these processes not to die when the python scripts complete. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
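For the POSIX side of this question, one common approach is subprocess.Popen, which returns immediately while the child keeps running. A minimal sketch, where the 30-second sleep child is just a stand-in for a real monitor or utility process (this demo terminates it at the end for cleanliness):

```python
import subprocess
import sys

# Popen starts the child and returns without waiting for it, much like
# "&" in a shell. For a fully detached daemon you would additionally
# use start_new_session=True on POSIX (Python 3.2+) or a .pyw file /
# DETACHED_PROCESS creation flag on Windows.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"],
)

print("child pid:", child.pid)  # the parent continues immediately

# Clean up for this demo only; a real background process would be left alone.
child.terminate()
child.wait()
```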
Java Wrapper to Perl/Python code | 1,201,722 | 4 | 4 | 2,709 | 0 | java,python,perl,web-services,wrapper | This depends heavily upon your needs. If Jython is an option for the Python code (it isn't always 100% compatible), then it is probably the best option there. Otherwise, you will need to use Java's Process Builder to call the interpretters directly and return the results on their output stream. This will not be fast (but then again, Jython isn't that fast either, relative to regular Java code), but it is an extremely flexible solution. | 0 | 1 | 0 | 1 | 2009-07-29T16:53:00.000 | 5 | 1.2 | true | 1,201,628 | 0 | 0 | 1 | 1 | I have to deploy some Web Services on a server that only supports the Java ones, but some of them will be done using perl or python. I want to know if is possible to develop a Java wrapper to call a specific code written in perl or python. So, I want to have all the Web Services in Java, but some of them will call some code using other languages.
Thanks in advance.
Regards,
Ukrania |
What is the most compatible way to install python modules on a Mac? | 49,869,959 | 2 | 127 | 168,159 | 0 | python,macos,module,packages,macports | You may already have pip3 pre-installed, so just try it! | 0 | 1 | 0 | 1 | 2009-07-31T16:58:00.000 | 13 | 0.03076 | false | 1,213,690 | 0 | 0 | 0 | 3 | I'm starting to learn python and loving it. I work on a Mac mainly as well as Linux. I'm finding that on Linux (Ubuntu 9.04 mostly) when I install a python module using apt-get it works fine. I can import it with no trouble.
On the Mac, I'm used to using Macports to install all the Unixy stuff. However, I'm finding that most of the python modules I install with it are not being seen by python. I've spent some time playing around with PATH settings and using python_select . Nothing has really worked and at this point I'm not really understanding, instead I'm just poking around.
I get the impression that Macports isn't universally loved for managing python modules. I'd like to start fresh using a more "accepted" (if that's the right word) approach.
So, I was wondering, what is the method that Mac python developers use to manage their modules?
Bonus questions:
Do you use Apple's python, or some other version?
Do you compile everything from source or is there a package manger that works well (Fink?). |
What is the most compatible way to install python modules on a Mac? | 1,214,123 | 7 | 127 | 168,159 | 0 | python,macos,module,packages,macports | I use MacPorts to install Python and any third-party modules tracked by MacPorts into /opt/local, and I install any manually installed modules (those not in the MacPorts repository) into /usr/local, and this has never caused any problems. I think you may be confused as to the use of certain MacPorts scripts and environment variables.
MacPorts python_select is used to select the "current" version of Python, but it has nothing to do with modules. This allows you to, e.g., install both Python 2.5 and Python 2.6 using MacPorts, and switch between installs.
The $PATH environment variables does not affect what Python modules are loaded. $PYTHONPATH is what you are looking for. $PYTHONPATH should point to directories containing Python modules you want to load. In my case, my $PYTHONPATH variable contains /usr/local/lib/python26/site-packages. If you use MacPorts' Python, it sets up the other proper directories for you, so you only need to add additional paths to $PYTHONPATH. But again, $PATH isn't used at all when Python searches for modules you have installed.
$PATH is used to find executables, so if you install MacPorts' Python, make sure /opt/local/bin is in your $PATH. | 0 | 1 | 0 | 1 | 2009-07-31T16:58:00.000 | 13 | 1 | false | 1,213,690 | 0 | 0 | 0 | 3 | I'm starting to learn python and loving it. I work on a Mac mainly as well as Linux. I'm finding that on Linux (Ubuntu 9.04 mostly) when I install a python module using apt-get it works fine. I can import it with no trouble.
On the Mac, I'm used to using Macports to install all the Unixy stuff. However, I'm finding that most of the python modules I install with it are not being seen by python. I've spent some time playing around with PATH settings and using python_select . Nothing has really worked and at this point I'm not really understanding, instead I'm just poking around.
I get the impression that Macports isn't universally loved for managing python modules. I'd like to start fresh using a more "accepted" (if that's the right word) approach.
So, I was wondering, what is the method that Mac python developers use to manage their modules?
Bonus questions:
Do you use Apple's python, or some other version?
Do you compile everything from source or is there a package manger that works well (Fink?). |
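The point above about $PYTHONPATH (not $PATH) controlling module lookup can be checked quickly. This throwaway sketch creates a temporary directory containing a module and shows that a child interpreter finds it once the directory is placed on PYTHONPATH (pp_demo is an arbitrary made-up module name):

```python
import os
import subprocess
import sys
import tempfile

# Create a throwaway directory containing a tiny module.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "pp_demo.py"), "w") as f:
    f.write("VALUE = 42\n")

# Run a child interpreter with that directory on PYTHONPATH; the import
# succeeds because PYTHONPATH entries are added to sys.path at startup.
env = dict(os.environ, PYTHONPATH=tmp)
out = subprocess.check_output(
    [sys.executable, "-c", "import pp_demo; print(pp_demo.VALUE)"],
    env=env,
)
print(out.strip())
```

Running the same child command without the PYTHONPATH entry would fail with an ImportError, regardless of what $PATH is set to.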
What is the most compatible way to install python modules on a Mac? | 2,380,159 | 6 | 127 | 168,159 | 0 | python,macos,module,packages,macports | If you use Python from MacPorts, it has its own easy_install located at: /opt/local/bin/easy_install-2.6 (for py26, that is). It's not the same one as simply calling easy_install directly, even if you used python_select to change your default python command. | 0 | 1 | 0 | 1 | 2009-07-31T16:58:00.000 | 13 | 1 | false | 1,213,690 | 0 | 0 | 0 | 3 | I'm starting to learn python and loving it. I work on a Mac mainly as well as Linux. I'm finding that on Linux (Ubuntu 9.04 mostly) when I install a python module using apt-get it works fine. I can import it with no trouble.
On the Mac, I'm used to using Macports to install all the Unixy stuff. However, I'm finding that most of the python modules I install with it are not being seen by python. I've spent some time playing around with PATH settings and using python_select . Nothing has really worked and at this point I'm not really understanding, instead I'm just poking around.
I get the impression that Macports isn't universally loved for managing python modules. I'd like to start fresh using a more "accepted" (if that's the right word) approach.
So, I was wondering, what is the method that Mac python developers use to manage their modules?
Bonus questions:
Do you use Apple's python, or some other version?
Do you compile everything from source or is there a package manger that works well (Fink?). |
Ubuntu + virtualenv = a mess? virtualenv hates dist-packages, wants site-packages | 6,919,668 | 4 | 14 | 8,965 | 0 | python,ubuntu,setuptools,virtualenv | You really should not touch Ubuntu's Python installation unless you are building system admin tools, or building something that could be considered to be a new system service.
If you are using Ubuntu to develop or deploy Python applications, always build your own Python from source, tar it up, and use that for deployment. That way you will have all the directories in the right place and virtualenv will work normally. If you will deploy several Python apps on the server, then make your Python live in some place like /home/python or /opt/python or somewhere outside of your home directory. Make sure that you have write permissions for the developers group (users?) so that people can easily add packages.
This also allows you to have two tiers of packages. The ones that are your in-house standard tools can be installed in your Python distro and be part of the tarball that you deploy, and only the app-specific packages would be in a virtualenv.
Do not upgrade or modify the Ubuntu system installed Python. | 0 | 1 | 0 | 0 | 2009-08-01T01:45:00.000 | 5 | 0.158649 | false | 1,215,610 | 1 | 0 | 0 | 3 | Can someone please explain to me what is going on with python in ubuntu 9.04?
I'm trying to spin up virtualenv, and the --no-site-packages flag seems to do nothing with ubuntu. I installed virtualenv 1.3.3 with easy_install (which I've upgraded to setuptools 0.6c9) and everything seems to be installed to /usr/local/lib/python2.6/dist-packages
I assume that when installing a package using apt-get, it's placed in /usr/lib/python2.6/dist-packages/ ?
The issue is, there is a /usr/local/lib/python2.6/site-packages as well that just sits there being empty. It would seem (by looking at the path in a virtualenv) that this is the folder virtualenv uses as a backup. Thus even though I omit --no-site-packages, I can't access my local system's packages from any of my virtualenvs.
So my questions are:
How do I get virtualenv to point to one of the dist-packages?
Which dist-packages should I point it to? /usr/lib/python2.6/dist-packages or /usr/local/lib/python2.6/dist-packages/
What is the point of /usr/lib/python2.6/site-packages? There's nothing in there!
Is it first come first serve on the path? If I have a newer version of package XYZ installed in /usr/local/lib/python2.6/dist-packages/ and and older one (from ubuntu repos/apt-get) in /usr/lib/python2.6/dist-packages, which one gets imported when I import xyz? I'm assuming this is based on the path list, yes?
Why the hell is this so confusing? Is there something I'm missing here?
Where is it defined that easy_install should install to /usr/local/lib/python2.6/dist-packages?
Will this affect pip as well?
Thanks to anyone who can clear this up! |
Ubuntu + virtualenv = a mess? virtualenv hates dist-packages, wants site-packages | 1,215,627 | 9 | 14 | 8,965 | 0 | python,ubuntu,setuptools,virtualenv | I'd be tempted to hack it by making site-packages a link to dist-packages, but I guess this might affect other cases where you want to install some extension other than from the ubuntu dist. I can't think of another answer to 1 except tweaking virtualenv's sources (with both ubuntu and virtualenv being so popular I wouldn't be surprised to find tweaked versions already exist).
Re 2, if you're using /usr/local/bin/python you should use the /usr/local version of the lib (including site-packages) and conversely if you're using /usr/bin/python.
Re 3, there will be something there if you ever install an extension for /usr/bin/python from sources (not via easy_install or from ubuntu's distro).
Re 4, yes, earlier entries on the path take precedence.
Re 5, easy_install is easy only in its name -- it does so much dark magic that it's been carefully kept out of the standard python library despite its convenience because the consensus among us python committers is that deep dark magic for convenience is "easy" only on the surface.
Re 6, I think that's an ubuntu modification to easy_install -- if that's right then it's defined wherever Canonical or other ubuntu maintainers make their collective decisions.
Re 7, sorry, no idea -- I have no reasonably recent ubuntu at hand to check. | 0 | 1 | 0 | 0 | 2009-08-01T01:45:00.000 | 5 | 1.2 | true | 1,215,610 | 1 | 0 | 0 | 3 | Can someone please explain to me what is going on with python in ubuntu 9.04?
I'm trying to spin up virtualenv, and the --no-site-packages flag seems to do nothing with ubuntu. I installed virtualenv 1.3.3 with easy_install (which I've upgraded to setuptools 0.6c9) and everything seems to be installed to /usr/local/lib/python2.6/dist-packages
I assume that when installing a package using apt-get, it's placed in /usr/lib/python2.6/dist-packages/ ?
The issue is, there is a /usr/local/lib/python2.6/site-packages as well that just sits there being empty. It would seem (by looking at the path in a virtualenv) that this is the folder virtualenv uses as a backup. Thus even though I omit --no-site-packages, I can't access my local system's packages from any of my virtualenvs.
So my questions are:
How do I get virtualenv to point to one of the dist-packages?
Which dist-packages should I point it to? /usr/lib/python2.6/dist-packages or /usr/local/lib/python2.6/dist-packages/
What is the point of /usr/lib/python2.6/site-packages? There's nothing in there!
Is it first come first serve on the path? If I have a newer version of package XYZ installed in /usr/local/lib/python2.6/dist-packages/ and and older one (from ubuntu repos/apt-get) in /usr/lib/python2.6/dist-packages, which one gets imported when I import xyz? I'm assuming this is based on the path list, yes?
Why the hell is this so confusing? Is there something I'm missing here?
Where is it defined that easy_install should install to /usr/local/lib/python2.6/dist-packages?
Will this affect pip as well?
Thanks to anyone who can clear this up! |
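The "earlier entries on the path take precedence" point from the answer above can be demonstrated with two throwaway directories that both provide a module named xyz (matching the question's example); whichever directory appears first on sys.path wins the import:

```python
import os
import sys
import tempfile

# Two directories, each providing a module named 'xyz' with a different
# marker value.
older = tempfile.mkdtemp()
newer = tempfile.mkdtemp()
with open(os.path.join(older, "xyz.py"), "w") as f:
    f.write("VERSION = 'old'\n")
with open(os.path.join(newer, "xyz.py"), "w") as f:
    f.write("VERSION = 'new'\n")

# Prepend both; 'newer' ends up in front, so it shadows 'older'.
sys.path.insert(0, older)
sys.path.insert(0, newer)

import xyz
print(xyz.VERSION)  # 'new'
```

This is exactly why the ordering of dist-packages vs. site-packages directories on sys.path determines which installed copy of a package gets imported.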
Ubuntu + virtualenv = a mess? virtualenv hates dist-packages, wants site-packages | 1,216,108 | 2 | 14 | 8,965 | 0 | python,ubuntu,setuptools,virtualenv | Well, I have Ubuntu 9.04 and quickly tried setting up a couple of sandboxes, one with site-packages and one without, and things are working fine.
The only difference in the approach I took is that I used Ubuntu's python-virtualenv package (1.3.3), and I presume that it is tweaked by the Ubuntu team to suit Ubuntu setups.
To sum up: disable the easy_install-ed virtualenv for a while, use the packaged python-virtualenv, and see if that meets your expectations.
In fact we use similar setup for production without any problem. Rest is already answered by Alex. | 0 | 1 | 0 | 0 | 2009-08-01T01:45:00.000 | 5 | 0.07983 | false | 1,215,610 | 1 | 0 | 0 | 3 | Can someone please explain to me what is going on with python in ubuntu 9.04?
I'm trying to spin up virtualenv, and the --no-site-packages flag seems to do nothing with ubuntu. I installed virtualenv 1.3.3 with easy_install (which I've upgraded to setuptools 0.6c9) and everything seems to be installed to /usr/local/lib/python2.6/dist-packages
I assume that when installing a package using apt-get, it's placed in /usr/lib/python2.6/dist-packages/ ?
The issue is, there is a /usr/local/lib/python2.6/site-packages as well that just sits there being empty. It would seem (by looking at the path in a virtualenv) that this is the folder virtualenv uses as a backup. Thus even though I omit --no-site-packages, I can't access my local system's packages from any of my virtualenvs.
So my questions are:
How do I get virtualenv to point to one of the dist-packages?
Which dist-packages should I point it to? /usr/lib/python2.6/dist-packages or /usr/local/lib/python2.6/dist-packages/
What is the point of /usr/lib/python2.6/site-packages? There's nothing in there!
Is it first come first serve on the path? If I have a newer version of package XYZ installed in /usr/local/lib/python2.6/dist-packages/ and and older one (from ubuntu repos/apt-get) in /usr/lib/python2.6/dist-packages, which one gets imported when I import xyz? I'm assuming this is based on the path list, yes?
Why the hell is this so confusing? Is there something I'm missing here?
Where is it defined that easy_install should install to /usr/local/lib/python2.6/dist-packages?
Will this affect pip as well?
Thanks to anyone who can clear this up! |
Multiple versions of Python on OS X Leopard | 1,219,303 | 1 | 21 | 19,206 | 0 | python,macos,osx-leopard,zope | The approach I prefer which should work on every UNIX-like operating system:
Create a user account for each application that needs a specific Python version. Install in each user account the corresponding Python version with a user-local prefix (like ~/build/python) and add ~/build/bin/ to that user's PATH environment variable. Install/use your Python applications under their corresponding user.
The advantage of this approach is the perfect isolation between the individual python installations and relatively convenient selection of the correct python environment (just su to the appropriate user). Also the operating system remains untouched. | 0 | 1 | 0 | 0 | 2009-08-02T13:23:00.000 | 4 | 0.049958 | false | 1,218,891 | 1 | 0 | 0 | 1 | I currently have multiple versions of Python installed on my Mac, the one that came with it, a version I downloaded recently from python.org, an older version used to run Zope locally and another version that Appengine is using. It's kind of a mess. Any recommendations of using one version of python to rule them all? How would I go about deleted older versions and linking all of my apps to a single install. Any Mac specific gotchas I should know about? Is this a dumb idea? |
How to write native newline character to a file descriptor in Python? | 1,223,313 | 8 | 43 | 36,433 | 0 | python | How about os.write(<file descriptor>, os.linesep)? (import os is unnecessary because you seem to have already imported it, otherwise you'd be getting errors using os.write to begin with.) | 0 | 1 | 0 | 0 | 2009-08-03T16:30:00.000 | 2 | 1 | false | 1,223,289 | 1 | 0 | 0 | 1 | The os.write function can be used to write bytes into a file descriptor (not file object). If I execute os.write(fd, '\n'), only the LF character will be written into the file, even on Windows. I would like to have CRLF in the file on Windows and only LF in Linux.
What is the best way to achieve this?
I'm using Python 2.6, but I'm also wondering if Python 3 has a different solution. |
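A small sketch of the os.linesep suggestion. Note that on Python 3, os.write requires bytes, so os.linesep must be encoded first; on Python 2 the plain string works as in the accepted answer:

```python
import os
import tempfile

# os.write operates on a raw file descriptor and writes bytes verbatim
# (no newline translation), so the platform's native line ending has to
# be written explicitly via os.linesep.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello" + os.linesep.encode())  # Python 3: bytes required
os.close(fd)

# Read back in binary mode to see exactly what landed on disk.
with open(path, "rb") as f:
    data = f.read()
print(data)  # b'hello\n' on Linux/macOS, b'hello\r\n' on Windows
os.remove(path)
```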