Title (string, length 15–150) | A_Id (int64, 2.98k–72.4M) | Users Score (int64, -17–470) | Q_Score (int64, 0–5.69k) | ViewCount (int64, 18–4.06M) | Database and SQL (int64, 0–1) | Tags (string, length 6–105) | Answer (string, length 11–6.38k) | GUI and Desktop Applications (int64, 0–1) | System Administration and DevOps (int64, 1–1) | Networking and APIs (int64, 0–1) | Other (int64, 0–1) | CreationDate (string, length 23–23) | AnswerCount (int64, 1–64) | Score (float64, -1–1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 1.85k–44.1M) | Python Basics and Environment (int64, 0–1) | Data Science and Machine Learning (int64, 0–1) | Web Development (int64, 0–1) | Available Count (int64, 1–17) | Question (string, length 41–29k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Recover process with subprocess.Popen? | 2,056,808 | 1 | 2 | 1,293 | 0 | python,popen | If a process dies, all its open file handles are closed. This includes any unnamed pipes created by popen(). So, no, there's no way to recover a Popen object from just a PID. The OS won't even consider your new process the parent, so you won't even get SIGCHLD signals (though waitpid() might still work).
I'm not sure if the child is guaranteed to survive, either, since a write to a pipe with no reader (namely, the redirected stdout of the child) should kill the child with a SIGPIPE.
If you want your parent process to pick up where the child left off, you need to have the child write to a file, usually in /tmp or /var/log, and have it record its PID like you are now (the usual location is /var/run). (Having it write to a named pipe risks getting it killed with SIGPIPE as above.) If you suffix your filename with the PID, then it becomes easy for the manager process to figure out which file belongs to which daemon. | 0 | 1 | 0 | 0 | 2010-01-13T12:33:00.000 | 3 | 0.066568 | false | 2,056,594 | 0 | 0 | 0 | 2 | I have a Python program that uses subprocess.Popen to launch another process (a Python process or whatever), and after launching it I save the child's PID to a file. Let's suppose that suddenly the parent process dies (because of an exception or whatever). Is there any way to access the object returned by Popen again?
I mean, the basic idea is to read the file first, and if it exists and has a PID written in it, then access that process somehow, in order to learn its return code or whatever. If there isn't a PID, then launch the process with Popen.
Thanks a lot!! |
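The file-plus-PID scheme described in the answer can be sketched as follows. This is an illustrative sketch, not code from the answer: the file names, the log directory, and the rename step are all assumptions.

```python
import os
import subprocess

def spawn_logged(cmd, log_dir):
    """Launch a child whose stdout goes to a real file (not a pipe), then
    suffix the log and PID files with the child's PID so a later manager
    process can find them even after this parent dies."""
    tmp_log = os.path.join(log_dir, "child.log.tmp")
    log = open(tmp_log, "wb")
    child = subprocess.Popen(cmd, stdout=log)
    log.close()  # the child keeps its own duplicated file handle

    # Rename to the PID-suffixed names the answer suggests; the child's
    # already-open descriptor still points at the same inode.
    log_path = os.path.join(log_dir, "child-%d.log" % child.pid)
    os.rename(tmp_log, log_path)
    with open(os.path.join(log_dir, "child-%d.pid" % child.pid), "w") as f:
        f.write(str(child.pid))
    return child, log_path
```

A manager process that restarts later can glob for `child-*.pid`, check whether each PID is still alive, and read the matching log, without ever needing the original Popen object.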
Practices while releasing the python/ruby/script based web applications on production | 2,059,364 | 3 | 1 | 242 | 0 | python,ruby,linux,scripting,release | I would create a branch in SVN for every release of the web application, and when the release is ready there, I would check it out on the server and either set it up to run or move it into the place of the old version. | 0 | 1 | 0 | 1 | 2010-01-13T18:52:00.000 | 4 | 1.2 | true | 2,059,337 | 0 | 0 | 1 | 3 | I am purely a Windows programmer and spend all my time hacking VC++.
Recently I have been heading several web-based applications, and have myself built applications with Python (the Pylons framework) and done projects on Rails. All the web projects are hosted on Ubuntu Linux.
The RELEASE procedures and checklist we followed for building and releasing VC++ Windows applications are largely no longer useful when it comes to script-based languages.
So we don't build any binaries now. I used to copy ASP/PHP files into the IIS folder over FTP when using open-source CMS applications.
So FTP is one of the ways to get the files onto the web server. Now we no longer feel like copying files via FTP; instead we use an SVN checkout and simply do svn update to get the latest copy.
Are SVN checkout and svn update the right methods to update the latest build files on the server? Are there any downsides to using svn update? Is there a better method to release script/web-based code to the production server?
PS: I have used an SSH server to some extent on the Linux platform. |
Practices while releasing the python/ruby/script based web applications on production | 4,454,448 | 0 | 1 | 242 | 0 | python,ruby,linux,scripting,release | One downside of doing an svn update: though you can go back in time, to what revision do you go back to? You have to look it up. svn update pseudo-deployments work much more cleanly if you use tags - in that case you'd be doing an svn switch to a different tag, not an svn update on the same branch or the trunk.
You want to tag your software with the version number, something like 1.1.4, and then have a simple script to zip it up as application-1.1.4.zip and deploy it; then you have automated, repeatable releases and rollbacks, as well as greater visibility into what is changing between releases. | 0 | 1 | 0 | 1 | 2010-01-13T18:52:00.000 | 4 | 0 | false | 2,059,337 | 0 | 0 | 1 | 3 | I am purely a Windows programmer and spend all my time hacking VC++.
Recently I have been heading several web-based applications, and have myself built applications with Python (the Pylons framework) and done projects on Rails. All the web projects are hosted on Ubuntu Linux.
The RELEASE procedures and checklist we followed for building and releasing VC++ Windows applications are largely no longer useful when it comes to script-based languages.
So we don't build any binaries now. I used to copy ASP/PHP files into the IIS folder over FTP when using open-source CMS applications.
So FTP is one of the ways to get the files onto the web server. Now we no longer feel like copying files via FTP; instead we use an SVN checkout and simply do svn update to get the latest copy.
Are SVN checkout and svn update the right methods to update the latest build files on the server? Are there any downsides to using svn update? Is there a better method to release script/web-based code to the production server?
PS: I have used an SSH server to some extent on the Linux platform. |
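The tag-then-zip step this answer describes can be sketched with the standard library. The `application-<version>.zip` naming follows the answer; the function name and directory layout are assumptions.

```python
import os
import zipfile

def package_release(src_dir, version, out_dir):
    """Zip a checked-out tag into application-<version>.zip, giving a
    repeatable artifact you can deploy now and roll back to later."""
    archive = os.path.join(out_dir, "application-%s.zip" % version)
    zf = zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED)
    for root, dirs, files in os.walk(src_dir):
        for name in files:
            full = os.path.join(root, name)
            # store paths relative to the tag root, not absolute paths
            zf.write(full, os.path.relpath(full, src_dir))
    zf.close()
    return archive
```

Keeping the last few versioned archives around is what makes the rollback the answer mentions trivial: redeploy the previous zip.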
Practices while releasing the python/ruby/script based web applications on production | 2,059,406 | 2 | 1 | 242 | 0 | python,ruby,linux,scripting,release | Are SVN checkout and svn update the right methods to update the latest build files on the server?
Very, very good methods. You know what you got. You can go backwards at any time.
Are there any downsides to using svn update? None.
Is there a better method to release script/web-based code to the production server?
What we do.
We do not run out of the SVN checkout directories. The SVN checkout directory is "raw" source sitting on the server.
We use Python's setup.py install to create the application in the /opt/app/app-x.y directory tree. Each tagged SVN branch is also a branch in the final installation.
Ruby has gems and other installation tools that are probably similar to Python's.
Our web site's Apache and mod_wsgi configurations refer to a specific /opt/app/app-x.y version. We can then stage a version, do testing, do things like migrate data from production to the next release, and generally get ready.
Then we adjust our Apache and mod_wsgi configuration to use the next version.
Previous versions are all in place. And left in place. We'll delete them some day when they confuse us. | 0 | 1 | 0 | 1 | 2010-01-13T18:52:00.000 | 4 | 0.099668 | false | 2,059,337 | 0 | 0 | 1 | 3 | I am purely a Windows programmer and spend all my time hacking VC++.
Recently I have been heading several web-based applications, and have myself built applications with Python (the Pylons framework) and done projects on Rails. All the web projects are hosted on Ubuntu Linux.
The RELEASE procedures and checklist we followed for building and releasing VC++ Windows applications are largely no longer useful when it comes to script-based languages.
So we don't build any binaries now. I used to copy ASP/PHP files into the IIS folder over FTP when using open-source CMS applications.
So FTP is one of the ways to get the files onto the web server. Now we no longer feel like copying files via FTP; instead we use an SVN checkout and simply do svn update to get the latest copy.
Are SVN checkout and svn update the right methods to update the latest build files on the server? Are there any downsides to using svn update? Is there a better method to release script/web-based code to the production server?
PS: I have used an SSH server to some extent on the Linux platform. |
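The /opt/app/app-x.y layout in this answer leaves previous versions installed side by side. One common way to switch between them is a `current` symlink; note the answer itself switches versions by editing the Apache/mod_wsgi configuration, so the symlink approach below is an assumed variation, not the answer's exact method.

```python
import os

def activate_version(app_root, version):
    """Point app_root/current at app_root/app-<version>, leaving earlier
    version directories untouched so rollback is just another activate."""
    target = os.path.join(app_root, "app-%s" % version)
    if not os.path.isdir(target):
        raise ValueError("no such installed version: %s" % version)
    tmp = os.path.join(app_root, "current.new")
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    # rename() over the old link is atomic on POSIX filesystems,
    # so readers never see a half-switched state
    os.rename(tmp, os.path.join(app_root, "current"))
    return target
```

With this in place, the web server config can reference `/opt/app/current` once and never change, while staging and rollback become single calls.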
"Real" and non-embedded use of Ruby, Python and their friends | 2,067,929 | 3 | 3 | 464 | 0 | python,ruby,perl,scripting | Python (combined with PyQt) is a very solid combination for GUI desktop applications (note that while QT is LGPL, PyQt (the Python bindings) is dual licensed: GPL or commercial).
It offers the same (GUI library-wise) as Qt on C++ but with Python's specific strengths. I'll list some of the more obvious ones:
rapid prototyping
extremely readable (hence maintainable) code
Should I stick with C(++) for desktop apps?
In general: no, unless you want to / need to (for a specific reason). | 0 | 1 | 0 | 1 | 2010-01-14T22:03:00.000 | 7 | 1.2 | true | 2,067,907 | 0 | 0 | 0 | 4 | So I'm aware of the large number of general-purpose scripting languages like Ruby, Python, Perl, maybe even PHP, etc. that actually claim to be usable for creating desktop applications too.
I think my question can be answered clearly
Are there actually companies using a special scripting language only to create their applications?
Are there any real advantages on creating a product in a language like Python only?
I'm not talking about the viability of those languages for web-development!
Should I stick with C(++) for desktop apps?
best regards,
lamas |
"Real" and non-embedded use of Ruby, Python and their friends | 2,068,161 | 6 | 3 | 464 | 0 | python,ruby,perl,scripting | The company I work for uses Perl and Tk with PerlApp to build executable packages to produce or major software application.
Perl beats C and C++ for simplicity of code. You can do things in one line of Perl that take 20 lines of C.
We've used WxPerl for a few smaller projects. We'd like to move fully to WxPerl, but existing code works, so the move has a low priority until Wx can give us something we need that Tk can't.
Python is popular for building GUI apps, too. You may have heard about Chandler. That's a big Python app. There are many others as well.
Ruby is also a suitable choice.
PHP is breaking into the world of command-line apps. I am not sure about the power or flexibility of its GUI toolkits. | 0 | 1 | 0 | 1 | 2010-01-14T22:03:00.000 | 7 | 1 | false | 2,067,907 | 0 | 0 | 0 | 4 | So I'm aware of the large number of general-purpose scripting languages like Ruby, Python, Perl, maybe even PHP, etc. that actually claim to be usable for creating desktop applications too.
I think my question can be answered clearly
Are there actually companies using a special scripting language only to create their applications?
Are there any real advantages on creating a product in a language like Python only?
I'm not talking about the viability of those languages for web-development!
Should I stick with C(++) for desktop apps?
best regards,
lamas |
"Real" and non-embedded use of Ruby, Python and their friends | 2,072,473 | 2 | 3 | 464 | 0 | python,ruby,perl,scripting | I would recommend you not try to look for a language that is best for GUI apps but instead look for the language that you like the most and then use that to write your app.
Ruby, Python, and Perl all have GUI toolkits available to them. Most of them have access to the same often-used toolkits like Tk, GTK, and Wx. The look and feel of an app will depend more on the GUI toolkit than on the language, and performance-wise you're likely to see more impact from how you write your app than from language choice.
If you're comfortable with C++ then you should also look at C# or Java as options. While not scripting languages, they have many of the same benefits, like memory management and more sane string implementations. | 0 | 1 | 0 | 1 | 2010-01-14T22:03:00.000 | 7 | 0.057081 | false | 2,067,907 | 0 | 0 | 0 | 4 | So I'm aware of the large number of general-purpose scripting languages like Ruby, Python, Perl, maybe even PHP, etc. that actually claim to be usable for creating desktop applications too.
I think my question can be answered clearly
Are there actually companies using a special scripting language only to create their applications?
Are there any real advantages on creating a product in a language like Python only?
I'm not talking about the viability of those languages for web-development!
Should I stick with C(++) for desktop apps?
best regards,
lamas |
"Real" and non-embedded use of Ruby, Python and their friends | 2,068,564 | 1 | 3 | 464 | 0 | python,ruby,perl,scripting | I have used a number of programs that were developed using scripted languages. Several embedded device vendors ship my group Windows-based configuration and debugging utilities written in TCL. Google's drawing program SketchUp has a lot of Ruby inside it (and users can create add-ons using Ruby). I have seen many Linux applications written in Python. There are many more examples out there, but often times finished applications are bundled up to the point where you can't really tell what's powering it on the inside.
Yes, there can be advantages to working with scripted languages. Some scripted languages make it easier to do specific tasks; for example, text processing is much easier (IMO) in a language like Ruby that has regular expression support and a robust String class than it is in plain old C. Generating a UI using a scripted language may make it easier to support multiple platforms, as all the platform-specific code is taken care of inside the language interpreter or pre-compiled libraries. For example, our suppliers who build TCL-based apps claim they can build the UI for an app using TCL in a fraction of the time it would take them to build it in C++ or VB, and then they can port it to Linux almost effortlessly.
On the other hand there are a few things that scripted languages typically aren't suited for, such as writing drivers or doing anything that requires low-level hardware access.
Most important, however, is this: modern languages have become powerful enough that choice of language doesn't make as big a difference as it used to. Use the language you are most comfortable with. The learning curve associated with learning a new language will usually have a much larger impact on your project. | 0 | 1 | 0 | 1 | 2010-01-14T22:03:00.000 | 7 | 0.028564 | false | 2,067,907 | 0 | 0 | 0 | 4 | So I'm aware of the large number of general-purpose scripting languages like Ruby, Python, Perl, maybe even PHP, etc. that actually claim to be usable for creating desktop applications too.
I think my question can be answered clearly
Are there actually companies using a special scripting language only to create their applications?
Are there any real advantages on creating a product in a language like Python only?
I'm not talking about the viability of those languages for web-development!
Should I stick with C(++) for desktop apps?
best regards,
lamas |
How would I discover the memory used by an application through a python script? | 2,175,363 | 0 | 1 | 241 | 0 | python,memory-management,windows-7,squish | Remember that Squish allows remote testing of the application. A system parameter queried via Python directly will only apply to the case of local testing.
An approach that works in either case is to call the currentApplicationContext() function, which will give you a handle to the Application Under Test. It has a usedMemory property you can query. I don't recall which process property exactly is being queried, but it should provide a rough indication. | 0 | 1 | 0 | 0 | 2010-01-18T05:30:00.000 | 3 | 0 | false | 2,084,063 | 0 | 0 | 0 | 2 | Recently I've found myself testing an application in Froglogic's Squish, using Python to create test scripts. Just the other day, the question of how much memory the program is using has come up, and I've found myself unable to answer it.
It seems reasonable to assume that there's a way to query the OS (Windows 7) API for the information, but I've no idea where to begin. Does anyone know how I'd go about this? |
How would I discover the memory used by an application through a python script? | 2,084,070 | -1 | 1 | 241 | 0 | python,memory-management,windows-7,squish | From the command line: run tasklist /FO LIST and parse the results?
Sorry, I don't know a Pythonic way. =P | 0 | 1 | 0 | 0 | 2010-01-18T05:30:00.000 | 3 | -0.066568 | false | 2,084,063 | 0 | 0 | 0 | 2 | Recently I've found myself testing an application in Froglogic's Squish, using Python to create test scripts. Just the other day, the question of how much memory the program is using has come up, and I've found myself unable to answer it.
It seems reasonable to assume that there's a way to query the OS (Windows 7) API for the information, but I've no idea where to begin. Does anyone know how I'd go about this? |
Clear terminal in Python | 2,084,517 | 2 | 241 | 516,293 | 0 | python,terminal | You could tear through the terminfo database, but the functions for doing so are in curses anyway. | 0 | 1 | 0 | 1 | 2010-01-18T07:34:00.000 | 27 | 0.014814 | false | 2,084,508 | 0 | 0 | 0 | 4 | Does any standard "comes with batteries" method exist to clear the terminal screen from a Python script, or do I have to go curses (the libraries, not the words)? |
Clear terminal in Python | 4,808,001 | 0 | 241 | 516,293 | 0 | python,terminal | If all you need is to clear the screen, this is probably good enough. The problem is that there's not even a 100% cross-platform way of doing this across Linux versions, because terminal implementations all support slightly different things. I'm fairly sure that "clear" will work everywhere. The more "complete" answer is to use the xterm control characters to move the cursor, but that requires xterm in and of itself.
Without knowing more of your problem, your solution seems good enough. | 0 | 1 | 0 | 1 | 2010-01-18T07:34:00.000 | 27 | 0 | false | 2,084,508 | 0 | 0 | 0 | 4 | Does any standard "comes with batteries" method exist to clear the terminal screen from a Python script, or do I have to go curses (the libraries, not the words)? |
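A best-effort clear along the lines discussed in these answers can be sketched as follows. The fallback escape sequence assumes an ANSI/xterm-compatible terminal, which is exactly the portability caveat the answer raises.

```python
import os
import sys

def clear_screen():
    """Shell out to the platform's clear command; if that fails (for
    example, no usable TERM), fall back to the ANSI erase-display (2J)
    plus cursor-home (H) escape codes."""
    command = "cls" if os.name == "nt" else "clear"
    if os.system(command) != 0:
        sys.stdout.write("\033[2J\033[H")
        sys.stdout.flush()
```

This avoids importing curses for such a small job, while still degrading gracefully when the external command is unavailable.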
Clear terminal in Python | 2,084,525 | 2 | 241 | 516,293 | 0 | python,terminal | python -c "from os import system; system('clear')" | 0 | 1 | 0 | 1 | 2010-01-18T07:34:00.000 | 27 | 0.014814 | false | 2,084,508 | 0 | 0 | 0 | 4 | Does any standard "comes with batteries" method exist to clear the terminal screen from a Python script, or do I have to go curses (the libraries, not the words)? |
Clear terminal in Python | 26,639,250 | 108 | 241 | 516,293 | 0 | python,terminal | Why hasn't anyone mentioned simply pressing Ctrl+L on Windows or Cmd+L on Mac?
Surely the simplest way of clearing the screen. | 0 | 1 | 0 | 1 | 2010-01-18T07:34:00.000 | 27 | 1 | false | 2,084,508 | 0 | 0 | 0 | 4 | Does any standard "comes with batteries" method exist to clear the terminal screen from a Python script, or do I have to go curses (the libraries, not the words)?
How to deploy highly iterative updates | 2,085,042 | 1 | 0 | 349 | 0 | python,macos,deployment,ubuntu,rsync | To avoid su www I see two easy choices.
Make a folder writable by you and readable by www's group, in some path that the web server will be able to serve; then you can rsync to that folder from somewhere on your local machine.
Put your public SSH key in www's authorized_keys and rsync to the www user (a bit less secure in some setups perhaps, but not much, and usually more convenient).
Working around su www by putting your password (or www's) in some file would be far less secure.
A script to invoke "rsync -avz --partial /some/path www@server:some/other/path" should be quick to write in Python (although I don't know Python well). | 0 | 1 | 0 | 1 | 2010-01-18T09:25:00.000 | 2 | 0.099668 | false | 2,084,969 | 0 | 0 | 0 | 1 | I have a set of binary assets (swf files), each about 150 KB in size. I am developing them locally on my home computer and I want to periodically deploy them for review. My current strategy is:
Copy the .swf's into a transfer directory that is also a hg (mercurial) repo.
hg push the changes to my slicehost VPN
ssh onto my slicehost VPN
cd to my transfer directory and hg up
su www and cp the changed files into my public folder for viewing.
I would like to automate the process. Best case scenario is something close to:
Copy the .swf's into a "quick deploy" directory
Run a single local script to do all of the above.
I am interested in:
advice on where to put passwords since I need to su www to transfer files into the public web directories.
how the division of responsibility between local machine and server is handled.
I think rsync is a better tool than hg here, since I don't really need a revision history of these types of changes. I can write this as a Python script, a shell script, or whatever is considered best practice.
Eventually I would like to build this into a system that can handle my modest deployment needs. Perhaps there is an open-source deployment system that handles this and other types of situations? I'll probably roll-my-own for this current need but long term I'd like something relatively flexible.
Note: My home development computer runs OS X and the target server is some recent flavour of Ubuntu. I'd prefer a Python-based solution, but if this is best handled from the shell I have no problems putting it together that way. |
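The rsync one-liner the answer quotes can become the "single local script" the question asks for. This is a sketch: the user, host, and paths below are placeholders, and the command shape mirrors the answer's `rsync -avz --partial` invocation.

```python
import subprocess

def rsync_command(src, user, host, dest, dry_run=False):
    """Compose an rsync-over-ssh invocation; the trailing slash on src
    means 'copy the directory's contents', not the directory itself."""
    cmd = ["rsync", "-avz", "--partial"]
    if dry_run:
        cmd.append("--dry-run")
    cmd.append(src.rstrip("/") + "/")
    cmd.append("%s@%s:%s" % (user, host, dest))
    return cmd

def deploy(src, user, host, dest):
    """Run the transfer; returns rsync's exit status (0 on success)."""
    return subprocess.call(rsync_command(src, user, host, dest))
```

Combined with the authorized_keys suggestion, `deploy("/path/to/quick-deploy", "www", "myserver", "/var/www/assets")` would replace the hg push / ssh / cp sequence with one command.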
How can I determine if a python script is executed from crontab? | 2,087,056 | 1 | 5 | 4,167 | 0 | python,unix,terminal,cron | An easier workaround would be to pass a flag to the script only from the crontab, like --crontab, and then just check for that flag. | 0 | 1 | 0 | 1 | 2010-01-18T15:17:00.000 | 6 | 0.033321 | false | 2,086,961 | 0 | 0 | 0 | 4 | I would like to know how I can determine whether a Python script is being executed from crontab.
I don't want a solution that will require adding a parameter because I want to be able to detect this even from an imported module (not the main script). |
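The flag check this answer proposes is a one-liner; note it is the answer's approach even though the question asked to avoid parameters. The flag name `--crontab` follows the answer; everything else is illustrative.

```python
import sys

def invoked_from_cron(argv=None):
    """True when the crontab entry passed the agreed-upon flag, e.g.
    */5 * * * * /usr/bin/python /path/to/script.py --crontab"""
    return "--crontab" in (sys.argv if argv is None else argv)
```

Accepting an explicit `argv` keeps the function testable without touching the real `sys.argv`.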
How can I determine if a python script is executed from crontab? | 2,087,816 | 0 | 5 | 4,167 | 0 | python,unix,terminal,cron | If you want to detect this from an imported module, have the main program set a global variable in the module; the module can then behave differently depending on the value of that variable (and the main program can decide how to set the variable through a flag that you use in your crontab). This is quite robust (compared to inspecting PPIDs). | 0 | 1 | 0 | 1 | 2010-01-18T15:17:00.000 | 6 | 0 | false | 2,086,961 | 0 | 0 | 0 | 4 | I would like to know how I can determine whether a Python script is being executed from crontab.
I don't want a solution that will require adding a parameter because I want to be able to detect this even from an imported module (not the main script). |
How can I determine if a python script is executed from crontab? | 2,087,031 | 21 | 5 | 4,167 | 0 | python,unix,terminal,cron | Not quite what you asked, but maybe what you want is os.isatty(sys.stdout.fileno()), which tells you whether stdout is connected to (roughly speaking) a terminal. It will be false if you pipe the output to a file or another process, or if the process is run from cron. | 0 | 1 | 0 | 1 | 2010-01-18T15:17:00.000 | 6 | 1.2 | true | 2,086,961 | 0 | 0 | 0 | 4 | I would like to know how I can determine whether a Python script is being executed from crontab.
I don't want a solution that will require adding a parameter because I want to be able to detect this even from an imported module (not the main script). |
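The isatty check can be wrapped defensively, since `sys.stdout` may have been replaced by an object without a real file descriptor. A small sketch (the function name is illustrative):

```python
import os
import sys

def attached_to_terminal():
    """True in an interactive terminal; False under cron, or when stdout
    is piped to another process or redirected to a file."""
    try:
        return os.isatty(sys.stdout.fileno())
    except (AttributeError, ValueError, OSError):
        # stdout was replaced by an object with no real file descriptor
        return False
```

As the answer notes, this detects "not a terminal" rather than "cron specifically", which is usually what matters for deciding whether to print progress output.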
How can I determine if a python script is executed from crontab? | 2,087,053 | 5 | 5 | 4,167 | 0 | python,unix,terminal,cron | Set an environment variable at the cron command invocation. That works even within a module, as you can just check os.getenv(). | 0 | 1 | 0 | 1 | 2010-01-18T15:17:00.000 | 6 | 0.16514 | false | 2,086,961 | 0 | 0 | 0 | 4 | I would like to know how can I determine if a python script is executed from crontab?
I don't want a solution that will require adding a parameter because I want to be able to detect this even from an imported module (not the main script). |
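The environment-variable approach can be sketched as follows; the variable name is arbitrary, not something cron provides.

```python
import os

def invoked_from_cron():
    """True when the crontab line sets the marker, e.g.
    */5 * * * * RUNNING_FROM_CRON=1 /usr/bin/python /path/to/script.py
    Because cron starts jobs with a near-empty environment, the marker
    is reliable, and any imported module can check it directly."""
    return os.getenv("RUNNING_FROM_CRON") is not None
```

Unlike a command-line flag, this satisfies the question's constraint: no parameter parsing is needed, and the check works from any module.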
How to send a string from a python script at Google App Engine to the browser client as a file | 2,089,653 | 0 | 1 | 180 | 0 | python,html,google-app-engine,mime-types | Setting a Content-Disposition: attachment header will cause most browsers to download whatever you send them as a file. Safari sometimes ignores it. | 0 | 1 | 0 | 0 | 2010-01-18T22:18:00.000 | 2 | 0 | false | 2,089,635 | 0 | 0 | 1 | 1 | I have a Python web application running inside Google App Engine.
The application creates a string on user demand, and I want the string to be sent to the browser client as a file (application/octet-stream?).
How can I realize this? |
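The headers in question can be demonstrated with a plain WSGI callable (App Engine's webapp framework sets the same headers through `self.response.headers`). The filename and body below are examples, not from the original question.

```python
def serve_generated_string(environ, start_response):
    """Send an on-demand string as a file download: the octet-stream type
    stops the browser from rendering it inline, and Content-Disposition
    suggests a filename for the save dialog."""
    body = b"string generated on user demand"
    start_response("200 OK", [
        ("Content-Type", "application/octet-stream"),
        ("Content-Disposition", 'attachment; filename="export.txt"'),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

As the answer warns, some browsers (Safari in particular) may not honor the attachment disposition in all cases.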
C++ I/O with Python | 2,093,422 | 2 | 1 | 1,539 | 0 | c++,python | One dirty method:
You can use Python to read (raw_input) from stdin (if there is no input, it will wait); the C++ program writes to stdout. | 0 | 1 | 0 | 0 | 2010-01-19T12:28:00.000 | 4 | 0.099668 | false | 2,093,411 | 1 | 0 | 0 | 2 | I am writing a module in Python which runs a C++ program using the subprocess module. Once I get the output from C++, I need to store that in a Python list. How do I do that?
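Since the question already uses subprocess, capturing the child's stdout through a pipe is usually cleaner than the shared-stdin trick above. A sketch, assuming the C++ program prints one result per line:

```python
import subprocess

def run_and_collect(cmd):
    """Run an external program (C++ or anything else) and return its
    stdout as a Python list of lines."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            universal_newlines=True)
    out, _ = proc.communicate()  # waits for the process to finish
    return out.splitlines()
```

`communicate()` reads the whole stream before returning, which avoids the deadlocks that can occur when reading a pipe piecemeal while the child blocks on a full buffer.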
C++ I/O with Python | 2,093,491 | 0 | 1 | 1,539 | 0 | c++,python | In the command for the process you could do a redirect to a temporary file. Then read that file when the process returns. | 0 | 1 | 0 | 0 | 2010-01-19T12:28:00.000 | 4 | 0 | false | 2,093,411 | 1 | 0 | 0 | 2 | I am writing a module in Python which runs a C++ Program using subprocess module. Once I get the output from C++, I need to store the that in Python List . How do I do that ? |
Using Third-Party Modules with Python in an Automator Service | 2,094,638 | 0 | 2 | 2,403 | 0 | python,automator | When you install modules, you typically install them per Python instance. So in this case you have installed them for the Python in /Library/Frameworks/Python.framework/Versions/Current/bin/python, and it will then be available only for that Python. /usr/bin/python is then apparently another Python installation (I'm not an OS X expert).
To make it available for the /usr/bin/python installation, install it for /usr/bin/python. | 0 | 1 | 0 | 1 | 2010-01-19T13:40:00.000 | 2 | 0 | false | 2,093,837 | 1 | 0 | 0 | 1 | I have installed Py-Appscript on my machine and it can be used with the Python installation at /Library/Frameworks/Python.framework/Versions/Current/bin/python.
I am trying to use this installation of Py-Appscript with an Automator service. To do this, I use the Run Shell Script action and then set the Shell to /usr/bin/python (which is my only choice for Python, unfortunately).
The /usr/bin/python installation does not appear to have access to my third-party modules and crashes on the line:
from appscript import *
Is there a way for me to give /usr/bin/python access to my third-party modules?
OR
Is there a way to tell Automator to use /Library/Frameworks/Python.framework/Versions/Current/bin/python instead?
I need Automator to run the Python directly from the Run Shell Script action. Any action that calls Python scripts that are external to Automator (via /bin/bash, for example) does not perform quickly enough to be useful. |
Best F/OSS IDE for Python Web Development (Windows or Linux)? | 2,111,703 | 1 | 2 | 1,183 | 0 | python,ide | I am also working with the mod_wsgi, Python, Apache software stack. I am using WingIDE as my environment, which gives you debugging capabilities. If you are a vi person, it has a VI/VIM personality which, coupled with auto-completion, makes for a very productive work environment. | 0 | 1 | 0 | 0 | 2010-01-19T21:13:00.000 | 6 | 0.033321 | false | 2,097,134 | 1 | 0 | 0 | 4 | Would like to know what is the best F/OSS IDE for Python Web development. I've always used vim myself, but I'm increasingly interested in having a tool that integrates syntax checking/highlighting, source control, debugging, and other IDE goodies.
I use both Windows and Linux as desktops, so recommendations for either platform are welcome!
Thanks,
-aj |
Best F/OSS IDE for Python Web Development (Windows or Linux)? | 2,120,061 | 0 | 2 | 1,183 | 0 | python,ide | What about IDLE? It's bundled with Python distributions. | 0 | 1 | 0 | 0 | 2010-01-19T21:13:00.000 | 6 | 0 | false | 2,097,134 | 1 | 0 | 0 | 4 | Would like to know what is the best F/OSS IDE for Python Web development. I've always used vim myself, but I'm increasingly interested in having a tool that integrates syntax checking/highlighting, source control, debugging, and other IDE goodies.
I use both Windows and Linux as desktops, so recommendations for either platform are welcome!
Thanks,
-aj |
Best F/OSS IDE for Python Web Development (Windows or Linux)? | 2,120,241 | 0 | 2 | 1,183 | 0 | python,ide | "syntax checking/highlighting, source control, debugging, and other IDE goodies"
Emacs fits these criteria, if you use the right extensions, though it does have a much steeper learning curve than any IDE I know of.
I use both Windows and Linux as desktops, so recommendations for either platform are welcome!
Thanks,
-aj |
Best F/OSS IDE for Python Web Development (Windows or Linux)? | 2,120,215 | 0 | 2 | 1,183 | 0 | python,ide | I've been using Komodo Edit for a while now and it's quite good for Python development. It's free and I think it's also open-source now, though it wasn't always so. | 0 | 1 | 0 | 0 | 2010-01-19T21:13:00.000 | 6 | 0 | false | 2,097,134 | 1 | 0 | 0 | 4 | Would like to know what is the best F/OSS IDE for Python Web development. I've always used vim myself, but I'm increasingly interested in having a tool that integrates syntax checking/highlighting, source control, debugging, and other IDE goodies.
I use both Windows and Linux as desktops, so recommendations for either platform are welcome!
Thanks,
-aj |
How can I keep on-the-fly application-level statistics in an application running under Apache? | 2,113,376 | 1 | 3 | 333 | 0 | python,multithreading,apache,pylons,fork | Perhaps you could keep the relevant counters and other statistics in memcached, accessed by all Apache processes? | 0 | 1 | 0 | 0 | 2010-01-21T22:11:00.000 | 3 | 1.2 | true | 2,113,352 | 0 | 0 | 1 | 1 | I have an application running under Apache that I want to keep "in the moment" statistics on. I want to have the application tell me things like:
requests per second, broken down by types of request
latency to make requests to various backend services via thrift (broken down by service and server)
number of errors being served per second
etc.
I want to do this without any external dependencies. However, I'm running into issues sharing statistics between apache processes. Obviously, I can't just use global memory. What is a good pattern for this sort of issue?
The application is written in Python using Pylons, though I suspect this is more of a "communication across processes" design question than something Python-specific. |
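In the same spirit as the memcached suggestion, here is a standard-library-only sketch: a file-backed counter serialized with an fcntl lock so every Apache worker process sees the same value. memcached's atomic `incr` does the same job without the disk I/O; this variant only avoids the external dependency. The class and file names are illustrative.

```python
import fcntl
import os

class SharedCounter(object):
    """A counter usable from many worker processes at once; the exclusive
    flock makes the read-modify-write cycle atomic across processes
    (POSIX only)."""

    def __init__(self, path):
        self.path = path
        if not os.path.exists(path):
            with open(path, "w") as f:
                f.write("0")

    def incr(self, amount=1):
        with open(self.path, "r+") as f:
            fcntl.flock(f, fcntl.LOCK_EX)  # released when f is closed
            value = int(f.read() or "0") + amount
            f.seek(0)
            f.truncate()
            f.write(str(value))
        return value
```

One counter file per metric (requests per second by type, errors per second, and so on) keeps the stats queryable by a separate reporting process.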
Determining Whether a Directory is Writeable | 2,113,457 | 78 | 120 | 85,042 | 0 | python,file,permissions,directory,operating-system | It may seem strange to suggest this, but a common Python idiom is
"It's easier to ask for forgiveness than for permission."
Following that idiom, one might say:
Try writing to the directory in question, and catch the error if you don't have permission to do so. | 0 | 1 | 0 | 1 | 2010-01-21T22:24:00.000 | 10 | 1 | false | 2,113,427 | 0 | 0 | 0 | 1 | What would be the best way in Python to determine whether a directory is writeable by the user executing the script? Since this will likely involve using the os module, I should mention I'm running it under a *nix environment.
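One way to follow the try-first advice is a sketch like this; the decision to treat only permission-style errors as "not writeable" (and re-raise anything else, like a missing directory) is an assumption on my part.

```python
import errno
import os
import tempfile

def is_writable(directory):
    """Ask forgiveness, not permission: attempt to create a file in the
    directory and report whether the attempt succeeded."""
    try:
        fd, path = tempfile.mkstemp(dir=directory)
    except OSError as e:
        if e.errno in (errno.EACCES, errno.EPERM, errno.EROFS):
            return False
        raise  # a missing directory etc. is a different kind of failure
    os.close(fd)
    os.remove(path)
    return True
```

Actually creating a file also sidesteps the false positives that `os.access()` can give under effective-UID changes or network filesystems.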
Why do the Python MSI installers not come with Tcl/Tk header files? | 2,114,635 | 1 | 0 | 511 | 0 | python,tcl,tkinter,header-files,tk | The windows installers don't include ANY source files, simply because that's how windows apps work. A program can be compiled on one computer and it will work on all. So windows versions of things like python and php come precompiled with all options enabled.
If you want the source files you have to download a source tarball or something. | 0 | 1 | 0 | 0 | 2010-01-22T02:40:00.000 | 2 | 0.099668 | false | 2,114,615 | 1 | 0 | 0 | 1 | The MSI installers downloadable from python.org do not include Tcl/Tk header (not source) files (which are required to compile some packages like matplotlib). Does anyone know the rationale behind not including them?
Python approach to Web Services and/or handling GET and POST | 2,114,986 | 1 | 2 | 474 | 0 | python,webserver,twisted,tornado | I'd recommend against building your own web server and handling raw socket calls to build web applications; it makes much more sense to just write your web services as wsgi applications and use an existing web server, whether it's something like tornado or apache with mod_wsgi. | 0 | 1 | 1 | 0 | 2010-01-22T04:01:00.000 | 3 | 1.2 | true | 2,114,847 | 0 | 0 | 0 | 1 | I have been working with python for a while now. Recently I got into Sockets with Twisted which was good for learning Telnet, SSH, and Message Passing. I wanted to take an idea and implement it in a web fashion. A week of searching and all I can really do is create a resource that handles GET and POST all to itself. And this I am told is bad practice.
So the questions I have after one week are:
* Are other options like Tornado and Standard Python Sockets a better (or more popular) approach?
* Should one really use separate resources in Twisted GET and POST operations?
* What is a good resource to start in this area of Python Development?
My background with languages is C, Java, and HTML/DHTML/XHTML/XML, and my main systems (even at home) are Linux. |
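To make the "just write it as a WSGI application" suggestion concrete, here is a minimal sketch using only the standard library; the handler body, response texts, and port are my own illustration, not code from any particular framework:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """Tiny WSGI application dispatching on the HTTP method."""
    if environ["REQUEST_METHOD"] == "POST":
        size = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(size)
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"posted: " + body]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from GET"]

# Any WSGI-capable server can host this; wsgiref ships with Python:
#     make_server("localhost", 8000, app).serve_forever()
```

The same `app` callable can later be dropped into Tornado, mod_wsgi, or any other WSGI container without changes.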
virtual serial port on Arch linux | 2,648,514 | 3 | 2 | 2,709 | 0 | python | The socat command is the solution.
First you need to install socat:
pacman -S socat
Just enter this in a console, but first you should be logged in as root:
socat PTY,link=/dev/ttyVirtualS0,echo=0 PTY,link=/dev/ttyVirtualS1,echo=0
and now we have two virtual serial ports which are virtually connected:
/dev/ttyVirtualS0 <-------> /dev/ttyVirtualS1 | 0 | 1 | 0 | 0 | 2010-01-22T17:39:00.000 | 1 | 1.2 | true | 2,119,217 | 0 | 0 | 0 | 1 | I am using Arch Linux and I need to create a virtual serial port on it. I tried everything but it doesn't seem to work. All I want is to connect that virtual port to another virtual port over TCP, and after that to use it in my Python application to communicate with a Python application on the other side. Is that possible? Please help me.
Thanks |
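If you just need a linked pair for testing from Python and don't have socat handy, the same linked-pair idea can be sketched in pure Python with `os.openpty()`, which hands back two connected pseudo-terminal ends (this is a stand-in for the socat approach above, not a replacement for it):

```python
import os

# os.openpty() returns a connected master/slave pseudo-terminal pair --
# conceptually like the two linked PTYs socat creates above.
master_fd, slave_fd = os.openpty()
print("slave device:", os.ttyname(slave_fd))

# Bytes written to one end appear on the other.
os.write(master_fd, b"ping\n")
data = os.read(slave_fd, 100)
print(data)

os.close(master_fd)
os.close(slave_fd)
```

The slave device path printed here can be opened by a serial library just like a real port.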
Dynamic terminal printing with python | 2,122,421 | 0 | 57 | 88,169 | 0 | python,terminal | When I do this in shell scripts on Unix, I tend to just use the clear program. You can use the Python subprocess module to execute it. It will at least get you what you're looking for quickly. | 0 | 1 | 0 | 0 | 2010-01-23T06:33:00.000 | 10 | 0 | false | 2,122,385 | 0 | 0 | 0 | 2 | Certain applications like hellanzb have a way of printing to the terminal with the appearance of dynamically refreshing data, kind of like top().
What's the best method in Python for doing this? I have read up on logging and curses, but don't know which to use. I am creating a reimplementation of top. If you have any other suggestions I am open to them as well.
Dynamic terminal printing with python | 68,317,499 | 0 | 57 | 88,169 | 0 | python,terminal | I don't think that including another libraries in this situation is really good practice. So, solution:
print("\rCurrent: %s\t%s" % (str(<value>), <another_value>), end="") | 0 | 1 | 0 | 0 | 2010-01-23T06:33:00.000 | 10 | 0 | false | 2,122,385 | 0 | 0 | 0 | 2 | Certain applications like hellanzb have a way of printing to the terminal with the appearance of dynamically refreshing data, kind of like top().
What's the best method in Python for doing this? I have read up on logging and curses, but don't know which to use. I am creating a reimplementation of top. If you have any other suggestions I am open to them as well.
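For a single dynamically refreshing line, the carriage-return trick can be wrapped into a tiny stdlib-only helper (the function name, padding width, and demo loop are my own; full-screen refreshing à la top is what curses is for):

```python
import sys
import time

def show_status(line, stream=sys.stdout):
    """Redraw a single status line in place using a carriage return."""
    # '\r' returns the cursor to column 0; padding overwrites leftovers
    # from a previous, longer line.
    stream.write("\r" + line.ljust(60))
    stream.flush()

for pct in range(0, 101, 25):
    show_status("working... %3d%%" % pct)
    time.sleep(0.1)
sys.stdout.write("\n")
```

Note the explicit `flush()`: without it the partial line may sit in the output buffer until a newline is printed.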
Python / Linux/ Daemon process trying to show gtk.messagedialog | 2,124,579 | 3 | 0 | 772 | 0 | python,linux,ubuntu,gtk,daemon | If five users are logged in to X sessions, who gets the message? Everyone?
If someone is logged in locally but only using the tty, and not X11, should they see the message?
If someone is logged in remotely via ssh -X to run a graphic application on their own system off of your CPU, should they see the message? How would you get it to them?
Linux is too flexible for your current approach. The standard way to do this is for any user who is interested in the kind of message you are sending to run an application that receives the message and displays it in a way of its choosing. Dbus is a popular way of setting up the messaging process. This way remote users or users logged in with TTY mode only still have an option for seeing the message. | 0 | 1 | 0 | 0 | 2010-01-23T19:48:00.000 | 3 | 0.197375 | false | 2,124,455 | 0 | 0 | 0 | 2 | on Ubuntu 8/9,
I'm trying to write a daemon in Python that monitors a certain network condition and informs the user using a gtk.MessageDialog.
I installed this script using rc-update.
The daemon starts at boot, but doesn't show the dialog even after I log in. I assume this is because init.d starts my daemon at tty1 and no GNOME session is available.
I tried running the dialog through a subprocess, but it seems to inherit the same run environment.
What's the best practice for this sort of thing? |
Python / Linux/ Daemon process trying to show gtk.messagedialog | 2,136,615 | 0 | 0 | 772 | 0 | python,linux,ubuntu,gtk,daemon | You may use notify-send (from libnotify-bin package) to send notifications to desktop users from your daemon. | 0 | 1 | 0 | 0 | 2010-01-23T19:48:00.000 | 3 | 0 | false | 2,124,455 | 0 | 0 | 0 | 2 | on Ubuntu 8/9,
I'm trying to write a daemon in Python that monitors a certain network condition and informs the user using a gtk.MessageDialog.
I installed this script using rc-update.
The daemon starts at boot, but doesn't show the dialog even after I log in. I assume this is because init.d starts my daemon at tty1 and no GNOME session is available.
I tried running the dialog through a subprocess, but it seems to inherit the same run environment.
What's the best practice for this sort of thing? |
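A small hedged wrapper around notify-send (assumes the libnotify-bin package mentioned in the answer is installed; note that a daemon must run with the target user's DISPLAY and DBUS_SESSION_BUS_ADDRESS in its environment for the bubble to actually appear):

```python
import subprocess

def build_notify_cmd(title, body):
    """Argument list for notify-send (from the libnotify-bin package)."""
    return ["notify-send", title, body]

def notify(title, body):
    # Raises FileNotFoundError if notify-send is not on PATH, and
    # silently fails if no session bus is reachable from this process.
    subprocess.call(build_notify_cmd(title, body))
```

Usage from the daemon would be as simple as `notify("Network monitor", "Link is down")`.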
iPhone app with Google App Engine | 2,124,718 | 2 | 2 | 1,021 | 1 | iphone,python,google-app-engine,gql | True, Google App Engine is a very cool product, but the datastore is a different beast than a regular mySQL database. That's not to say that what you need can't be done with the GAE datastore; however it may take some reworking on your end.
The most prominent difference that you notice right at the start is that GAE uses an object-relational mapping for its data storage scheme. Essentially, object graphs are persisted in the database, maintaining their attributes and relationships to other objects. In many cases an ORM (object-relational mapping) maps fairly well on top of a relational database (this is how Hibernate works). The mapping is not perfect though, and you will find that you need to make alterations to persist your data. Also, GAE has some unique constraints that complicate things a bit. One constraint that bothers me a lot is not being able to query for attribute paths: e.g. "select ... where dog.owner.name = 'bob' ". It is these rules that force you to read and understand how the GAE datastore works before you jump in.
I think GAE could work well in your situation. It just may take some time to understand ORM persistence in general, and GAE datastore in specifics. | 0 | 1 | 0 | 0 | 2010-01-23T20:55:00.000 | 4 | 1.2 | true | 2,124,688 | 0 | 0 | 1 | 3 | I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, but I know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also, is it truly relational?
Basically, I guess my question is: can an app that typically uses an SQL backend work just as well with Google's App Engine, with GQL?
I hope that's clear... any guidance is great. |
iPhone app with Google App Engine | 2,124,705 | 1 | 2 | 1,021 | 1 | iphone,python,google-app-engine,gql | That's a pretty generic question :)
Short answer: yes. It's going to involve some rethinking of your data model, but yes, chances are you can support it with the GAE Datastore API.
When you create your Python models (think of these as tables), you can certainly define references to other models (so now we have a foreign key). When you select this model, you'll get back the referencing models (pretty much like a join).
It'll most likely work, but it's not a drop in replacement for a mySQL server. | 0 | 1 | 0 | 0 | 2010-01-23T20:55:00.000 | 4 | 0.049958 | false | 2,124,688 | 0 | 0 | 1 | 3 | I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, but I know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also, is it truly relational?
Basically, I guess my question is: can an app that typically uses an SQL backend work just as well with Google's App Engine, with GQL?
I hope that's clear... any guidance is great. |
iPhone app with Google App Engine | 2,125,297 | 2 | 2 | 1,021 | 1 | iphone,python,google-app-engine,gql | GQL offers almost no functionality at all; it's only used for SELECT queries, and it only exists to make writing SELECT queries easier for SQL programmers. Behind the scenes, it converts your queries to db.Query objects.
The App Engine datastore isn't a relational database at all. You can do some stuff that looks relational, but my advice for anyone coming from an SQL background is to avoid GQL at all costs to avoid the trap of thinking the datastore is anything at all like an RDBMS, and to forget everything you know about database design. Specifically, if you're normalizing anything, you'll soon wish you hadn't. | 0 | 1 | 0 | 0 | 2010-01-23T20:55:00.000 | 4 | 0.099668 | false | 2,124,688 | 0 | 0 | 1 | 3 | I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, but I know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also, is it truly relational?
Basically, I guess my question is: can an app that typically uses an SQL backend work just as well with Google's App Engine, with GQL?
I hope that's clear... any guidance is great. |
PID files hanging around for daemons after server restart | 2,134,763 | 3 | 3 | 727 | 0 | python,linux,sigterm | Not a direct solution, but it might be a good idea to check at startup whether a process with the pid from the pid file is actually running and, if none is, to clean up the stale file.
It's possible that your process is getting a SIGKILL before it has a chance to clean up the pid file. | 0 | 1 | 0 | 1 | 2010-01-25T18:53:00.000 | 3 | 0.197375 | false | 2,134,732 | 0 | 0 | 0 | 1 | I have some daemons that use PID files to prevent parallel execution of my program. I have set up a signal handler to trap SIGTERM and do the necessary clean-up including the PID file. This works great when I test using "kill -s SIGTERM #PID". However, when I reboot the server the PID files are still hanging around preventing start-up of the daemons. It is my understanding that SIGTERM is sent to all processes when a server is shutting down. Should I be trapping another signal (SIGINT, SIGQUIT?) in my daemon?
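The stale-file check suggested above can be sketched like this (Python 3 exception names; on Python 2 you would catch OSError and inspect errno instead — the function names are my own):

```python
import os

def pid_running(pid):
    """True if a process with this PID exists (signal 0 delivers nothing)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:   # no such process
        return False
    except PermissionError:      # process exists but belongs to someone else
        return True
    return True

def cleanup_stale_pidfile(path):
    """Remove the PID file when no matching process is alive."""
    try:
        with open(path) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return
    if not pid_running(pid):
        os.remove(path)
```

Calling `cleanup_stale_pidfile()` at daemon startup lets the daemon recover from an unclean shutdown without manual intervention.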
Named pipe is not flushing in Python | 2,136,921 | 0 | 4 | 4,276 | 0 | python,select,ipc,named-pipes,flush | The flush operation is irrelevant for named pipes; the data for named pipes is held strictly in memory, and won't be released until it is read or the FIFO is closed. | 0 | 1 | 0 | 0 | 2010-01-26T00:40:00.000 | 3 | 0 | false | 2,136,844 | 1 | 0 | 0 | 3 | I have a named pipe created via the os.mkfifo() command. I have two different Python processes accessing this named pipe, process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select function does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise process A is programmed to ignore those 5KB. Now everything works fine, and select is always returning appropriately. I came to this hack-ish solution by noticing that process A's select would return if process B were to be killed (after it was writing and flushing, it would sleep on a read pipe). Is there a problem with flush in Python for named pipes? |
Named pipe is not flushing in Python | 2,200,679 | 1 | 4 | 4,276 | 0 | python,select,ipc,named-pipes,flush | What APIs are you using? os.read() and os.write() don't buffer anything. | 0 | 1 | 0 | 0 | 2010-01-26T00:40:00.000 | 3 | 0.066568 | false | 2,136,844 | 1 | 0 | 0 | 3 | I have a named pipe created via the os.mkfifo() command. I have two different Python processes accessing this named pipe, process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select function does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise process A is programmed to ignore those 5KB. Now everything works fine, and select is always returning appropriately. I came to this hack-ish solution by noticing that process A's select would return if process B were to be killed (after it was writing and flushing, it would sleep on a read pipe). Is there a problem with flush in Python for named pipes? |
Named pipe is not flushing in Python | 2,508,809 | 1 | 4 | 4,276 | 0 | python,select,ipc,named-pipes,flush | To find out if Python's internal buffering is causing your problems, when running your scripts do "python -u" instead of "python". This will force python in to "unbuffered mode" which will cause all output to be printed instantaneously. | 0 | 1 | 0 | 0 | 2010-01-26T00:40:00.000 | 3 | 0.066568 | false | 2,136,844 | 1 | 0 | 0 | 3 | I have a named pipe created via the os.mkfifo() command. I have two different Python processes accessing this named pipe, process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select function does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise process A is programmed to ignore those 5KB. Now everything works fine, and select is always returning appropriately. I came to this hack-ish solution by noticing that process A's select would return if process B were to be killed (after it was writing and flushing, it would sleep on a read pipe). Is there a problem with flush in Python for named pipes? |
Bash or Python or Awk to match and modify files | 2,139,973 | 0 | 1 | 592 | 0 | python,bash,awk | Doing this in Python should be pretty trivial. It's probably possible in awk, but sounds a bit too complicated to be fun. It's surely possible in bash, but programming in bash is for masochists.
I'd go with Python, of the given options, although Perl and Ruby are good options too if you know them. | 0 | 1 | 0 | 0 | 2010-01-26T14:02:00.000 | 5 | 0 | false | 2,139,823 | 0 | 0 | 0 | 1 | I have a set of 10000 files c1.dat ... c10000.dat. Each of these files contains a line which starts with @ and contains a string with spaces specific for this file, like c37 7.379 6.23.
I have another set of 10000 files named determined_cXXX_send.dat (where XXX goes from 1 to 10000). Each of these files has only one line. Each line is of this type:
_1 1 3456.000000 -21 0 -98.112830 -20.326192
What I would like to do is, for each number XXX (between 1 to 10000), get from the cXXX.dat file the string like c37 7.379 6.23 , and add it in the file determined_cXXX_send.dat to the beginning of the file so I get:
c37 7.379 6.23 _1 1 3456.000000 -21 0 -98.112830 -20.326192
I tried with both bash and python but got no good solution.
What would be the best approach?
thanks |
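A sketch of the Python approach (my assumptions: the '@' prefix itself should be stripped from the header line, both file sets live in one directory, and missing files should just be skipped — adjust as needed):

```python
import os

def prepend_headers(directory, count=10000):
    """For each XXX, copy the '@' header line from cXXX.dat to the front
    of determined_cXXX_send.dat."""
    for xxx in range(1, count + 1):
        src = os.path.join(directory, "c%d.dat" % xxx)
        dst = os.path.join(directory, "determined_c%d_send.dat" % xxx)
        if not (os.path.exists(src) and os.path.exists(dst)):
            continue
        header = None
        with open(src) as f:
            for line in f:
                if line.startswith("@"):
                    header = line[1:].strip()  # drop the '@' marker
                    break
        if header is None:
            continue
        with open(dst) as f:
            body = f.read()
        with open(dst, "w") as f:
            f.write(header + " " + body)
```

For 10000 small files this runs in seconds; there is no need for anything fancier.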
I'm using Hadoop for data processing with python, what file format should be used? | 2,149,550 | 1 | 3 | 1,633 | 0 | python,hadoop | If you're using Hadoop Streaming, your input can be in any line-based format; your mapper and reducer input comes from sys.stdin, which you read any way you want. You don't need to use the default tab-delimited fields (although in my experience, one format should be used among all tasks for consistency when possible).
However, with the default splitter and partitioner, you cannot control how your input and output is partitioned or sorted, so your mappers and reducers must decide whether any particular line is a header line or a data line using only that line - they won't know the original file boundaries.
You may be able to specify a partitioner which lets a mapper assume that the first input line is the first line in a file, or even move away from a line-based format. This was hard to do the last time I tried with Streaming, and in my opinion mapper and reducer tasks should be input agnostic for efficiency and reusability - it's best to think of a stream of input records, rather than keeping track of file boundaries.
Another option with Streaming is to ship header information in a separate file, which is included with your data. It will be available to your mappers and reducers in their working directories. One idea would be to associate each line with the appropriate header information in an initial task, perhaps by using three fields per line instead of two, rather than associating them by file.
In general, try and treat the input as a stream and don't rely on file boundaries, input size, or order. All of these restrictions can be implemented, but at the cost of complexity. If you do need to implement them, do so at the beginning or end of your task chain.
If you're using Jython or SWIG, you may have other options, but I found those harder to work with than Streaming. | 0 | 1 | 0 | 0 | 2010-01-27T02:21:00.000 | 2 | 0.099668 | false | 2,144,171 | 1 | 0 | 0 | 1 | I'm using Hadoop for data processing with python, what file format should be used?
I have a project with a substantial amount of text pages.
Each text file has some header information that I need to preserve during the processing; however, I don't want the headers to interfere with the clustering algorithms.
I'm using python on Hadoop (or is there a sub package better suited?)
How should I format my text files, and store those text files in Hadoop for processing? |
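A toy Streaming-style mapper illustrating the "decide per line, don't rely on file boundaries" advice above (the '#' header convention and the tag names are my own assumptions; adjust to your actual format):

```python
def map_lines(lines):
    """Tag each input line so a later stage can keep header metadata
    without feeding it to the clustering step."""
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#"):
            yield "header\t%s" % line
        else:
            yield "data\t%s" % line

# Under Hadoop Streaming the mapper script would simply read stdin:
#     import sys
#     for out in map_lines(sys.stdin):
#         print(out)
```

Because each line carries its own tag, the reducer never needs to know which file a line came from.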
How can I run Python code on a windows system? | 2,145,311 | 1 | 2 | 1,021 | 0 | python,windows | You don't say in your question what you are going to use Python for, so most answers above are completely correct in pointing out that you install Python by downloading it from Python.org. But you seem to expect more. Is it correct to assume you are going to use it to do web development?
In that case, prepare for a shock, because Python doesn't do things like PHP does at all. You want to use a web framework. There are loads of them for Python. Which one to use depends both on what you are going to do, and your personal taste.
The only "Download as one file and install to run" web system I know of that's based on Python is Plone. And Plone is great, but it's not a webframework, it's a content management system. But hey, maybe that's what you want? :-)
The other frameworks are usually easy to install as well.
(In the long run: If you are going to do web development, you'll be happier with something Unix based. Just saying.) | 0 | 1 | 0 | 0 | 2010-01-27T07:45:00.000 | 6 | 0.033321 | false | 2,145,232 | 1 | 0 | 0 | 2 | I am used to using PHP and it is easy to set up, I can just run an exe package like Xampp and have apache and PHP running in 5 minutes on my windows system. Is there something similar to Python? |
How can I run Python code on a windows system? | 2,145,253 | 0 | 2 | 1,021 | 0 | python,windows | Download python installer and run python. | 0 | 1 | 0 | 0 | 2010-01-27T07:45:00.000 | 6 | 0 | false | 2,145,232 | 1 | 0 | 0 | 2 | I am used to using PHP and it is easy to set up, I can just run an exe package like Xampp and have apache and PHP running in 5 minutes on my windows system. Is there something similar to Python? |
When running a python script in IDLE, is there a way to pass in command line arguments (args)? | 65,991,396 | 1 | 44 | 83,523 | 0 | python,command-line-arguments,python-idle | IDLE now has a GUI way to add arguments to sys.argv! Under the 'Run' menu header select 'Run... Customized' or just Shift+F5...A dialog will appear and that's it! | 0 | 1 | 0 | 0 | 2010-01-27T17:33:00.000 | 12 | 0.016665 | false | 2,148,994 | 1 | 0 | 0 | 3 | I'm testing some python code that parses command line input. Is there a way to pass this input in through IDLE? Currently I'm saving in the IDLE editor and running from a command prompt.
I'm running Windows. |
When running a python script in IDLE, is there a way to pass in command line arguments (args)? | 33,057,011 | 1 | 44 | 83,523 | 0 | python,command-line-arguments,python-idle | Visual Studio 2015 has an addon for Python. You can supply arguments with that. VS 2015 is now free. | 0 | 1 | 0 | 0 | 2010-01-27T17:33:00.000 | 12 | 0.016665 | false | 2,148,994 | 1 | 0 | 0 | 3 | I'm testing some python code that parses command line input. Is there a way to pass this input in through IDLE? Currently I'm saving in the IDLE editor and running from a command prompt.
I'm running Windows. |
When running a python script in IDLE, is there a way to pass in command line arguments (args)? | 54,038,733 | 0 | 44 | 83,523 | 0 | python,command-line-arguments,python-idle | import sys
sys.argv = [sys.argv[0], '-arg1', 'val1', '-arg2', 'val2']
# If you're passing a command-line flag such as 'help' or 'verbose', you can write:
sys.argv = [sys.argv[0], '-h'] | 0 | 1 | 0 | 0 | 2010-01-27T17:33:00.000 | 12 | 0 | false | 2,148,994 | 1 | 0 | 0 | 3 | I'm testing some python code that parses command line input. Is there a way to pass this input in through IDLE? Currently I'm saving in the IDLE editor and running from a command prompt.
I'm running Windows. |
Change process name of Python script | 2,155,073 | 1 | 11 | 5,268 | 0 | python,windows,process | You could use py2exe to turn your Python program into a self-contained executable with whatever name that you choose to give it. | 0 | 1 | 0 | 0 | 2010-01-28T14:11:00.000 | 2 | 0.099668 | false | 2,155,042 | 1 | 0 | 0 | 1 | Windows Task Manager lists all running processes in the "Processes" tab.
The image name of Python scripts is always python.exe, or pythonw.exe, or the name of the Python interpreter.
Is there a nice way to change the image name of a Python script, other than changing the name of the Python interpreter? |
wxPython autocomplete | 2,159,924 | -1 | 0 | 1,719 | 0 | python,ide,editor,autocomplete,wxpython | try use brain to autocomplete... :)
just joking. when I coding in in PyQt4, I open qt-assistant and search the manual,
and wrap myclass like : MyButton = QPusuButton
I think it is impossible to use autocomplete in python,
because only in runtime the computer know what happens. | 0 | 1 | 0 | 0 | 2010-01-29T04:38:00.000 | 8 | -0.024995 | false | 2,159,901 | 0 | 0 | 0 | 4 | What editors or IDEs offer decent autocompletion for wxPython on Windows or Linux? Are there any? I tried several and support is either non-existant or limited. |
wxPython autocomplete | 2,160,186 | 0 | 0 | 1,719 | 0 | python,ide,editor,autocomplete,wxpython | Whatever the default windows IDE for Python is can autocomplete, with code not from the standard library. | 0 | 1 | 0 | 0 | 2010-01-29T04:38:00.000 | 8 | 0 | false | 2,159,901 | 0 | 0 | 0 | 4 | What editors or IDEs offer decent autocompletion for wxPython on Windows or Linux? Are there any? I tried several and support is either non-existant or limited. |
wxPython autocomplete | 2,160,555 | 1 | 0 | 1,719 | 0 | python,ide,editor,autocomplete,wxpython | I use Eclipse/PyDev for wxPython development. I've been very satisfied with Eclipse for Python development productivity. It does have support for autocompletion for wxPython. | 0 | 1 | 0 | 0 | 2010-01-29T04:38:00.000 | 8 | 0.024995 | false | 2,159,901 | 0 | 0 | 0 | 4 | What editors or IDEs offer decent autocompletion for wxPython on Windows or Linux? Are there any? I tried several and support is either non-existant or limited. |
wxPython autocomplete | 31,372,616 | 1 | 0 | 1,719 | 0 | python,ide,editor,autocomplete,wxpython | I'm partial to PyCharm. However, most IDEs will auto complete code based on what modules you've imported, so it's not specific to PyCharm. | 0 | 1 | 0 | 0 | 2010-01-29T04:38:00.000 | 8 | 0.024995 | false | 2,159,901 | 0 | 0 | 0 | 4 | What editors or IDEs offer decent autocompletion for wxPython on Windows or Linux? Are there any? I tried several and support is either non-existant or limited. |
Can't access AppEngine SDK sites via local ip-address when localhost works just fine and a MacOSX | 29,235,036 | 0 | 10 | 6,718 | 0 | python,google-app-engine,facebook,macos | In Android Studio with Google App Engine plugin.
Just add httpAddress = '0.0.0.0' to the app engine configuration in your build.gradle file. | 0 | 1 | 0 | 0 | 2010-01-30T15:47:00.000 | 5 | 0 | false | 2,168,409 | 0 | 0 | 1 | 1 | Can't access AppEngine SDK sites via local ip-address when localhost works just fine and a MacOSX using the GoogleAppEngineLauncher.
I'm trying to set up a Facebook development site (using a dyndns.org hostname pointing at my firewall, which redirects the call to my MacBook).
It seems like GoogleAppEngineLauncher defaults to localhost and blocks access to the ip-address directly.
Is there a way to change that behaviour in GoogleAppEngineLauncher?
Is this some kind of limitation built in by Google?
It doesn't seem to be an issue of configuration, because there aren't any settings for this.
So I'm guessing patching the source will be required? |
python how to -generate license- using time module | 2,171,940 | 0 | 2 | 1,108 | 0 | python,datetime,time,licensing | Nearly every standard function will return the machine time that can be adjusted by the user.
One possibility is to call a web service that returns the "correct" time. But this is only possible if you can assume internet access.
And maybe you should ask yourself whether that hassle is really worth the effort. | 0 | 1 | 0 | 0 | 2010-01-31T13:58:00.000 | 3 | 0 | false | 2,171,902 | 0 | 0 | 0 | 1 | I'm searching for a way to generate a (limited-time license) .so
When a user starts the program, it has to check the license date before the program runs.
But the problem is:
I tried a couple of solutions. One of them is Python's time.ctime (to check the time and see if it's really within the license period), but it returns the machine's time, so whenever a user wants to use the software without a license he'll just change the machine's time.
I hope the idea is clear enough.
Any better ideas?
Please inform me if you want more explanation. |
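One hedged way to implement the "call a web service that returns the correct time" idea without third-party packages is to read the HTTP Date header from any trusted server (the URL below is a placeholder; and note this is only a deterrent — anyone who controls your DNS or proxy can still lie to you):

```python
from email.utils import parsedate_to_datetime
from urllib.request import urlopen

def parse_http_date(value):
    """Parse an RFC 1123 Date header, e.g. 'Sun, 31 Jan 2010 13:58:00 GMT'."""
    return parsedate_to_datetime(value)

def network_time(url="https://www.example.com/"):
    """Read a remote server's clock from its HTTP Date header."""
    resp = urlopen(url)
    try:
        return parse_http_date(resp.headers["Date"])
    finally:
        resp.close()
```

The license check would then compare `network_time()` against the expiry date instead of `time.ctime()`, falling back to refusing to run when no network is available.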
How can I grap resident set size from Python on Solaris? | 2,193,909 | 0 | 1 | 479 | 0 | python,solaris,getrusage | Well...you can pull it from the pmap application by calling pmap -x. But I was looking more for a way to access the info directly in /proc from my app. The only way to do it is to access the /proc/<pid>/xmap file. Unfortunately, the data is stored as an array of prxmap structs...so either a Python C-module is in order or using the ctypes module. I'll post an update when I get one of those written. | 0 | 1 | 0 | 1 | 2010-02-01T21:17:00.000 | 2 | 0 | false | 2,180,156 | 0 | 0 | 0 | 1 | Calling resource.getrusage() from Python returns a 0 value for resident set size on Solaris and Linux systems. On Linux you can pull the RSS From /proc//status instead. Does anybody have a good way to pull RSS on Solaris, either similar or not to the Linux workaround? |
Pylons and Flex 3 | 2,258,476 | 0 | 0 | 251 | 0 | python,apache-flex,pylons,twisted | I'm working on a webapp whose client-side UI is coded in Flex 3 and whose backend is a Pylons app. Our client communicates with the backend using HTTP GET and POST requests; POST request bodies and all response bodies carry data in JSON format. It works well, with just a few gotchas:
Flex apps cannot do PUT and DELETE requests. We work around this by doing POST requests and specifying the "real" intended method in an X-HTTP-Method-Override header. Then we have some extra routes in the routing configuration that handle these requests and treat them as normal PUTs and DELETEs.
Flex apps can send custom HTTP headers but cannot read custom headers received from server (well they can on IE, but cannot on Firefox and Chrome, IIRC). | 0 | 1 | 0 | 0 | 2010-02-02T15:45:00.000 | 1 | 0 | false | 2,185,329 | 0 | 0 | 1 | 1 | Has anyone used Python/Pylons as the server backend for a Flex 3 application? Does anyone have any thoughts on how well this would work? I read Bruce Eckel's article about tying Flex 3 to Twisted, and I've done Twisted programming, but for just a web service I think Pylons is simpler to use.
Thanks in advance,
Doug |
Need a way to count entities in GAE datastore that meet a certain condition? (over 1000 entities) | 4,088,516 | 3 | 4 | 4,016 | 0 | python,google-app-engine,google-cloud-datastore | Results of datastore count() queries
and offsets for all datastore queries
are no longer capped at 1000.
Since Version 1.3.6 | 0 | 1 | 0 | 0 | 2010-02-04T17:02:00.000 | 6 | 0.099668 | false | 2,201,580 | 0 | 0 | 1 | 1 | I'm building an app on GAE that needs to report on events occurring. An event has a type and I also need to report by event type.
For example, say there are events A, B and C. They occur periodically at random. The user logs in and creates a set of entities to which those events can be attributed. When the user comes back to check the status, I need to be able to tell how many events of type A, B and/or C occurred during a specific time range, say a day or a month.
The 1000 limit is throwing a wrench into how I would normally do it. I don't need to retrieve all of the entities and present them to the user, but I do need to show the total count for a specific date range. Any suggestions?
I'm a bit of python/GAE noob... |
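Independent of the (since-lifted) 1000-entity cap, a common App Engine pattern for this kind of report is to keep per-day, per-type rollup counters that are incremented as events arrive, so a date-range report becomes a handful of small reads rather than a large count query. This is a swapped-in technique, not what the answer above describes, and the sketch below deliberately leaves out the datastore entity and transaction plumbing:

```python
import datetime
from collections import defaultdict

class EventCounter:
    """Rollup counters keyed by (event_type, day). On App Engine each key
    would map to one small counter entity updated transactionally."""
    def __init__(self):
        self.buckets = defaultdict(int)

    def record(self, event_type, when):
        """Bump the counter for this event's type and calendar day."""
        self.buckets[(event_type, when.date())] += 1

    def count(self, event_type, start, end):
        """Total events of one type with start <= day <= end (dates inclusive)."""
        day, total = start, 0
        while day <= end:
            total += self.buckets.get((event_type, day), 0)
            day += datetime.timedelta(days=1)
        return total
```

A month-long report over three event types then reads at most ~93 counters instead of scanning every event entity.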
Code Changes While Keeping Large Objects In Memory in Python | 2,203,707 | 0 | 2 | 128 | 0 | python | The usual problem with reload is that instances stay bound to the old version of the class. If you are not keeping old instances around, reload is simple and works very well. | 0 | 1 | 0 | 0 | 2010-02-04T21:49:00.000 | 2 | 0 | false | 2,203,492 | 1 | 0 | 0 | 1 | I have an application that starts by loading a large pickled trie (173M) from disk and then uses it to do some processing. I'm making frequent changes to the processing part, which is inconvenient because loading the trie takes 15 minutes or so. I'm looking for a way to eliminate the repeated loading during testing, since the trie never changes.
One thing I can't do is use a smaller version of the trie.
Ideas I've had so far are memcached and turning the trie into a web service that accepts a query and returns the data I need.
What I'm looking for is the least-effort path to a situation in which I can repeatedly change and reload the processing code while maintaining access to the in-memory trie. A direct reference to the trie would be preferable since this would require minimal code changes, but really I'm looking to minimize overall effort.
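If the instance-binding caveat from the answer doesn't apply, the least-effort workflow is usually an interactive session: load the trie once, then reload the processing module after each code change (importlib.reload on Python 3; the reload() builtin in the Python 2 of this era). A self-contained sketch of the mechanics, using a throwaway module in place of the real processing code:

```python
import importlib
import os
import sys
import tempfile

def reload_demo():
    """Load the expensive data once, then swap the processing code under it."""
    workdir = tempfile.mkdtemp()
    mod_path = os.path.join(workdir, "processing_demo.py")
    with open(mod_path, "w") as f:
        f.write("def process(trie):\n    return len(trie)\n")
    sys.path.insert(0, workdir)
    import processing_demo                      # the module under development
    big_trie = {"a": 1, "b": 2}                 # stand-in for the 173M trie
    before = processing_demo.process(big_trie)
    with open(mod_path, "w") as f:              # edit the processing code...
        f.write("def process(trie):\n    return sorted(trie)\n")
    importlib.invalidate_caches()
    importlib.reload(processing_demo)           # ...and pick it up; the trie never reloads
    after = processing_demo.process(big_trie)
    sys.path.remove(workdir)
    return before, after
```

The trie stays referenced from the interactive session the whole time; only the module object's code is re-executed.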
Minimal linear regression program | 2,204,122 | 1 | 2 | 2,428 | 0 | c++,python,bash,linear-algebra | How about extracting the coeffs into a file, import to another machine and then use Excel/Matlab/whatever other program that does this for you? | 0 | 1 | 0 | 0 | 2010-02-04T23:45:00.000 | 3 | 0.066568 | false | 2,204,087 | 0 | 1 | 0 | 2 | I am running some calculations in an external machine and at the end I get X, Y pairs. I want to apply linear regression and obtain A, B, and R2. In this machine I can not install anything (it runs Linux) and has basic stuff installed on it, python, bash (of course), etc.
I wonder what would be the best approach to use a script (python, bash, etc) or program (I can compile C and C++) that gives me the linear regression coefficients without the need to add external libraries (numpy, etc) |
Minimal linear regression program | 2,204,124 | 3 | 2 | 2,428 | 0 | c++,python,bash,linear-algebra | For a single, simple, known function (as in your case: a line) it is not hard to simply code a basic least square routine from scratch (but does require some attention to detail). It is a very common assignment in introductory numeric analysis classes.
So, look up least squares on wikipedia or mathworld or in a text book and go to town. | 0 | 1 | 0 | 0 | 2010-02-04T23:45:00.000 | 3 | 0.197375 | false | 2,204,087 | 0 | 1 | 0 | 2 | I am running some calculations in an external machine and at the end I get X, Y pairs. I want to apply linear regression and obtain A, B, and R2. In this machine I can not install anything (it runs Linux) and has basic stuff installed on it, python, bash (of course), etc.
I wonder what would be the best approach to use a script (python, bash, etc) or program (I can compile C and C++) that gives me the linear regression coefficients without the need to add external libraries (numpy, etc) |
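A from-scratch ordinary-least-squares fit needs nothing beyond the standard library, which matches the no-external-libraries constraint. Here is one such sketch returning the slope A, intercept B, and R² (symbol names chosen to match the question; it assumes at least two distinct x values):

```python
def linreg(xs, ys):
    """Ordinary least squares fit of y = A*x + B; returns (A, B, R2)."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope from normal equations
    b = (sy - a * sx) / n                            # intercept
    mean_y = sy / n
    ss_tot = sum((y - mean_y) ** 2 for y in ys)      # total sum of squares
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 1.0
    return a, b, r2
```

As the answers note, this is a standard introductory numerical-analysis exercise; for ill-conditioned data a real library's QR-based solver is safer, but for well-behaved measurement pairs this is adequate.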
Detect 64bit OS (windows) in Python | 3,740,665 | 6 | 41 | 46,662 | 0 | python,windows,64-bit | Many of these proposed solutions, such as platform.architecture(), fail because their results depend on whether you are running 32-bit or 64-bit Python.
The only reliable method I have found is to check for the existence of os.environ['PROGRAMFILES(X86)'], which is unfortunately hackish. | 0 | 1 | 0 | 0 | 2010-02-05T16:55:00.000 | 22 | 1 | false | 2,208,828 | 1 | 0 | 0 | 4 | Does anyone know how I would go about detected what bit version Windows is under Python. I need to know this as a way of using the right folder for Program Files.
Many thanks |
Detect 64bit OS (windows) in Python | 2,208,869 | 38 | 41 | 46,662 | 0 | python,windows,64-bit | I guess you should look in os.environ['PROGRAMFILES'] for the program files folder. | 0 | 1 | 0 | 0 | 2010-02-05T16:55:00.000 | 22 | 1 | false | 2,208,828 | 1 | 0 | 0 | 4 | Does anyone know how I would go about detected what bit version Windows is under Python. I need to know this as a way of using the right folder for Program Files.
Many thanks |
Detect 64bit OS (windows) in Python | 2,208,946 | -2 | 41 | 46,662 | 0 | python,windows,64-bit | There should be a directory under 64-bit Windows called \Windows\SysWOW64 (it holds the 32-bit system files); under 32-bit Windows it does not exist. | 0 | 1 | 0 | 0 | 2010-02-05T16:55:00.000 | 22 | -0.01818 | false | 2,208,828 | 1 | 0 | 0 | 4 | Does anyone know how I would go about detected what bit version Windows is under Python. I need to know this as a way of using the right folder for Program Files.
Many thanks |
Detect 64bit OS (windows) in Python | 2,209,113 | 3 | 41 | 46,662 | 0 | python,windows,64-bit | You should be using environment variables to access this. The Program Files directory is stored in the PROGRAMFILES environment variable, and the 32-bit Program Files directory is stored in the PROGRAMFILES(X86) environment variable; these can be accessed via os.environ['PROGRAMFILES'].
Use sys.getwindowsversion() or the existence of PROGRAMFILES(X86) (if 'PROGRAMFILES(X86)' in os.environ) to determine what version of Windows you are using. | 0 | 1 | 0 | 0 | 2010-02-05T16:55:00.000 | 22 | 0.027266 | false | 2,208,828 | 1 | 0 | 0 | 4 | Does anyone know how I would go about detected what bit version Windows is under Python. I need to know this as a way of using the right folder for Program Files.
Many thanks |
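Pulling the answers above together: the PROGRAMFILES(X86) check tells you the OS is 64-bit regardless of whether the Python interpreter itself is 32- or 64-bit, and platform.machine() is a reasonable fallback elsewhere. A sketch (the exact machine strings listed are assumptions covering common architectures):

```python
import os
import platform

def os_is_64bit():
    """True when the operating system (not the interpreter) is 64-bit."""
    # On 64-bit Windows every process sees PROGRAMFILES(X86),
    # even a 32-bit Python where interpreter-based checks mislead.
    if "PROGRAMFILES(X86)" in os.environ:
        return True
    return platform.machine().lower() in ("amd64", "x86_64", "arm64", "aarch64", "ia64")

def program_files_dirs():
    """The Program Files paths visible to this process (empty off Windows)."""
    return {name: os.environ[var]
            for name, var in (("native", "PROGRAMFILES"), ("x86", "PROGRAMFILES(X86)"))
            if var in os.environ}
```

With this, picking the right Program Files folder reduces to reading the dict rather than sniffing the Windows version.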
Disassembling with python - no easy solution? | 2,217,220 | 2 | 7 | 1,298 | 0 | python,swig,disassembly | Well, after much meddling around, I managed to compile SWIGed libdisasm!
Unfortunately, it seems to crash python on incorrect (and sometimes correct) usage.
How I did it:
I compiled libdisasm.lib using Visual Studio 6; the only things you need for this are the source code from whichever libdisasm release you use, plus stdint.h and inttypes.h (the Visual C++-compatible versions, google them).
I SWIGed the given libdisasm_oop.i file with the following command line
swig -python -shadow -o x86disasm_wrap.c -outdir . libdisasm_oop.i
Used Cygwin to run ./configure in the libdisasm root dir. The only real thing you get from this is config.h
I then created a new DLL project, added x86disasm_wrap.c to it, added the c:\PythonXX\libs and c:\PythonXX\Include folders to the corresponding variables, set to Release configuration (important, either this or do #undef _DEBUG before including python.h).
Also, there is a chance you'll need to fix the path to config.h.
Compiled the DLL project, and named the output _x86disasm.dll.
Place that in the same folder as the SWIG generated x86disasm.py and you're done.
Any suggestions for other, less crashy disasm libs for python? | 0 | 1 | 0 | 0 | 2010-02-07T13:02:00.000 | 4 | 0.099668 | false | 2,216,816 | 1 | 0 | 0 | 1 | I'm trying to create a python script that will disassemble a binary (a Windows exe to be precise) and analyze its code.
I need the ability to take a certain buffer, and extract some sort of struct containing information about the instructions in it.
I've worked with libdisasm in C before, and I found it's interface quite intuitive and comfortable.
The problem is, its Python interface is available only through SWIG, and I can't get it to compile properly under Windows.
At the availability aspect, diStorm provides a nice out-of-the-box interface, but it provides only the Mnemonic of each instruction, and not a binary struct with enumerations defining instruction type and what not.
This is quite uncomfortable for my purpose, and will require a lot of what I see as spent time wrapping the interface to make it fit my needs.
I've also looked at BeaEngine, which does in fact provide the output I need, a struct with binary info concerning each instruction, but its interface is really odd and counter-intuitive, and it crashes pretty much instantly when provided with wrong arguments.
The CTypes sort of ultimate-death-to-your-python crashes.
So, I'd be happy to hear about other solutions, which are a little less time consuming than messing around with djgcc or mingw to make SWIGed libdisasm, or writing an OOP wrapper for diStorm.
If anyone has some guidance as to how to compile SWIGed libdisasm, or better yet, a compiled binary (pyd or dll+py), I'd love to hear/have it. :)
Thanks ahead. |
RSS Feed aggregator using Google App Engine - Python | 2,253,676 | 1 | 4 | 3,392 | 0 | python,rss,feed | I found a way to work around this issue, though I am not sure if this is the optimal solution.
Instead of Minidom I have used cElementTree to parse the RSS feed. I process each "item" tag and its children in a separate task and add these tasks to the task queue.
This has helped me avoid the DeadlineExceededError. I get the "This resource uses a lot of CPU resources" warning though.
Any idea on how to avoid the warning?
A_iyer | 0 | 1 | 0 | 0 | 2010-02-08T19:20:00.000 | 3 | 0.066568 | false | 2,224,219 | 0 | 0 | 1 | 1 | I am trying to build a GAE app that processes an RSS feed and stores all the data from the feed into Google Datastore. I use Minidom to extract content from the RSS feed. I also tried using Feedparser and BeautifulSoup but they did not work for me.
My app currently parses the feed and saves it in the Google datastore in about 25 seconds on my local machine. I uploaded the app and I when I tried to use it, I got the "DeadLine Exceeded Error".
I would like to know if there are any possible ways to speed up this process? The feed I use will eventually grow to have more than a 100 items over time. |
Does anyone know of a Urwid like environment that is cross-platform for Python 3.x? | 2,239,928 | 0 | 0 | 322 | 0 | python,cross-platform,python-3.x | Help port Urwid to Python 3! That is most likely more work that just running 2to3 on it, though. | 0 | 1 | 0 | 0 | 2010-02-09T05:18:00.000 | 3 | 0 | false | 2,226,913 | 0 | 0 | 1 | 1 | I would like it to run on Linux, OS X, and Windows (XP/Vista/7).
Thanks for any input. |
Can I port my existing python apps on ASE? | 2,233,946 | 4 | 8 | 901 | 0 | python,android,ase,android-scripting | As of yet, there is no support for a gui on ASE apart from some simple input and display dialogs. Look at /sdcard/ase/extras/python to find libraries already available. You can add new libraries by copying them there. | 0 | 1 | 0 | 0 | 2010-02-10T00:44:00.000 | 1 | 1.2 | true | 2,233,631 | 1 | 0 | 0 | 1 | I learned that the Android Scripting Environment (ASE) supports python code. Can I take my existing python programs and run them on android?
Apart from the GUI, what else will I need to adapt? How can I find the list of supported python libraries for ASE? |
How to send file to serial port using kermit protocol in python | 3,143,184 | 0 | 2 | 3,905 | 0 | python,serial-port,pyserial,kermit | You should be able to do it via the subprocess module. The following assumes that you can send commands to your remote machine and parse out the results already. :-)
I don't have anything to test this on at the moment, so I'm going to be pretty general.
Roughly:
1.) use pyserial to connect to the remote system through the serial port.
2.) run the kermit client on the remote system using switches that will send the file or files you wish to transfer over the remote system's serial port (the serial line you are using).
3.) disconnect your pyserial instance
4.) start your kermit client with subprocess and accept the files.
5.) reconnect your pyserial instance and clean everything up.
I'm willing to bet this isn't much help, but when I actually did this a few years ago (using os.system, rather than subprocess on a hideous, hideous SuperDOS system) it took me a while to get my fat head around the fact that I had to start a kermit client remotely to send the file to my client!
If I have some time this week I'll break out one of my old geode boards and see if I can post some actual working code. | 0 | 1 | 0 | 1 | 2010-02-10T14:31:00.000 | 1 | 1.2 | true | 2,237,483 | 0 | 0 | 0 | 1 | I have device connected through serial port to PC. Using c-kermit I can send commands to device and read output. I can also send files using kermit protocol.
In Python we have a pretty nice library - pySerial. I can use it to send/receive data from the device. But is there a nice solution for sending files using the kermit protocol?
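If you end up shelling out to a C-Kermit client from Python as the answer suggests, the pieces are: release the port from pySerial, then spawn the client with subprocess. The sketch below only assembles the command by default so it stays testable without hardware; the -l/-b/-s switches are the classic C-Kermit options (set line, set speed, send file), but verify them against your client's manual:

```python
import subprocess

def kermit_send(device, baud, path, dry_run=True):
    """Send `path` over `device` with C-Kermit. The pySerial handle for the
    same device must be closed before this runs, or the port will be busy."""
    cmd = ["kermit", "-l", device, "-b", str(baud), "-s", path]
    if dry_run:                  # return the command instead of executing it
        return cmd
    return subprocess.call(cmd)  # blocks until the transfer finishes
```

After the call returns you can reopen the pySerial connection and resume normal command traffic.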
Declare which signals are subscribed to on DBus? | 2,389,202 | 4 | 3 | 580 | 0 | python,dbus | D-Bus clients call AddMatch on the bus daemon to register their interest in messages matching a particular pattern; most bindings add a match rule either for all signals on a particular service and object path, or for signals on a particular interface on that service and object path, when you create a proxy object.
Using dbus-monitor you can see match rules being added: try running dbus-monitor member=AddMatch and then running an application that uses D-Bus. Similarly, you can eavesdrop calls to RemoveMatch. However, there's currently no way to ask the daemon for the set of match rules currently in effect. Adding a way to ask that question would make more sense than adding a way for clients to re-advertise this, given that the daemon knows already. | 0 | 1 | 0 | 0 | 2010-02-10T21:42:00.000 | 2 | 0.379949 | false | 2,240,562 | 0 | 0 | 0 | 2 | Is there a way to declare which signals are subscribed by a Python application over DBus?
In other words, is there a way to advertise through the "Introspectable" interface which signals are subscribed to? I use the D-Feet D-Bus debugger.
E.g. Application subscribes to signal X (using the add_signal_receiver method on a bus object). |
Declare which signals are subscribed to on DBus? | 2,364,175 | 1 | 3 | 580 | 0 | python,dbus | This is probably not possible since a signal is emitted on the bus and the application just picks out what is interesting. Subscribing is not happening inside dbus. | 0 | 1 | 0 | 0 | 2010-02-10T21:42:00.000 | 2 | 1.2 | true | 2,240,562 | 0 | 0 | 0 | 2 | Is there a way to declare which signals are subscribed by a Python application over DBus?
In other words, is there a way to advertise through the "Introspectable" interface which signals are subscribed to? I use the D-Feet D-Bus debugger.
E.g. Application subscribes to signal X (using the add_signal_receiver method on a bus object). |
Location to put user configuration files in windows | 2,243,910 | 12 | 13 | 7,044 | 0 | python,windows,configuration,configuration-files | %APPDATA% is the right place for these (probably in a subdirectory for your library). Unfortunately a fair number of *nix apps ported to Windows don't respect that and I end up with .gem, .ssh, .VirtualBox, etc., folders cluttering up my home directory and not hidden by default as on *nix.
You can make it easy even for users that don't know much about the layout of the Windows directory structure by having a menu item (or similar) that opens the configuration file in an editor for them.
If possible, do provide a GUI front-end to the file, though, even if it's quite a simple one. Windows users will expect a Tools | Options menu item that brings up a dialog box allowing them to set options and will be non-plussed by not having one. | 0 | 1 | 0 | 0 | 2010-02-11T10:48:00.000 | 3 | 1.2 | true | 2,243,895 | 1 | 0 | 0 | 2 | I'm writing a python library that has a per-user configuration file that can be edited by the user of the library. The library also generates logging files.
On *nix, the standard seems to be to dump them in $HOME/.library_name.
However, I am not sure what to do with Windows users. I've used windows for years before switching to Linux and it seems that applications tended to either A) rely on GUI configuration (which I'd rather not develop) or B) dump configuration data in the registry (which is annoying to develop and not portable with the *nix config files)
I currently am dumping the files into the $HOME/.library_name on windows as well, but this feels very unnatural on Windows.
I've considered placing it into %APPDATA%, where application data tends to live, but this has its own problems though. My biggest concern is that lay users might not even know where that directory is (unlike %HOME/~), and user-editable configuration files don't seem to go here normally.
What is the standard location for per-user editable config files on windows? |
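The two conventions discussed in this thread can live behind one small helper: %APPDATA%\YourLib on Windows, ~/.your_lib elsewhere. A sketch (the directory naming is just the convention from the question, and the app name is a placeholder):

```python
import os
import sys

def user_config_dir(app_name):
    """Per-user config location: %APPDATA%\\<app_name> on Windows,
    ~/.<app_name> on *nix, matching the conventions discussed above."""
    if sys.platform.startswith("win") and "APPDATA" in os.environ:
        return os.path.join(os.environ["APPDATA"], app_name)
    return os.path.join(os.path.expanduser("~"), "." + app_name)
```

Pairing this with a menu item that opens the returned path in an editor (as the answer suggests) spares lay users from ever needing to know where %APPDATA% is.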
Location to put user configuration files in windows | 2,243,919 | 1 | 13 | 7,044 | 0 | python,windows,configuration,configuration-files | On Windows the user is not expected to configure an application using editable config files, so there is no standard.
The standard for configuration which is editable using a GUI is the registry.
If you're using QT (or PyQT?) then you can use QSettings which provide an abstraction layer. On Linux it uses a config file and on windows is writes to the registry. | 0 | 1 | 0 | 0 | 2010-02-11T10:48:00.000 | 3 | 0.066568 | false | 2,243,895 | 1 | 0 | 0 | 2 | I'm writing a python library that has a per-user configuration file that can be edited by the user of the library. The library also generates logging files.
On *nix, the standard seems to be to dump them in $HOME/.library_name.
However, I am not sure what to do with Windows users. I've used windows for years before switching to Linux and it seems that applications tended to either A) rely on GUI configuration (which I'd rather not develop) or B) dump configuration data in the registry (which is annoying to develop and not portable with the *nix config files)
I currently am dumping the files into the $HOME/.library_name on windows as well, but this feels very unnatural on Windows.
I've considered placing it into %APPDATA%, where application data tends to live, but this has its own problems though. My biggest concern is that lay users might not even know where that directory is (unlike %HOME/~), and user-editable configuration files don't seem to go here normally.
What is the standard location for per-user editable config files on windows? |
Python Fabric: How to answer to keyboard input? | 2,246,509 | 1 | 24 | 13,991 | 0 | python,automation,fabric | Both methods are valid and work.
I chose the first one because I didn't want any interaction with my deployment system.
So here is the solution I used:
% yes | ./manage.py rebuild_index
WARNING: This will irreparably remove EVERYTHING from your search index.
Your choices after this are to restore from backups or rebuild via the rebuild_index command.
Are you sure you wish to continue? [y/N]
Removing all documents from your index because you said so.
All documents removed.
Indexing 27 Items. | 0 | 1 | 0 | 0 | 2010-02-11T17:26:00.000 | 6 | 0.033321 | false | 2,246,256 | 0 | 0 | 0 | 1 | I would like to automate the response for some question prompted by some programs, like mysql prompting for a password, or apt asking for a 'yes' or ... when I want to rebuild my haystack index with a ./manage.py rebuild_index.
For MySQL, I can use the --password= switch, and I'm sure that apt has a 'quiet'-like option. But how can I pass the response to other programs?
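The same effect as `yes |` is available from inside Python by feeding the prompting command's stdin with subprocess, which is handy when a Fabric task can't easily prepend a shell pipe. A sketch, where the child command is a toy prompt standing in for `./manage.py rebuild_index`:

```python
import subprocess
import sys

def run_with_answers(cmd, answer="y\n", repeat=50):
    """Pipe a canned reply into cmd's stdin, like `yes | cmd` but bounded."""
    result = subprocess.run(cmd, input=(answer * repeat).encode(),
                            stdout=subprocess.PIPE)
    return result.stdout

# A toy prompting program standing in for the real management command:
PROMPTER = [sys.executable, "-c",
            "a = input('Are you sure? [y/N] '); "
            "print('rebuilt' if a == 'y' else 'aborted')"]
```

The bounded repeat avoids the unbounded stream `yes` produces, so a child that never reads stdin can't cause a runaway write.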
How to improve the throughput of request_logs on Google App Engine | 2,249,540 | 1 | 1 | 499 | 0 | python,google-app-engine,logging | You can increase the per-request batch size of logs. In the latest SDK (1.3.1), check out google_appengine/google/appengine/tools/appcfg.py around like 861 (RequestLogLines method of LogsRequester class). You can modify the "limit" parameter.
I am using 1000 and it works pretty well. | 0 | 1 | 0 | 0 | 2010-02-12T03:40:00.000 | 1 | 1.2 | true | 2,249,530 | 0 | 0 | 1 | 1 | Downloading logs from App Engine is nontrivial. Requests are batched; appcfg.py does not use normal file IO but rather a temporary file (in reverse chronological order) which it ultimately appends to the local log file; when appending, the need to find the "sentinel" makes log rotation difficult since one must leave enough old logs for appcfg.py to remember where it left off. Finally, Google deletes old logs after some time (20 minutes for the app I use).
As an app scales, and the log generation rate grows, how can one increase the speed of fetching the logs so that appcfg.py does not fall behind? |
Running script on server start in google app engine, in Python | 2,253,428 | 4 | 5 | 2,804 | 0 | python,google-app-engine | I use appengine python with the django helper. As far as I know you cannot hook anything on the deploy, but you could put a call to check if you need to do your setup in the main function of main.py. This is how the helper initializes itself on the first request. I haven't looked at webapp in a while, but I assume main.py acts in a similar fashion for that framework.
Be aware that main is run on the first request, not when you first deploy. It will also happen if appengine starts up a new instance to handle load, or if all instances were stopped because of inactivity. So make sure you check to see if you need to do your initialization and then only do it if needed. | 0 | 1 | 0 | 0 | 2010-02-12T15:03:00.000 | 4 | 1.2 | true | 2,252,672 | 0 | 0 | 1 | 3 | Is it possible to run a script each time the dev server starts? Also at each deploy to google?
I want the application to fill the database based on what some methods returns.
Is there any way to do this?
..fredrik |
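There's no deploy hook, but the first-request pattern the answer describes is just a module-level guard around your setup code; call it at the top of main() (or wherever requests enter) and it costs nothing after the first hit. A minimal sketch of the guard itself (the setup callable is a placeholder for your database-filling code):

```python
_initialized = False

def ensure_initialized(setup):
    """Run setup() exactly once per process instance.
    On App Engine, a fresh instance (or one restarted after idling)
    triggers it again, so setup must check whether work is still needed."""
    global _initialized
    if not _initialized:
        setup()
        _initialized = True
```

Because new instances re-run it, the setup function should be idempotent: for example, query whether the seed data already exists before writing it.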
Running script on server start in google app engine, in Python | 2,252,697 | 2 | 5 | 2,804 | 0 | python,google-app-engine | You can do this by writing a script in your favorite scripting language that performs the actions that you desire and then runs the dev server or runs appcfg.py update. | 0 | 1 | 0 | 0 | 2010-02-12T15:03:00.000 | 4 | 0.099668 | false | 2,252,672 | 0 | 0 | 1 | 3 | Is it possible to run a script each time the dev server starts? Also at each deploy to google?
I want the application to fill the database based on what some methods returns.
Is there any way to do this?
..fredrik |
Running script on server start in google app engine, in Python | 2,259,561 | 1 | 5 | 2,804 | 0 | python,google-app-engine | Try to make wrapper around the server runner and script that run deployment. So you will be able to run custom code when you need. | 0 | 1 | 0 | 0 | 2010-02-12T15:03:00.000 | 4 | 0.049958 | false | 2,252,672 | 0 | 0 | 1 | 3 | Is it possible to run a script each time the dev server starts? Also at each deploy to google?
I want the application to fill the database based on what some methods returns.
Is there any way to do this?
..fredrik |
Sharing scripts that require a virtualenv to be activated | 2,254,286 | -1 | 40 | 12,524 | 0 | python,virtualenv | If it's only on one server, then flexibility is irrelevant. Modify the shebang. If you're worried about that, make a packaged, installed copy on the dev server that doesn't use the virtualenv. Once it's out of develepment, whether that's for local users or users in guatemala, virtualenv is no longer the right tool. | 0 | 1 | 0 | 0 | 2010-02-12T17:22:00.000 | 3 | 1.2 | true | 2,253,712 | 1 | 0 | 0 | 2 | I have virtualenv and virtualenvwrapper installed on a shared Linux server with default settings (virtualenvs are in ~/.virtualenvs). I have several Python scripts that can only be run when the correct virtualenv is activated.
Now I want to share those scripts with other users on the server, but without requiring them to know anything about virtualenv... so they can run python scriptname or ./scriptname and the script will run with the libraries available in my virtualenv.
What's the cleanest way to do this? I've toyed with a few options (like changing the shebang line to point at the virtualenv provided interpreter), but they seem quite inflexible. Any suggestions?
Edit: This is a development server where several other people have accounts. However, none of them are Python programmers (I'm currently trying to convert them). I just want to make it easy for them to run these scripts and possibly inspect their logic, without exposing non-Pythonistas to environment details. Thanks. |
Sharing scripts that require a virtualenv to be activated | 2,253,847 | 6 | 40 | 12,524 | 0 | python,virtualenv | I would vote for adding a shebang line in scriptname pointing to the correct virtualenv python. You just tell your users the full path to scriptname (or put it in their PATH), and they don't even need to know it is a Python script.
If your users are programmers, then I don't see why you wouldn't want them to know/learn about virtualenv. | 0 | 1 | 0 | 0 | 2010-02-12T17:22:00.000 | 3 | 1 | false | 2,253,712 | 1 | 0 | 0 | 2 | I have virtualenv and virtualenvwrapper installed on a shared Linux server with default settings (virtualenvs are in ~/.virtualenvs). I have several Python scripts that can only be run when the correct virtualenv is activated.
Now I want to share those scripts with other users on the server, but without requiring them to know anything about virtualenv... so they can run python scriptname or ./scriptname and the script will run with the libraries available in my virtualenv.
What's the cleanest way to do this? I've toyed with a few options (like changing the shebang line to point at the virtualenv provided interpreter), but they seem quite inflexible. Any suggestions?
Edit: This is a development server where several other people have accounts. However, none of them are Python programmers (I'm currently trying to convert them). I just want to make it easy for them to run these scripts and possibly inspect their logic, without exposing non-Pythonistas to environment details. Thanks. |
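Mechanically, the shebang option amounts to making line one of each script point at the virtualenv's interpreter, after which ./scriptname "just works" for users who know nothing about virtualenv. A helper that performs that rewrite (the virtualenv path in the test is illustrative):

```python
import os

def point_at_virtualenv(script_path, venv_root):
    """Replace (or add) the shebang so the script runs under venv_root's python."""
    shebang = "#!" + os.path.join(venv_root, "bin", "python") + "\n"
    with open(script_path) as f:
        lines = f.readlines()
    if lines and lines[0].startswith("#!"):
        lines[0] = shebang          # swap out an existing interpreter line
    else:
        lines.insert(0, shebang)    # or add one if the script had none
    with open(script_path, "w") as f:
        f.writelines(lines)
```

Remember to keep the script executable (chmod +x) so colleagues can run it directly without invoking python themselves.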
changing the process name of a python script | 18,992,161 | 8 | 26 | 26,676 | 0 | python,linux | the procname library didn't work for me on ubuntu. I went with setproctitle instead (pip install setproctitle). This is what gunicorn uses and it worked for me. | 0 | 1 | 0 | 0 | 2010-02-12T22:08:00.000 | 4 | 1 | false | 2,255,444 | 0 | 0 | 0 | 1 | Is there a way to change the name of a process running a python script on Linux?
When I do a ps, all I get are "python" process names. |
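If pulling in the setproctitle package isn't an option, the Linux-only core of what it does can be reached through ctypes and prctl(PR_SET_NAME), which renames what appears in /proc/<pid>/comm (and hence in ps/top output), truncated to 15 characters. A best-effort sketch; the name must be bytes, and anything off Linux simply reports failure:

```python
import ctypes
import sys

def set_process_name(name):
    """Best-effort rename via prctl(PR_SET_NAME); returns True on success.
    Only the comm name changes, not /proc/<pid>/cmdline."""
    if not sys.platform.startswith("linux"):
        return False
    try:
        libc = ctypes.CDLL("libc.so.6", use_errno=True)
    except OSError:            # e.g. musl-based systems lack libc.so.6
        return False
    PR_SET_NAME = 15           # from <linux/prctl.h>
    return libc.prctl(PR_SET_NAME, name, 0, 0, 0) == 0
```

Note that prctl renames the calling thread, so call it early in the main thread; setproctitle goes further and rewrites argv so the full command line changes too.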
How do I interact with MATLAB from Python? | 2,257,221 | 1 | 24 | 23,758 | 0 | python,matlab,ctypes | Regarding OS compatibility: if you use the MATLAB version for Linux, scripts written on Windows should work without any changes.
If possible, you may also consider the possibility of doing everything with python. Scipy/numpy with Matplotlib provide a complete Matlab replacement. | 0 | 1 | 0 | 1 | 2010-02-13T00:12:00.000 | 4 | 0.049958 | false | 2,255,942 | 0 | 0 | 0 | 1 | A friend asked me about creating a small web interface that accepts some inputs, sends them to MATLAB for number crunching and outputs the results. I'm a Python/Django developer by trade, so I can handle the web interface, but I am clueless when it comes to MATLAB. Specifically:
I'd really like to avoid hosting this on a Windows server. Any issues getting MATLAB running in Linux with scripts created on Windows?
Should I be looking into shelling out commands or compiling it to C and using ctypes to interact with it?
If compiling is the way to go, is there anything I should know about getting it compiled and working in Python? (It's been a long time since I've compiled or worked with C)
Any suggestions, tips, or tricks on how to pull this off? |
Recommended Django Deployment | 2,257,323 | 4 | 9 | 3,882 | 0 | python,linux,django,deployment,webserver | Update your question to remove the choices that don't work. If it has Python 2.4, and an installation is a headache, just take it off the list, and update the question to list the real candidates. Only list the ones that actually fit your requirements. (You don't say what your requirements are, but minimal upgrades appears to be important.)
Toss a coin.
When choosing between two platforms which meet your requirements (which you haven't identified) tossing a coin is the absolute best way to choose.
If you're not sure if something matches your requirements, it's often good to enumerate what you value. So far, the only thing in the question that you seem to value is "no installations". Beyond that, I can only guess at what requirements you actually have.
Once you've identified the set of features you're looking for, feel free to toss a coin.
Note that Linux distributions all have more-or-less the same open-source code base. Choosing among them is a preference for packaging, support and selection of pre-integrated elements of the existing Linux code base. Just toss a coin.
Choosing among web front-ends is entirely a question of what features you require. Find all the web front-ends that meet your requirements and toss a coin to choose among them.
None of these are "lock-in" decisions. If you don't like the linux distro you chose initially, you can simply chose another. They all have the same basic suite of apps and the same API's. The choice is merely a matter of preference.
Don't like the web server you chose? At the end of the mod_wsgi pipe, they all appear the same to your Django app (plus or minus a few config changes). Don't like lighttpd? Switch to nginx or Apache -- your Django app doesn't change. So there's no lock-in and no negative consequences to making a sub-optimal choice.
When there's no down-side risk, just toss a coin. | 0 | 1 | 0 | 0 | 2010-02-13T08:47:00.000 | 4 | 0.197375 | false | 2,256,987 | 0 | 0 | 1 | 3 | Short version: How do you deploy your Django servers? What application server, front-end (if any, and by front-end I mean reverse proxy), and OS do you run it on? Any input would be greatly appreciated, I'm quite a novice when it comes to Python and even more as a server administrator.
Long version:
I'm migrating between server hosts, so much for weekends... it's not all bad, though. I have the opportunity to move to a different, possibly better "deployment" of Django.
Currently I'm using Django through Tornado's WSGI interface with an nginx front-end on Debian Lenny. I'm looking to move into the Rackspace Cloud so I've been given quite a few choices when it comes to OS:
Debian 5.0 (Lenny)
FC 11 or 12
Ubuntu 9.10 or 8.04 (LTS)
CentOS 5.4
Gentoo 10.1
Arch Linux 2009.02
What I've gathered is this:
Linux Distributions
Debian and CentOS are very slow to release non-bugfix updates of software, since they focus mainly on stability. Is this good or bad? I can see stability being a good thing, but the fact that I can't get Python 2.6 without quite a headache of replacing Python 2.4 is kind of a turn-off--and if I do, then I'm stuck when it comes to ever hoping to use apt/yum to install a Python library (it'll try to reinstall Python 2.4).
Ubuntu and Fedora seem very... ready to go. Almost too ready to go; it's like everything is already done. I like to tinker with things and I prefer to know what's installed and how it's configured versus hitting the ground running with a "cookie-cutter" setup (no offense intended, it's just the best way to describe what I'm trying to say). I've been playing around with Fedora and I was pleasantly surprised to find that pycurl, simplejson and a bunch of other libraries were already installed; that raised the question, though: what else is installed? I run a tight ship on a very small VPS, I prefer to run only what I need.
Then there's Gentoo... I've managed to install Gentoo on my desktop (took a week, almost) and ended up throwing it out after quite a few events where I wanted to do something and had to spend 45 minutes recompiling software with new USE flags so I can parse PNG's through PIL. I've wondered though, is Gentoo good for something "static" like a server? I know exactly what I'm going to be doing on my server, so USE flags will change next to never. It optimizes compiles to fit the needs of what you tell it to, and nothing more--something I could appreciate running on minimal RAM and HDD space. I've heard, though, that Gentoo has a tendency to break when you attempt to update the software on it... that more than anything else has kept me away from it for now.
I don't know anything about Arch Linux. Any opinions on this distro would be appreciated.
Web Server
I've been using Tornado and I can safely say it's been the biggest hassle to get running. I had to write my own script to prefork it since, at the time I set up this server, I was probably around 10% of Tornado's user-base (not counting FriendFeed). I then have to set up another "watchdog" program to make sure those forks don't misbehave. The good part is, though, it uses around 40MB of RAM to run all 7 of my Django powered sites; I liked that, I liked that a lot.
I've been using nginx as a front-end to Tornado, I could run nginx right in front of Django FastCGI workers, but those don't have the reliability of Tornado when you crank up the concurrency level. This isn't really an option for me, but I figured I might as well list it.
There's also Apache, which Django recommends you use through mod_wsgi. I personally don't like Apache that much, I understand it's very, very, very mature and what not, but it just seems so... fat, compared to nginx and lighttpd. Apache/mod_python isn't even an option, as I have very limited RAM.
Segue to Lighttpd! Not much to say here, I've never used it. I've heard you can run it in front of Apache/mod_wsgi or run it in front of Django FastCGI workers, also. I've heard it has minor memory leaking issues, I'm sure that could be solved with a cron job, though.
What I'm looking for is what you have seen as the "best" deployment of Django for your needs. Any input or clarifications of what I've said above would be more than welcome. |
Recommended Django Deployment | 2,257,450 | 3 | 9 | 3,882 | 0 | python,linux,django,deployment,webserver | At the place I rent a server, they have shaved down the Ubuntu images to the bare minimum. Presumably because they had to make a special image anyway with just the right drivers and such in it, but I don't know exactly.
They have even removed wget and nano. So you get all the apt-get goodness and not a whole lot of "cookie-cutter" OS.
Just saying this because I would imagine that this is the way it is done almost everywhere and therefore playing around with a normal Ubuntu-server install will not provide you with the right information to make your decision.
Other than that, I agree with the others that it is not much of a lock-in, so you could just try something.
On the web-server side I would suggest taking a look at Cherokee, if you have not done so already.
It might not be your cup of joe, but there is no harm in trying it.
I prefer the easy setup of both Ubuntu and Cherokee. Although I play around with a lot of things for fun, I prefer these for my business. I have other things to do than manage servers, so any solution that helps me do it faster, is just good. If these projects are mostly for fun then this will most likely not apply since you won't get a whole lot of experience from these easy-setup-with-nice-gui-and-very-helpfull-wizards | 0 | 1 | 0 | 0 | 2010-02-13T08:47:00.000 | 4 | 0.148885 | false | 2,256,987 | 0 | 0 | 1 | 3 | Short version: How do you deploy your Django servers? What application server, front-end (if any, and by front-end I mean reverse proxy), and OS do you run it on? Any input would be greatly appreciated, I'm quite a novice when it comes to Python and even more as a server administrator.
Long version:
I'm migrating between server hosts, so much for weekends... it's not all bad, though. I have the opportunity to move to a different, possibly better "deployment" of Django.
Recommended Django Deployment | 2,259,882 | 0 | 9 | 3,882 | 0 | python,linux,django,deployment,webserver | Personally I find one of the BSD systems far superior to Linux distros for server related tasks. Give OpenBSD or perhaps FreeBSD a chance. Once you do you'll never go back. | 0 | 1 | 0 | 0 | 2010-02-13T08:47:00.000 | 4 | 0 | false | 2,256,987 | 0 | 0 | 1 | 3 | Short version: How do you deploy your Django servers? What application server, front-end (if any, and by front-end I mean reverse proxy), and OS do you run it on? Any input would be greatly appreciated, I'm quite a novice when it comes to Python and even more as a server administrator.
How do I do os.getpid() in C++? | 2,257,431 | 0 | 0 | 862 | 0 | c++,python | You cannot easily retrieve the Python interpreter's PID from your C++ program.
Either assign the named pipe a constant name, or if you really need multiple pipes of the same Python program, create a temporary file to which the Python programs write their PIDs (use file locking!) - then you can read the PIDs from the C++ program. | 0 | 1 | 0 | 1 | 2010-02-13T12:09:00.000 | 4 | 0 | false | 2,257,415 | 0 | 0 | 0 | 4 | newb here. I am trying to make a c++ program that will read from a named pipe created by python. My problem is, the named pipe created by python uses os.getpid() as part of the pipe name. when i try calling the pipe from c++, i use getpid(). i am not getting the same value from c++. is there a method equivalent in c++ for os.getpid?
thanks!
edit:
sorry, i am actually using os.getpid() to get the session id via ProcessIdToSessionId(). i then use the session id as part of the pipe name
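A minimal sketch of the PID-registry idea from the first answer (the file name and line format here are assumptions, not an established convention, and real code should add the file locking that answer calls for):

```python
import os
import tempfile

# Hypothetical shared registry; both the Python and C++ sides must agree on it.
REGISTRY = os.path.join(tempfile.gettempdir(), "pipe_registry_demo.txt")

def register_pipe(pipe_name):
    """Record this process's PID and its pipe name for the C++ reader."""
    with open(REGISTRY, "a") as f:
        f.write("%d %s\n" % (os.getpid(), pipe_name))

def lookup_pipes():
    """Read the registry back into a {pid: pipe_name} mapping."""
    pipes = {}
    with open(REGISTRY) as f:
        for line in f:
            pid, name = line.split(None, 1)
            pipes[int(pid)] = name.strip()
    return pipes
```

The C++ side would then parse the same file and pick the entry for the Python process it cares about, instead of calling getpid() on itself.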
How do I do os.getpid() in C++? | 2,257,426 | 2 | 0 | 862 | 0 | c++,python | You won't get the same value if you're running as a separate process, as each process has its own process ID. Find some other way to identify the pipe. | 0 | 1 | 0 | 1 | 2010-02-13T12:09:00.000 | 4 | 0.099668 | false | 2,257,415 | 0 | 0 | 0 | 4 | newb here. I am trying to make a c++ program that will read from a named pipe created by python. My problem is, the named pipe created by python uses os.getpid() as part of the pipe name. when i try calling the pipe from c++, i use getpid(). i am not getting the same value from c++. is there a method equivalent in c++ for os.getpid?
How do I do os.getpid() in C++? | 2,257,422 | 4 | 0 | 862 | 0 | c++,python | You don't get the same process IDs because your python program and c++ programs run in different processes, thus having different process IDs. So generally use a different logic to name your fifo files. | 0 | 1 | 0 | 1 | 2010-02-13T12:09:00.000 | 4 | 0.197375 | false | 2,257,415 | 0 | 0 | 0 | 4 | newb here. I am trying to make a c++ program that will read from a named pipe created by python. My problem is, the named pipe created by python uses os.getpid() as part of the pipe name. when i try calling the pipe from c++, i use getpid(). i am not getting the same value from c++. is there a method equivalent in c++ for os.getpid?
How do I do os.getpid() in C++? | 2,257,420 | 0 | 0 | 862 | 0 | c++,python | The standard library does not give you anything other than files. You will need to use some other OS specific API. | 0 | 1 | 0 | 1 | 2010-02-13T12:09:00.000 | 4 | 0 | false | 2,257,415 | 0 | 0 | 0 | 4 | newb here. I am trying to make a c++ program that will read from a named pipe created by python. My problem is, the named pipe created by python uses os.getpid() as part of the pipe name. when i try calling the pipe from c++, i use getpid(). i am not getting the same value from c++. is there a method equivalent in c++ for os.getpid?
How can I detect what other copy of Python script is already running | 2,262,290 | 3 | 2 | 648 | 0 | python | You could use a D-Bus service. Your script would start a new service if none is found running in the current session, and otherwise send a D-Bus message to the running instance (that can send "anything", including strings, lists, dicts).
The GTK-based library libunique (missing Python bindings?) uses this approach in its implementation of "unique" applications. | 0 | 1 | 0 | 0 | 2010-02-14T17:30:00.000 | 4 | 0.148885 | false | 2,261,997 | 0 | 0 | 0 | 1 | I have a script. It uses GTK. And I need to know if another copy of the script starts. If one does, the window will extend.
Please, tell me the way I can detect it. |
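The answer points at D-Bus; as a dependency-free sketch of the same "is another copy already running?" check, the script can try to bind a fixed local port (the port number here is an arbitrary assumption) — only the first copy succeeds, and a second copy could then connect to that port to ask the first one to extend its window:

```python
import socket

def claim_single_instance(port=47291):
    """Return a listening socket if we are the first copy, else None.

    Keep the returned socket open for the script's whole lifetime;
    closing it releases the "lock" and lets another copy claim it.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", port))
        sock.listen(1)
        return sock
    except OSError:  # port already taken: another copy is running
        sock.close()
        return None
```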
What mutex/locking/waiting mechanism to use when writing a Chat application with Tornado Web Framework | 2,795,094 | 0 | 0 | 791 | 0 | python,asynchronous,tornado | Tornado has a "chat" example which uses long polling. It contains everything you need (or actually, probably more than you need since it includes a 3rd-party login) | 0 | 1 | 0 | 0 | 2010-02-14T17:39:00.000 | 2 | 0 | false | 2,262,039 | 0 | 0 | 0 | 1 | We're implementing a Chat server using Tornado.
The premise is simple: a user opens an HTTP Ajax connection to the Tornado server, and the Tornado server answers only when a new message appears in the chat room. Whenever the connection closes, regardless of whether a new message came in or an error/timeout occurred, the client reopens the connection.
Looking at Tornado, the question arises of what library can we use to allow us to have these calls wait on some central object that would signal them - A_NEW_MESSAGE_HAS_ARRIVED_ITS_TIME_TO_SEND_BACK_SOME_DATA.
To describe this in Win32 terms, each async call would be represented as a thread that would be hanging on a WaitForSingleObject(...) on some central Mutex/Event/etc.
We will be operating in a standard Python environment (Tornado). Is there something built-in we can use, do we need an external library/server, or is there something Tornado recommends?
Thanks |
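Tornado's chat demo is callback-based rather than thread-based (waiters register a callback that the next message invokes), but the WaitForSingleObject analogy from the question maps directly onto a threading.Condition sketch like this:

```python
import threading

class MessageHub:
    """Central signalling object: waiters block until new messages arrive."""

    def __init__(self):
        self._cond = threading.Condition()
        self._messages = []

    def post(self, message):
        """Store a new chat message and wake every blocked waiter."""
        with self._cond:
            self._messages.append(message)
            self._cond.notify_all()

    def wait_for_new(self, seen, timeout=None):
        """Block until more than `seen` messages exist; return the new ones."""
        with self._cond:
            self._cond.wait_for(lambda: len(self._messages) > seen,
                                timeout=timeout)
            return self._messages[seen:]
```

Each long-poll request would call wait_for_new() with the count of messages its client has already seen, so a request that arrives after new messages were posted is answered immediately.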
How to read from stdin or from a file if no data is piped in Python? | 2,265,010 | 3 | 20 | 10,274 | 0 | python,pipe,stdin,command-line-interface | There is no reliable way to detect if sys.stdin is connected to anything, nor is it appropriate do so (e.g., the user wants to paste the data in). Detect the presence of a filename as an argument, and use stdin if none is found. | 0 | 1 | 0 | 0 | 2010-02-15T09:42:00.000 | 6 | 0.099668 | false | 2,264,991 | 0 | 0 | 0 | 1 | I have a CLI script and want it to read data from a file. It should be able to read it in two ways :
cat data.txt | ./my_script.py
./my_script.py data.txt
—a bit like grep, for example.
What I know:
sys.argv and optparse let me read any args and options easily.
sys.stdin let me read data piped in
fileinput makes the full process automatic
Unfortunately:
using fileinput uses stdin and any args as input. So I can't use options that are not filenames as it tries to open them.
sys.stdin.readlines() works fine, but if I don't pipe any data, it hangs until I enter Ctrl + D
I don't know how to implement "if nothing in stdin, read from a file in args" because stdin is always True in a boolean context.
I'd like a portable way to do this if possible. |
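The accepted advice — treat the leftover arguments as filenames and fall back to stdin only when there are none — can be sketched like this (option parsing is assumed to have already removed any non-filename arguments):

```python
import sys

def read_lines(paths):
    """Read lines from the named files, or from stdin when no names given."""
    if not paths:
        return sys.stdin.readlines()
    lines = []
    for path in paths:
        with open(path) as f:
            lines.extend(f.readlines())
    return lines

if __name__ == "__main__":
    # grep-style: `cat data.txt | ./my_script.py` or `./my_script.py data.txt`
    for line in read_lines(sys.argv[1:]):
        sys.stdout.write(line)
```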
Path of current Python instance? | 2,276,543 | 0 | 1 | 590 | 0 | python,windows,installation,path,python-3.x | Hmm, find the Lib dir from sys.path and extrapolate from there? | 0 | 1 | 0 | 1 | 2010-02-16T21:31:00.000 | 2 | 0 | false | 2,276,512 | 1 | 0 | 0 | 1 | I need to access the Scripts and tcl sub-directories of the currently executing Python instance's installation directory on Windows.
What is the best way to locate these directories? |
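A more direct route than inferring the location from sys.path is sys.prefix, which names the installation directory of the running interpreter; a sketch (the Scripts and tcl subdirectories exist on a standard Windows install, not necessarily elsewhere):

```python
import os
import sys

def install_subdirs():
    """Paths to Scripts/ and tcl/ under the running interpreter's prefix."""
    return (os.path.join(sys.prefix, "Scripts"),
            os.path.join(sys.prefix, "tcl"))
```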
How can I keep a Python script command window open when opened from a .Net application? | 7,886,740 | 0 | 1 | 797 | 0 | c#,python,command-line | I had a similar issue, I solved it by adding raw_input() at the end of the script.
Even though the value is never used, the script will hold before quitting, since it waits for input from the command line.
Sadly this doesn't help if the script errors out before it reaches the raw_input() line, but you'll see how far along it gets. | 0 | 1 | 0 | 0 | 2010-02-17T00:45:00.000 | 2 | 0 | false | 2,277,571 | 1 | 0 | 0 | 1 | Now the obvious answer is to just open the script from a command line, but that isn't an option. I'm writing a simple application to syntax highlight Python and then run the scripts from the same program. A Python IDE if you will. The scripts I want to run are entirely command line programs.
I'm using a System.Diagnostics.Process object. I can either use it to run a command line prompt or to run the Python script, but the command line window will close as soon as the script errors or finishes. I need to keep it open. The ideal solution would be to open the command line and run the Python script from the command line, but I need to do it in one click from inside a .Net application.
Any ideas? Is there a better way to keep the console open? |
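The answer's caveat — a trailing raw_input() never runs if the script dies first — can be worked around on the Python side with a try/finally wrapper; a sketch (input() stands in for Python 2's raw_input()):

```python
import traceback

def run_and_hold(main):
    """Run main(); print any traceback; always wait for Enter before exit.

    The finally clause keeps the console window open even when the
    wrapped code raises, which a bare input() at the bottom cannot do.
    """
    try:
        main()
    except Exception:
        traceback.print_exc()
    finally:
        input("Press Enter to close...")
```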
Question regarding UDP communication in twisted framework | 2,278,671 | -1 | 1 | 435 | 0 | python,udp,twisted | Are you sure that it is not a receive problem?
There is no indication that your packets won't be fragmented en route to the destination. | 0 | 1 | 0 | 0 | 2010-02-17T06:06:00.000 | 2 | -0.099668 | false | 2,278,665 | 0 | 0 | 0 | 2 | I would like to find out if Twisted imposes a restriction on the maximum size of UDP packets. The allowable limit on Linux platforms is up to 64k (although I intend to send packets of about 10k bytes consisting of JPEG images), but I am not able to send more than approx. 2500 bytes.
Question regarding UDP communication in twisted framework | 2,280,167 | 1 | 1 | 435 | 0 | python,udp,twisted | It's very unlikely that Twisted is imposing any limit, but there's no reason some other part of the network wouldn't drop the packets if they're too large. It's very rare for people to send UDP packets of such a large size for precisely that sort of reason. Most game applications, for example, try to keep them below 1.5K these days, and below 512 bytes in the not-too-distant past. | 0 | 1 | 0 | 0 | 2010-02-17T06:06:00.000 | 2 | 0.099668 | false | 2,278,665 | 0 | 0 | 0 | 2 | I would like to find out if Twisted imposes a restriction on the maximum size of UDP packets. The allowable limit on Linux platforms is up to 64k (although I intend to send packets of about 10k bytes consisting of JPEG images), but I am not able to send more than approx. 2500 bytes.
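One way to check that the ~2500-byte ceiling is not inherent to UDP on the local stack (Twisted ultimately sits on the same BSD socket layer) is a plain-socket loopback test; a sketch:

```python
import socket

def loopback_udp_roundtrip(nbytes=10000):
    """Send one nbytes-sized datagram over loopback; return bytes received."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))            # let the OS pick a free port
    rx.settimeout(5)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"x" * nbytes, rx.getsockname())
    data, _addr = rx.recvfrom(65536)
    tx.close()
    rx.close()
    return len(data)
```

If 10k-byte datagrams survive this test but not the real deployment, the limit is being imposed somewhere on the network path (MTU/fragment filtering), not by Twisted.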
Bash alias to Python script -- is it possible? | 2,301,977 | 0 | 4 | 11,553 | 0 | python,bash,curl,scripting,multiplatform | I thought Pycurl might be the answer. Ahh Daniel Stenberg and his innocent presumptions that everybody knows what he does. I asked on the list whether or not pycurl had a "curl -o" analogue, and then asked 'If so: How would one go about coding it/them in a Python script?' His reply was the following:
"curl.setopt(pycurl.WRITEDATA, fp)
possibly combined with:
curl.setopt(pycurl.WRITEFUNCTION, callback) "
...along with Sourceforge links to two revisions of retriever.py. I can barely recall where easy_install put the one I've got; how am I supposed to compare them?
It's pretty apparent this gentleman never had a helpdesk or phone tech support job in the Western Hemisphere, where you have to assume the 'customer' just learned how to use their comb yesterday and be prepared to walk them through everything and anything. One-liners (or three-liners with abstruse links as chasers) don't do it for me.
BZT | 0 | 1 | 0 | 0 | 2010-02-17T10:11:00.000 | 2 | 0 | false | 2,279,749 | 0 | 0 | 0 | 1 | The particular alias I'm looking to "class up" into a Python script happens to be one that makes use of the cUrl -o (output to file) option. I suppose I could as easily turn it into a BASH function, but someone advised me that I could avoid the quirks and pitfalls of the different versions and "flavors" of BASH by taking my ideas and making them Python scripts.
Coincident with this idea is another notion I had to make a feature of legacy Mac OS (officially known as "OS 9" or "Classic") pertaining to downloads platform-independent: writing the URL to some part of the file visible from one's file navigator {Konqueror, Dolphin, Nautilus, Finder or Explorer}. I know that only a scant few file types support this kind of thing using some other command-line tools (exiv2, wrjpgcom, etc). Which is perfectly fine with me as I only use this alias to download single-page image files such as JPEGs anyways.
I reckon I might as well take full advantage of the power of Python by having the script pass the string which is the source URL of the download (entered by the user and used first by cUrl) to something like exiv2 which could write it to the Comment block, EXIF User Comment block, and (taking as a first and worst example) Windows XP's File Description field. Starting small is sometimes a good way to start.
Hope someone has advice or suggestions.
BZT |
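For reference, the mailing-list reply means handing pycurl an open file object, e.g. curl.setopt(pycurl.WRITEDATA, open(path, "wb")). Without pycurl installed, the standard library covers the `curl -o` part of the alias; a sketch (Python 3's urllib.request — the question's era would use urllib2):

```python
import urllib.request

def fetch(url, path):
    """Rough stand-in for `curl -o path url`: save the response body."""
    with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
        out.write(resp.read())
```

The saved path and the original URL could then be handed to exiv2 to write the source URL into the image's comment block, as the question proposes.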