Dataset columns, with the minimum and maximum values (or string lengths) observed in this sample:

- Title: string, length 15 to 150
- A_Id: int64, 2.98k to 72.4M
- Users Score: int64, -17 to 470
- Q_Score: int64, 0 to 5.69k
- ViewCount: int64, 18 to 4.06M
- Database and SQL: int64 flag, 0 to 1
- Tags: string, length 6 to 105
- Answer: string, length 11 to 6.38k
- GUI and Desktop Applications: int64 flag, 0 to 1
- System Administration and DevOps: int64 flag, 1 to 1
- Networking and APIs: int64 flag, 0 to 1
- Other: int64 flag, 0 to 1
- CreationDate: string, length 23
- AnswerCount: int64, 1 to 64
- Score: float64, -1 to 1.2
- is_accepted: bool, 2 classes
- Q_Id: int64, 1.85k to 44.1M
- Python Basics and Environment: int64 flag, 0 to 1
- Data Science and Machine Learning: int64 flag, 0 to 1
- Web Development: int64 flag, 0 to 1
- Available Count: int64, 1 to 17
- Question: string, length 41 to 29k
Title: Determine if Python is running inside virtualenv
Tags: python, virtualenv
Question (Q_Id 1,871,549, asked 2009-12-09, score 397, 237,037 views, 15 answers): Is it possible to determine if the current script is running inside a virtualenv environment?
Answer (A_Id 51,409,948, score 9, not accepted): You can run "which python" and see whether it points to the interpreter inside the virtual env.
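The shell-level check above can also be done from inside the script itself. A minimal sketch, assuming a modern interpreter: the older virtualenv tool sets sys.real_prefix, while the stdlib venv (Python 3.3+) makes sys.prefix differ from sys.base_prefix.

```python
import sys

def in_virtualenv():
    # Older virtualenv sets sys.real_prefix; the stdlib venv (3.3+)
    # instead makes sys.prefix differ from sys.base_prefix.
    return hasattr(sys, "real_prefix") or (
        getattr(sys, "base_prefix", sys.prefix) != sys.prefix
    )

print(in_virtualenv())
```

The function returns True inside a virtualenv/venv and False under the system interpreter.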
Title: Determine if Python is running inside virtualenv
Tags: python, virtualenv
Question: same as Q_Id 1,871,549 above.
Answer (A_Id 38,939,054, score 204, not accepted): Try using "pip -V" (note the capital V). If you are running inside a virtual env, it will show the path to the environment's location.
Title: A business Case for Enterprise Python
Tags: python, enterprise
Question (Q_Id 1,879,113, asked 2009-12-10, score 9, 2,483 views, 5 answers):
This will not be a "programming" question but more of a technology/platform question. I'm trying to figure out whether Python can be a suitable Java alternative for enterprise/web applications.
In which cases would you prefer Python over Java? How would a typical Python web application (databases/sessions/concurrency) perform compared to a typical Java application? How do specific Python frameworks square up against Java-based frameworks (Spring, SEAM, Grails, etc.)?
For businesses, is switching from a Java infrastructure to a Python infrastructure too hard/expensive/resource-intensive to be viable? Also, shed some light on the business case for providing a Python + Google App Engine based solution to the end customer. Will it be cost-effective in a typical scenario?
Sorry if I am asking too wide a question. I would have liked to keep it specific, but I need your help to evaluate Python as a whole from the perspectives of the programmer, the service-providing company, and the end business customer.
For an SME, a Python/Google App Engine technology stack is a clearly scalable and affordable platform. But what about a large MNC that already has a lot invested in Java?
Thank you so much. I am researching this myself and will gladly share my conclusions here!
- Srirangan
Answer (A_Id 1,879,157, score 4, not accepted):
The larger your investment in an existing technology, the more expensive it is to move away from it. COBOL is perhaps the best example here.
That investment isn't just in porting existing solutions, but also in retraining or hiring new staff so that you have the skill sets to build and support the new technologies while still maintaining your legacy solutions.
Add to that the fact that for most large multinational corporations, software isn't their core business. As long as it functions effectively and fulfills the business need, they don't tend to care much about the "details".
You need to be able to offer some pretty compelling benefits to overcome this kind of inertia. Sad but true.
Title: A business Case for Enterprise Python
Tags: python, enterprise
Question: same as Q_Id 1,879,113 above.
Answer (A_Id 1,880,411, score 0, not accepted):
There is -- almost -- no usable "business case" for any technology choice.
"What about a large MNC that already has a lot invested in Java?" Ask around. See if there's a business case for Java. I doubt you'll find anything; most companies drift into technology choices slowly.
There was no business case for COBOL -- it was the only game in town in the olden days. There is rarely a business case for Java, either. What usually happens is that some visionary individual starts building the first web site (probably in Perl). The "web thing" gains traction, and some visionary individual starts building web sites in Java. Eventually, the success of those small teams shows others that Java has advantages over COBOL.
Managers say the words "make a business case", but watch what they actually do. They listen to (1) their peers and (2) successful people.
To make the "business case" for Python, you have to be that visionary individual:
1) Use Python.
2) Be successful.
3) Share your successes.
4) Be prepared to explain that your success is due to your tools, not your personal level of genius and charisma.
Title: A business Case for Enterprise Python
Tags: python, enterprise
Question: same as Q_Id 1,879,113 above.
Answer (A_Id 2,506,203, score 1, not accepted):
The answer to your question is yes. Python can be well suited to the enterprise because it is a language with raw power, it is flexible, and it can be glued to other programming languages. What the enterprise really requires is a language that does everything, and I feel Python is already enterprise-ready. If you want examples, there can be no bigger one than Google, which runs Python internally and externally for its business-critical applications. The only problem is that Python is not well recognized by top MNCs, and we as Python programmers have a hard time convincing management teams; I guess you will face the same issue. But I guarantee that once you get your feet wet in Python, you will understand its true power.
Title: How to upgrade the version of Python used by Apache?
Tags: python, apache, upgrade, sys.path
Question (Q_Id 1,880,746, asked 2009-12-10, score 2, 2,824 views, 3 answers):
On a Red Hat box, I upgraded Python from 2.3 to 2.6.4 and changed the symlink so that typing python brings up the 2.6.4 interpreter.
However, my .py file works from the command line but not in the browser. It seemed like a sys.path issue, so I opened the file in a browser and printed out sys.path.
Surprisingly, sys.path is different when called from a browser than when called from the command line. Because the paths all refer to 2.3, I believe Apache is picking up Python 2.3 rather than the new 2.6.4 version I installed.
How do I make Apache use Python 2.6.4?
Answer (A_Id 1,881,774, score 0, not accepted): On a Red Hat box, Apache probably runs as the root user. Log in as root and see which version of Python root sees.
Title: How to upgrade the version of Python used by Apache?
Tags: python, apache, upgrade, sys.path
Question: same as Q_Id 1,880,746 above.
Answer (A_Id 1,882,006, score 1, not accepted): Apache isn't calling python directly, so the python symlink is irrelevant. You will probably want to build yourself a new mod_wsgi linked against Python 2.6.4.
Title: Executing server-side Unix scripts asynchronously
Tags: python, http, unix, asynchronous
Question (Q_Id 1,897,748, asked 2009-12-13, score 2, 257 views, 3 answers):
We have a collection of Unix scripts (and/or Python modules) that each perform a long-running task. I would like to provide a web interface for them that does the following:
- Asks for relevant data to pass into the scripts.
- Allows starting/stopping/killing them.
- Allows monitoring the progress and/or other information provided by the scripts.
- Possibly some kind of logging (although the scripts already do logging).
I do know how to write a server that does this (e.g., by using Python's built-in HTTP server and JSON), but doing it properly is non-trivial and I do not want to reinvent the wheel.
Are there any existing solutions that allow for maintaining asynchronous server-side tasks?
Answer (A_Id 1,897,759, score 1, not accepted): Django is great for writing web applications, and the subprocess module (subprocess.Popen and .communicate()) is great for executing shell scripts. You can give it stdin, stdout, and stderr streams for communication if you want.
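The subprocess.Popen/communicate() pattern mentioned above can be sketched as follows; the inline -c script is just a stand-in for a real long-running Unix script.

```python
import subprocess
import sys

# Launch a child process with pipes attached; the inline script stands
# in for a long-running shell script or Python module.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('step 1'); print('done')"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)

# communicate() sends optional input, waits for the child to exit,
# and returns (stdout, stderr) as strings.
out, err = proc.communicate()
print(out)
```

For interactive control (start/stop/kill), you would keep the Popen object around and use proc.terminate() or proc.kill() instead of waiting in communicate().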
Title: Emacs Python-Mode: Sending statements to a subprocess does not lead to REPL-style evaluation
Tags: python, emacs
Question (Q_Id 1,901,354, asked 2009-12-14, score 1, 289 views, 1 answer):
After selecting 1 + 1 and issuing python-send-region, my subprocess buffer shows no result; I have to evaluate print 1 + 1 instead.
How can I force the python-send-* commands to print the value of the respective statements rather than echoing their stdout?
Answer (A_Id 1,901,609, score 2, not accepted): It sounds like you need print; use print. Emacs launches a Python process and reads text from its standard output, not a Python value.
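The underlying behavior is easy to reproduce outside Emacs: a non-interactive Python process only emits what is explicitly printed, while an interactive REPL echoes expression values. A small sketch:

```python
import subprocess
import sys

def run(code):
    # Run a snippet in a fresh, non-interactive interpreter, roughly
    # the way python-send-region hands text to the subprocess.
    return subprocess.run(
        [sys.executable, "-c", code], capture_output=True, text=True
    ).stdout

bare = run("1 + 1")            # expression value is discarded
printed = run("print(1 + 1)")  # value reaches stdout
print(repr(bare), repr(printed))
```

The bare expression produces no output at all, which is exactly what the subprocess buffer shows.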
Title: Is Google App Engine right for me?
Tags: python, google-app-engine, web2py
Question (Q_Id 1,903,065, asked 2009-12-14, score 9, 1,390 views, 7 answers):
I am thinking about using Google App Engine for what is going to be a huge website. In that case, what is your advice on using Google App Engine? I have heard GAE has restrictions, such as a 1 MB limit on stored images or files (they are going to change this, from what I read on the GAE roadmap) and a 1,000-result limit on queries. I am also going to use web2py with GAE, so I would like to know your comments. Thanks.
Answer (A_Id 1,903,297, score 3, not accepted): App Engine uses BigTable as its datastore backend. Don't try to write a traditional relational-database-driven application; BigTable is much better suited for use as a highly scalable key-value store. Avoid joins if at all possible.
Title: Is Google App Engine right for me?
Tags: python, google-app-engine, web2py
Question: same as Q_Id 1,903,065 above.
Answer (A_Id 1,904,574, score 2, not accepted): I wouldn't worry about any of this. After having played with Google App Engine for a while, I've found that it scales quite well for large data sets. If your data elements are large (e.g., photos), you'll need to integrate with another service to handle them, but that will probably be true no matter what with data of that size. Also, I've found BigTable relatively easy to work with, having come from a background entirely in relational databases. Finally, Django is a somewhat hidden, but awesome, "feature" of Google App Engine. If you've never used it, it's a really nice, elegant web framework that makes a lot of common tasks trivial (forms come to mind here).
Title: Is Google App Engine right for me?
Tags: python, google-app-engine, web2py
Question: same as Q_Id 1,903,065 above.
Answer (A_Id 1,905,263, score 5, not accepted): Using web2py on Google App Engine is a great strategy. It lets you get up and running fast, and if you do outgrow the restrictions of GAE, you can move your web2py application elsewhere. However, keeping this portability means you should stay away from the advanced parts of GAE (Task Queues, Transactions, ListProperty, etc.).
Title: Is Google App Engine right for me?
Tags: python, google-app-engine, web2py
Question: same as Q_Id 1,903,065 above.
Answer (A_Id 1,994,758, score 0, not accepted): What about Google Wave? It's being built on App Engine, and once it's live, real-time translatable chat will reach the corporate sector... I could see it hitting the top 1000. But then again, that's an internal project that gets to do special things other App Engine apps can't, such as hanging threads, and whatever else Wave has under the hood.
Title: Is Google App Engine right for me?
Tags: python, google-app-engine, web2py
Question: same as Q_Id 1,903,065 above.
Answer (A_Id 1,904,610, score 8, not accepted):
Having developed a smallish site with GAE, I have some thoughts.
If you mean "huge" like "the next YouTube", then GAE might be a great fit, because of the previously mentioned scaling.
If you mean "huge" like "massively complex, with a whole slew of screens, models, and features", then GAE might not be a good fit. Things like unit testing are hard on GAE, and there's no built-in structure for your app of the kind you'd get with something like (famously) Ruby on Rails or (Python-powered) TurboGears. For example, there is no staging environment: just your development copy of the system and production. This may or may not be a bad thing, depending on your situation.
Additionally, it depends on the other Python modules you intend to pull in: some Python modules just don't run on GAE (because you can't talk to hardware, or because there are just too many files in the package).
Hope this helps.
Title: Is Google App Engine right for me?
Tags: python, google-app-engine, web2py
Question: same as Q_Id 1,903,065 above.
Answer (A_Id 1,903,114, score -11, not accepted): If you are planning a "huge" website, then don't use App Engine. Simple as that. App Engine is not built to deliver the next top-1000 website. Allow me to also ask what you mean by "huge": how many simultaneous users? Queries per second? DB load?
Title: How can I deal with python eggs for multiple platforms in one location?
Tags: python, easy-install, pkg-resources
Question (Q_Id 1,903,653, asked 2009-12-14, score 6, 1,197 views, 4 answers):
We have a common Python installation for all of our systems, located on a shared drive, in order to ensure every system has the same installation and to ease configuration issues. Multiple platforms share this installation; we get around conflicting platform-specific files by setting the --exec-prefix configure option when compiling Python.
My issue is that I now want to install an egg (using easy_install or otherwise) that is platform-dependent. easy_install puts the egg in the site-packages directory of the platform-independent part of the install. The name of the egg includes the platform, so there should be no conflict, but Python will only load the first one it finds (so on Solaris it might try to load the Linux egg). Modifying the easy-install.pth file can change which one it finds, but that's pretty useless.
I can move the .egg files into a platform-dependent packages directory and then use pkg_resources.require() to load them (or manually adjust the path), but it seems I shouldn't have to, since the platform is in the name of the egg.
Is there any more generic way I can ensure that Python will load the egg for the correct platform?
Answer (A_Id 2,164,148, score 0, not accepted): Use "easy_install -m" to install all the platform-specific packages, so that there is no default version on sys.path. That way, version resolution takes place at runtime, and platform information will be taken into consideration.
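The per-platform resolution works because each interpreter knows its own platform tag. As an illustration only, a hypothetical shared layout (the directory scheme below is an assumption, not anything easy_install produces) could select the matching directory at startup:

```python
import sys
import sysconfig

def platform_package_dir(shared_root):
    # Hypothetical shared layout: one site-packages directory per
    # platform tag, e.g. /shared/python/lib-linux-x86_64/site-packages.
    tag = sysconfig.get_platform()
    return f"{shared_root}/lib-{tag}/site-packages"

path = platform_package_dir("/shared/python")
# sys.path.insert(0, path)  # would make this platform's eggs win
print(path)
```

Prepending the platform-specific directory to sys.path (e.g. from sitecustomize) ensures each OS loads its own builds first.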
Title: Run a remote python script from ASP.Net
Tags: asp.net, python, remote-execution
Question (Q_Id 1,904,320, asked 2009-12-14, score 1, 1,105 views, 1 answer):
I have a Python script on a Linux server that I can SSH into. I want to run the script on the Linux server (passing it parameters entered by the user) and get the output on an ASP.NET page running on IIS. How can I do that? Would it be easier if I were running a WAMP server?
Edit: The servers are on the same internal intranet.
Answer (A_Id 1,904,344, score 0, not accepted):
Probably the best approach is the least coupled one. If you can settle on a protocol that you're comfortable having the two sides (ASP.NET and Python) talk, it will go a long way toward reducing headaches. Let's say you pick XML.
Set up the Python script to run as a WSGI application with either CherryPy or Apache (or whatever). The script formats its response as XML and passes it to WSGI, which returns the XML over HTTP.
On the ASP.NET side, whenever you want to "run the script" you simply query the URL with the WebRequest class, then parse the results with LINQ-to-XML (which, on a side note, is a really cool technology).
Here's where this becomes relevant: later on, if either the ASP.NET implementation or the Python implementation changes, you don't have to re-code or refactor the other. And if you later realize that some desktop app needs the same capability, you've standardized on a protocol, so implementing it should be easy and well supported.
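A minimal sketch of the Python side of the XML-over-WSGI approach described above; the XML element names are made up for illustration, and any WSGI server (including the stdlib wsgiref) could host the callable.

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # Run the "script" logic, then wrap the result in XML so the
    # ASP.NET side can parse it (e.g. with LINQ-to-XML).
    body = b"<result><status>ok</status><value>42</value></result>"
    headers = [
        ("Content-Type", "application/xml"),
        ("Content-Length", str(len(body))),
    ]
    start_response("200 OK", headers)
    return [body]

# Exercise the app directly, the way a WSGI server would.
environ = {}
setup_testing_defaults(environ)
status_seen = []
body = b"".join(application(environ, lambda s, h: status_seen.append(s)))
print(status_seen[0], body)
```

In production you would read the user's parameters from environ (the query string) and invoke the real script instead of returning a constant.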
Title: In Python, how can I test if I'm in Google App Engine SDK?
Tags: python, google-app-engine
Question (Q_Id 1,916,579, asked 2009-12-16, score 41, 8,997 views, 7 answers):
While developing, I want to handle some things slightly differently than I will when I eventually upload to the Google servers. Is there a quick test I can do to find out whether I'm in the SDK or live?
Answer (A_Id 64,592,250, score 0, not accepted):
Update from October 2020: I tried using os.environ["SERVER_SOFTWARE"] and os.environ["APPENGINE_RUNTIME"], but neither worked, so I just logged all the keys in os.environ. Among the results was GAE_RUNTIME, which I used to check whether I was in the local or the cloud environment. The exact key might change, or you could add your own in app.yaml, but the point is: log os.environ (perhaps by dumping it to a test page) and use the result to check your environment.
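A sketch of that approach; the specific keys checked here (GAE_ENV and SERVER_SOFTWARE) are assumptions that have varied across App Engine runtime generations, which is exactly why the answer suggests logging os.environ first.

```python
import os

def gae_environment():
    # Newer runtimes are said to set GAE_ENV to "standard" in
    # production; the old dev_appserver set SERVER_SOFTWARE to
    # "Development/x.y". Verify against your own runtime's keys.
    if os.environ.get("GAE_ENV", "").startswith("standard"):
        return "production"
    if os.environ.get("SERVER_SOFTWARE", "").startswith("Development"):
        return "sdk"
    return "not-gae"

print(sorted(os.environ))  # when in doubt, log every key
print(gae_environment())
```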
Title: python write CD/DVD iso file
Tags: python, file, system, iso
Question (Q_Id 1,920,246, asked 2009-12-17, score 3, 5,258 views, 1 answer):
I'm making a cross-platform (Windows and OS X) app with wxPython that will be compiled to an exe later. Is it possible to create ISO files for CDs or DVDs in Python, to burn a data disc with? Thanks, Chris
Answer (A_Id 1,920,536, score 1, accepted): Following "do not reinvent the wheel", I would try using mkisofs (part of cdrtools). Although it originated on Linux, I think there are Windows builds floating around the net.
Title: How can I run a python script on windows?
Tags: python, ide
Question (Q_Id 1,920,997, asked 2009-12-17, score 0, 14,781 views, 5 answers):
Can anyone please tell me an IDE for running Python programs? Is it possible to run a program through the command line?
Answer (A_Id 1,921,032, score 3, not accepted): An IDE just for running scripts? You can use any IDE you like, but if you only need to run Python scripts, do this: python.exe pythonScript.py
Title: How can I run a python script on windows?
Tags: python, ide
Question: same as Q_Id 1,920,997 above.
Answer (A_Id 1,921,097, score 0, not accepted): PyDev and Komodo Edit are two nice Python IDEs on Windows. I also like the SciTE text editor very much. All three make it possible to run Python scripts.
Title: Python: when to use pty.fork() versus os.fork()
Tags: python, linux, fork, kill, pty
Question (Q_Id 1,922,254, asked 2009-12-17, score 14, 5,805 views, 3 answers):
I'm uncertain whether to use pty.fork() or os.fork() when spawning external background processes from my app (such as chess engines). I want the spawned processes to die if the parent is killed, as with spawning apps in a terminal. What are the ups and downs of the two forks?
Answer (A_Id 11,804,899, score 2, not accepted): Pseudoterminals are necessary for applications that really expect a terminal; an interactive shell is one example, but there are many others. The pty.fork option is not there as another os.fork, but as a specific API for using a pseudoterminal.
Title: Python: when to use pty.fork() versus os.fork()
Tags: python, linux, fork, kill, pty
Question: same as Q_Id 1,922,254 above.
Answer (A_Id 1,923,358, score 11, accepted):
The child process created with os.fork() inherits stdin/stdout/stderr from the parent process, while the child created with pty.fork() is connected to a new pseudoterminal. You need the latter when you write a program like xterm: pty.fork() in the parent process returns a descriptor for the controlling terminal of the child process, so you can visually represent data from it and translate user actions into terminal input sequences.
Update, from the pty(7) man page: A process that expects to be connected to a terminal can open the slave end of a pseudo-terminal and then be driven by a program that has opened the master end. Anything that is written on the master end is provided to the process on the slave end as though it was input typed on a terminal. For example, writing the interrupt character (usually control-C) to the master device would cause an interrupt signal (SIGINT) to be generated for the foreground process group that is connected to the slave. Conversely, anything that is written to the slave end of the pseudo-terminal can be read by the process that is connected to the master end.
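The difference is directly observable from Python: a child created with pty.fork() sees its standard streams as a terminal even when the parent's streams are not. A minimal Linux sketch:

```python
import os
import pty

pid, master_fd = pty.fork()
if pid == 0:
    # Child: fds 0/1/2 are the slave end of a fresh pseudoterminal,
    # so isatty() is true regardless of what the parent was attached to.
    os.write(1, b"tty\n" if os.isatty(0) else b"no-tty\n")
    os._exit(0)

# Parent: the master fd plays the role of the "terminal emulator" --
# everything the child writes to stdout arrives here.
data = os.read(master_fd, 1024)
os.waitpid(pid, 0)
os.close(master_fd)
print(data)
```

Note the pty performs terminal-style newline translation, so the child's "\n" arrives at the master as "\r\n".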
Title: Linux USB Mapping Question
Tags: python, linux, usb, dbus
Question (Q_Id 1,924,646, asked 2009-12-17, score 2, 2,055 views, 3 answers):
I'm working on a utility that will auto-mount an inserted USB stick on Linux. I have tied into D-Bus to receive notification when a device is inserted, and that works great. However, I need to determine which device in /dev is mapped to the inserted USB stick. I get the D-Bus notification and then scan the USB system with pyUSB (0.4). I filter for USB_MASS_STORAGE_DEVICE classes, and I can see the device that has been added or removed. I need to mount this device so I can query it for available space and report that to our app, so we can determine whether enough free space exists to write our data.
I'm using Python for this task. I'm not sure what our target distro will be, only that it will be at least 2.6.
Edit: My question is: how do I determine which device in /dev maps to the bus/device number I get from pyUSB?
Answer (A_Id 1,924,715, score 0, not accepted): What about using the dmesg output to find the device name (sdc1, etc.)? Use it right after D-Bus tells you something was inserted on USB; you could run "tail" on the dmesg output, for example.
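As an alternative to parsing dmesg, on a modern Linux the sysfs tree already links block devices back to the bus they hang off. A rough sketch; the exact sysfs layout varies by kernel, so treat the "/usb" path test as an assumption to verify on the target distro.

```python
import os

def block_devices():
    # /sys/block lists kernel block devices; each entry is a symlink
    # whose resolved target encodes the bus path (USB devices pass
    # through a .../usbN/... segment on typical kernels).
    root = "/sys/block"
    if not os.path.isdir(root):
        return {}
    return {
        name: os.path.realpath(os.path.join(root, name))
        for name in sorted(os.listdir(root))
    }

for dev, path in block_devices().items():
    print(dev, "USB" if "/usb" in path else "other", path)
```

Matching the bus/device numbers from pyUSB against the numbers embedded in that resolved path gives the /dev node without scraping kernel logs.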
Title: Get Cygwin installation path in a Python script
Tags: python, windows, registry, cygwin
Question (Q_Id 1,925,552, asked 2009-12-18, score 3, 1,626 views, 3 answers):
I'm writing a cross-platform Python script that needs to know whether, and where, Cygwin is installed when the platform is NT. Right now I'm just using a naive check for the existence of the default install path, C:\Cygwin. I would like to be able to determine the installation path programmatically.
The Windows registry doesn't appear to be an option, since Cygwin no longer stores its mount points in the registry. Given that, is it even possible to get the Cygwin installation path programmatically?
Answer (A_Id 2,568,441, score 0, not accepted): For Cygwin 1.7, you can use the HKEY_LOCAL_MACHINE\SOFTWARE\Cygwin\setup\rootdir registry value.
Title: Python programming on Eclipse with Pydev
Tags: python, eclipse, ide, pydev
Question (Q_Id 1,925,750, asked 2009-12-18, score 0, 2,673 views, 3 answers):
I need major help getting started! I managed to create a new project and add python.exe as the interpreter, but when the project is created it's blank. How do I start programming? Ugh.
Answer (A_Id 1,925,758, score 2, not accepted): Open a new text file and start writing code?
Title: Python programming on Eclipse with Pydev
Tags: python, eclipse, ide, pydev
Question: same as Q_Id 1,925,750 above.
Answer (A_Id 1,925,760, score 4, accepted):
1. Create a PyDev project.
2. Add a "Source Folder" under the project.
3. Add "Modules" to the "Source Folder".
4. Get coding :-)
Title: How to get the LAN IP that a socket is sending (linux)
Tags: python, sockets, ip-address
Question (Q_Id 1,925,974, asked 2009-12-18, score 0, 626 views, 2 answers):
I need some code to get the address of a socket I just created (to filter out packets originating from localhost on a multicast network). This: socket.gethostbyname(socket.gethostname()) works on Mac, but on Linux it returns only the localhost IP. Is there any way to get the LAN address? Thanks.
Edit: Is it possible to get it from the socket settings themselves? The OS has to select a LAN IP to send on; can I play with getsockopt(... IP_MULTICAST_IF ...)? I don't know exactly how to use this, though.
Edit: SOLVED! Putting send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0) on the send socket eliminated packet echoes to the host sending them, which removes the need for the program to know which IP the OS has selected for sending. Yay!
Answer (A_Id 1,926,048, score 0, not accepted): Quick answer: socket.getpeername() (provided that socket is a socket object, not the module). Playing around in an interactive Python shell (python, ipython, IDLE, ...) is very helpful. Or, reading your question more carefully, maybe socket.getsockname() :)
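The asker's fix can be exercised without sending any traffic, since IP_MULTICAST_LOOP is just per-socket state. A minimal sketch:

```python
import socket

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Disable loopback so this host's own multicast packets are not
# delivered back to listeners on the same machine.
send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)

loop = send_sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP)
send_sock.close()
print(loop)
```

With loopback off, the receiver never sees its own datagrams, so no local-address filtering is needed at all.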
Title: Python GUI (glade) to display output of shell process
Tags: python, user-interface, glade
Question (Q_Id 1,929,018, asked 2009-12-18, score 2, 3,524 views, 3 answers):
I'm writing a Python application that runs several subprocesses using subprocess.Popen objects. I have a Glade GUI and want to display the output of these commands (running in subprocess.Popen) in the GUI in real time. Can anyone suggest a way to do this? What Glade object do I need to use, and how do I redirect the output?
Answer (A_Id 1,929,138, score 1, not accepted): Glade is only a program for building a GUI with GTK, so when you ask for a "Glade object" you should really ask for a GTK widget; in this case a TextBuffer with a TextView could be a solution, or maybe a TreeView with a ListStore. subprocess.Popen has stdout and stderr arguments that accept a file-like object, so you can create an adapter that writes to the TextBuffer or adds items to the ListStore.
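The adapter idea can be sketched independently of GTK: read the child's output line by line and hand each line to a callback. In the real GUI that callback would append to the TextBuffer (typically via glib.idle_add so it runs on the main loop); here a plain list stands in for the widget.

```python
import subprocess

def stream_output(cmd, sink):
    # Read the child's stdout line by line as it is produced and pass
    # each line to `sink` (a GUI callback, a logger, ...).
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        sink(line.rstrip("\n"))
    return proc.wait()

lines = []  # stand-in for a gtk.TextBuffer
rc = stream_output(["echo", "one two"], lines.append)
print(rc, lines)
```

For long-running children you would run stream_output in a worker thread so the GTK main loop stays responsive.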
Is there a better way to serve the results of an expensive, blocking python process over HTTP? | 1,929,711 | 1 | 5 | 495 | 0 | python,http,mod-wsgi,tornado | You might consider a queuing system with AJAX notification methods.
Whenever there is a request for your expensive resource, and that resource needs to be generated, add that request to the queue (if it's not already there). That queuing operation should return an ID of an object that you can query to get its status.
Next you have to write a background service that spins up worker threads. These workers simply dequeue the request, generate the data, then save the data's location in the request object.
The webpage can make AJAX calls to your server to find out the progress of the generation and to give a link to the file once it's available.
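The three pieces described above (submit, worker, a status store the AJAX endpoint reads) can be sketched in a few lines. Everything here is illustrative: str.upper stands in for the expensive MP3 rendering, and a plain dict stands in for whatever store the status endpoint would query:

```python
import queue
import threading
import uuid

jobs = {}                      # job_id -> {"status": ..., "result": ...}
work_queue = queue.Queue()

def submit(request):
    """Enqueue a request and return an ID the client can poll."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "queued", "result": None}
    work_queue.put((job_id, request))
    return job_id

def worker(generate):
    """Dequeue requests, generate the data, record where it lives."""
    while True:
        job_id, request = work_queue.get()
        jobs[job_id]["status"] = "working"
        jobs[job_id]["result"] = generate(request)   # the expensive part
        jobs[job_id]["status"] = "done"
        work_queue.task_done()

threading.Thread(target=worker, args=(str.upper,), daemon=True).start()

job_id = submit("render segment 12-37 of track.mp3")
work_queue.join()              # a web client would poll jobs[job_id] instead
```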
This is how LARGE media sites work - those that have to deal with video in particular. It might be overkill for your MP3 work however.
Alternatively, look into running a couple of machines to distribute the load. Your threads on Apache will still block, but at least you won't consume resources on the web server. | 0 | 1 | 0 | 1 | 2009-12-18T17:39:00.000 | 5 | 0.039979 | false | 1,929,681 | 0 | 0 | 1 | 3 | We have a web service which serves small, arbitrary segments of a fixed inventory of larger MP3 files. The MP3 files are generated on-the-fly by a Python application. The model is: make a GET request to a URL specifying which segments you want, get an audio/mpeg stream in response. This is an expensive process.
We're using Nginx as the front-end request handler. Nginx takes care of caching responses for common requests.
We initially tried using Tornado on the back-end to handle requests from Nginx. As you would expect, the blocking MP3 operation kept Tornado from doing its thing (asynchronous I/O). So, we went multithreaded, which solved the blocking problem, and performed quite well. However, it introduced a subtle race condition (under real world load) that we haven't been able to diagnose or reproduce yet. The race condition corrupts our MP3 output.
So we decided to set our application up as a simple WSGI handler behind Apache/mod_wsgi (still w/ Nginx up front). This eliminates the blocking issue and the race condition, but creates a cascading load (i.e. Apache creates too many processes) on the server under real world conditions. We're working on tuning Apache/mod_wsgi right now, but still at a trial-and-error phase. (Update: we've switched back to Tornado. See below.)
Finally, the question: are we missing anything? Is there a better way to serve CPU-expensive resources over HTTP?
Update: Thanks to Graham's informed article, I'm pretty sure this is an Apache tuning problem. In the mean-time, we've gone back to using Tornado and are trying to resolve the data-corruption issue.
For those who were so quick to throw more iron at the problem, Tornado and a bit of multi-threading (despite the data integrity problem introduced by threading) handles the load acceptably on a small (single core) Amazon EC2 instance. |
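As an aside on the threading-corruption angle mentioned in the question: handing the blocking render to a small worker pool, with each job touching only its own local buffers, is one hedge against that class of bug. A sketch (render_segment is a stand-in for the real MP3 generation, not anything from the project):

```python
from concurrent.futures import ThreadPoolExecutor

def render_segment(start, end):
    # Stand-in for the expensive MP3 generation. It builds its output from
    # purely local state, so concurrent jobs cannot clobber shared buffers.
    return bytes(range(start, end))

pool = ThreadPoolExecutor(max_workers=4)
futures = [pool.submit(render_segment, i, i + 8) for i in range(0, 32, 8)]
results = [f.result() for f in futures]
pool.shutdown()
```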
Is there a better way to serve the results of an expensive, blocking python process over HTTP? | 1,929,738 | 0 | 5 | 495 | 0 | python,http,mod-wsgi,tornado | It looks like you are doing things right -- you're just lacking CPU power. Can you determine what the CPU load is while generating these MP3s?
I think the next thing you have to do is add more hardware to render the MP3s on other machines. Either that, or find a way to deliver pre-rendered MP3s (maybe you can cache some of your media?).
BTW, scaling for the web was the theme of a keynote lecture by Jacob Kaplan-Moss at PyCon Brasil this year, and it is far from being a closed problem. The stack of technologies one needs to handle is quite impressive (I could not find an online copy of the presentation, though -- sorry for that). | 0 | 1 | 0 | 1 | 2009-12-18T17:39:00.000 | 5 | 0 | false | 1,929,681 | 0 | 0 | 1 | 3 |
Is there a better way to serve the results of an expensive, blocking python process over HTTP? | 1,937,378 | 1 | 5 | 495 | 0 | python,http,mod-wsgi,tornado | Please define "cascading load", as it has no common meaning.
Your most likely problem is going to be if you're running too many Apache processes.
For a load like this, make sure you're using the prefork mpm, and make sure you're limiting yourself to an appropriate number of processes (no less than one per CPU, no more than two). | 0 | 1 | 0 | 1 | 2009-12-18T17:39:00.000 | 5 | 0.039979 | false | 1,929,681 | 0 | 0 | 1 | 3 |
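For reference, capping the prefork MPM that way is done in httpd.conf; the directive names below are Apache 2.2-era (prefork), and the numbers are only illustrative for a CPU-bound workload on a small box:

```apache
<IfModule mpm_prefork_module>
    # Roughly one or two processes per CPU core for CPU-bound work
    StartServers       2
    MinSpareServers    2
    MaxSpareServers    4
    ServerLimit        4
    MaxClients         4
</IfModule>
```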
Configuring extension modules with distutils/setuptools | 1,940,353 | 0 | 6 | 2,010 | 0 | python,setuptools,distutils | I would subclass distutils.core.Distribution and pass it with distutils.core.setup(distclass=CustomDistribution) - this gives you access to the command-line parameters in the same way that the normal setup has them, and you can do things like adjusting the extensions list in the CustomDistribution.__init__ method. But I agree with dalke, the way of distutils is full of pain... | 0 | 1 | 0 | 0 | 2009-12-18T21:56:00.000 | 4 | 0 | false | 1,930,900 | 1 | 0 | 0 | 4 | I have a Python project with multiple extension modules written in C, which talk to a third-party library. However, depending on the user's environment and options some modules should not be built, and some compiler flags should be enabled/disabled. The problem is that I have to build the list of extension modules before I call setup(), and ideally I'd like to use a distutils.Command subclass to handle the user options. Right now I have a few options:
Require a "python setup.py configure" command be run before building the modules, store the information in a pickle file, and use it to generate the extensions list next time the script runs. This is how my project currently works, which seems quite silly.
Manually scrape options out of sys.argv and use them to build the list. This is not a long-term solution because I will eventually want to run some scripts to check the settings before building.
Subclass build_ext from distutils, do my configuration in the beginning of the run() method (possibly also using options sent via (2)) and directly modify self.distribution.ext_modules before building. I'm afraid this may confuse setuptools, however, as it may assume the list of extension modules is fixed when setup() is called. It also means that when setup() is called with a command other than build_ext the list of extension modules is empty.
Is there a preferred way to do this? |
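A bare-bones sketch of the distclass idea from the answer above; the class name and the command-line switch are invented, and real option handling would do more than this before touching the extension list:

```python
from setuptools import Distribution  # distutils.core.Distribution on old Pythons

class ConfiguringDistribution(Distribution):
    """Adjusts the extension list once setup() has handed over its attrs."""

    def __init__(self, attrs=None):
        Distribution.__init__(self, attrs)
        # By now self.script_args holds the parsed command line, so the
        # extension list could be filtered here, e.g.:
        if "--without-thirdparty" in (self.script_args or []):
            self.ext_modules = []

# setup() would receive this via setup(distclass=ConfiguringDistribution, ...)
dist = ConfiguringDistribution({"name": "mypkg"})
```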
Configuring extension modules with distutils/setuptools | 6,883,355 | 0 | 6 | 2,010 | 0 | python,setuptools,distutils | The config command is designed to be subclassed and used by projects with requirements like yours. | 0 | 1 | 0 | 0 | 2009-12-18T21:56:00.000 | 4 | 0 | false | 1,930,900 | 1 | 0 | 0 | 4 |
Configuring extension modules with distutils/setuptools | 2,376,647 | 1 | 6 | 2,010 | 0 | python,setuptools,distutils | Is there a preferred way to do this?
From my experience working with other people's modules, I can say there is certainly not consensus on the right way to do this.
I have tried and rejected subclassing bits of distutils -- I found it fragile and difficult to get working across different Python versions and different systems.
For our code, after trying the types of things you are considering, I have settled on doing detection and configuration right in setup.py before the main call to
setup(). This is admittedly a bit ugly, but it means that someone trying to compile your stuff has one place to figure out e.g. why the include path is wrong. (And they certainly don't need to be experts on distutils internals.) | 0 | 1 | 0 | 0 | 2009-12-18T21:56:00.000 | 4 | 0.049958 | false | 1,930,900 | 1 | 0 | 0 | 4 |
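That detect-then-build pattern (probe the environment first, then make the single setup() call) can be as small as the sketch below; the module names, source files, and the WITH_THIRDPARTY switch are all invented for illustration:

```python
from setuptools import Extension  # or distutils.core on older Pythons

def build_extension_list(env):
    """Probe the environment up front and return the ext_modules list."""
    exts = [Extension("mypkg.core", sources=["src/core.c"])]
    if env.get("WITH_THIRDPARTY"):                    # assumed opt-in switch
        exts.append(Extension("mypkg._thirdparty",
                              sources=["src/thirdparty.c"],
                              define_macros=[("HAVE_THIRDPARTY", "1")]))
    return exts

# setup.py would then end with a single call:
#   setup(name="mypkg", ext_modules=build_extension_list(os.environ))
ext_modules = build_extension_list({"WITH_THIRDPARTY": "1"})
```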
Configuring extension modules with distutils/setuptools | 1,931,958 | 0 | 6 | 2,010 | 0 | python,setuptools,distutils | My own experience with changing distutils has been weak and shaky, so all I can offer are pointers. Take a look at numpy. That has an entire submodule (numpy.distutils) with ways to work with (or work around) distutils. Otherwise, ask the distutils mailing list. | 0 | 1 | 0 | 0 | 2009-12-18T21:56:00.000 | 4 | 0 | false | 1,930,900 | 1 | 0 | 0 | 4 | I have a Python project with mutiple extension modules written in C, which talk to a third-party library. However, depending on the user's environment and options some modules should not be built, and some compiler flags should be enabled/disabled. The problem is that I have to build the list of extension modules before I call setup(), and ideally I'd like to use a distutils.Command subclass to handle the user options. Right now I have a few options:
Require a "python setup.py configure" command be run before building the modules, store the information in a pickle file, and use it to generate the extensions list next time the script runs. This is how my project currently works, which seems quite silly.
Manually scrape options out of sys.argv and use them to build the list. This is not a long-term solution because I will eventually want to run some scripts to check the settings before building.
Subclass build_ext from distutils, do my configuration in the beginning of the run() method (possibly also using options sent via (2)) and directly modify self.distribution.ext_modules before building. I'm afraid this may confuse setuptools, however, as it may assume the list of extension modules is fixed when setup() is called. It also means that when setup() is called with a command other than build_ext the list of extension modules is empty.
Is there a preferred way to do this? |
Weblogic domain and cluster creation with WLST | 2,258,690 | 5 | 3 | 3,100 | 0 | python,weblogic,jython,wlst | I eventually found the answer. I am posting here for reference.
Out of the 5 mentioned tasks, all can be performed with an offline wlst script. All of them have to be performed on the node where AdminServer is supposed to live.
Now, for updating the domain information on the second node, there is an nmEnroll command in WLST which has to be performed online.
So, to summarize,
Execute an offline wlst script to perform all the 5 tasks mentioned in the question. This has to be done on the node (physical computer) where we want our AdminServer to run.
Start nodemanager on all the nodes to be used in the cluster,
Start the AdminServer on the node where we executed the domain creation script.
On all the other nodes execute a script which looks like the following:
connect('user','password','t3://adminhost:adminport')
nmEnroll('path_to_the_domain_dir') | 0 | 1 | 0 | 0 | 2009-12-19T14:10:00.000 | 2 | 1.2 | true | 1,933,000 | 0 | 0 | 1 | 2 | I want to create a cluster with 2 managed servers on 2 different physical machines.
I have the following tasks to be performed (please correct me if I missed something):
Domain creation.
Set admin server properties and create AdminServer under SSL
Create logical machines for the physical ones
Create managed servers
Create a cluster with the managed servers
I have the following questions:
Which of the above-mentioned tasks can be done offline, if any?
Which of the above-mentioned tasks must also be performed on the 2nd physical machine?
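For reference, the offline portion (steps 1-5) described in the accepted answer above might look roughly like the outline below, run with $WL_HOME/common/bin/wlst.sh. Every name, port, folder path, and the template path are placeholders, so treat this as a shape of the script rather than a working one:

```python
# create_domain.py -- outline only; runs inside the WLST shell, not plain Python
readTemplate('/path/to/wlserver/common/templates/domains/wls.jar')  # placeholder path

cd('Servers/AdminServer')          # step 2: admin server properties
set('ListenPort', 7001)

cd('/')
create('ms1', 'Server')            # step 4: a managed server
cd('Servers/ms1')
set('ListenPort', 8001)

cd('/')
create('cluster1', 'Cluster')      # step 5: cluster the managed servers
assign('Server', 'ms1', 'Cluster', 'cluster1')

writeDomain('/home/oracle/config/domains/my_domain')
closeTemplate()
```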
Weblogic domain and cluster creation with WLST | 32,569,619 | 0 | 3 | 3,100 | 0 | python,weblogic,jython,wlst | There are two steps missing after step 1: you need to copy the configuration from the machine where the AdminServer is running to the other machines in the cluster, using the pack/unpack commands in the WebLogic installation:
1.1 On the machine where the AdminServer is running, run ./pack.sh -domain=/home/oracle/config/domains/my_domain -template=/home/oracle/my_domain.jar -template_name=remote_managed -managed=true
1.2 Go to the other machines, copy the jar file produced in the previous step, and run ./unpack.sh -domain=/home/oracle/config/domains/my_domain -template=/home/oracle/my_domain.jar SAML_IDP_FromScript
Now you have copied all the files you need to start the NodeManager and the ManagedServers on the other machines. | 0 | 1 | 0 | 0 | 2009-12-19T14:10:00.000 | 2 | 0 | false | 1,933,000 | 0 | 0 | 1 | 2 |
Why use Django on Google App Engine? | 2,988,728 | 0 | 89 | 25,088 | 0 | python,django,google-app-engine | I am still very new to Google App engine development, but the interfaces Django provides do appear much nicer than the default. The benefits will depend on what you are using to run Django on the app engine. The Google App Engine Helper for Django allows you to use the full power of the Google App Engine with some Django functionality on the side.
Django non-rel attempts to provide as much of Django's power as possible, but running on the app-engine for possible extra scalability. In particular, it includes Django models (one of Django's core features), but this is a leaky abstraction due to the differences between relational databases and bigtable. There will most likely be tradeoffs in functionality and efficiency, as well as an increased number of bugs and quirks. Of course, this might be worth it in circumstances like those described in the question, but otherwise I would strongly recommend using the helper at the start, as then you have the option of moving towards either pure app-engine or Django non-rel later. Also, if you do switch to Django non-rel, your increased knowledge of how app engine works will be useful if the Django abstraction ever breaks - certainly much more useful than knowledge of the quirks/workarounds for Django non-rel if you swap the other way. | 0 | 1 | 0 | 0 | 2009-12-20T05:03:00.000 | 8 | 0 | false | 1,934,914 | 0 | 0 | 1 | 5 | When researching Google App Engine (GAE), it's clear that using Django is wildly popular for developing in Python on GAE. I've been scouring the web to find information on the costs and benefits of using Django, to find out why it's so popular. While I've been able to find a wide variety of sources on how to run Django on GAE and the various methods of doing so, I haven't found any comparative analysis on why Django is preferable to using the webapp framework provided by Google.
To be clear, it's immediately apparent why using Django on GAE is useful for developers with an existing skillset in Django (a majority of Python web developers, no doubt) or existing code in Django (where using GAE is more of a porting exercise). My team, however, is evaluating GAE for use on an all-new project and our existing experience is with TurboGears, not Django.
It's been quite difficult to determine why Django is beneficial to a development team when the BigTable libraries have replaced Django's ORM, sessions and authentication are necessarily changed, and Django's templating (if desirable) is available without using the entire Django stack.
Finally, it's clear that using Django does have the advantage of providing an "exit strategy" if we later wanted to move away from GAE and need a platform to target for the exodus.
I'd be extremely appreciative for help in pointing out why using Django is better than using webapp on GAE. I'm also completely inexperienced with Django, so elaboration on smaller features and/or conveniences that work on GAE are also valuable to me. |
Why use Django on Google App Engine? | 1,934,918 | 3 | 89 | 25,088 | 0 | python,django,google-app-engine | I have experience using Django and not GAE. From my experiences with Django it was a very simple setup and the deployment process was incredibly easy in terms of web projects. Granted I had to learn Python to really get a good hold on things, but at the end of the day I would use it again on a project. This was almost 2 years ago, before it reached 1.0, so my knowledge is a bit outdated.
If you are worried about changing platforms, then this would be a better choice I suppose. | 0 | 1 | 0 | 0 | 2009-12-20T05:03:00.000 | 8 | 0.07486 | false | 1,934,914 | 0 | 0 | 1 | 5 |
Why use Django on Google App Engine? | 1,942,826 | 0 | 89 | 25,088 | 0 | python,django,google-app-engine | If you decide to run your app outside of GAE, you can still use Django. You won't really have that much luck with the GAE webapp framework. | 0 | 1 | 0 | 0 | 2009-12-20T05:03:00.000 | 8 | 0 | false | 1,934,914 | 0 | 0 | 1 | 5 |
Why use Django on Google App Engine? | 1,934,925 | 51 | 89 | 25,088 | 0 | python,django,google-app-engine | Django probably isn't the right choice for you, if you're sure that GAE is right for you. The strengths of the two technologies don't align very well - you completely lose a lot of Django's wonderful orm on GAE, and if you do use it, you write code that isn't really directly suitable to bigtable and the way GAE works.
The thing about GAE is that it gets the great scalability by forcing you to write code that scales easily from the ground up. You just can't do a number of things that scale poorly (of course, you can still write poorly scaling code, but you avoid some pitfalls). The tradeoff is that you really end up coding around the framework, if you use something like Django which is designed for a different environment.
If you see yourself ever leaving GAE for any reason, becoming invested in the infrastructure there is a problem for you. Coding for bigtable means that it will be harder to move to a different architecture (though the apache project is working to solve this for you with the HBase component of the Hadoop project). It would still be a lot of work to transition off of GAE.
What's the driving motivator behind using GAE, besides being a Google product, and a cool buzzword? Is there a reason that scaling using something like mediatemple's offering is unlikely to work well for you? Are you sure that the ways that GAE scales are right for your application? How does the cost compare to dedicated servers, if you're expecting to get to that performance realm? Can you solve your problem well using the tools GAE provides, as compared to a more traditional load-balanced server setup?
All this said, unless you absolutely positively need the borderline-ridiculous scaling that GAE offers, I'd personally suggest not letting that particular service structure your choice of framework. I like Django, so I'd say you should use it, but not on GAE.
Edit (June 2010):
As an update to this comment sometime later:
Google has announced SQL-like capabilities for GAE that aren't free, but will let you easily do things like run SQL-style commands to generate reports on your data.
Additionally, there are upcoming changes to the GAE query language which will allow complex queries in a far easier fashion. Look at the videos from Google I/O 2010.
Furthermore, there is work being done during the Summer of Code 2010 project which should bring no-sql support to django core, and by extension, make working with GAE significantly easier.
GAE is becoming more attractive as a hosting platform.
Edit (August 2011):
And Google just raised the cost to most users of the platform significantly by changing the pricing structure. The lockin problem has gotten better (if your application is big enough you can deploy the apache alternatives), but for most applications, running servers or VPS deployments is cheaper.
Very few people really have bigdata problems. "Oh my startup might scale someday" isn't a bigdata problem. Build stuff now and get it out the door using the standard tools. | 0 | 1 | 0 | 0 | 2009-12-20T05:03:00.000 | 8 | 1 | false | 1,934,914 | 0 | 0 | 1 | 5 |
Why use Django on Google App Engine? | 1,935,061 | 0 | 89 | 25,088 | 0 | python,django,google-app-engine | I cannot answer the question, but you may want to look into web2py. It is similar to Django in many respects, but its database abstraction layer works on GAE and supports most of the GAE functionality (not all, but we try to catch up). In this way, if GAE works for you, great; if it does not, you can move your code to a different db (SQLite, MySQL, PostgreSQL, Oracle, MSSQL, FireBird, DB2, Informix, Ingres, and - soon - Sybase and MongoDB). | 0 | 1 | 0 | 0 | 2009-12-20T05:03:00.000 | 8 | 0 | false | 1,934,914 | 0 | 0 | 1 | 5 |
how do I add a python module on MacOS X? | 1,935,323 | 3 | 1 | 28,430 | 0 | python,macos,module-search-path | I think by default /Library/Python/2.5/site-packages/ is part of your search path. This directory is usually used for third party libraries. | 0 | 1 | 0 | 0 | 2009-12-20T08:52:00.000 | 4 | 1.2 | true | 1,935,290 | 1 | 0 | 0 | 1 | I'm trying to use pywn, a python library for using WordNet. I've played about with python a little under Windows, but am completely new at MacOS X stuff. I'm running under MacOS 10.5.8, so my default Python interpreter is 2.5.1
The pywn instructions say: "Put each of the .py files somewhere in your python search path."
Where is the python search path defined under the default python installation in MacOS X?
If I've put the pywn files in /Users/nick/programming/pywn, what is the best way of adding this to the search path?
Is this the best place to put the files? |
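For reference, you can inspect and extend the search path from Python itself. A session-local sketch using the directory from the question (for something permanent, set the PYTHONPATH environment variable or drop a .pth file into site-packages):

```python
import sys

# Show every directory Python searches when importing modules
for entry in sys.path:
    print(entry)

# Make the pywn directory importable for this interpreter session only
sys.path.append('/Users/nick/programming/pywn')
```

After the append, `import` statements will find the pywn .py files for the rest of the session.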
pydev 1.5.3 not working fine with Easy Eclipse 1.3.1 | 2,967,350 | 0 | 1 | 228 | 0 | python,eclipse,pydev | My pydev broke entirely with 1.5.3.
I had to downgrade it with 'yum downgrade eclipse-pydev' and keep yum from updating it ever since. | 0 | 1 | 0 | 0 | 2009-12-21T08:45:00.000 | 2 | 0 | false | 1,938,929 | 0 | 0 | 0 | 2 | I installed Pydev 1.5.3 (so that I could get the merged version of Pydev Extensions in core PyDev) in an EasyEclipse 1.3.1 installation. After this, Compare with > Base revision etc. comparison operations stopped working. I had to disable the PyDev 1.5.3 and revert back to the pre-installed Pydev 1.3.13 (part of EasyEclipse 1.3.1).
Has anybody faced similar problem? Is there any work-around for this? |
pydev 1.5.3 not working fine with Easy Eclipse 1.3.1 | 2,973,340 | 0 | 1 | 228 | 0 | python,eclipse,pydev | I am now using PyDev 1.5.6 and its working fine with EasyEclipse (along with SubClipse). The issues in comparison seem to have been resolved. In fact, the file diff in 1.5.6 is looking much more beautiful than before. | 0 | 1 | 0 | 0 | 2009-12-21T08:45:00.000 | 2 | 1.2 | true | 1,938,929 | 0 | 0 | 0 | 2 | I installed Pydev 1.5.3 (so that I could get the merged version of Pydev Extensions in core PyDev) in an EasyEclipse 1.3.1 installation. After this, Compare with > Base revision etc. comparison operations stopped working. I had to disable the PyDev 1.5.3 and revert back to the pre-installed Pydev 1.3.13 (part of EasyEclipse 1.3.1).
Has anybody faced similar problem? Is there any work-around for this? |
howto scroll a gtk.scrolledwindow object from python code | 1,941,109 | 0 | 3 | 2,933 | 0 | python,pygtk,subprocess,glade | look at gtk.ScrolledWindow.set_placement.
(never tried) | 1 | 1 | 0 | 0 | 2009-12-21T16:03:00.000 | 3 | 0 | false | 1,940,957 | 0 | 0 | 0 | 1 | I'm writing a python application that has a glade gui. Using subprocess to execute some shell commands in the background.
Using a Glade GUI which has a scrolledwindow widget and a textview widget inside the scrolledwindow widget. The textview gets populated as the subprocess.Popen objects run, displaying their stdout and stderr in this textview.
My problem is that the textview is constantly populated, but the view stays still at scroll position 0, 0 (top-most, left-most).
I want this scrolledwindow widget to stay at the bottom-most, left-most position at all times...
Does anyone have any idea which method I need to call to scroll this thing downwards?
Windows Server cannot execute a py2exe-generated app | 1,962,069 | 1 | 5 | 3,390 | 0 | python,windows,py2exe | I did not find the cause to the problem, but using python 2.5 with py2exe on the same script worked fine on the server.
I guess there is something wrong with py2exe under 2.6. | 0 | 1 | 0 | 0 | 2009-12-24T21:26:00.000 | 4 | 1.2 | true | 1,959,811 | 1 | 0 | 0 | 1 | A simple python script needs to run on a windows server with no python installed.
I used py2exe, which generated a healthy dist subdirectory, with script.exe that runs fine on the local machine.
However, when I run it on the server (Windows Server 2003 R2), it produces this:
The system cannot execute the specified program.
and ERRORLEVEL is 9020.
Any ideas? |
Problem installing MySQLdb on windows - Can't find python | 2,179,175 | 0 | 0 | 431 | 1 | python,windows-installer,mysql | did you use an egg?
if so, python might not be able to find it.
import os,sys
os.environ['PYTHON_EGG_CACHE'] = 'C:/temp'
sys.path.append('C:/path/to/MySQLdb.egg') | 0 | 1 | 0 | 0 | 2009-12-30T14:24:00.000 | 1 | 0 | false | 1,980,454 | 1 | 0 | 0 | 1 | I'm trying to install the module mySQLdb on a windows vista 64 (amd) machine.
I've installed Python in a different folder than the one suggested by the Python installer.
When I try to install the .exe MySQLdb installer, it can't find Python 2.5 and it halts the installation.
Is there any way to supply the installer with the correct Python location (even though the registry and path are right)?
writing pexpect like program in c++ on Linux | 1,982,873 | 2 | 0 | 639 | 0 | c++,python,linux,pexpect | You could just use "expect". It is very lightweight and is made to do what you're describing. | 0 | 1 | 0 | 1 | 2009-12-30T22:06:00.000 | 2 | 1.2 | true | 1,982,788 | 0 | 0 | 0 | 1 | Is there any way of writing a small pexpect-like program which can launch a process and pass the password to that process?
I don't want to install and use the pexpect Python library, but I want to know the logic behind it so that I can build something similar using Linux system APIs.
Interaction between Java App and Python App | 1,984,457 | 0 | 3 | 6,004 | 0 | java,python,interface,interaction | Expose one of the two as a service of some kind, web service maybe. Another option is to port the python code to Jython | 0 | 1 | 0 | 0 | 2009-12-31T07:52:00.000 | 6 | 0 | false | 1,984,445 | 0 | 0 | 1 | 2 | I have a python application which I cant edit its a black box from my point of view. The python application knows how to process text and return processed text.
I have another application written in Java which knows how to collect unprocessed texts.
Current state: the Python app works in batch mode every x minutes.
I want to make the Python processing part of the flow: the Java app collects text and requests the Python app to process and return processed text.
What do you think is the simplest solution for this?
Thanks,
Rod |
Interaction between Java App and Python App | 1,984,650 | 0 | 3 | 6,004 | 0 | java,python,interface,interaction | An option is making the python application work as a server, listening for requests via sockets (TCP). | 0 | 1 | 0 | 0 | 2009-12-31T07:52:00.000 | 6 | 0 | false | 1,984,445 | 0 | 0 | 1 | 2 | I have a python application which I can't edit; it's a black box from my point of view. The python application knows how to process text and return processed text.
I have another application written in Java which knows how to collect unprocessed texts.
Current state: the Python app works in batch mode every x minutes.
I want to make the Python processing part of the flow: the Java app collects text and requests the Python app to process and return processed text.
What do you think is the simplest solution for this?
Thanks,
Rod |
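One way to sketch the sockets suggestion in modern Python 3 — note the upper-casing process() below is only a stand-in of my own, since the real processor is a black box: the Python app becomes a small line-oriented TCP service that the Java side can call with a plain Socket (write a line, read the reply).

```python
import socketserver
import threading

def process(text):
    # Stand-in for the real (black-box) text-processing step
    return text.upper()

class TextHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One request per connection: read a line, reply with the processed line
        line = self.rfile.readline().decode('utf-8')
        self.wfile.write(process(line).encode('utf-8'))

def start_server(host='127.0.0.1', port=0):
    """Run the processing service in a background thread; returns (server, port)."""
    server = socketserver.TCPServer((host, port), TextHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

On the Java side this maps onto a plain java.net.Socket with a BufferedReader/PrintWriter pair, so no extra libraries are needed on either end.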
What's the best tool to parse log files? | 1,994,373 | 11 | 16 | 32,524 | 0 | python,perl,parsing | In the end, it really depends on how much semantics you want to identify, whether your logs fit common patterns, and what you want to do with the parsed data.
If you can use regular expressions to find what you need, you have tons of options. Perl is a popular language and has very convenient native RE facilities. I personally feel a lot more comfortable with Python and find that the little added hassle for doing REs is not significant.
If you want to do something smarter than RE matching, or want to have a lot of logic, you may be more comfortable with Python or even with Java/C++/etc. For instance, it is easy to read line-by-line in Python and then apply various predicate functions and reactions to matches, which is great if you have a ruleset you would like to apply. | 0 | 1 | 0 | 0 | 2010-01-03T08:45:00.000 | 9 | 1 | false | 1,994,355 | 1 | 0 | 0 | 2 | I use grep to parse through my trading apps logs, but it's limited in the sense that I need to visually trawl through the output to see what happened etc.
I'm wondering if Perl is a better option? Any good resources to learn log and string parsing with Perl?
I'd also believe that Python would be good for this. Perl vs Python vs 'grep on linux'? |
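As a sketch of that "ruleset" idea in Python: each rule pairs a compiled regex with an action, and every log line is run through the rules.

```python
import re

def scan(lines, rules):
    """Run each log line through (pattern, action) rules; fire actions on matches."""
    for line in lines:
        for pattern, action in rules:
            if pattern.search(line):
                action(line)
```

For example, rules = [(re.compile(r'ERROR'), errors.append)] collects every error line into a list for a later summary, instead of eyeballing raw grep output.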
What's the best tool to parse log files? | 1,995,141 | 1 | 16 | 32,524 | 0 | python,perl,parsing | on linux, you can use just the shell(bash,ksh etc) to parse log files if they are not too big in size. The other tools to go for are usually grep and awk. However, for more programming power, awk is usually used. If you have big files to parse, try awk.
Of course, Perl or Python or practically any other languages with file reading and string manipulation capabilities can be used as well. | 0 | 1 | 0 | 0 | 2010-01-03T08:45:00.000 | 9 | 0.022219 | false | 1,994,355 | 1 | 0 | 0 | 2 | I use grep to parse through my trading apps logs, but it's limited in the sense that I need to visually trawl through the output to see what happened etc.
I'm wondering if Perl is a better option? Any good resources to learn log and string parsing with Perl?
I'd also believe that Python would be good for this. Perl vs Python vs 'grep on linux'? |
How can I open '_mysql.pyd' in 'D:\Python25\Lib\site-packages'? | 1,994,987 | 1 | 0 | 468 | 0 | python,mysql | .pyd files are DLLs. You can't usefully open them in text editors. | 0 | 1 | 0 | 0 | 2010-01-03T09:12:00.000 | 1 | 1.2 | true | 1,994,403 | 0 | 0 | 0 | 1 | My IDE is 'ulipad', and when I open the file, it can't be shown. How can I get it? |
Copy file or directories recursively in Python | 42,249,637 | 5 | 143 | 192,392 | 0 | python | shutil.copy and shutil.copy2 copy files.
shutil.copytree copies a folder with all of its files and all subfolders, using shutil.copy2 to copy the individual files.
So the analog of cp -r is shutil.copytree, because cp -r targets and copies a folder and its files/subfolders just like shutil.copytree does. Without -r, cp copies single files, the way shutil.copy and shutil.copy2 do. | 0 | 1 | 0 | 1 | 2010-01-03T10:06:00.000 | 6 | 0.16514 | false | 1,994,488 | 0 | 0 | 0 | 1 | Python seems to have functions for copying files (e.g. shutil.copy) and functions for copying directories (e.g. shutil.copytree) but I haven't found any function that handles both. Sure, it's trivial to check whether you want to copy a file or a directory, but it seems like a strange omission.
Is there really no standard function that works like the unix cp -r command, i.e. supports both directories and files and copies recursively? What would be the most elegant way to work around this problem in Python? |
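Since the standard library has no single cp -r equivalent, the usual workaround is a small wrapper that dispatches on the path type. A sketch (note that shutil.copytree requires that the destination directory not exist yet):

```python
import os
import shutil

def copy_any(src, dst):
    """Copy `src` to `dst`, recursing like `cp -r` when `src` is a directory."""
    if os.path.isdir(src):
        shutil.copytree(src, dst)   # dst must not already exist
    else:
        shutil.copy2(src, dst)      # also preserves metadata, like cp -p
```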
Python execution | 1,996,354 | 0 | 1 | 523 | 0 | python,runlevel | Yes. The scripts that control daemons are (normally) plain old bash scripts and can run whatever a bash script can run. The only difference is that in a low runlevel, lots of other system services will not be running, so if the program tries to do something that depends on another daemon, that may fail. | 0 | 1 | 0 | 1 | 2010-01-03T14:21:00.000 | 2 | 0 | false | 1,995,102 | 1 | 0 | 0 | 1 | Is it possible for a python script to execute at a low run level?
Edit:
To clarify, is it possible for a python script to run in the background, kind of like a daemon. |
sharing objects between module in GAE | 1,997,676 | 0 | 1 | 130 | 0 | python,google-app-engine | Later requests that happen to be served on the same process (you can't control that) would access just the same mod1.my_data object (unless you take pains to reassign it as a fresh object at the start of each request, of course). | 0 | 1 | 0 | 0 | 2010-01-04T05:37:00.000 | 1 | 1.2 | true | 1,997,663 | 0 | 0 | 1 | 1 | To share a state(e.g. user) between a module in django people sometime use thread local storage, but as google app engine follows CGI standard and keeps state of a request in os.environ , can I share objects between two modules just by setting it e.g.
mod1.my_data = {} and now any other module can get handle to my_data?
without worrying about other threads/requests sharing/overwriting it? |
Problem allocating heap space over 4 GB when calling java "from Python" | 2,000,424 | 2 | 1 | 1,663 | 0 | java,python,ram | It's hard to be sure without knowing more detail - like which OS you're on - but my guess is that you're using a 32-bit version of Python which means that when you launch Java, you're also getting the 32-bit version which has a heap size limit of 4GB.
To test if this is the case, compare the output of java -version when run from the command line and when run from your Python script. | 0 | 1 | 0 | 0 | 2010-01-04T15:52:00.000 | 3 | 1.2 | true | 2,000,331 | 0 | 0 | 1 | 2 | I am using an os.system call from Python to run a jar file.
The jar file requires a large heap, and thus I am allocating 4 GB of heap space using Xmx.
When I execute the command
"java -Xms4096m -Xmx4096m -jar camXnet.jar net.txt"
from the command line it executes properly; however, when I call it from a Python program via os.system, it works only if the memory allocated is less than 4 GB, otherwise it fails to execute.
Any solutions?
By "fails to execute" I mean that a command window appears indicating that os.system has been called, and then it disappears; I will check for any error code being returned. However, no problems are encountered if Xmx/Xms are set to a lower value.
OK, I checked both versions and there is a difference: the one being called via Python is the Java HotSpot Client VM (mixed mode, sharing), while the one called via the normal command line is the Java HotSpot 64-bit Server VM.
How do I make os.system in Python call the correct one, i.e. the 64-bit server VM?
UPDATE: I tried using the subprocess module, yet the version of Java returned is the same as that from os.system.
Problem allocating heap space over 4 GB when calling java "from Python" | 2,122,159 | 1 | 1 | 1,663 | 0 | java,python,ram | I was having the same problem launching 64-bit Java from 32-bit Python. I solved the problem using Dave Webb's suggestion of putting the full path to the 64-bit Java.exe in the python script. This worked fine, so it is not necessary to use 64-bit Python | 0 | 1 | 0 | 0 | 2010-01-04T15:52:00.000 | 3 | 0.066568 | false | 2,000,331 | 0 | 0 | 1 | 2 | I am using an os.system call from Python to run a jar file.
The jar file requires a large heap, and thus I am allocating 4 GB of heap space using Xmx.
When I execute the command
"java -Xms4096m -Xmx4096m -jar camXnet.jar net.txt"
from the command line it executes properly; however, when I call it from a Python program via os.system, it works only if the memory allocated is less than 4 GB, otherwise it fails to execute.
Any solutions?
By "fails to execute" I mean that a command window appears indicating that os.system has been called, and then it disappears; I will check for any error code being returned. However, no problems are encountered if Xmx/Xms are set to a lower value.
OK, I checked both versions and there is a difference: the one being called via Python is the Java HotSpot Client VM (mixed mode, sharing), while the one called via the normal command line is the Java HotSpot 64-bit Server VM.
How do I make os.system in Python call the correct one, i.e. the 64-bit server VM?
UPDATE: I tried using the subprocess module, yet the version of Java returned is the same as that from os.system.
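One way to apply that fix from Python is to launch the JVM by its absolute path with subprocess, instead of relying on whatever `java` resolves to on PATH. The helper below is a sketch; the Java install path in the comment is hypothetical and must be adjusted to the real 64-bit JRE location.

```python
import subprocess
import sys

def run_with(exe, args):
    """Launch a program by explicit executable path, bypassing PATH lookup."""
    return subprocess.run([exe] + args, capture_output=True, text=True)

# Hypothetical example -- point at the actual 64-bit JVM binary:
#   run_with(r'C:\Program Files\Java\jre6\bin\java.exe',
#            ['-Xms4096m', '-Xmx4096m', '-jar', 'camXnet.jar', 'net.txt'])
```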
AppEngine/Python, query database and send multiple images to the client as a response to a single get request | 2,041,528 | 1 | 0 | 856 | 0 | python,ajax,image,google-app-engine | As an improvement to Alex's answer, there's no need to use memcache: Simply do a keys-only query to get a list of keys of images you want to send to the client, then use db.get() to fetch the image corresponding to the required key for each image request. This requires roughly the same amount of effort as a single regular query. | 0 | 1 | 0 | 0 | 2010-01-05T01:44:00.000 | 4 | 0.049958 | false | 2,003,630 | 0 | 0 | 1 | 4 | I am working on a social-network type of application on App Engine, and would like to send multiple images to the client based on a single get request. In particular, when a client loads a page, they should see all images that are associated with their account.
I am using python on the server side, and would like to use Javascript/JQuery on the client side to decode/display the received images.
The difficulty is that I would like to only perform a single query on the server side (ie. query for all images associated with a single user) and send all of the images resulting from the query to the client as a single unit, which will then be broken up into the individual images. Ideally, I would like to use something similar to JSON, but while JSON appears to allow multiple "objects" to be sent as a JSON response, it does not appear to have the ability to allow multiple images (or binary files) to be sent as a JSON response.
Is there another way that I should be looking at this problem, or perhaps a different technology that I should be considering that might allow me to send multiple images to the client, in response to a single get request?
Thank you and Kind Regards
Alexander |
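JSON cannot carry raw binary directly, but a common, non-GAE-specific workaround is to base64-encode each image into one JSON payload; the client then decodes each entry (e.g. into data: URIs). A minimal sketch of the packing and unpacking:

```python
import base64
import json

def bundle_images(images):
    """Pack {name: raw_bytes} into a single JSON string (images base64-encoded)."""
    return json.dumps({name: base64.b64encode(blob).decode('ascii')
                       for name, blob in images.items()})

def unbundle_images(payload):
    """Reverse of bundle_images: recover {name: raw_bytes} from the JSON payload."""
    return {name: base64.b64decode(text)
            for name, text in json.loads(payload).items()}
```

Keep in mind base64 inflates the payload by roughly a third, which is part of why one-request-per-image remains the usual design.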
AppEngine/Python, query database and send multiple images to the client as a response to a single get request | 2,003,692 | 2 | 0 | 856 | 0 | python,ajax,image,google-app-engine | The App Engine part isn't much of a problem (as long as the number of images and total size doesn't exceed GAE's limits), but the user's browser is unlikely to know what to do in order to receive multiple payloads per GET request -- that's just not how the web works. I guess you could concatenate all the blobs/bytestreams (together with metadata needed for the client to reconstruct them) and send that (it will still have to be a separate payload from the HTML / CSS / Javascript that you're also sending), as long as you can cajole Javascript into separating the megablob into the needed images again (but for that part you should open a separate question and tag it Javascript, as Python has little to do with it, and GAE nothing at all).
I would instead suggest just accepting the fact that the browser (presumably via ajax, as you mention in tags) will be sending multiple requests, just as it does to every other webpage on the WWW, and focus on optimizing the serving side -- the requests will be very close in time, so you should just use memcache to keep the yet-unsent images to avoid multiple fetch-from-storage requests in your GAE app. | 0 | 1 | 0 | 0 | 2010-01-05T01:44:00.000 | 4 | 1.2 | true | 2,003,630 | 0 | 0 | 1 | 4 | I am working on a social-network type of application on App Engine, and would like to send multiple images to the client based on a single get request. In particular, when a client loads a page, they should see all images that are associated with their account.
I am using python on the server side, and would like to use Javascript/JQuery on the client side to decode/display the received images.
The difficulty is that I would like to only perform a single query on the server side (ie. query for all images associated with a single user) and send all of the images resulting from the query to the client as a single unit, which will then be broken up into the individual images. Ideally, I would like to use something similar to JSON, but while JSON appears to allow multiple "objects" to be sent as a JSON response, it does not appear to have the ability to allow multiple images (or binary files) to be sent as a JSON response.
Is there another way that I should be looking at this problem, or perhaps a different technology that I should be considering that might allow me to send multiple images to the client, in response to a single get request?
Thank you and Kind Regards
Alexander |
AppEngine/Python, query database and send multiple images to the client as a response to a single get request | 2,003,690 | 0 | 0 | 856 | 0 | python,ajax,image,google-app-engine | Send the client URLs for all the images in one hit, and deal with it on the client. That fits with the design of the protocol, and still lets you only make one query. The client might, if you're lucky, be able to stream those back in its next request, but the neat thing is that it'll work (eventually) even if it can't reuse the connection for some reason (usually a busted proxy in the way). | 0 | 1 | 0 | 0 | 2010-01-05T01:44:00.000 | 4 | 0 | false | 2,003,630 | 0 | 0 | 1 | 4 | I am working on a social-network type of application on App Engine, and would like to send multiple images to the client based on a single get request. In particular, when a client loads a page, they should see all images that are associated with their account.
I am using python on the server side, and would like to use Javascript/JQuery on the client side to decode/display the received images.
The difficulty is that I would like to only perform a single query on the server side (ie. query for all images associated with a single user) and send all of the images resulting from the query to the client as a single unit, which will then be broken up into the individual images. Ideally, I would like to use something similar to JSON, but while JSON appears to allow multiple "objects" to be sent as a JSON response, it does not appear to have the ability to allow multiple images (or binary files) to be sent as a JSON response.
Is there another way that I should be looking at this problem, or perhaps a different technology that I should be considering that might allow me to send multiple images to the client, in response to a single get request?
Thank you and Kind Regards
Alexander |
AppEngine/Python, query database and send multiple images to the client as a response to a single get request | 2,003,652 | 0 | 0 | 856 | 0 | python,ajax,image,google-app-engine | Trying to send all of the images in one request means that you will be fighting very hard against some of the fundamental assumptions of the web and browser technology. If you don't have a really, really compelling reason to do this, you should consider delivering one image per request. That already works now, no sweat, no effort, no wheels reinvented.
I can't think of a sensible way to do what you ask, but I can tell you that you are asking for pain in trying to implement the solution that you are describing. | 0 | 1 | 0 | 0 | 2010-01-05T01:44:00.000 | 4 | 0 | false | 2,003,630 | 0 | 0 | 1 | 4 | I am working on a social-network type of application on App Engine, and would like to send multiple images to the client based on a single get request. In particular, when a client loads a page, they should see all images that are associated with their account.
I am using python on the server side, and would like to use Javascript/JQuery on the client side to decode/display the received images.
The difficulty is that I would like to only perform a single query on the server side (ie. query for all images associated with a single user) and send all of the images resulting from the query to the client as a single unit, which will then be broken up into the individual images. Ideally, I would like to use something similar to JSON, but while JSON appears to allow multiple "objects" to be sent as a JSON response, it does not appear to have the ability to allow multiple images (or binary files) to be sent as a JSON response.
Is there another way that I should be looking at this problem, or perhaps a different technology that I should be considering that might allow me to send multiple images to the client, in response to a single get request?
Thank you and Kind Regards
Alexander |
Get remote MAC address using Python and Linux | 2,010,975 | 0 | 6 | 22,526 | 0 | python,linux,networking,mac-address | Many years ago, I was tasked with gathering various machine info from all machines on a corporate campus. One desired piece of info was the MAC address, which is difficult to get on a network that spanned multiple subnets. At the time, I used the Windows built-in "nbtstat" command.
Today there is a Unix utility called "nbtscan" that provides similar info. If you do not wish to use an external tool, maybe there are NetBIOS libraries for python that could be used to gather the info for you? | 0 | 1 | 1 | 1 | 2010-01-06T03:40:00.000 | 7 | 0 | false | 2,010,816 | 0 | 0 | 0 | 1 | How do I get the MAC address of a remote host on my LAN? I'm using Python and Linux. |
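A related Linux-only trick (an assumption on my part, not from the answer above): for hosts on the same subnet that the machine has recently talked to, the kernel's ARP cache already holds the MAC, and /proc/net/arp can be parsed directly.

```python
def arp_mac(ip, arp_table='/proc/net/arp'):
    """Return the MAC address for `ip` from the kernel ARP cache (Linux), or None."""
    with open(arp_table) as f:
        next(f)  # skip the column-header line
        for line in f:
            fields = line.split()
            if fields and fields[0] == ip:
                return fields[3]  # the "HW address" column
    return None
```

Ping the host first to populate the cache; this never works across routers, which is exactly why tools like nbtscan exist for multi-subnet cases.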
how do i get the byte count of a variable in python just like wc -c gives in unix | 2,020,334 | 0 | 1 | 1,889 | 0 | python,tar | This answer seems irrelevant, since I seem to have misunderstood the question, which has now been clarified. However, should someone find this question, while searching with pretty much the same terms, this answer may still be relevant:
Just open the file in binary mode
f = open(filename, 'rb')
read/skip a bunch and print the next byte(s). I used the same method to 'fix' the n-th byte in a zillion images once. | 0 | 1 | 0 | 1 | 2010-01-07T12:45:00.000 | 5 | 0 | false | 2,020,318 | 0 | 0 | 0 | 2 | i am facing some problem with files with huge data.
I need to skip doing some execution on those files.
I get the data of the file into a variable.
Now I need to get the byte count of the variable, and if it is greater than 102400, print a message.
Update: I cannot open the files directly, since they are present in a tar file.
The content is already getting copied to a variable called 'data'.
I am able to print the contents of the variable data; I just need to check if it has more than 102400 bytes.
Thanks
how do i get the byte count of a variable in python just like wc -c gives in unix | 2,020,425 | 1 | 1 | 1,889 | 0 | python,tar | len(data) gives you the size in bytes if it's binary data. With strings the size depends on the encoding used. | 0 | 1 | 0 | 1 | 2010-01-07T12:45:00.000 | 5 | 0.039979 | false | 2,020,318 | 0 | 0 | 0 | 2 | i am facing some problem with files with huge data.
I need to skip doing some execution on those files.
I get the data of the file into a variable.
Now I need to get the byte count of the variable, and if it is greater than 102400, print a message.
Update: I cannot open the files directly, since they are present in a tar file.
The content is already getting copied to a variable called 'data'.
I am able to print the contents of the variable data; I just need to check if it has more than 102400 bytes.
Thanks
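Putting the answers together for the tar case: a member's size is available from its TarInfo without extracting it, and once the content has been read into a variable, len(data) gives the byte count. A sketch:

```python
import tarfile

LIMIT = 102400  # bytes

def oversized_members(tar_path, limit=LIMIT):
    """Names of regular files inside the tar whose content exceeds `limit` bytes."""
    with tarfile.open(tar_path) as tar:
        return [m.name for m in tar.getmembers()
                if m.isfile() and m.size > limit]
```

If the content is already in a variable, `if len(data) > 102400:` is the whole check (for bytes; for unicode strings the byte count depends on the encoding).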
What linux distro is better suited for Python web development? | 2,021,876 | 0 | 7 | 11,243 | 0 | python,windows,linux,preferences | Most major distributions will include Python and Apache, so it's really just a matter of choice. If you're new to Linux, I'd suggest either Ubuntu or Fedora. Both are great for new users and have excellent community support. | 0 | 1 | 0 | 0 | 2010-01-07T16:21:00.000 | 9 | 0 | false | 2,021,803 | 0 | 0 | 0 | 7 | Which linux distro is better suited for Python web development?
Background:
I currently develop on Windows and it's fine, but I am looking to move my core Python development to Linux. I'm sure most any distro will work fine, but does anyone have any reasons to believe one distro is better than another? |
What linux distro is better suited for Python web development? | 2,021,840 | 0 | 7 | 11,243 | 0 | python,windows,linux,preferences | I use RHEL and have been very happy, so from that I would say Fedora would be fine. I use Debian at home, and it's great (headless though, so no web there).
That said, I think you should probably just pick one based on what your company uses, or any number of non-Python reasons. I don't think you are going to find Python tool availability an issue with any Linux distribution. | 0 | 1 | 0 | 0 | 2010-01-07T16:21:00.000 | 9 | 0 | false | 2,021,803 | 0 | 0 | 0 | 7 | Which linux distro is better suited for Python web development?
Background:
I currently develop on Windows and it's fine, but I am looking to move my core Python development to Linux. I'm sure most any distro will work fine, but does anyone have any reasons to believe one distro is better than another? |
What linux distro is better suited for Python web development? | 28,327,632 | 0 | 7 | 11,243 | 0 | python,windows,linux,preferences | I think I remember a podcast with Guido van Rossum and his core team of Python developers back in the day, and one of those core Python developers is now employed by Canonical to take care of Python integration for the Ubuntu distro. That explains why Ubuntu is a much more Pythonic distro compared to other distros.
On the other hand, the Gentoo Linux distro also has Python integrated into its Portage package management system. So Gentoo and Ubuntu, I would say, are both good for Python development, each representing one end of the spectrum. | 0 | 1 | 0 | 0 | 2010-01-07T16:21:00.000 | 9 | 0 | false | 2,021,803 | 0 | 0 | 0 | 7 | Which linux distro is better suited for Python web development?
Background:
I currently develop on Windows and it's fine, but I am looking to move my core Python development to Linux. I'm sure most any distro will work fine, but does anyone have any reasons to believe one distro is better than another? |
What linux distro is better suited for Python web development? | 2,021,832 | 13 | 7 | 11,243 | 0 | python,windows,linux,preferences | Largely distribution won't matter, as Python is present and largely self sufficient on virtually all Linux distributions.
If you're wanting to focus on development, I'd recommend Ubuntu. Ubuntu is arguably one of the most fully featured "ready for the user" distributions that makes system administration a snap, so you can focus on the development tasks you want to tackle.
If you have a Linux environment that's a target for your code (like say, RedHat or something), then go with the desktop distribution that matches your target environment (like, say, Fedora for RedHat, Gentoo for Gentoo, Ubuntu for Ubuntu Server, etc.)
Otherwise, all of them are suitable. | 0 | 1 | 0 | 0 | 2010-01-07T16:21:00.000 | 9 | 1.2 | true | 2,021,803 | 0 | 0 | 0 | 7 | Which linux distro is better suited for Python web development?
Background:
I currently develop on Windows and it's fine, but I am looking to move my core Python development to Linux. I'm sure most any distro will work fine, but does anyone have any reasons to believe one distro is better than another? |
What linux distro is better suited for Python web development? | 2,975,353 | 0 | 7 | 11,243 | 0 | python,windows,linux,preferences | I am working with Python on CentOS 5.4 and Fedora 12 and I am very happy.
I also use Eclipse IDE for python and other languages without having any major issues. | 0 | 1 | 0 | 0 | 2010-01-07T16:21:00.000 | 9 | 0 | false | 2,021,803 | 0 | 0 | 0 | 7 | Which linux distro is better suited for Python web development?
Background:
I currently develop on Windows and it's fine, but I am looking to move my core Python development to Linux. I'm sure most any distro will work fine, but does anyone have any reasons to believe one distro is better than another? |
What linux distro is better suited for Python web development? | 2,022,173 | 0 | 7 | 11,243 | 0 | python,windows,linux,preferences | As the other answers have mentioned so far, the Python 2.6 interpreter will be available on all recent Linux distribution releases. That shouldn't influence your choice.
However, your choice of IDE may eliminate some possibilities. You should make sure the distribution you select has a package for the latest version of your IDE, and that it is updated often enough.
As an example, I like to use Eclipse with PyDev for developing Python apps in either OS, but Ubuntu's official repositories had only Eclipse 3.2 (from 2006) until October of last year, when they finally updated to 3.5 in the latest distribution. | 0 | 1 | 0 | 0 | 2010-01-07T16:21:00.000 | 9 | 0 | false | 2,021,803 | 0 | 0 | 0 | 7 | Which linux distro is better suited for Python web development?
Background:
I currently develop on Windows and it's fine, but I am looking to move my core Python development to Linux. I'm sure most any distro will work fine, but does anyone have any reasons to believe one distro is better than another? |
What linux distro is better suited for Python web development? | 5,528,749 | 0 | 7 | 11,243 | 0 | python,windows,linux,preferences | Any desktop distribution like Ubuntu, OpenSUSE, Fedora, ... is OK, But if you want to always have the latest versions, I recommend ArchLinux. | 0 | 1 | 0 | 0 | 2010-01-07T16:21:00.000 | 9 | 0 | false | 2,021,803 | 0 | 0 | 0 | 7 | Which linux distro is better suited for Python web development?
Background:
I currently develop on Windows and it's fine, but I am looking to move my core Python development to Linux. I'm sure most any distro will work fine, but does anyone have any reasons to believe one distro is better than another? |
How do you ask gstreamer if a file can be played? | 2,026,830 | 0 | 1 | 208 | 0 | python,uri,decode,gstreamer,codec | I guess you can try to play it and see if that raises any error - in fact, there's no way to know the set of codecs necessary without opening the file. Some distributions even have hooks in place that ask the user to download the right codec when you start playing something. | 0 | 1 | 0 | 1 | 2010-01-08T06:49:00.000 | 1 | 1.2 | true | 2,025,964 | 0 | 0 | 0 | 1 | I'm trying to write a simple command line audio player using the Python Gstreamer bindings.
Is there a function in the gstreamer API that determines in advance whether or not a particular file (URI) can be decoded and played by the currently installed set of codecs? |
Strategies or support for making parts of a Twisted application reloadable? | 2,026,161 | 1 | 3 | 457 | 0 | python,twisted | You could write something similar to paster's reloader, that would work like this:
start your main function, and before importing / using any twisted code, fork/spawn a subprocess.
In the subprocess, run your twisted application.
In the main process, run your code which checks for changed files. If code has changed, reload the subprocess.
However, the issue here is that unlike a development webserver, most twisted apps have a lot more state and just flat out killing / restarting the process is a bad idea, you may lose some state.
There is a way to do it cleanly:
When you spawn the twisted app, use subprocess.Popen() or similar, to get stdin/stdout pipes. Now in your subprocess, use the twisted reactor to listen on stdin (there is code for this in twisted, see twisted.internet.stdio which allows you to have a Protocol which talks to a stdio transport, in the usual twisted non-blocking manner).
Finally, when you decide it's time to reload, write something to the stdin of the subprocess telling it to shutdown. Now your twisted code can respond and shut down gracefully. Once it's cleanly quit, your master process can just spawn it again.
(Alternately you can use signals to achieve this, but this may not be OS portable) | 0 | 1 | 0 | 0 | 2010-01-08T07:28:00.000 | 2 | 0.099668 | false | 2,026,091 | 0 | 0 | 1 | 1 | I've written a specialized JSON-RPC server and just started working my way up into the application logic and finding it is a tad annoying to constantly having to stop/restart the server to make certain changes.
Previously I had a handler that ran at intervals, comparing module modification time stamps against the previous check and reloading modules as needed. Unfortunately I don't trust it to work correctly now.
Is there a way for a reactor to stop and restart itself in a manner similar to Paster's Reloadable HTTPServer? |
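The stdin-based shutdown handshake described in the answer above can be sketched without Twisted at all; the child code here is a hypothetical stand-in for a real reactor loop, and a real supervisor would watch source-file mtimes before asking for a shutdown:

```python
import subprocess
import sys

# Stand-in for the Twisted app: it reads stdin and exits cleanly when told to.
CHILD_CODE = """\
import sys
for line in sys.stdin:
    if line.strip() == "shutdown":
        print("clean exit")
        break
"""

def run_app_once():
    # Spawn the app with a stdin pipe; the parent keeps this handle so it
    # can later ask the child to shut down gracefully before respawning it.
    proc = subprocess.Popen(
        [sys.executable, "-c", CHILD_CODE],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    out, _ = proc.communicate("shutdown\n")  # ask the child to stop
    return proc.returncode, out.strip()
```

A real reloader would wrap `run_app_once()` in a loop and only send "shutdown" when a watched module's mtime changes.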
Interactive Python GUI | 2,032,648 | 2 | 2 | 1,183 | 0 | python,user-interface,pygtk,interactive,spawn | Your main GUI thread will freeze if you spawn off a process and wait for it to complete. Often, you can simply use subprocess and poll it now and then for completion rather than waiting for it to finish. This will keep your GUI from freezing. | 0 | 1 | 0 | 0 | 2010-01-09T06:49:00.000 | 2 | 0.197375 | false | 2,032,617 | 0 | 0 | 0 | 2 | Python has been really bumpy for me: the last time I created a GUI client, it seemed to hang when spawning a process, calling a shell script, and calling an outside application.
This has been my major problem with Python since then. Now that I'm on a new project, can someone give me pointers and a word of advice so that my GUI Python application stays interactive when spawning another process?
Interactive Python GUI | 2,032,635 | 4 | 2 | 1,183 | 0 | python,user-interface,pygtk,interactive,spawn | Simplest (not necessarily "best" in an abstract sense): spawn the subprocess in a separate thread, communicating results back to the main thread via a Queue.Queue instance -- the main thread must periodically check that queue to see if the results have arrived yet, but periodic polling isn't hard to arrange in any event loop. | 0 | 1 | 0 | 0 | 2010-01-09T06:49:00.000 | 2 | 0.379949 | false | 2,032,617 | 0 | 0 | 0 | 2 | Python has been really bumpy for me: the last time I created a GUI client, it seemed to hang when spawning a process, calling a shell script, and calling an outside application.
This has been my major problem with Python since then. Now that I'm on a new project, can someone give me pointers and a word of advice so that my GUI Python application stays interactive when spawning another process?
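The thread-plus-Queue pattern from the answer above can be sketched as follows; a trivial external command stands in for the real subprocess, and the GUI toolkit is omitted (in practice you would drain the queue from a timer callback rather than block):

```python
import queue
import subprocess
import sys
import threading

results = queue.Queue()

def worker(cmd):
    # Runs the external command off the GUI thread and hands the
    # result back through the thread-safe queue.
    completed = subprocess.run(cmd, capture_output=True, text=True)
    results.put(completed.stdout.strip())

cmd = [sys.executable, "-c", "print('done')"]
t = threading.Thread(target=worker, args=(cmd,), daemon=True)
t.start()

# A GUI would call results.get_nowait() periodically instead of blocking here.
value = results.get(timeout=10)
```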
GAE and Django: What are the benefits? | 2,428,291 | 1 | 12 | 2,741 | 0 | python,django,google-app-engine | I prefer webapp. It scales better according to Google and seems to better integrated with the App Engine infrastructure. Plus it's more lightweight. | 0 | 1 | 0 | 0 | 2010-01-09T19:46:00.000 | 8 | 0.024995 | false | 2,034,684 | 0 | 0 | 1 | 3 | Currently I have a website on the Google App Engine written in Google's webapp framework. What I want to know is what are the benefits of converting my app to run with django? And what are the downsides? Also how did you guys code your GAE apps? Did you use webapp or django? Or did you go an entirely different route and use the Java api?
Thanks |
GAE and Django: What are the benefits? | 2,035,524 | 2 | 12 | 2,741 | 0 | python,django,google-app-engine | GAE is a great tool for new and small projects, that do not require a relational database. I use a range of web hosting solutions.
1) I built www.gaiagps.com on the App Engine, because it was just some brochureware, and a tiny key-value store for the blog part.
2) My colleague also built a web crawler on GAE, because it was just some simple Python scripts that collected web pages. That app actually sends the data over to EC2 though, where more work is done.
3) I host www.trailbehind.com on EC2 because it uses a geo-database (PostGIS) which you would basically have to implement yourself on App Engine.
4) I host TRAC and SVN on WebFaction, because it's off-the-shelf for any slice there.
If I need to do a site in a couple of days, I use GAE. If it's a large or existing project, or has a funky database, I use something else. | 0 | 1 | 0 | 0 | 2010-01-09T19:46:00.000 | 8 | 0.049958 | false | 2,034,684 | 0 | 0 | 1 | 3 | Currently I have a website on the Google App Engine written in Google's webapp framework. What I want to know is what are the benefits of converting my app to run with django? And what are the downsides? Also how did you guys code your GAE apps? Did you use webapp or django? Or did you go an entirely different route and use the Java api?
Thanks |
GAE and Django: What are the benefits? | 2,590,020 | 0 | 12 | 2,741 | 0 | python,django,google-app-engine | try kay-framework if you are looking for framework specifically designed for google app engine. | 0 | 1 | 0 | 0 | 2010-01-09T19:46:00.000 | 8 | 0 | false | 2,034,684 | 0 | 0 | 1 | 3 | Currently I have a website on the Google App Engine written in Google's webapp framework. What I want to know is what are the benefits of converting my app to run with django? And what are the downsides? Also how did you guys code your GAE apps? Did you use webapp or django? Or did you go an entirely different route and use the Java api?
Thanks |
If I use QT For Windows, will my application run great on Linux/Mac/Windows? | 2,040,550 | 5 | 5 | 1,271 | 0 | python,qt,cross-platform | As other posters mentioned, the key issue is making sure you never touch a different non-Qt non-cross-platform API. Or really even a different non-Qt crossplatform API, if you use Qt you kind of need to commit to it, it's a comprehensive framework and for the most part sticking with Qt is easier than going to anything else. There's some nice advantages as the basic primitives in your program will work the same way all over the place. (i.e. a QString in your networking code will be the same as a QString in your interface code.) Portability-wise, if you stay within the API Qt provides you, it should work on multiple platforms.
There will be areas where you may need to call some Qt functions which provide specific cross-platform tweaks more important to some platforms than others (e.g. dock icons) and you won't immediately have a polished application on all three platforms. But in general, you should remain very close to an application that compiles and runs on all three. (Try to use qmake or a similar build system too, as the build process for Qt applications varies depending on the platform. Different flags, etc.)
There's some odd issues that come up when you mix Qt with other APIs like OpenGL, in particular the way windows locks GL contexts differs from the way OS X and Linux does, so if you intend to use OpenGL with multiple threads, try to periodically compile on the other platforms to make sure nothing is completely busted. This will also quickly point out areas where you might have inadvertently used a non-cross-platform system API.
I've used Qt with a team to build a multi-threaded 3-d multiplayer real-time networked game (read: non-trivial application that fully utilized many many areas of Qt) and we were nothing but blown away by the effectiveness of Qt's ability to support multiple platforms. (We developed on OS X while targeting Windows and I regularly made sure it still ran on Linux as well.) We encountered only a few platform specific bugs, almost all of which arose from the use of non-Qt APIs such as OpenGL. (Which should really tell you something, that OpenGL was more of a struggle to use cross platform than Qt was.)
At the end of the experience we were pleased at how little time we needed to spend dealing with platform-specific bugs. It was surprising how well we could make a GUI app for windows given almost none of the team actually used it as a primary development platform through the project.
But do test early and often. I don't think your approach of writing an entire application and then testing is a good idea. It's possible with Qt, but unlikely if you don't have experience writing portable code and/or are new to Qt. | 1 | 1 | 0 | 0 | 2010-01-09T22:40:00.000 | 6 | 0.16514 | false | 2,035,249 | 0 | 0 | 0 | 5 | I'm under the impressions that Python runs in the Triforce smoothly. A program that runs in Windows will run in Linux. Is this sentiment correct?
Having said that, if I create my application in QT For Windows, will it run flawlessly in Linux/Mac as well?
Thanks. |
If I use QT For Windows, will my application run great on Linux/Mac/Windows? | 2,035,284 | 1 | 5 | 1,271 | 0 | python,qt,cross-platform | Generally - as long as you don't use code that is not covered by Qt classes - yes.
I have several times just recompiled applications I wrote on Linux (64-bit) under Windows, and the other way around. It works for me every time.
Depends on your needs, you might also find compiler problems, but I am sure you will know how to work around them. Other people mentioned some issues you should look for, just read the other posts in the question. | 1 | 1 | 0 | 0 | 2010-01-09T22:40:00.000 | 6 | 0.033321 | false | 2,035,249 | 0 | 0 | 0 | 5 | I'm under the impressions that Python runs in the Triforce smoothly. A program that runs in Windows will run in Linux. Is this sentiment correct?
Having said that, if I create my application in QT For Windows, will it run flawlessly in Linux/Mac as well?
Thanks. |
If I use QT For Windows, will my application run great on Linux/Mac/Windows? | 2,035,265 | 0 | 5 | 1,271 | 0 | python,qt,cross-platform | It might run well, but it will take some testing, and of course Qt only handles the GUI portability, not the myriad of other things that might cause portability problems.
Qt apps generally don't fit in very well on MacOS because they don't have Applescript support by default and don't necessarily have the right keybindings. But if you do the work to fix those issues, they work, but not nicely. On the Mac, it's far better to build a native UI. If this is an in-house app, Qt is probably OK, but if it's for sale, you won't make many sales and will create yourself some support hassles. | 1 | 1 | 0 | 0 | 2010-01-09T22:40:00.000 | 6 | 0 | false | 2,035,249 | 0 | 0 | 0 | 5 | I'm under the impressions that Python runs in the Triforce smoothly. A program that runs in Windows will run in Linux. Is this sentiment correct?
Having said that, if I create my application in QT For Windows, will it run flawlessly in Linux/Mac as well?
Thanks. |
If I use QT For Windows, will my application run great on Linux/Mac/Windows? | 2,035,272 | 8 | 5 | 1,271 | 0 | python,qt,cross-platform | Yes. No. Maybe. See also: Java and "write once, run anywhere".
Filesystem layout, external utilities, anything you might do with things like dock icons, character encoding behaviors, these and more are areas you might run into some trouble.
Using Qt and Python, and strenuously avoiding anything that seems tied to Windows-specific libraries or behaviors whenever possible will make running the application on Mac and Linux much easier, but for any non-trivial application, the first time someone tries it, it will blow up in their face.
But through careful choice of frameworks and libraries, making the application work cross-platform will be much more like bug fixing than traditional "porting". | 1 | 1 | 0 | 0 | 2010-01-09T22:40:00.000 | 6 | 1.2 | true | 2,035,249 | 0 | 0 | 0 | 5 | I'm under the impressions that Python runs in the Triforce smoothly. A program that runs in Windows will run in Linux. Is this sentiment correct?
Having said that, if I create my application in QT For Windows, will it run flawlessly in Linux/Mac as well?
Thanks. |
If I use QT For Windows, will my application run great on Linux/Mac/Windows? | 2,035,601 | 0 | 5 | 1,271 | 0 | python,qt,cross-platform | As the others said, everything which is done using Qt-Functionality will most likely run quite flawlessly, WHEN you dont use platform specific functionality of qt.
There isnt that much (most of it has to do with window-manager stuff) , but some things might not work on other systems.
But such things are surely mentiond in the documentation of Qt.
Still there are things which cant be done using Qt, so you will have to do that yourself using plain Python...
Yeah "Python" itself is platform-independent (well it should), but there are lots of other things involved ... well mainly the OS.
And how the OS reacts you will plainly have to findout yourself by testing the application on all target OS.
Recently i wrote an quite simple GUI-application, while it ran flawlessy on Windows, it didnt run on Linux, because on Linux Python interpreted files encoded in unicode differently than on Windows.
Additionally a small script which should return the hostname of the machine, which it did on Windows, only returned "localhost" on Linux, which was obviously not what i wanted. | 1 | 1 | 0 | 0 | 2010-01-09T22:40:00.000 | 6 | 0 | false | 2,035,249 | 0 | 0 | 0 | 5 | I'm under the impressions that Python runs in the Triforce smoothly. A program that runs in Windows will run in Linux. Is this sentiment correct?
Having said that, if I create my application in QT For Windows, will it run flawlessly in Linux/Mac as well?
Thanks. |
What is my current desktop environment? | 2,035,693 | 4 | 6 | 4,870 | 0 | python,linux,environment | Sometimes people run a mix of desktop environments. Make your app desktop-agnostic using xdg-utils; that means using xdg-open to open a file or url, using xdg-user-dir DOCUMENTS to find the docs folder, xdg-email to send e-mail, and so on. | 0 | 1 | 0 | 0 | 2010-01-10T01:06:00.000 | 4 | 0.197375 | false | 2,035,657 | 0 | 0 | 0 | 1 | How can I get to know what my desktop environment is using Python? I like the result to be gnome or KDE or else. |
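One way to answer the question from Python, beyond making the app desktop-agnostic as the answer recommends: inspect the environment variables desktops commonly set. `XDG_CURRENT_DESKTOP` is a later convention not mentioned in the answer, so treat this as a heuristic sketch:

```python
import os

def desktop_environment(environ=os.environ):
    # Heuristic: modern desktops export XDG_CURRENT_DESKTOP
    # (e.g. "GNOME", "KDE"); older sessions may only set
    # DESKTOP_SESSION. Neither is guaranteed to be present.
    for var in ("XDG_CURRENT_DESKTOP", "DESKTOP_SESSION"):
        value = environ.get(var)
        if value:
            return value
    return "unknown"
```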
compile python .py file without executing | 2,042,452 | 4 | 53 | 87,360 | 0 | python | $ python -c "import py_compile; py_compile.compile('yourfile.py')"
or
$ python -c "import py_compile; py_compile.compileall('dir')" | 0 | 1 | 0 | 0 | 2010-01-11T14:34:00.000 | 6 | 0.132549 | false | 2,042,426 | 1 | 0 | 0 | 1 | Is there a way to compile a Python .py file from the command-line without executing it?
I am working with an application that stores its python extensions in a non-standard path with limited permissions and I'd like to compile the files during installation. I don't need the overhead of Distutils. |
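The same thing from inside Python: `py_compile.compile()` byte-compiles a file without running it, demonstrated below with a throwaway module whose top-level code would print if it were actually executed:

```python
import os
import py_compile
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "hello.py")
with open(src, "w") as f:
    f.write("print('this must not run during compilation')\n")

# doraise=True turns syntax errors into py_compile.PyCompileError
# instead of printing them; the return value is the .pyc path.
pyc_path = py_compile.compile(src, doraise=True)
```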
Can I ask for screenshot of some great personalized IDE for python? | 2,048,362 | 0 | 2 | 266 | 0 | python,open-source,development-environment | Install any of the Linux distributions on your computer. If you have a preference, great. If not, try Ubuntu, Fedora, Debian. Any of them is pretty user and developer friendly too.
IDE, well I don't use one. But you may try NetBeans with Python support or Eclipse (with PyDev).
Code style- well, try and learn to be pythonic. It should come with practice and asking questions
I think that should get you started! | 0 | 1 | 0 | 0 | 2010-01-12T10:57:00.000 | 4 | 0 | false | 2,048,345 | 0 | 0 | 0 | 1 | How to setup a Linux/Unix machine for python development? Which Linux/Unix version should I use? What IDE should be used? What development plugins should I have? What code style should would be THE BEST? All above, a great development machine for open source (python developers) development?
Can i ask for screenshot of some great personalized IDE for Python? All platform users are invited. Please, do include the source/plugin/article how you made it.
Thanks. |
Atomic file write operations (cross platform) | 4,406,823 | 2 | 31 | 15,100 | 0 | java,python,file,file-io | In Linux, Solaris, Unix this is easy. Just use rename() from your program or mv. The files need to be on the same filesystem.
On Windows, this is possible if you can control both programs. LockFileEx. For reads, open a shared lock on the lockfile. For writes, open an exclusive lock on the lockfile. Locking is weird in Windows, so I recommend using a separate lock file for this. | 0 | 1 | 0 | 1 | 2010-01-12T13:36:00.000 | 7 | 0.057081 | false | 2,049,247 | 0 | 0 | 0 | 6 | How do I build up an atomic file write operation? The file is to be written by a Java service and read by python scripts.
For the record, reads are far greater than writes. But the write happens in batches and tend to be long. The file size amounts to mega bytes.
Right now my approach is:
Write file contents to a temp file in same directory
Delete old file
Rename temp file to old filename.
Is this the right approach? How can avoid conditions where the old file is deleted but the new filename is yet to be renamed?
Do these programming languages (python and java) offer constructs to lock and avoid this situation? |
Atomic file write operations (cross platform) | 2,049,282 | 13 | 31 | 15,100 | 0 | java,python,file,file-io | AFAIK no.
And the reason is that for such an atomic operation to be possible, there has to be OS support in the form of a transactional file system. And none of the mainstream operating system offer a transactional file system.
EDIT - I'm wrong for POSIX-compliant systems at least. The POSIX rename syscall performs an atomic replace if a file with the target name already exists ... as pointed out by @janneb. That should be sufficient to do the OP's operation atomically.
However, the fact remains that the Java File.renameTo() method is explicitly not guaranteed to be atomic, so it does not provide a cross-platform solution to the OP's problem.
EDIT 2 - With Java 7 you can use java.nio.file.Files.move(Path source, Path target, CopyOption... options) with copyOptions and ATOMIC_MOVE. If this is not supported (by the OS / file system) you should get an exception. | 0 | 1 | 0 | 1 | 2010-01-12T13:36:00.000 | 7 | 1.2 | true | 2,049,247 | 0 | 0 | 0 | 6 | How do I build up an atomic file write operation? The file is to be written by a Java service and read by python scripts.
For the record, reads are far greater than writes. But the write happens in batches and tend to be long. The file size amounts to mega bytes.
Right now my approach is:
Write file contents to a temp file in same directory
Delete old file
Rename temp file to old filename.
Is this the right approach? How can avoid conditions where the old file is deleted but the new filename is yet to be renamed?
Do these programming languages (python and java) offer constructs to lock and avoid this situation? |
Atomic file write operations (cross platform) | 2,049,334 | 6 | 31 | 15,100 | 0 | java,python,file,file-io | At least on POSIX platforms, leave out step 3 (delete old file). In POSIX, rename within a filesystem is guaranteed to be atomic, and renaming on top of an existing file replaces it atomically. | 0 | 1 | 0 | 1 | 2010-01-12T13:36:00.000 | 7 | 1 | false | 2,049,247 | 0 | 0 | 0 | 6 | How do I build up an atomic file write operation? The file is to be written by a Java service and read by python scripts.
For the record, reads are far greater than writes. But the write happens in batches and tend to be long. The file size amounts to mega bytes.
Right now my approach is:
Write file contents to a temp file in same directory
Delete old file
Rename temp file to old filename.
Is this the right approach? How can avoid conditions where the old file is deleted but the new filename is yet to be renamed?
Do these programming languages (python and java) offer constructs to lock and avoid this situation? |
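The temp-file-plus-rename recipe from these answers can be sketched in Python. `os.replace()` (Python 3.3+) atomically replaces an existing target on POSIX and gives replace semantics on Windows too, so the separate "delete old file" step can indeed be dropped:

```python
import os
import tempfile

def atomic_write(path, data):
    # 1. Write to a temp file in the *same directory*, so the final
    #    rename stays within one filesystem (rename is only atomic there).
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push bytes to disk before the rename
        # 2. Atomic swap: readers see either the old or new file, never a mix.
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise
```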
Atomic file write operations (cross platform) | 2,049,333 | 1 | 31 | 15,100 | 0 | java,python,file,file-io | You could try and use an extra file to act as a lock, but I'm not sure if that will work out ok. (It would force you to create lock-checking and retry logic at both sides, java and python)
Another solution might be to not create files at all, maybe you could make your java process listen on a port and serve data from there rather than from a file? | 0 | 1 | 0 | 1 | 2010-01-12T13:36:00.000 | 7 | 0.028564 | false | 2,049,247 | 0 | 0 | 0 | 6 | How do I build up an atomic file write operation? The file is to be written by a Java service and read by python scripts.
For the record, reads are far greater than writes. But the write happens in batches and tend to be long. The file size amounts to mega bytes.
Right now my approach is:
Write file contents to a temp file in same directory
Delete old file
Rename temp file to old filename.
Is this the right approach? How can avoid conditions where the old file is deleted but the new filename is yet to be renamed?
Do these programming languages (python and java) offer constructs to lock and avoid this situation? |
Atomic file write operations (cross platform) | 2,049,395 | 3 | 31 | 15,100 | 0 | java,python,file,file-io | It's a classic producer/consumer problem. You should be able to solve this by using file renaming, which is atomic on POSIX systems. | 0 | 1 | 0 | 1 | 2010-01-12T13:36:00.000 | 7 | 0.085505 | false | 2,049,247 | 0 | 0 | 0 | 6 | How do I build up an atomic file write operation? The file is to be written by a Java service and read by python scripts.
For the record, reads are far greater than writes. But the write happens in batches and tend to be long. The file size amounts to mega bytes.
Right now my approach is:
Write file contents to a temp file in same directory
Delete old file
Rename temp file to old filename.
Is this the right approach? How can avoid conditions where the old file is deleted but the new filename is yet to be renamed?
Do these programming languages (python and java) offer constructs to lock and avoid this situation? |
Atomic file write operations (cross platform) | 2,049,386 | 1 | 31 | 15,100 | 0 | java,python,file,file-io | Have the python scripts request permission from the service. While the service is writing it would place a lock on the file. If the lock exists, the service would reject the python request. | 0 | 1 | 0 | 1 | 2010-01-12T13:36:00.000 | 7 | 0.028564 | false | 2,049,247 | 0 | 0 | 0 | 6 | How do I build up an atomic file write operation? The file is to be written by a Java service and read by python scripts.
For the record, reads are far greater than writes. But the write happens in batches and tend to be long. The file size amounts to mega bytes.
Right now my approach is:
Write file contents to a temp file in same directory
Delete old file
Rename temp file to old filename.
Is this the right approach? How can avoid conditions where the old file is deleted but the new filename is yet to be renamed?
Do these programming languages (python and java) offer constructs to lock and avoid this situation? |
AppEngine fetch through a free proxy | 2,218,463 | 0 | 4 | 888 | 0 | python,google-app-engine,proxy | I'm currently having the same problem and I was thinking about this solution (not yet tried):
-> develop an app that fetches what you want
-> run it locally
-> fetch your local server from your initial app
so the proxy is your computer, which you know is not blocked
Let me know if it works! | 0 | 1 | 0 | 0 | 2010-01-12T15:57:00.000 | 5 | 0 | false | 2,050,256 | 0 | 0 | 1 | 3 | My (Python) AppEngine program fetches a web page from another site to scrape data from it -- but it seems like the 3rd party site is blocking requests from Google App Engine! -- I can fetch the page from development mode, but not when deployed.
Can I get around this by using a free proxy of some sort?
Can I use a free proxy to hide the fact that I am requesting from App Engine?
How do I find/choose a proxy? -- what do I need? -- how do I perform the fetch?
Is there anything else I need to know or watch out for? |
AppEngine fetch through a free proxy | 2,050,288 | 2 | 4 | 888 | 0 | python,google-app-engine,proxy | Probably the correct approach is to request permission from the owners of the site you are scraping.
Even if you use a proxy, there is still a big chance that requests coming through the proxy will end up blocked as well. | 0 | 1 | 0 | 0 | 2010-01-12T15:57:00.000 | 5 | 0.07983 | false | 2,050,256 | 0 | 0 | 1 | 3 | My (Python) AppEngine program fetches a web page from another site to scrape data from it -- but it seems like the 3rd party site is blocking requests from Google App Engine! -- I can fetch the page from development mode, but not when deployed.
Can I get around this by using a free proxy of some sort?
Can I use a free proxy to hide the fact that I am requesting from App Engine?
How do I find/choose a proxy? -- what do I need? -- how do I perform the fetch?
Is there anything else I need to know or watch out for? |
AppEngine fetch through a free proxy | 3,731,700 | 0 | 4 | 888 | 0 | python,google-app-engine,proxy | Well to be fair, if they don't want you doing that then you probably shouldn't. It's not nice to be mean.
But if you really want to do it, the best approach would be creating a simple proxy script and running it on a VPS or some computer with a decent enough connection.
Basically you expose a REST API from your server to your GAE, then the server just makes all the same requests it gets to the target site and returns the output. | 0 | 1 | 0 | 0 | 2010-01-12T15:57:00.000 | 5 | 0 | false | 2,050,256 | 0 | 0 | 1 | 3 | My (Python) AppEngine program fetches a web page from another site to scrape data from it -- but it seems like the 3rd party site is blocking requests from Google App Engine! -- I can fetch the page from development mode, but not when deployed.
Can I get around this by using a free proxy of some sort?
Can I use a free proxy to hide the fact that I am requesting from App Engine?
How do I find/choose a proxy? -- what do I need? -- how do I perform the fetch?
Is there anything else I need to know or watch out for? |
How to disable shell interception of control characters? | 2,054,648 | 2 | 3 | 543 | 0 | python,unix,signals,curses | See the termios module, and the termios(3) man page. | 0 | 1 | 0 | 1 | 2010-01-13T05:21:00.000 | 2 | 1.2 | true | 2,054,626 | 0 | 0 | 0 | 1 | I'm writing a curses application in Python under UNIX. I want to enable the user to use C-Y to yank from a kill ring a la Emacs.
The trouble is, of course, that C-Y is caught by my shell which then sends SIGTSTP to my process. In addition, C-Z also results in SIGTSTP being sent, so catching the signal means that C-Y and C-Z are not distinguishable (though even without this the only solutions I can think of are extremely hackish).
I know what I'm asking is possible (in C if not in Python), since Emacs does it. How can I disable the shell's special handling of certain control characters sent from the keyboard and have the characters in question appear on the process' stdin? |
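Concretely, the termios module lets you stop the terminal driver from turning Ctrl-Z (and, where supported, Ctrl-Y) into SIGTSTP by clearing the ISIG local flag, or by blanking individual control characters such as VSUSP. This POSIX-only sketch demonstrates it on a pseudo-terminal instead of the real stdin; a real curses app would use `sys.stdin.fileno()` and restore the old settings in a finally block:

```python
import os
import pty
import termios

# Demonstrate on a pseudo-terminal; substitute sys.stdin.fileno()
# for slave_fd in a real application.
master_fd, slave_fd = pty.openpty()
old = termios.tcgetattr(slave_fd)

new = termios.tcgetattr(slave_fd)
new[3] &= ~termios.ISIG          # lflags: stop generating SIGINT/SIGTSTP
termios.tcsetattr(slave_fd, termios.TCSANOW, new)

applied = termios.tcgetattr(slave_fd)
suspend_disabled = not (applied[3] & termios.ISIG)

# Restore and clean up.
termios.tcsetattr(slave_fd, termios.TCSANOW, old)
os.close(master_fd)
os.close(slave_fd)
```

With ISIG cleared, the suspend character arrives on stdin as an ordinary byte for your program to handle.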
Recover process with subprocess.Popen? | 2,056,745 | 3 | 2 | 1,293 | 0 | python,popen | The Popen object is effectively just a wrapper for the child process's PID, stdin, stdout, and stderr, plus some convenience functions for using those.
So the question is why do you need access to the Popen object? Do you want to communicate with the child, terminate it, or check whether it's still running?
In any case there is no way to reacquire a Popen object for an already running process.
The proper way to approach this is to launch the child as a daemon, like Tobu suggested. Part of the procedure for daemonising a process is to close stdin and stdout, so you cannot use those to talk to the child process. Instead most daemons use either pipes or sockets to allow clients to connect to them and to send them messages.
The easiest way to talk to the child is to open a named pipe from the child process at e.g. /etc/my_pipe, open that named pipe from the parent / controlling process, and write / read to / from it.
After a quick look at python-daemon it seems to me that python-daemon will help you daemonise your child process, which is tricky to get right, but it doesn't help you with the messaging side of things.
But like I said, I think you need to tell us why you need a Popen object for the child process before we can help you any further. | 0 | 1 | 0 | 0 | 2010-01-13T12:33:00.000 | 3 | 1.2 | true | 2,056,594 | 0 | 0 | 0 | 2 | I have a python program that uses subprocess.Popen to launch another process (python process or whatever), and after launching it I save the child's PID to a file. Let's suppose that suddenly the parent process dies (because of an exception or whatever). Is there any way to access again to the object returned by Popen?
I mean, the basic idea is to read the file at first, and if it exists and it has a PID written on it, then access to that process someway, in order to know the return code or whatever. If there isn't a PID, then launch the process with Popen.
Thanks a lot!! |
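As the answer says, the Popen object itself can't be rebuilt, but the PID you saved to the file can at least be checked for liveness. A POSIX-only sketch using the "signal 0" trick (note it cannot recover the return code of a finished child -- only the original parent could have collected that):

```python
import os
import subprocess
import sys

def pid_alive(pid):
    # Signal 0 sends nothing but still performs the existence check
    # (POSIX semantics).
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # exists, but owned by another user
    return True

# Demo: a child that has exited and been reaped no longer "exists".
child = subprocess.Popen([sys.executable, "-c", "pass"])
child.wait()
```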