Dataset columns (name, dtype, observed range):

- Title: string, length 15-150
- A_Id: int64, 2.98k-72.4M
- Users Score: int64, -17 to 470
- Q_Score: int64, 0-5.69k
- ViewCount: int64, 18-4.06M
- Database and SQL: int64, 0-1
- Tags: string, length 6-105
- Answer: string, length 11-6.38k
- GUI and Desktop Applications: int64, 0-1
- System Administration and DevOps: int64, always 1
- Networking and APIs: int64, 0-1
- Other: int64, 0-1
- CreationDate: string, length 23
- AnswerCount: int64, 1-64
- Score: float64, -1 to 1.2
- is_accepted: bool, 2 classes
- Q_Id: int64, 1.85k-44.1M
- Python Basics and Environment: int64, 0-1
- Data Science and Machine Learning: int64, 0-1
- Web Development: int64, 0-1
- Available Count: int64, 1-17
- Question: string, length 41-29k

Each record below repeats these fields in column order: the string fields (Title, Tags, Answer, CreationDate, Question) appear on their own lines, and the numeric fields appear pipe-delimited between them.
How to process long-running requests in Python workers?
| 1,676,102 | 0 | 4 | 2,509 | 0 |
python,nginx,load-balancing,wsgi,reverse-proxy
|
You could use nginx as a load balancer to proxy to a PythonPaste "paster" server (which serves WSGI applications, for example Pylons); paster launches each request in a separate thread anyway.
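For illustration, a minimal sketch of one such worker using only the standard library's wsgiref; load_graph and best_path are hypothetical stand-ins for the asker's routing code, and nginx would round-robin across a pool of these on ports 8001..800N:

```python
from wsgiref.simple_server import make_server
from urlparse import parse_qs  # Python 2, matching the era of the question

GRAPH = load_graph()  # hypothetical: read-only graph, loaded once per worker

def application(environ, start_response):
    qs = parse_qs(environ.get('QUERY_STRING', ''))
    a, b = qs['from'][0], qs['to'][0]
    path = best_path(GRAPH, a, b)  # hypothetical path-finding function
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [','.join(path)]

# Each worker binds its own port; nginx's upstream block fans requests out.
make_server('127.0.0.1', 8001, application).serve_forever()
```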
| 0 | 1 | 1 | 1 |
2009-11-04T15:51:00.000
| 7 | 0 | false | 1,674,696 | 0 | 0 | 0 | 4 |
I have a Python (well, it's PHP now, but we're rewriting it) function that takes some parameters (A and B) and computes some results (finds the best path from A to B in a graph; the graph is read-only). In a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid: it's a simple PHP script + Apache + mod_php + APC, and every request needs to load all the data (over 12MB in PHP arrays), create all structures, compute a path and exit. I want to change it.
I want a setup with N independent workers (X per server with Y servers), where each worker is a Python app running in a loop (getting a request -> processing -> sending the reply -> getting a request...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with a configurable timeout) and feed my workers one request at a time.
How should I approach this? Can you propose a setup? nginx + FastCGI or WSGI, or something else? HAProxy? As you can see, I'm a newbie with Python, reverse proxies, etc. I just need a starting point for the architecture (and data flow).
By the way, the workers use read-only data, so there is no need to maintain locking or communication between them.
|
Can I add Runtime Properties to a Python App Engine App?
| 1,674,790 | 1 | 0 | 158 | 0 |
python,google-app-engine,properties,runtime
|
You can:
edit records in the datastore through the dashboard (if you really have to)
upload new scripts/files (you can access files read-only)
expose a web service API for configuration records in the datastore (probably not what you had in mind)
access a page somewhere through an HTTP endpoint
| 0 | 1 | 0 | 0 |
2009-11-04T15:58:00.000
| 2 | 1.2 | true | 1,674,764 | 0 | 0 | 1 | 1 |
Coming from a Java background, I'm used to having a bunch of properties files I can swap around at runtime depending on which server I'm running on, e.g. dev/production.
Is there a way to do something similar in Python, specifically on Google's App Engine framework?
At the minute I have them defined in .py files; obviously I'd like better separation.
|
GTK+ Startup Notification Icon
| 1,680,360 | 0 | 7 | 840 | 0 |
python,linux,ubuntu,gnome
|
This normally happens automatically when you call the gtk.main() function.
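A sketch of where that happens in a PyGTK 2.x program; if the spinner persists anyway, gtk.gdk.notify_startup_complete() tells the launcher explicitly (assuming PyGTK, since the question predates GTK 3 bindings):

```python
import gtk

win = gtk.Window()
win.connect('destroy', gtk.main_quit)
win.show_all()
gtk.gdk.notify_startup_complete()  # end the launch feedback explicitly
gtk.main()                         # normally flushes the startup notification itself
```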
| 1 | 1 | 0 | 0 |
2009-11-05T12:39:00.000
| 4 | 0 | false | 1,680,311 | 0 | 0 | 0 | 1 |
In GNOME, whenever an application is started, the mouse cursor changes from normal to an activity indicator (a spinning-wheel type thing on Ubuntu). Is there any way to inform GNOME (through some system call) when the application has finished launching, so that the mouse cursor returns to normal without waiting for the usual timeout of 30 seconds to occur?
I have a program in Python using GTK+ that is showing the icon even after launching, so what system call do I make?
|
Python: Platform independent way to modify PATH environment variable
| 1,681,256 | 6 | 102 | 88,027 | 0 |
python,path,cross-platform,environment-variables
|
The caveat to be aware of when modifying environment variables in Python is that there is no equivalent of the shell's "export" command: changes to os.environ affect the current process and the child processes it spawns, but there is no way to inject them back into the parent shell.
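A platform-independent sketch using os.pathsep (';' on Windows, ':' elsewhere); the directory and command names here are hypothetical:

```python
import os

new_dir = '/opt/mytool/bin'  # hypothetical directory to add
os.environ['PATH'] = new_dir + os.pathsep + os.environ.get('PATH', '')

# Child processes inherit the modified environment:
os.system('mytool --version')  # hypothetical command that is now on PATH
```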
| 0 | 1 | 0 | 0 |
2009-11-05T15:17:00.000
| 4 | 1 | false | 1,681,208 | 1 | 0 | 0 | 1 |
Is there a way to modify the PATH environment variable in a platform-independent way using Python?
Something similar to os.path.join()?
|
Limitations of TEMP directory in Windows?
| 1,683,853 | 1 | 5 | 13,119 | 0 |
python,windows,temporary-files
|
There shouldn't be any such space limitation on %TEMP%. If you wrote the app, I would recommend creating your files in ProgramData...
| 0 | 1 | 0 | 0 |
2009-11-05T21:39:00.000
| 4 | 0.049958 | false | 1,683,831 | 0 | 0 | 0 | 3 |
I have an application written in Python that's writing large amounts of data to the %TEMP% folder. Oddly, every once in a while, it dies, returning IOError: [Errno 28] No space left on device. The drive has plenty of free space, %TEMP% is not its own partition, I'm an administrator, and the system has no quotas.
Does Windows artificially put some types of limits on the data in %TEMP%? If not, any ideas on what could be causing this issue?
EDIT: Following discussions below, I clarified the question to better explain what's going on.
|
Limitations of TEMP directory in Windows?
| 1,683,908 | 2 | 5 | 13,119 | 0 |
python,windows,temporary-files
|
Using a FAT32 filesystem, I can imagine this happening when:
You are writing a lot of data to one file and reach the 4GB file size cap.
Or you are creating a lot of small files and reach the 2^16-2 files per directory cap.
Beyond these, I don't know of any limitations the system can impose on the temp folder, other than the physical partition actually being full.
Another limitation, as Mike Atlas has suggested, is the GetTempFileName() function, which creates files named tmpXXXX.tmp. Although you might not be using it directly, verify that the %TEMP% folder does not contain too many of them (2^16).
And maybe the obvious: have you tried emptying the %TEMP% folder before running the utility?
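A quick way to check the tmpXXXX.tmp count from Python (a small sketch):

```python
import glob
import os
import tempfile

tmp = tempfile.gettempdir()
count = len(glob.glob(os.path.join(tmp, 'tmp*.tmp')))
print('%d tmp*.tmp files in %s' % (count, tmp))
```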
| 0 | 1 | 0 | 0 |
2009-11-05T21:39:00.000
| 4 | 0.099668 | false | 1,683,831 | 0 | 0 | 0 | 3 |
I have an application written in Python that's writing large amounts of data to the %TEMP% folder. Oddly, every once in a while, it dies, returning IOError: [Errno 28] No space left on device. The drive has plenty of free space, %TEMP% is not its own partition, I'm an administrator, and the system has no quotas.
Does Windows artificially put some types of limits on the data in %TEMP%? If not, any ideas on what could be causing this issue?
EDIT: Following discussions below, I clarified the question to better explain what's going on.
|
Limitations of TEMP directory in Windows?
| 1,683,911 | 0 | 5 | 13,119 | 0 |
python,windows,temporary-files
|
There should be no trouble whatsoever with regard to your %TEMP% directory.
What is the disk quota set to for %TEMP%'s hosting volume? Depending in part on what the apps themselves are doing, one of them may be throwing an error due to the disk quota being reached, which is a pain if this quota is set unreasonably low. If the quota is very low, try raising it, which you can do as Administrator.
| 0 | 1 | 0 | 0 |
2009-11-05T21:39:00.000
| 4 | 0 | false | 1,683,831 | 0 | 0 | 0 | 3 |
I have an application written in Python that's writing large amounts of data to the %TEMP% folder. Oddly, every once in a while, it dies, returning IOError: [Errno 28] No space left on device. The drive has plenty of free space, %TEMP% is not its own partition, I'm an administrator, and the system has no quotas.
Does Windows artificially put some types of limits on the data in %TEMP%? If not, any ideas on what could be causing this issue?
EDIT: Following discussions below, I clarified the question to better explain what's going on.
|
Updating Python on Mac
| 67,923,827 | -4 | 140 | 519,726 | 0 |
python,macos,python-3.x
|
You can do it from Terminal too. It's quite easy. You just need to type python3 --version and
| 0 | 1 | 0 | 0 |
2009-11-06T12:45:00.000
| 23 | -1 | false | 1,687,357 | 1 | 0 | 0 | 7 |
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering whether it's possible to do it using the terminal, or do I have to download the installer from the Python website?
The reason I am asking is that the installer is not updating the Python version I get in the terminal.
|
Updating Python on Mac
| 1,687,431 | 5 | 140 | 519,726 | 0 |
python,macos,python-3.x
|
I believe Python 3 can coexist with Python 2. Try invoking it using "python3" or "python3.1". If it fails, you might need to uninstall 2.6 before installing 3.1.
| 0 | 1 | 0 | 0 |
2009-11-06T12:45:00.000
| 23 | 0.043451 | false | 1,687,357 | 1 | 0 | 0 | 7 |
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering whether it's possible to do it using the terminal, or do I have to download the installer from the Python website?
The reason I am asking is that the installer is not updating the Python version I get in the terminal.
|
Updating Python on Mac
| 1,688,349 | 3 | 140 | 519,726 | 0 |
python,macos,python-3.x
|
I personally wouldn't mess around with OS X's Python like they said. My personal preference for stuff like this is just using MacPorts and installing the versions I want via the command line. MacPorts puts everything into a separate directory (under /opt I believe), so it doesn't override or directly interfere with the regular system. It has all the usual features of any package management utility if you are familiar with Linux distros.
I would also suggest installing python_select via MacPorts and using that to select which Python you want "active" (it will change the symlinks to point to the version you want). So at any time you can switch back to the Apple-maintained version of Python that came with OS X, or you can switch to any of the ones installed via MacPorts.
| 0 | 1 | 0 | 0 |
2009-11-06T12:45:00.000
| 23 | 0.026081 | false | 1,687,357 | 1 | 0 | 0 | 7 |
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering whether it's possible to do it using the terminal, or do I have to download the installer from the Python website?
The reason I am asking is that the installer is not updating the Python version I get in the terminal.
|
Updating Python on Mac
| 47,333,658 | 0 | 140 | 519,726 | 0 |
python,macos,python-3.x
|
First, install Homebrew (the missing package manager for macOS) if you haven't. Type this in your terminal:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Now you can update your Python to Python 3 with this command:
brew install python3 && cp /usr/local/bin/python3 /usr/local/bin/python
Python 2 and Python 3 can coexist, so to open Python 3, type python3 instead of python.
That's the easiest and the best way.
| 0 | 1 | 0 | 0 |
2009-11-06T12:45:00.000
| 23 | 0 | false | 1,687,357 | 1 | 0 | 0 | 7 |
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering whether it's possible to do it using the terminal, or do I have to download the installer from the Python website?
The reason I am asking is that the installer is not updating the Python version I get in the terminal.
|
Updating Python on Mac
| 66,423,786 | 2 | 140 | 519,726 | 0 |
python,macos,python-3.x
|
Sometimes when you install Python from the install wizard on Mac, it will not be linked into your bash profile. Since you are using Homebrew, just run brew install python. This installs the latest version of Python; then, to link it, run brew link python@3.9.
| 0 | 1 | 0 | 0 |
2009-11-06T12:45:00.000
| 23 | 0.01739 | false | 1,687,357 | 1 | 0 | 0 | 7 |
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering whether it's possible to do it using the terminal, or do I have to download the installer from the Python website?
The reason I am asking is that the installer is not updating the Python version I get in the terminal.
|
Updating Python on Mac
| 71,035,575 | 1 | 140 | 519,726 | 0 |
python,macos,python-3.x
|
Install Homebrew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Install Python 3: brew install python3 && cp /usr/local/bin/python3 /usr/local/bin/python
Update python to point at the latest version: ln -s -f /usr/local/bin/python[your-latest-version-just-installed] /usr/local/bin/python
| 0 | 1 | 0 | 0 |
2009-11-06T12:45:00.000
| 23 | 0.008695 | false | 1,687,357 | 1 | 0 | 0 | 7 |
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering whether it's possible to do it using the terminal, or do I have to download the installer from the Python website?
The reason I am asking is that the installer is not updating the Python version I get in the terminal.
|
Updating Python on Mac
| 61,615,371 | 1 | 140 | 519,726 | 0 |
python,macos,python-3.x
|
If it were me, I would just leave it as it is.
Use python3 and pip3 to run your files since python and python3 can coexist.
brew install python3 && cp /usr/local/bin/python3 /usr/local/bin/python
You can use the above line but it might have unintended consequences.
| 0 | 1 | 0 | 0 |
2009-11-06T12:45:00.000
| 23 | 0.008695 | false | 1,687,357 | 1 | 0 | 0 | 7 |
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering whether it's possible to do it using the terminal, or do I have to download the installer from the Python website?
The reason I am asking is that the installer is not updating the Python version I get in the terminal.
|
Run Python script without Windows console appearing
| 1,689,269 | 36 | 71 | 105,900 | 0 |
python,windows,shell
|
If you name your files with the ".pyw" extension, then Windows will execute them with the pythonw.exe interpreter. This will not open the DOS console when running your script.
| 0 | 1 | 0 | 0 |
2009-11-06T17:16:00.000
| 9 | 1 | false | 1,689,015 | 0 | 0 | 0 | 3 |
Is there any way to run a Python script in Windows XP without a command shell momentarily appearing? I often need to automate WordPerfect (for work) with Python, and even if my script has no output, if I execute it from within WP an empty shell still pops up for a second before disappearing. Is there any way to prevent this? Some kind of output redirection perhaps?
|
Run Python script without Windows console appearing
| 62,431,147 | -2 | 71 | 105,900 | 0 |
python,windows,shell
|
I had the same problem. I tried many options, and all of them failed, but this method worked. I had a Python file (mod.py) in a folder that I used to run from the command prompt; whenever I closed the cmd window, the GUI closed with it (sad). So I run it as follows:
C:\....>pythonw mod.py
Don't forget the "w" in pythonw; it is important.
| 0 | 1 | 0 | 0 |
2009-11-06T17:16:00.000
| 9 | -0.044415 | false | 1,689,015 | 0 | 0 | 0 | 3 |
Is there any way to run a Python script in Windows XP without a command shell momentarily appearing? I often need to automate WordPerfect (for work) with Python, and even if my script has no output, if I execute it from within WP an empty shell still pops up for a second before disappearing. Is there any way to prevent this? Some kind of output redirection perhaps?
|
Run Python script without Windows console appearing
| 69,071,490 | 0 | 71 | 105,900 | 0 |
python,windows,shell
|
Turn off Windows Defender, and install the PyInstaller package using pip install pyinstaller.
After installing, open cmd and type pyinstaller --onefile --noconsole filename.py.
| 0 | 1 | 0 | 0 |
2009-11-06T17:16:00.000
| 9 | 0 | false | 1,689,015 | 0 | 0 | 0 | 3 |
Is there any way to run a Python script in Windows XP without a command shell momentarily appearing? I often need to automate WordPerfect (for work) with Python, and even if my script has no output, if I execute it from within WP an empty shell still pops up for a second before disappearing. Is there any way to prevent this? Some kind of output redirection perhaps?
|
How do I upload data to Google App Engine periodically?
| 1,690,155 | 0 | 0 | 400 | 0 |
python,security,google-app-engine,automation
|
Can you break up the scraping process into independent chunks that can each finish within the timeframe of an App Engine request (which can run longer than one second, by the way)? Then you can just spawn a bunch of tasks using the Task Queue API that, when combined, accomplish the full scrape, and use the cron API to spawn off those tasks every N minutes.
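A sketch of the fan-out with the Task Queue API; the handler URL and site list are hypothetical:

```python
from google.appengine.api import taskqueue

SITES = ['site-a', 'site-b']  # hypothetical source identifiers

def enqueue_scrapes():
    # Called from a cron-triggered handler; each task scrapes one site
    # and finishes well inside the request deadline.
    for site in SITES:
        taskqueue.add(url='/tasks/scrape', params={'site': site})
```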
| 0 | 1 | 0 | 0 |
2009-11-06T18:56:00.000
| 5 | 0 | false | 1,689,570 | 0 | 0 | 1 | 4 |
I'm writing an aggregation application which scrapes data from a couple of web sources and displays that data with a novel interface. The sites from which I'm scraping update every couple of minutes, and I want to make sure the data on my aggregator is up-to-date.
What's the best way to periodically submit fresh data to my App Engine application from an automated script?
Constraints:
The application is written in Python.
The scraping process for each site takes longer than one second, thus I cannot process the data in an App Engine handler.
The host on which the updater script would run is shared, so I'd rather not store my password on disk.
I'd like to check the code for the application into our codebase. While my associates aren't malicious, they're pranksters, and I'd like to prevent them from inserting fake data into my app.
I'm aware that App Engine supports some remote_api thingy, but I'd have to put that entry point behind authentication (see constraint 3) or hide the URL (see constraint 4).
Suggestions?
|
How do I upload data to Google App Engine periodically?
| 1,690,150 | 3 | 0 | 400 | 0 |
python,security,google-app-engine,automation
|
Write a Task Queue task or an App Engine cron job to handle this. I'm not sure where you heard that there's a limit of 1 second on any sort of App Engine operations - requests are limited to 30 seconds, and URL fetches have a maximum deadline of 10 seconds.
| 0 | 1 | 0 | 0 |
2009-11-06T18:56:00.000
| 5 | 1.2 | true | 1,689,570 | 0 | 0 | 1 | 4 |
I'm writing an aggregation application which scrapes data from a couple of web sources and displays that data with a novel interface. The sites from which I'm scraping update every couple of minutes, and I want to make sure the data on my aggregator is up-to-date.
What's the best way to periodically submit fresh data to my App Engine application from an automated script?
Constraints:
The application is written in Python.
The scraping process for each site takes longer than one second, thus I cannot process the data in an App Engine handler.
The host on which the updater script would run is shared, so I'd rather not store my password on disk.
I'd like to check the code for the application into our codebase. While my associates aren't malicious, they're pranksters, and I'd like to prevent them from inserting fake data into my app.
I'm aware that App Engine supports some remote_api thingy, but I'd have to put that entry point behind authentication (see constraint 3) or hide the URL (see constraint 4).
Suggestions?
|
How do I upload data to Google App Engine periodically?
| 1,689,805 | 0 | 0 | 400 | 0 |
python,security,google-app-engine,automation
|
The only way to get data into App Engine is to call a web app of yours and feed it data through the usual HTTP means, i.e. as parameters to a GET request (for short data) or to a POST (if long or binary).
In other words, you'll have to craft your own little dataloader, which you will access as a web app and which will in turn stash the data into the datastore behind App Engine.
You'll probably want at least password protection on that app so nobody loads bogus data into your app.
| 0 | 1 | 0 | 0 |
2009-11-06T18:56:00.000
| 5 | 0 | false | 1,689,570 | 0 | 0 | 1 | 4 |
I'm writing an aggregation application which scrapes data from a couple of web sources and displays that data with a novel interface. The sites from which I'm scraping update every couple of minutes, and I want to make sure the data on my aggregator is up-to-date.
What's the best way to periodically submit fresh data to my App Engine application from an automated script?
Constraints:
The application is written in Python.
The scraping process for each site takes longer than one second, thus I cannot process the data in an App Engine handler.
The host on which the updater script would run is shared, so I'd rather not store my password on disk.
I'd like to check the code for the application into our codebase. While my associates aren't malicious, they're pranksters, and I'd like to prevent them from inserting fake data into my app.
I'm aware that App Engine supports some remote_api thingy, but I'd have to put that entry point behind authentication (see constraint 3) or hide the URL (see constraint 4).
Suggestions?
|
How do I upload data to Google App Engine periodically?
| 1,693,701 | 0 | 0 | 400 | 0 |
python,security,google-app-engine,automation
|
I asked around and some friends came up with two solutions:
Upload a file with a shared secret token along with the application, but when committing to the codebase, change the token.
Create a small datastore model with one row, a secret token.
In both cases the token can be used to authenticate POST requests used to upload new data.
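A minimal sketch of the second option with the old webapp framework; the model, handler, and helper names here are hypothetical:

```python
from google.appengine.ext import db, webapp

class Config(db.Model):
    secret_token = db.StringProperty()

class UploadHandler(webapp.RequestHandler):
    def post(self):
        cfg = Config.get_or_insert('config')  # the single well-known row
        if self.request.get('token') != cfg.secret_token:
            self.error(403)  # reject requests without the shared secret
            return
        store_fresh_data(self.request.get('payload'))  # hypothetical
```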
| 0 | 1 | 0 | 0 |
2009-11-06T18:56:00.000
| 5 | 0 | false | 1,689,570 | 0 | 0 | 1 | 4 |
I'm writing an aggregation application which scrapes data from a couple of web sources and displays that data with a novel interface. The sites from which I'm scraping update every couple of minutes, and I want to make sure the data on my aggregator is up-to-date.
What's the best way to periodically submit fresh data to my App Engine application from an automated script?
Constraints:
The application is written in Python.
The scraping process for each site takes longer than one second, thus I cannot process the data in an App Engine handler.
The host on which the updater script would run is shared, so I'd rather not store my password on disk.
I'd like to check the code for the application into our codebase. While my associates aren't malicious, they're pranksters, and I'd like to prevent them from inserting fake data into my app.
I'm aware that App Engine supports some remote_api thingy, but I'd have to put that entry point behind authentication (see constraint 3) or hide the URL (see constraint 4).
Suggestions?
|
Is TCP Guaranteed to arrive in order?
| 1,691,189 | 53 | 43 | 30,177 | 0 |
python,tcp,twisted,protocols
|
As long as the two messages were sent on the same TCP connection, order will be maintained. If multiple connections are opened between the same pair of processes, you may be in trouble.
Regarding Twisted, or any other asynchronous event system: I expect you'll get the dataReceived messages in the order that bytes are received. However, if you start pushing work off onto deferred calls, you can, erm... "twist" your control flow beyond recognition.
| 0 | 1 | 0 | 0 |
2009-11-06T23:16:00.000
| 4 | 1.2 | true | 1,691,179 | 0 | 0 | 0 | 3 |
If I send two TCP messages, do I need to handle the case where the latter arrives before the former? Or is it guaranteed to arrive in the order I send it? I assume that this is not a Twisted-specific example, because it should conform to the TCP standard, but if anyone familiar with Twisted could provide a Twisted-specific answer for my own peace of mind, that'd be appreciated :-)
|
Is TCP Guaranteed to arrive in order?
| 1,691,197 | 25 | 43 | 30,177 | 0 |
python,tcp,twisted,protocols
|
TCP is connection-oriented and offers its clients in-order delivery. Of course, this applies at the connection level: individual connections are independent.
You should note that normally we refer to "TCP streams" and "UDP messages".
Whatever client library you use (e.g. Twisted), the underlying TCP connection is independent of it. TCP will deliver the "protocol messages" to your client in order. By "protocol message" I refer, of course, to the protocol you use on top of the TCP layer.
Further, note that I/O operations are asynchronous in nature and heavily dependent on system load, compounded by network delays and losses, so you cannot rely on message ordering between TCP connections.
| 0 | 1 | 0 | 0 |
2009-11-06T23:16:00.000
| 4 | 1 | false | 1,691,179 | 0 | 0 | 0 | 3 |
If I send two TCP messages, do I need to handle the case where the latter arrives before the former? Or is it guaranteed to arrive in the order I send it? I assume that this is not a Twisted-specific example, because it should conform to the TCP standard, but if anyone familiar with Twisted could provide a Twisted-specific answer for my own peace of mind, that'd be appreciated :-)
|
Is TCP Guaranteed to arrive in order?
| 1,691,194 | 8 | 43 | 30,177 | 0 |
python,tcp,twisted,protocols
|
TCP is a stream; UDP is a message. You're mixing up terms. For TCP it is true that the stream will arrive in the same order it was sent. There are no distinct messages in TCP: bytes appear as they arrive, and interpreting them as messages is up to you.
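A common way to carve messages out of the stream yourself is a length prefix; a minimal sketch:

```python
import struct

def pack_message(payload):
    # 4-byte big-endian length, then the payload bytes
    return struct.pack('!I', len(payload)) + payload

class Unpacker(object):
    def __init__(self):
        self.buf = b''

    def feed(self, data):
        """Consume raw stream bytes; yield each complete message."""
        self.buf += data
        while len(self.buf) >= 4:
            (length,) = struct.unpack('!I', self.buf[:4])
            if len(self.buf) < 4 + length:
                break  # rest of this message has not arrived yet
            yield self.buf[4:4 + length]
            self.buf = self.buf[4 + length:]
```

Each call to feed returns an iterator over the messages completed by that chunk, however the chunks were split in transit.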
| 0 | 1 | 0 | 0 |
2009-11-06T23:16:00.000
| 4 | 1 | false | 1,691,179 | 0 | 0 | 0 | 3 |
If I send two TCP messages, do I need to handle the case where the latter arrives before the former? Or is it guaranteed to arrive in the order I send it? I assume that this is not a Twisted-specific example, because it should conform to the TCP standard, but if anyone familiar with Twisted could provide a Twisted-specific answer for my own peace of mind, that'd be appreciated :-)
|
Parsing output of apt-get install for progress bar
| 1,692,347 | 6 | 6 | 3,478 | 0 |
python,progress-bar,subprocess,popen,apt-get
|
Instead of parsing the output of apt-get, you can use python-apt to install packages. AFAIK it also has modules for reporting progress.
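A rough sketch with python-apt; hedged, since the progress-module layout has varied between python-apt releases (this follows the apt.progress.base API, and update_gui_bar is a hypothetical GUI hook):

```python
import apt
import apt.progress.base

class BarProgress(apt.progress.base.AcquireProgress):
    def pulse(self, owner):
        if self.total_bytes:
            update_gui_bar(self.current_bytes / float(self.total_bytes))  # hypothetical
        return True  # returning False would cancel the fetch

cache = apt.Cache()
cache['hello'].mark_install()  # any package name
cache.commit(fetch_progress=BarProgress())
```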
| 0 | 1 | 0 | 0 |
2009-11-07T05:18:00.000
| 2 | 1 | false | 1,692,082 | 0 | 0 | 0 | 1 |
I'm working on a simple GUI Python script to do some simple tasks on a system. Some of that work involves apt-get install to install some packages.
While this is going on, I want to display a progress bar that should update with the progress of the download, using the little percentage shown in apt-get's interface in the terminal.
BUT! I can't find a way to get the progress info. Piping or redirecting the output of apt-get just gives static lines that show the "completed download" message for each package, and the same goes for reading via subprocess.Popen() in my script.
How can I read from apt-get's output to get the percentages of the file downloaded?
|
Move or copy an entity to another kind
| 1,693,856 | 1 | 0 | 162 | 0 |
python,google-app-engine,indexing,archive,bigtable
|
Unless someone's written utilities for this kind of thing, the way to go is to read entities from one kind and write them to the other kind!
| 0 | 1 | 0 | 0 |
2009-11-07T17:39:00.000
| 2 | 0.099668 | false | 1,693,815 | 0 | 0 | 1 | 2 |
Is there a way to move an entity to another kind in App Engine?
Say you have a kind defined, and you want to keep a record of deleted entities of that kind.
But you want to separate the storage of live objects and archived objects.
Kinds are basically just serialized dicts in the bigtable anyway. And maybe you don't need to index the archive in the same way as the live data.
So how would you move or copy an entity of one kind to another kind?
|
Move or copy an entity to another kind
| 1,693,979 | 1 | 0 | 162 | 0 |
python,google-app-engine,indexing,archive,bigtable
|
No - once created, the kind is a part of the entity's immutable key. You need to create a new entity and copy everything across. One way to do this would be to use the low-level google.appengine.api.datastore interface, which treats entities as dicts.
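A sketch with that low-level interface; 'ArchivedThing' is a hypothetical archive kind name:

```python
from google.appengine.api import datastore

def archive(entity_key):
    old = datastore.Get(entity_key)          # low-level Entity acts like a dict
    new = datastore.Entity('ArchivedThing')  # hypothetical archive kind name
    new.update(old)                          # copy every property across
    datastore.Put(new)
    datastore.Delete(entity_key)
```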
| 0 | 1 | 0 | 0 |
2009-11-07T17:39:00.000
| 2 | 1.2 | true | 1,693,815 | 0 | 0 | 1 | 2 |
Is there a way to move an entity to another kind in App Engine?
Say you have a kind defined, and you want to keep a record of deleted entities of that kind.
But you want to separate the storage of live objects and archived objects.
Kinds are basically just serialized dicts in the bigtable anyway. And maybe you don't need to index the archive in the same way as the live data.
So how would you move or copy an entity of one kind to another kind?
|
Environment on google Appengine
| 1,701,239 | 3 | 0 | 461 | 0 |
python,google-app-engine
|
To answer the actual question from the title of your post, assuming you're still wondering: to get environment variables, simply import os; the environment is available in os.environ.
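For the asker's concrete case, a minimal handler sketch (request.remote_addr only exists on a RequestHandler's request object, which is why the bare import didn't provide it):

```python
import os
from google.appengine.ext import webapp

class Index(webapp.RequestHandler):
    def get(self):
        client_ip = os.environ.get('REMOTE_ADDR')  # also self.request.remote_addr
        target = self.request.get('geturl')        # the ?geturl=... parameter
        self.response.out.write('%s asked for %s' % (client_ip, target))
```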
| 0 | 1 | 0 | 0 |
2009-11-09T11:24:00.000
| 3 | 0.197375 | false | 1,700,441 | 0 | 0 | 1 | 1 |
Does someone have an idea how to get the environment variables on Google App Engine?
I'm trying to write a simple script that uses the client IP (for authentication) and a parameter (geturl or so) from the URL (e.g. http://thingy.appspot.dom/index?geturl=www.google.at).
I read that I should be able to get the client IP via "request.remote_addr", but I seem to lack 'request' even though I imported webapp from google.appengine.ext.
Many thanks in advance,
Birt
|
How to Execute a Python Script in Notepad++?
| 24,143,304 | 5 | 140 | 455,952 | 0 |
python,notepad++
|
I wish people here would post steps instead of just overall concepts. I eventually got the cmd /k version to work.
The step-by-step instructions are:
In NPP, click on the menu item: Run
In the submenu, click on: Run
In the Run... dialog box, in the field The Program to Run, delete any existing text and type in: cmd /K "$(FULL_CURRENT_PATH)"
The /K is optional; it keeps the window created when the script runs open, if you want that.
Hit the Save... button.
The Shortcut dialogue box opens; fill it out if you want a keyboard shortcut (there's a note saying "This will disable the accelerator" whatever that is, so maybe you don't want to use the keyboard shortcut, though it probably doesn't hurt to assign one when you don't need an accelerator).
Somewhere I think you have to tell NPP where the Python.exe file is (e.g., for me: C:\Python33\python.exe). I don't know where or how you do this, but in trying various things here, I was able to do that--I don't recall which attempt did the trick.
| 0 | 1 | 0 | 0 |
2009-11-09T17:41:00.000
| 21 | 0.047583 | false | 1,702,586 | 1 | 0 | 0 | 1 |
I prefer using Notepad++ for development.
How do I execute Python files through Notepad++?
|
Python library for Linux process management
| 1,705,099 | 6 | 14 | 10,686 | 0 |
python,linux,process
|
Checking the list of running processes is accomplished (even by core utilities like "ps") by looking at the contents of the /proc directory.
As such, the library you're interested in for querying running processes is the same as the one used for working with any other files and directories (i.e. sys or os, depending on the flavor you're after; pay special attention to os.path, though, it does most of what you need). To terminate or otherwise interact with processes, you send them signals, which is accomplished with os.kill. Finally, you start new processes using os.popen and friends.
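A small start/check/stop sketch along those lines (Linux-only because of /proc):

```python
import os
import signal

def is_running(pid):
    return os.path.exists('/proc/%d' % pid)

def stop(pid):
    os.kill(pid, signal.SIGTERM)  # ask the process to terminate

pid = os.spawnlp(os.P_NOWAIT, 'sleep', 'sleep', '60')  # start a child process
print(is_running(pid))
stop(pid)
```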
| 0 | 1 | 0 | 0 |
2009-11-10T01:14:00.000
| 8 | 1 | false | 1,705,077 | 0 | 0 | 0 | 1 |
Through my web interface I would like to start/stop certain processes and determine whether a started process is still running.
My existing website is Python based and running on a Linux server, so do you know of a suitable library that supports this functionality?
Thanks
|
Python 2.6 on Debian Lenny. Where should the executable go?
| 1,711,271 | 0 | 0 | 6,560 | 0 |
python,linux,debian
|
Your safest bet is to put Python 2.6 in /opt (./configure --prefix=/opt), and modify /etc/profile so that /opt/bin is searched first.
| 0 | 1 | 0 | 0 |
2009-11-10T21:08:00.000
| 4 | 0 | false | 1,711,200 | 1 | 0 | 0 | 2 |
I am building python2.6 from source on Debian Lenny.
(./configure; make; make altinstall)
I don't want it to conflict with anything existing, but I want it to be in the default search path for bash.
Suggestions?
(ps, I'm using a vm, so I can trash it and rebuild.)
|
Python 2.6 on Debian Lenny. Where should the executable go?
| 1,711,264 | 3 | 0 | 6,560 | 0 |
python,linux,debian
|
I strongly recommend you do one of these two options.
Build a .deb package, and then install the .deb package; the installed files then go in the usual places (/usr/bin/python26 for the main interpreter).
Build from source, and install from source into /usr/local/bin.
I think it is a very bad idea to start putting files in the usual places that are not known or understood by the package manager. If you built it by hand and installed it by hand, it should be confined to the /usr/local tree.
| 0 | 1 | 0 | 0 |
2009-11-10T21:08:00.000
| 4 | 0.148885 | false | 1,711,200 | 1 | 0 | 0 | 2 |
I am building python2.6 from source on Debian Lenny.
(./configure; make; make altinstall)
I don't want it to conflict with anything existing, but I want it to be in the default search path for bash.
Suggestions?
(ps, I'm using a vm, so I can trash it and rebuild.)
|
Non-blocking file access with Twisted
| 44,355,634 | 2 | 27 | 8,205 | 0 |
python,twisted
|
The fdesc module might be useful for asynchronously talking to a socket or pipe, but when given an fd that refers to an ordinary filesystem file, it does blocking io (and via a rather odd interface at that). For disk io, fdesc is effectively snake oil; don't use it.
As of May 2017, the only reasonable way to get async disk io in twisted is by wrapping synchronous io calls in a deferToThread.
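A sketch of that wrapping; the path and the callback are hypothetical:

```python
from twisted.internet.threads import deferToThread

def read_file(path):
    with open(path, 'rb') as f:
        return f.read()

d = deferToThread(read_file, '/var/log/some.log')  # runs in the reactor's thread pool
d.addCallback(process_data)                        # hypothetical consumer of the bytes
```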
| 0 | 1 | 0 | 0 |
2009-11-12T08:47:00.000
| 5 | 0.07983 | false | 1,720,816 | 0 | 0 | 0 | 2 |
I'm trying to figure out if there is a de facto pattern for file access using Twisted. Lots of examples I've looked at (twisted.python.log, twisted.persisted.dirdbm, twisted.web.static) actually don't seem to worry about blocking for file access.
It seems like there should be some obvious interface, probably inheriting from abstract.FileDescriptor, that all file access should be going through it as a producer/consumer.
Have I missed something or is it just that the primary use for twisted in asynchronous programming is for networking and it hasn't really been worked out for other file descriptor operations, not worrying about the purity of non-blocking IO ?
|
Non-blocking file access with Twisted
| 1,720,836 | -8 | 27 | 8,205 | 0 |
python,twisted
|
I'm not sure what you want to achieve. When you do logging, Python will make sure (via the global interpreter lock) that log messages from several threads go into the file one after the other.
If you're concerned about blocking IO, the OS adds default buffers for your files (usually 4KB), and you can pass a buffer size in the open() call.
If you're concerned about something else, then please clarify your question.
| 0 | 1 | 0 | 0 |
2009-11-12T08:47:00.000
| 5 | -1 | false | 1,720,816 | 0 | 0 | 0 | 2 |
I'm trying to figure out if there is a de facto pattern for file access using Twisted. Lots of examples I've looked at (twisted.python.log, twisted.persisted.dirdbm, twisted.web.static) actually don't seem to worry about blocking for file access.
It seems like there should be some obvious interface, probably inheriting from abstract.FileDescriptor, that all file access should be going through it as a producer/consumer.
Have I missed something or is it just that the primary use for twisted in asynchronous programming is for networking and it hasn't really been worked out for other file descriptor operations, not worrying about the purity of non-blocking IO ?
|
Is there any "remote console" for twisted server?
| 1,721,715 | 6 | 5 | 1,027 | 0 |
python,console,twisted
|
Take a look at twisted.manhole
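A sketch of the 2009-era usage, assuming the old twisted.manhole.telnet module (later Twisted releases moved this functionality to twisted.conch.manhole, so treat the exact API as an assumption):

```python
from twisted.internet import reactor
from twisted.manhole import telnet

factory = telnet.ShellFactory()
factory.username = 'admin'   # hypothetical credentials
factory.password = 'secret'
reactor.listenTCP(4040, factory, interface='127.0.0.1')
reactor.run()
# "telnet localhost 4040" then gives a live Python prompt inside the
# server process, where heapy commands can be typed by hand.
```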
| 0 | 1 | 0 | 1 |
2009-11-12T11:55:00.000
| 2 | 1 | false | 1,721,699 | 0 | 0 | 0 | 1 |
I am developing a Twisted server. I need to control the memory usage. It is not a good idea to modify the code, insert some memory-logging commands, and restart the server. I think it is better to use a "remote console", so that I can type heapy commands and see the response from the server directly. All I need is a remote console. I could build one myself, but I don't like reinventing the wheel. My question is: is there already a remote console for Twisted?
Thanks.
|
Is the first entry in sys.path supposed to represent the current working directory?
| 1,722,943 | 1 | 1 | 550 | 0 |
python,pythonpath,sys.path
|
You can get the current directory with os.getcwd().
| 0 | 1 | 0 | 0 |
2009-11-12T15:12:00.000
| 2 | 0.099668 | false | 1,722,901 | 1 | 0 | 0 | 1 |
I had always assumed that the first entry in sys.path by default was the current working directory. But as it turns out, on my system the first entry is the directory in which the script resides. So if I'm executing a script that's in /usr/bin from /some/directory, the first entry in sys.path is /usr/bin. Is something misconfigured on my system, or is this the expected behavior?
|
Multiple programs using the same UDP port? Possible?
| 1,723,643 | 1 | 0 | 5,210 | 0 |
python,udp,communication,daemon,ports
|
I'm pretty sure this is possible on Linux; I don't know about other UNIXes.
There are two ways to propagate a file descriptor from one process to another:
When a process fork()s, the child inherits all the file descriptors of the parent.
A process can send a file descriptor to another process over a "UNIX Domain Socket". See sendmsg() and recvmsg(). In Python, the _multiprocessing extension module will do this for you; see _multiprocessing.sendfd() and _multiprocessing.recvfd().
I haven't experimented with multiple processes listening on UDP sockets. But for TCP, on Linux, if multiple processes all listen on a single TCP socket, one of them will be randomly chosen when a connection comes in. So I suspect Linux does something sensible when multiple processes are all listening on the same UDP socket.
Try it and let us know!
| 0 | 1 | 1 | 0 |
2009-11-12T15:22:00.000
| 2 | 0.099668 | false | 1,722,993 | 0 | 0 | 0 | 2 |
I currently have a small Python script that I'm using to spawn multiple executables, (voice chat servers), and in the next version of the software, the servers have the ability to receive heartbeat signals on the UDP port. (There will be possibly thousands of servers on one machine, ranging from ports 7878 and up)
My problem is that these servers might (read: will) be running on the same machine as my Python script and I had planned on opening a UDP port, and just sending the heartbeat, waiting for the reply, and voila...I could restart servers when/if they weren't responding by killing the task and re-loading the server.
Problem is that I cannot open a UDP port that the server is already using. Is there a way around this? The project lead is implementing the heartbeat still, so I'm sure any suggestions in how the heartbeat system could be implemented would be welcome also. -- This is a pretty generic script though that might apply to other programs so my main focus is still communicating on that UDP port.
|
Multiple programs using the same UDP port? Possible?
| 1,723,017 | 2 | 0 | 5,210 | 0 |
python,udp,communication,daemon,ports
|
This isn't possible. What you'll have to do is have one UDP master program that handles all UDP communication over the one port, and communicates with your servers in another way (UDP on different ports, named pipes, ...)
| 0 | 1 | 1 | 0 |
2009-11-12T15:22:00.000
| 2 | 1.2 | true | 1,722,993 | 0 | 0 | 0 | 2 |
I currently have a small Python script that I'm using to spawn multiple executables, (voice chat servers), and in the next version of the software, the servers have the ability to receive heartbeat signals on the UDP port. (There will be possibly thousands of servers on one machine, ranging from ports 7878 and up)
My problem is that these servers might (read: will) be running on the same machine as my Python script and I had planned on opening a UDP port, and just sending the heartbeat, waiting for the reply, and voila...I could restart servers when/if they weren't responding by killing the task and re-loading the server.
Problem is that I cannot open a UDP port that the server is already using. Is there a way around this? The project lead is implementing the heartbeat still, so I'm sure any suggestions in how the heartbeat system could be implemented would be welcome also. -- This is a pretty generic script though that might apply to other programs so my main focus is still communicating on that UDP port.
|
How to get process's grandparent id
| 1,728,361 | 0 | 7 | 14,912 | 0 |
python,linux,process,subprocess
|
I do not think you can do this portably in the general case.
You need to get this information from the process list (e.g. through the ps command), which is obtained in a system-specific way.
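Since the asker's target is Linux, /proc makes this easy; a small sketch (the fourth field of /proc/<pid>/stat is the parent pid):

```python
import os

def parent_of(pid):
    with open('/proc/%d/stat' % pid) as f:
        # /proc/<pid>/stat reads "pid (comm) state ppid ..."; comm can
        # contain spaces, so split on the closing paren first
        rest = f.read().rsplit(')', 1)[1].split()
    return int(rest[1])  # rest[0] is state, rest[1] is ppid

grandparent_pid = parent_of(os.getppid())
print(grandparent_pid)
```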
| 0 | 1 | 0 | 0 |
2009-11-13T10:13:00.000
| 7 | 0 | false | 1,728,330 | 0 | 0 | 0 | 1 |
How can I get the process ID of the current process's parent?
In general, given a process ID, how can I get its parent's process ID?
E.g. os.getpid() can be used to get the process ID, and os.getppid() for the parent; how do I get the grandparent?
My target is Linux (Ubuntu), so platform-specific answers are OK.
|
Twisted Spread suitable for multiplayer racing sim?
| 1,732,990 | 5 | 2 | 708 | 0 |
python,networking,udp,twisted,multiplayer
|
It's almost certainly a reasonable protocol to start with. Remember the cardinal rule of optimization: don't do it. Working with any TCP-based protocol is going to be considerably easier than working with any UDP-based protocol. This is initially much more important to the success of your project than whether it takes 30 milliseconds or 45 milliseconds to send a message between your client and server. Eventually, when you've gotten to the point where it's clear your project might actually succeed and you really need to make up those 15 (or however many) milliseconds, you can revisit the network layer and consider whether the performance bottleneck (be it latency or some other metric) is due to your choice of protocol. If so, that is the time to spend time evaluating various alternatives. It's only at that point that the effort of selecting the ideal protocol might pay off (since you're that much closer to a completed project) and by then you will have a significantly improved understanding of the problem and should have nailed down your requirements very specifically, two more things which will make the task of selecting the appropriate protocol (be it TCP- or UDP-based) that much easier and more likely to be correct.
| 0 | 1 | 0 | 0 |
2009-11-14T01:49:00.000
| 2 | 1.2 | true | 1,732,957 | 0 | 0 | 0 | 1 |
Do you think that Twisted Spread may be suitable (in terms of performance) for a multiplayer racing simulator? The rest of the application is based on Python-Ogre.
Can Perspective Broker run over (reliable?) UDP?
|
ubuntu9.10 : how to use python's lib-dynload and site-packages directories?
| 1,733,409 | 1 | 3 | 10,075 | 0 |
python,ubuntu,installation
|
Sounds like they're an accident from some package(s) you have installed.
The Python version in use determines the locations searched to find installed Python packages/modules, and the "system version" of Python in Ubuntu 9.10 is 2.6, so that's what practically everything should be using. If you were to install the python2.5 package (or it gets installed as a dependency of something else), then it would use /usr/lib/python2.5/*. Try running which python and python --version; also which python2.6 and which python2.5.
From what I understand, though I'm not sure exactly why at all, Debian (from which Ubuntu is derived) uses a dist-packages naming scheme instead of site-packages.
Terminology: Python has packages and Debian (and so Ubuntu) has packages. They aren't the same kind of package, though individual Debian packages will install specific Python packages.
| 0 | 1 | 0 | 0 |
2009-11-14T05:23:00.000
| 4 | 0.049958 | false | 1,733,364 | 1 | 0 | 0 | 1 |
In Ubuntu 9.10, in /usr/lib/ there are the directories python2.4, python2.5, python2.6 and python3.0.
Only Python 2.6 is actually working.
python2.4 has only a lib-dynload directory,
python2.5 has only lib-dynload and site-packages,
python3.0 has only a dist-packages directory.
Now I'm wondering: what is the idea behind this?
Because when I install Python 2.5 with ./configure, make, make install (or altinstall),
it goes into /usr/local/lib and not /usr/lib/, so why were these directories added to Ubuntu, and how am I supposed to install Python to use them?
|
Setting Python Interpreter in Eclipse (Mac)
| 1,735,193 | 7 | 10 | 20,985 | 0 |
python,eclipse,macos
|
Running which python should help locate your Python installation.
| 0 | 1 | 0 | 0 |
2009-11-14T18:18:00.000
| 4 | 1 | false | 1,735,109 | 1 | 0 | 0 | 2 |
How do I direct Eclipse to the Python interpreter on my Mac?
I've looked in Library, which contains the directory 'Python', then '2.3' and '2.5'; however, they contain nothing except 'site-packages', which is weird considering I can go into the terminal and type python. I then installed the latest 2.6 version with the package manager and still can't find it. Can anyone help?
|
Setting Python Interpreter in Eclipse (Mac)
| 1,735,148 | 7 | 10 | 20,985 | 0 |
python,eclipse,macos
|
An alias to the python interpreter was likely installed into /usr/local/bin. So, to invoke python2.6, type /usr/local/bin/python2.6 or, most likely, just python2.6. If you want python to invoke python2.6, try rearranging your $PATH so that /usr/local/bin precedes /usr/bin.
| 0 | 1 | 0 | 0 |
2009-11-14T18:18:00.000
| 4 | 1.2 | true | 1,735,109 | 1 | 0 | 0 | 2 |
How do I direct Eclipse to the Python interpreter on my Mac?
I've looked in Library, which contains the directory 'Python', then '2.3' and '2.5'; however, they contain nothing except 'site-packages', which is weird considering I can go into the terminal and type python. I then installed the latest 2.6 version with the package manager and still can't find it. Can anyone help?
|
Native Python Editor for Mac?
| 4,642,775 | 0 | 2 | 9,940 | 0 |
python,macos,ide,editor,python-idle
|
Going to vote for Fraise. It handles almost anything, has indentation, and uses color! It's free too.
| 0 | 1 | 0 | 0 |
2009-11-15T03:19:00.000
| 12 | 0 | false | 1,736,451 | 1 | 0 | 0 | 6 |
I'm currently using IDLE. It's decent, but I'd like to know if there are better lightweight IDEs built especially for Mac, free or commercial.
|
Native Python Editor for Mac?
| 1,736,831 | 0 | 2 | 9,940 | 0 |
python,macos,ide,editor,python-idle
|
Vim, Emacs, BBEdit, WingIDE, or my favorite, Eclipse (although I don't think this one is very lightweight).
| 0 | 1 | 0 | 0 |
2009-11-15T03:19:00.000
| 12 | 0 | false | 1,736,451 | 1 | 0 | 0 | 6 |
I'm currently using IDLE. It's decent, but I'd like to know if there are better lightweight IDEs built especially for Mac, free or commercial.
|
Native Python Editor for Mac?
| 2,854,788 | 0 | 2 | 9,940 | 0 |
python,macos,ide,editor,python-idle
|
I would recommend you look at Aptana (it's more attractive than Eclipse, for me) + PyDev, or PyCharm. I use TextMate too, but those are easier for debugging.
| 0 | 1 | 0 | 0 |
2009-11-15T03:19:00.000
| 12 | 0 | false | 1,736,451 | 1 | 0 | 0 | 6 |
I'm currently using IDLE. It's decent, but I'd like to know if there are better lightweight IDEs built especially for Mac, free or commercial.
|
Native Python Editor for Mac?
| 3,432,322 | 0 | 2 | 9,940 | 0 |
python,macos,ide,editor,python-idle
|
I use Fraise (free); it is simple and useful: auto-indentation, colorizing, auto-completion, a shell.
| 0 | 1 | 0 | 0 |
2009-11-15T03:19:00.000
| 12 | 0 | false | 1,736,451 | 1 | 0 | 0 | 6 |
I'm currently using IDLE. It's decent, but I'd like to know if there are better lightweight IDEs built especially for Mac, free or commercial.
|
Native Python Editor for Mac?
| 1,736,465 | 1 | 2 | 9,940 | 0 |
python,macos,ide,editor,python-idle
|
There is a commercial one: TextMate.
Most of the good free editors are cross-platform (if you are OK with that, I'd recommend Editra, but it doesn't work properly under 10.6 yet because of some bugs in wxPython).
| 0 | 1 | 0 | 0 |
2009-11-15T03:19:00.000
| 12 | 0.016665 | false | 1,736,451 | 1 | 0 | 0 | 6 |
I'm currently using IDLE. It's decent, but I'd like to know if there are better lightweight IDEs built especially for Mac, free or commercial.
|
Native Python Editor for Mac?
| 7,764,682 | 0 | 2 | 9,940 | 0 |
python,macos,ide,editor,python-idle
|
Check out the Sublime Text 2 alpha. Seriously awesome.
| 0 | 1 | 0 | 0 |
2009-11-15T03:19:00.000
| 12 | 0 | false | 1,736,451 | 1 | 0 | 0 | 6 |
I'm currently using IDLE. It's decent, but I'd like to know if there are better lightweight IDEs built especially for Mac, free or commercial.
|
Compress data before storage on Google App Engine
| 2,125,539 | 0 | 1 | 2,545 | 0 |
python,google-app-engine,compression,gzip,zlib
|
You can store up to 10 MB with a list of Blobs. Search for the Google file service.
It's much more versatile than the Blobstore in my opinion; I just started using the Blobstore API yesterday and I'm still figuring out if it is possible to access the data bytewise, as in changing DOC to PDF or JPEG to GIF.
You can store Blobs of 1 MB * 10 = 10 MB (the max entity size, I think), or you can use the Blobstore API and get the same 10 MB, or 50 MB if you enable billing (you can enable it, but if you don't exceed the free quota you don't pay).
| 0 | 1 | 0 | 0 |
2009-11-16T00:55:00.000
| 5 | 0 | false | 1,739,543 | 0 | 0 | 1 | 2 |
I'm trying to store 30-second user MP3 recordings as Blobs in my App Engine datastore. However, in order to enable this feature (App Engine has a 1MB limit per upload) and to keep the costs down, I would like to compress the file before upload and decompress the file every time it is requested. How would you suggest I accomplish this? (It can happen in the background, by the way, via a task queue, but an efficient solution is always good.)
Based on my own tests and research, I see two possible approaches to accomplish this:
Zlib
For this I need to compress a certain number of blocks at a time using a while loop. However, App Engine doesn't allow you to write to the file system. I thought about using a temporary file to accomplish this, but I haven't had luck with this approach when trying to decompress the content from a temporary file.
Gzip
From reading around the web, it appears that the App Engine URL fetch function requests content gzipped already and then decompresses it. Is there a way to stop the function from decompressing the content so that I can just put it in the datastore in gzipped format and then decompress it when I need to play it back to a user on demand?
Let me know how you would suggest using zlib or gzip or some other solution to accomplish this. Thanks.
|
Compress data before storage on Google App Engine
| 1,739,598 | 2 | 1 | 2,545 | 0 |
python,google-app-engine,compression,gzip,zlib
|
"Compressing before upload" implies doing it in the user's browser -- but no text in your question addresses that! It seems to be about compression in your GAE app, where of course the data will only be after the upload. You could do it with a Firefox extension (or other browsers' equivalents), if you can develop those and convince your users to install them, but that has nothing much to do with GAE!-) Not to mention that, as @RageZ's comment mentions, MP3 is, essentially, already compressed, so there's little or nothing to gain (though maybe you could, again with a browser extension for the user, reduce the MP3's bit rate and thus the file's dimension, that could impact the audio quality, depending on your intended use for those audio files).
So, overall, I have to second @jldupont's suggestion (also in a comment) -- use a different server for storage of large files (S3, Amazon's offering, is surely a possibility though not the only one).
| 0 | 1 | 0 | 0 |
2009-11-16T00:55:00.000
| 5 | 1.2 | true | 1,739,543 | 0 | 0 | 1 | 2 |
I'm trying to store 30-second user MP3 recordings as Blobs in my App Engine datastore. However, in order to enable this feature (App Engine has a 1MB limit per upload) and to keep the costs down, I would like to compress the file before upload and decompress the file every time it is requested. How would you suggest I accomplish this? (It can happen in the background, by the way, via a task queue, but an efficient solution is always good.)
Based on my own tests and research, I see two possible approaches to accomplish this:
Zlib
For this I need to compress a certain number of blocks at a time using a while loop. However, App Engine doesn't allow you to write to the file system. I thought about using a temporary file to accomplish this, but I haven't had luck with this approach when trying to decompress the content from a temporary file.
Gzip
From reading around the web, it appears that the App Engine URL fetch function requests content gzipped already and then decompresses it. Is there a way to stop the function from decompressing the content so that I can just put it in the datastore in gzipped format and then decompress it when I need to play it back to a user on demand?
Let me know how you would suggest using zlib or gzip or some other solution to accomplish this. Thanks.
|
Unable to unpickle a file on Mac that was pickled on Windows
| 1,748,972 | 5 | 3 | 1,555 | 0 |
python,windows,macos,pickle
|
Pickle with the newest protocol version and open the files in binary mode in all cases. That should solve the problem.
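Concretely, a minimal sketch ('state.pkl' is an arbitrary filename):

```python
import pickle

obj = {'example': 123}

# Write: binary mode plus the newest protocol, on every platform
with open('state.pkl', 'wb') as f:
    pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)

# Read: binary mode again
with open('state.pkl', 'rb') as f:
    obj = pickle.load(f)
```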
| 0 | 1 | 0 | 0 |
2009-11-17T13:40:00.000
| 2 | 1.2 | true | 1,748,958 | 1 | 0 | 0 | 2 |
I've got a simple class that I am pickling (dumping) to a file. On OS X this works fine, and on Windows this works fine.
However, while on Windows I can load/unpickle the object fine, when Windows then pickles this file and saves it back to disk, it becomes unreadable on OS X (although in Windows it still behaves as normal).
The error I get back on OS X is that it is unable to import the required class.
I'm confused, as this all works fine as long as I don't pickle anything in Windows! (Even then it still works fine in Windows.)
I've heard it could be line endings; my other thought is possibly something to do with the encoding type used being different across operating systems. But I really have no idea what to try to fully diagnose and/or solve this problem, so any help would be appreciated!
|
Unable to unpickle a file on Mac that was pickled on Windows
| 1,748,998 | 3 | 3 | 1,555 | 0 |
python,windows,macos,pickle
|
It will be line endings. If you are using an ASCII pickle, open the file in text mode ('r' or 'w'); if you are using a binary pickle, open it in binary mode ('rb', 'wb'). From the docstring:
The default protocol is 0, to be backwards compatible. (Protocol 0 is the only protocol that can be written to a file opened in text mode and read back successfully. When using a protocol higher than 0, make sure the file is opened in binary mode, both when pickling and unpickling.)
| 0 | 1 | 0 | 0 |
2009-11-17T13:40:00.000
| 2 | 0.291313 | false | 1,748,958 | 1 | 0 | 0 | 2 |
I've got a simple class that I am pickling (dumping) to a file. On OS X this works fine, and on Windows this works fine.
However, while on Windows I can load/unpickle the object fine, when Windows then pickles this file and saves it back to disk, it becomes unreadable on OS X (although in Windows it still behaves as normal).
The error I get back on OS X is that it is unable to import the required class.
I'm confused, as this all works fine as long as I don't pickle anything in Windows! (Even then it still works fine in Windows.)
I've heard it could be line endings; my other thought is possibly something to do with the encoding type used being different across operating systems. But I really have no idea what to try to fully diagnose and/or solve this problem, so any help would be appreciated!
|
Restarting a self-updating python script
| 1,750,798 | 2 | 47 | 41,683 | 0 |
python,auto-update
|
The cleanest solution is a separate update script!
Run your program inside it, and report back (when exiting) that a new version is available. This allows your program to save all of its data, and the updater to apply the update and run the new version, which then loads the saved data and continues. To the user this can be completely transparent, as they just run the updater shell, which runs the real program.
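Since the asker would rather not have a separate script, one alternative is for the process to replace itself after overwriting its own file; a sketch (os.execv exists on both Linux and Windows, though Windows handles argument quoting less gracefully):

```python
import os
import sys

def restart():
    # Re-exec the freshly overwritten script in place of this process;
    # all in-memory state is lost, so save anything needed first.
    os.execv(sys.executable, [sys.executable] + sys.argv)
```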
| 0 | 1 | 0 | 0 |
2009-11-17T18:16:00.000
| 8 | 0.049958 | false | 1,750,757 | 1 | 0 | 0 | 1 |
I have written a script that will keep itself up to date by downloading the latest version from a website and overwriting the running script.
I am not sure of the best way to restart the script after it has been updated.
Any ideas?
I don't really want to have a separate update script.
Oh, and it has to work on both Linux and Windows too.
|
Checking files retrieved by Twisted's FTPClient.retrieveFile method for completeness
| 1,757,848 | 4 | 3 | 1,008 | 0 |
python,ftp,client,twisted
|
There are a couple unit tests for behavior in this area.
twisted.test.test_ftp.FTPClientTestCase.test_failedRETR is the most directly relevant one. It covers the case where the control and data connections are lost while a file transfer is in progress.
It seems to me that test coverage in this area could be significantly improved. There are no tests covering the case where just the data connection is lost while a transfer is in progress, for example. One thing that makes this tricky, though, is that FTP is not a very robust protocol. The end of a file transfer is signaled by the data connection closing. To be safe, you have to check to see if you received as many bytes as you expected to receive. The only way to perform this check is to know the file size in advance or ask the server for it using LIST (FTPClient.list).
Given all this, I'd suggest that when a file transfer completes, you always ask the server how many bytes you should have gotten and make sure it agrees with the number of bytes delivered to your protocol. You may sometimes get an errback on the Deferred returned from retrieveFile, but this will keep you safe even in the cases where you don't.
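A rough sketch of the byte-counting side of this (fetching the expected size via LIST is left out, and CountingReceiver is a hypothetical name):
from twisted.internet.protocol import Protocol

class CountingReceiver(Protocol):
    """Accumulates the file contents and counts the bytes delivered."""
    def __init__(self):
        self.received = 0
        self.chunks = []
    def dataReceived(self, data):
        self.received += len(data)
        self.chunks.append(data)

def checkComplete(result, receiver, expectedSize):
    # Add this to the retrieveFile Deferred's callback chain.
    if receiver.received != expectedSize:
        raise IOError("short transfer: got %d of %d bytes"
                      % (receiver.received, expectedSize))
    return "".join(receiver.chunks)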
| 0 | 1 | 0 | 0 |
2009-11-18T16:32:00.000
| 1 | 1.2 | true | 1,757,276 | 0 | 0 | 0 | 1 |
I'm writing a custom ftp client to act as a gatekeeper for incoming multimedia content from subcontractors hired by one of our partners. I chose twisted because it allows me to parse the file contents before writing the files to disk locally, and I've been looking for occasion to explore twisted anyway. I'm using 'twisted.protocols.ftp.FTPClient.retrieveFile' to get the file, passing the escaped path to the file, and a protocol to the 'retrieveFile' method. I want to be absolutely sure that the entire file has been retrieved, because the event handler in the callback is going to write the file to disk locally and then delete the remote file from the ftp server, à la the -E switch behavior in the lftp client. My question is, do I really need to worry about this, or can I assume that an errback will happen if the file is not fully retrieved?
|
python prompt with a bash like interface
| 1,758,845 | 6 | 2 | 1,113 | 0 |
python
|
If you compile python with readline support, the REPL environment should do this for you.
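If readline is available, you can also enable tab completion and a persistent history with a startup file referenced by the PYTHONSTARTUP environment variable; a sketch, where the history file path is just a convention:
# contents of the file named in the PYTHONSTARTUP environment variable
import atexit
import os
import readline
import rlcompleter

histfile = os.path.expanduser("~/.python_history")
try:
    readline.read_history_file(histfile)   # restore history from the last session
except IOError:
    pass                                   # first run: no history file yet
atexit.register(readline.write_history_file, histfile)
readline.parse_and_bind("tab: complete")   # enable tab completion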
| 0 | 1 | 0 | 0 |
2009-11-18T20:27:00.000
| 4 | 1 | false | 1,758,819 | 1 | 0 | 0 | 1 |
I am using the python prompt to practice some regular expressions. I was wondering if there was a way to use the up/down arrows (like bash) to cycle through the old commands typed. I know its possible since it works on python on cygwin/windows.
thanks
|
Is there a sendKey for Mac in Python?
| 1,770,681 | 0 | 19 | 19,837 | 0 |
python,macos,sendkeys
|
Maybe you could run an OSA script (man osascript) from Python, for instance, and drive the application?
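For example, something along these lines; the application name is a placeholder, and hiding the process is one plausible way to de-activate it:
import subprocess

app = "Safari"  # placeholder: whatever application you want to de-activate
script = 'tell application "System Events" to set visible of process "%s" to false' % app
subprocess.call(["osascript", "-e", script])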
| 0 | 1 | 0 | 0 |
2009-11-20T13:04:00.000
| 5 | 0 | false | 1,770,312 | 0 | 0 | 0 | 1 |
In Mac 10.6, I want to cause an active application to become de-active, or minimized by Python
I know I could use sendKey in Windows with Python, then what about in Mac?
|
reactor.iterate seems to block a program with Py2exe
| 1,770,870 | 1 | 1 | 116 | 0 |
python,twisted,py2exe
|
The typical use of the reactor is not to call reactor.iterate. It's hard to say why exactly you're getting the behavior you are without seeing your program, but for a wild guess, I'd say switching to reactor.run might help.
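In other words, hand control to the reactor once rather than pumping it yourself; a minimal sketch, where doWork stands in for whatever the application needs to schedule:
from twisted.internet import reactor

def doWork():
    pass  # placeholder for the application's actual startup logic

reactor.callWhenRunning(doWork)  # schedule work, then let the reactor loop
reactor.run()                    # drives all events until reactor.stop()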
| 0 | 1 | 0 | 0 |
2009-11-20T14:25:00.000
| 1 | 1.2 | true | 1,770,754 | 1 | 0 | 0 | 1 |
I'm currently using an application in Python which works quite well, but when I convert it with py2exe, the application seems to be suspended at the first "reactor.iterate".
Each time I press Ctrl+C to stop the application, the error is always the same and the application seems to be blocked on a "reactor.iterate(4)".
This problem never occurs with the normal Python interpreter.
Have you got an idea?
|
Python programs coexisting on Windows
| 1,779,645 | 1 | 3 | 512 | 0 |
python,windows
|
One solution would be to craft a batch file that invokes the correct interpreter for a given application. This way, you can install additional interpreters in separate folders.
Probably not perfect but it works.
| 0 | 1 | 0 | 0 |
2009-11-22T19:02:00.000
| 6 | 0.033321 | false | 1,779,630 | 1 | 0 | 0 | 3 |
I'm looking for a way to let multiple Python programs coexist on the same Windows machine.
Here's the problem: suppose program A needs Python 2.5, B needs 2.6, C needs 3, and each of them needs its own version of Qt, Wx or whatever other modules or whatever.
Trying to install all these dependencies on the same machine will break things, e.g. you can install different versions of Python side-by-side but only one of them can have the .py file association, so if you give that to Python 2.5 then B and C won't work, etc.
The ideal state of affairs would be if program A could live in C:\A along with its own Python interpreter, Qt/Wx/MySQL driver/whatever and never touch anything outside that directory, ditto for B and C.
Is there any way to accomplish this, other than going the full virtual box route?
edit: I tried the batch file solution, but it doesn't work. That is, it works on simple test scripts but e.g. OpenRPG fails at some point in its loading process if its required version of Python doesn't own the file association.
|
Python programs coexisting on Windows
| 1,779,665 | 0 | 3 | 512 | 0 |
python,windows
|
Write a python script that mimics the way unix shells handle scripts: look at the first line and see if it matches #!(name-of-shell). Then have your python script exec that interpreter and feed it the rest of its arguments.
Then, associate .py with your script.
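A rough sketch of such a dispatcher; the launcher filename and the fallback interpreter path are assumptions:
# pylauncher.py: hypothetical script to associate with the .py extension
import subprocess
import sys

DEFAULT = r"C:\Python25\python.exe"  # assumed fallback interpreter

script = sys.argv[1]
with open(script) as f:
    first = f.readline()
# honor a unix-style "#!C:\Python26\python.exe" first line if present
interp = first[2:].strip() if first.startswith("#!") else DEFAULT
sys.exit(subprocess.call([interp, script] + sys.argv[2:]))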
| 0 | 1 | 0 | 0 |
2009-11-22T19:02:00.000
| 6 | 0 | false | 1,779,630 | 1 | 0 | 0 | 3 |
I'm looking for a way to let multiple Python programs coexist on the same Windows machine.
Here's the problem: suppose program A needs Python 2.5, B needs 2.6, C needs 3, and each of them needs its own version of Qt, Wx or whatever other modules or whatever.
Trying to install all these dependencies on the same machine will break things, e.g. you can install different versions of Python side-by-side but only one of them can have the .py file association, so if you give that to Python 2.5 then B and C won't work, etc.
The ideal state of affairs would be if program A could live in C:\A along with its own Python interpreter, Qt/Wx/MySQL driver/whatever and never touch anything outside that directory, ditto for B and C.
Is there any way to accomplish this, other than going the full virtual box route?
edit: I tried the batch file solution, but it doesn't work. That is, it works on simple test scripts but e.g. OpenRPG fails at some point in its loading process if its required version of Python doesn't own the file association.
|
Python programs coexisting on Windows
| 1,779,655 | 2 | 3 | 512 | 0 |
python,windows
|
Use batch files to run scripts, write in notepad for example:
c:\python26\python.exe C:\Script_B\B.py
and save it as runB.bat (or anything .bat). It will run the script with the interpreter specified before the whitespace (c:\python26\python.exe in this example).
| 0 | 1 | 0 | 0 |
2009-11-22T19:02:00.000
| 6 | 0.066568 | false | 1,779,630 | 1 | 0 | 0 | 3 |
I'm looking for a way to let multiple Python programs coexist on the same Windows machine.
Here's the problem: suppose program A needs Python 2.5, B needs 2.6, C needs 3, and each of them needs its own version of Qt, Wx or whatever other modules or whatever.
Trying to install all these dependencies on the same machine will break things, e.g. you can install different versions of Python side-by-side but only one of them can have the .py file association, so if you give that to Python 2.5 then B and C won't work, etc.
The ideal state of affairs would be if program A could live in C:\A along with its own Python interpreter, Qt/Wx/MySQL driver/whatever and never touch anything outside that directory, ditto for B and C.
Is there any way to accomplish this, other than going the full virtual box route?
edit: I tried the batch file solution, but it doesn't work. That is, it works on simple test scripts but e.g. OpenRPG fails at some point in its loading process if its required version of Python doesn't own the file association.
|
Python : fork and exec a process to run on different terminal
| 1,794,679 | 1 | 1 | 1,575 | 0 |
python,process
|
If you want "real" (pseudo-;-) terminals, and are using X11 (almost every GUI interface on Linux does;-), you could exec xterm -e python node.py instead of just python node.py -- substitute for xterm whatever terminal emulator program you prefer, of course (I'm sure they all have command-line switches equivalent to good old xterm's -e, to specify what program they should run!-).
| 0 | 1 | 0 | 0 |
2009-11-25T04:04:00.000
| 2 | 1.2 | true | 1,794,536 | 0 | 0 | 0 | 1 |
I am trying to simulate a network consisting of several clients and servers. I have written node.py, which contains client-server code. I want to run multiple instances of node.py. But I don't want to do it manually, so I have written another file, spawn.py, which spawns multiple instances of node.py using fork and exec. However, I need to run each instance of node.py on a different terminal (shell) so that I can easily debug what is happening inside each node.
How can we do that? Please help.
EDIT : I am working on linux and using python 2.5 and
I want to run all processes on the same box
|
Launching default application for given type of file, OS X
| 1,798,364 | 1 | 2 | 576 | 0 |
python,subprocess
|
Do you know about the open command in Mac OS X? I think you can solve your problem by calling it from Python.
man open for details:
The open command opens a file (or a directory or URL), just as if you had double-clicked the
file's icon. If no application name is specified, the default application as determined via
LaunchServices is used to open the specified files.
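Calling it from Python is then a one-liner; a sketch, with the output filename as a placeholder:
import subprocess

subprocess.call(["open", "report.html"])  # placeholder: your generated html file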
| 0 | 1 | 0 | 0 |
2009-11-25T16:56:00.000
| 3 | 0.066568 | false | 1,798,351 | 0 | 0 | 1 | 2 |
I'm writing a python script that generates an html file. Every time I run this script, I'd like it to finish by opening the default system browser for this file. It's all in an OS X environment.
What python code can launch Safari/Firefox/whatever the system default html viewer is for a given file? subprocess.call doesn't seem to do the trick.
|
Launching default application for given type of file, OS X
| 1,799,054 | 0 | 2 | 576 | 0 |
python,subprocess
|
import ic  # Mac-only Internet Config module (Python 2; removed in Python 3)
ic.launchurl('file:///somefile.html')
| 0 | 1 | 0 | 0 |
2009-11-25T16:56:00.000
| 3 | 0 | false | 1,798,351 | 0 | 0 | 1 | 2 |
I'm writing a python script that generates an html file. Every time I run this script, I'd like it to finish by opening the default system browser for this file. It's all in an OS X environment.
What python code can launch Safari/Firefox/whatever the system default html viewer is for a given file? subprocess.call doesn't seem to do the trick.
|
What is the performance cost of named keys or "pre-generated" keys in Google App Engine?
| 1,805,774 | 4 | 2 | 198 | 0 |
python,google-app-engine
|
There is no intrinsic penalty to using a key name instead of an auto-generated ID, except the overhead of a (potentially) longer key on the entity and any ReferenceProperties that reference it.
In certain cases, in fact, using auto-allocated IDs can have a performance penalty: If you insert new entities at a very high rate (several hundred per second), since all the new entities have IDs in the same range, they will all be written to the same Bigtable tablet, and can cause contention and increased timeouts. The vast majority of apps never have to worry about this, though.
There's no performance impact to allocating as many IDs as you want - App Engine simply increases the ID counter by the number you request. (This is a simplification, but generally accurate).
In answer to your concerns, App Engine doesn't randomly generate keys. It either uses an auto-allocated id, which is allocated using a counter, and thus guaranteed unique, or it uses the key you supplied. So in answer to your last 3 bullet points:
No.
Only in storage for the (potentially) longer keys
No, and the cost is roughly O(1) regardless of how many you ask for.
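To illustrate how a named key sidesteps the imagined uniqueness check entirely, here is a minimal sketch with a hypothetical model; get_or_insert is transactional, so no separate existence check is needed:
from google.appengine.ext import db

class Route(db.Model):  # hypothetical model
    cost = db.IntegerProperty()

# Named key: deterministic, no collision check against random IDs needed.
route = Route.get_or_insert("A->B", cost=42)

# Auto-allocated ID: assigned from a counter when the entity is first put().
other = Route(cost=7)
other.put()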
| 0 | 1 | 0 | 0 |
2009-11-26T20:38:00.000
| 1 | 1.2 | true | 1,805,555 | 0 | 0 | 1 | 1 |
If you used named keys in Google App Engine, does this incur any additional cost? Put another way, is it any more expensive to create a new entity with a named key rather than a randomly generated id?
In a similar line of reasoning, I note that you can ask Google App Engine to give you a set of keys that it promises not to use as auto-generated keys. Would generating a large number of these keys result in reduced performance?
These questions both bother me for the following reason. Let us say Google App Engine was attempting to persist entity A, and as such it is creating a key for A. It would seem intuitively, that when a new key is randomly generated, Google App Engine would need to first check if the key was already in existence. If the key already existed, then Google App Engine might need to generate another randomly generated new key. It would continue to do this until it succeeded in generating a unique new key. It would then assign this key to entity A. Alright, that is fine and good.
My problem with this is that it seems to imply that keys cause some sort of application-level lock. This would be necessary when Google App Engine checks whether the randomly generated key already exists. This can't be right, as it isn't scalable at all. What is wrong with my reasoning?
So, since this was long, I will re-iterate my 3 questions:
Does Google App Engine create an application level lock when generating new keys?
Do named keys incur any additional cost over automatically generated keys? If so, what cost (constant, linear, exponential,...)?
Does asking app engine for keys that app engine promises not to use cause a degradation in key creation performance? If so, what would the cost for this be?
|
How to safely write to a file?
| 1,812,351 | 4 | 21 | 9,770 | 0 |
python,windows,file
|
The standard solution is this.
Write a new file with a similar name. X.ext# for example.
When that file has been closed (and perhaps even read and checksummed), then you do two renames:
X.ext (the original) to X.ext~
X.ext# (the new one) to X.ext
(Only for the crazy paranoid) call the OS sync function to force dirty buffer writes.
At no time is anything lost or corrupted. The only glitch can happen during the renames. But you haven't lost anything or corrupted anything. The original is recoverable right up until the final rename.
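A sketch of that sequence in Python; the fsync step is the optional paranoid part, and the backup removal covers Windows, where os.rename refuses to overwrite an existing file:
import os

def safe_write(path, data):
    tmp = path + "#"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # optional: force dirty buffers to disk
    if os.path.exists(path):
        backup = path + "~"
        if os.path.exists(backup):
            os.remove(backup)  # Windows rename will not overwrite
        os.rename(path, backup)
    os.rename(tmp, path)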
| 0 | 1 | 0 | 0 |
2009-11-28T09:46:00.000
| 8 | 0.099668 | false | 1,812,115 | 1 | 0 | 0 | 2 |
Imagine you have a library for working with some sort of XML file or configuration file. The library reads the whole file into memory and provides methods for editing the content. When you are done manipulating the content you can call a write to save the content back to file. The question is how to do this in a safe way.
Overwriting the existing file (starting to write to the original file) is obviously not safe. If the write method fails before it is done you end up with a half written file and you have lost data.
A better option would be to write to a temporary file somewhere, and when the write method has finished, you copy the temporary file to the original file.
Now, if the copy somehow fails, you still have correctly saved data in the temporary file. And if the copy succeeds, you can remove the temporary file.
On POSIX systems I guess you can use the rename system call which is an atomic operation. But how would you do this best on a Windows system? In particular, how do you handle this best using Python?
Also, is there another scheme for safely writing to files?
|
How to safely write to a file?
| 1,812,604 | 17 | 21 | 9,770 | 0 |
python,windows,file
|
Python's documentation mentions that os.rename() is an atomic operation (a POSIX requirement; note that on Windows the rename fails if the destination already exists). So in your case, writing data to a temporary file and then renaming it to the original file would be quite safe.
Another way could work like this:
let original file be abc.xml
create abc.xml.tmp and write new data to it
rename abc.xml to abc.xml.bak
rename abc.xml.tmp to abc.xml
after new abc.xml is properly put in place, remove abc.xml.bak
As you can see, you still have abc.xml.bak with you, which you can use to restore from if there are any issues with the new file.
| 0 | 1 | 0 | 0 |
2009-11-28T09:46:00.000
| 8 | 1.2 | true | 1,812,115 | 1 | 0 | 0 | 2 |
Imagine you have a library for working with some sort of XML file or configuration file. The library reads the whole file into memory and provides methods for editing the content. When you are done manipulating the content you can call a write to save the content back to file. The question is how to do this in a safe way.
Overwriting the existing file (starting to write to the original file) is obviously not safe. If the write method fails before it is done you end up with a half written file and you have lost data.
A better option would be to write to a temporary file somewhere, and when the write method has finished, you copy the temporary file to the original file.
Now, if the copy somehow fails, you still have correctly saved data in the temporary file. And if the copy succeeds, you can remove the temporary file.
On POSIX systems I guess you can use the rename system call which is an atomic operation. But how would you do this best on a Windows system? In particular, how do you handle this best using Python?
Also, is there another scheme for safely writing to files?
|
Global disk resource becomes unavailable
| 1,814,406 | 1 | 0 | 65 | 0 |
python,linux,fileserver,flock
|
If this happens intermittently, you might just want to try waiting a short period and retrying. Other than that... log the error and fail. Maybe throw an exception that someone higher up can catch and deal with more gracefully.
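A sketch of the wait-and-retry idea, with a made-up lock path and backoff schedule:
import time

LOCK_PATH = "/mnt/isilon/app.lock"  # placeholder mount-point path

def open_lock(retries=5):
    for attempt in range(retries):
        try:
            return open(LOCK_PATH)
        except IOError:
            time.sleep(2 ** attempt)  # back off: 1, 2, 4, 8, ... seconds
    raise RuntimeError("global lock file unavailable after %d tries" % retries)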
| 0 | 1 | 0 | 0 |
2009-11-29T02:00:00.000
| 1 | 0.197375 | false | 1,814,393 | 0 | 0 | 0 | 1 |
If I've got a global disk resource (mount point on an isilon file server) that multiple servers use to access a lock file. What is a good way to handle the situation if that global disk becomes unavailable and the servers can't access the global lock file?
Thanks,
Doug
|
why Ghost Process appears after kill -9
| 1,853,165 | 0 | 1 | 845 | 0 |
python,process,kill
|
Zombie processes are actually just an entry in the process table. They do not run, they don't consume memory; the entry just stays because the parent hasn't checked their exit code.
You can either do a double fork as Gonzalo suggests, or you can filter out all ps lines with a Z in the S column.
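In the questioner's setup, the usual fix is to reap the child after killing it; a minimal sketch (the command is a placeholder):
import os
import signal
import subprocess

p = subprocess.Popen(["some_gui_app"])  # placeholder command
os.kill(p.pid, signal.SIGKILL)          # same effect as kill -9 <pid>
p.wait()  # collect the exit status, removing the zombie entry from ps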
| 0 | 1 | 0 | 0 |
2009-12-02T02:49:00.000
| 3 | 0 | false | 1,830,370 | 0 | 0 | 0 | 2 |
In my Python script, I first launch a subprocess by subprocess.Popen(). Then later on, I want to kill that subprocess by kill -9 Pid.
What I found is that after the kill is executed, the subprocess is "stopped" because the GUI window of that process disappeared immediately. But when I perform a "ps aux" right after the kill, the same process (with same pid) is still shown in the result. The difference is the command of the process is included in a pair of () like below:
root 30506 0.0 0.0 0 0 s000 Z+ 6:13PM
0:00.00 (sample process)
This breaks my process-detection logic, since the dead process can still be found by ps.
Anyone know why this is happening?
Thanks!
|
why Ghost Process appears after kill -9
| 1,830,395 | 0 | 1 | 845 | 0 |
python,process,kill
|
I think the -9 signal lets the process try to handle the kill and spend some time on housekeeping. You can try just killing the process without a signal.
Edit: oh, it's actually the -15 signal that lets the process die gracefully. Never mind.
| 0 | 1 | 0 | 0 |
2009-12-02T02:49:00.000
| 3 | 0 | false | 1,830,370 | 0 | 0 | 0 | 2 |
In my Python script, I first launch a subprocess by subprocess.Popen(). Then later on, I want to kill that subprocess by kill -9 Pid.
What I found is that after the kill is executed, the subprocess is "stopped" because the GUI window of that process disappeared immediately. But when I perform a "ps aux" right after the kill, the same process (with same pid) is still shown in the result. The difference is the command of the process is included in a pair of () like below:
root 30506 0.0 0.0 0 0 s000 Z+ 6:13PM
0:00.00 (sample process)
This breaks my process-detection logic, since the dead process can still be found by ps.
Anyone know why this is happening?
Thanks!
|
Python, PowerShell, or Other?
| 1,835,086 | 46 | 32 | 44,901 | 0 |
python,powershell,scripting
|
Python works as a great, all-purpose tool if you're looking to replace CMD and BAT scripts on your Windows boxes, and can also be written to run scripts on your (L)inux boxes, too. It's a great, flexible language and can handle many tasks you throw at it.
That being said, PowerShell is an amazingly versatile tool for administering all manner of Windows boxes; it has all the power of .NET, with many more interfaces into MS products such as Exchange and Active Directory, which are a timesaver. Depending on your situation, you may get more use of of PS than other scripting languages just because of the interfaces available to MS products, and I know MS seems to have made a commitment to providing those APIs in a lot of products. PowerShell comes installed on all current versions of Windows (Windows 7+, Windows Server 2008+), and is fairly easily installed on older versions.
To address your edit that your scripts will be used to launch other processes, I think in that case either of the tools fit the bill. I would recommend PS if you plan on adding any admin-ish tasks to the scripts rather than just service calls, but if you stick to what you described, Python is good.
| 0 | 1 | 0 | 1 |
2009-12-02T18:26:00.000
| 8 | 1 | false | 1,834,850 | 1 | 0 | 0 | 6 |
What are the advantages of Python, PowerShell, and other scripting environments? We would like to standardize our scripting and are currently using bat and cmd files as the standard. I think Python would be a better option than these, but am also researching PowerShell and other scripting tools.
The scripts would be used to trigger processes such as wget etc to call web services, or other applications/tools that need to run in a specific order with specific parameters.
We primarily work with the Windows stack, but there is a good chance we will need to support Unix in the future.
|
Python, PowerShell, or Other?
| 1,836,471 | 3 | 32 | 44,901 | 0 |
python,powershell,scripting
|
If all you do is spawning a lot of system specific programs with no or little programming logic behind then OS specific shell might be a better choice than a full general purpose programming language.
| 0 | 1 | 0 | 1 |
2009-12-02T18:26:00.000
| 8 | 0.07486 | false | 1,834,850 | 1 | 0 | 0 | 6 |
What are the advantages of Python, PowerShell, and other scripting environments? We would like to standardize our scripting and are currently using bat and cmd files as the standard. I think Python would be a better option than these, but am also researching PowerShell and other scripting tools.
The scripts would be used to trigger processes such as wget etc to call web services, or other applications/tools that need to run in a specific order with specific parameters.
We primarily work with the Windows stack, but there is a good chance we will need to support Unix in the future.
|
Python, PowerShell, or Other?
| 1,835,112 | 2 | 32 | 44,901 | 0 |
python,powershell,scripting
|
I find it sad no one has yet mentioned good ol' Perl.
| 0 | 1 | 0 | 1 |
2009-12-02T18:26:00.000
| 8 | 0.049958 | false | 1,834,850 | 1 | 0 | 0 | 6 |
What are the advantages of Python, PowerShell, and other scripting environments? We would like to standardize our scripting and are currently using bat and cmd files as the standard. I think Python would be a better option than these, but am also researching PowerShell and other scripting tools.
The scripts would be used to trigger processes such as wget etc to call web services, or other applications/tools that need to run in a specific order with specific parameters.
We primarily work with the Windows stack, but there is a good chance we will need to support Unix in the future.
|
Python, PowerShell, or Other?
| 1,834,895 | 2 | 32 | 44,901 | 0 |
python,powershell,scripting
|
The question is kind of vague, but Python is much more portable than PowerShell; however, Python isn't that prevalent on Windows. On the other hand, I don't believe PowerShell scripts will work on a Windows machine that doesn't have PowerShell, meaning they may not work in the old-fashioned cmd shell. I think you'll find more documentation and libraries for Python as well.
PowerShell is more like Bash than it is a programming language like Python.
Maybe you could explain what you want to do with your scripts and you'll probably get better answers.
| 0 | 1 | 0 | 1 |
2009-12-02T18:26:00.000
| 8 | 0.049958 | false | 1,834,850 | 1 | 0 | 0 | 6 |
What are the advantages of Python, PowerShell, and other scripting environments? We would like to standardize our scripting and are currently using bat and cmd files as the standard. I think Python would be a better option than these, but am also researching PowerShell and other scripting tools.
The scripts would be used to trigger processes such as wget etc to call web services, or other applications/tools that need to run in a specific order with specific parameters.
We primarily work with the Windows stack, but there is a good chance we will need to support Unix in the future.
|
Python, PowerShell, or Other?
| 1,836,342 | 20 | 32 | 44,901 | 0 |
python,powershell,scripting
|
We would like to standardize our scripting and are currently using bat and cmd files as the standard.
It sounds like Windows is your predominant environment.
If so, PowerShell would be much better than Python.
PowerShell is included with Windows Server 2008. No need to deploy/install Python runtime on every new server that rolls in.
The entire Microsoft server related software (Exchange, Systems Center, etc) is transitioning to PowerShell cmdlets for functionality and extensions
3rd party vendors (e.g. SCOM plugins) will also use PowerShell scripts/cmdlets to expose functionality
I have more experience with Python than PowerShell but the writing is on the wall as far as the Microsoft ecosystem is concerned: go with PowerShell. Otherwise, you'll just be going against the grain and constantly interop-ing between Python and everyone else's cmdlets.
Just because you can code import win32com.client in Python does not put it on equal footing with PowerShell in the Windows environment.
| 0 | 1 | 0 | 1 |
2009-12-02T18:26:00.000
| 8 | 1 | false | 1,834,850 | 1 | 0 | 0 | 6 |
What are the advantages of Python, PowerShell, and other scripting environments? We would like to standardize our scripting and are currently using bat and cmd files as the standard. I think Python would be a better option than these, but am also researching PowerShell and other scripting tools.
The scripts would be used to trigger processes such as wget etc to call web services, or other applications/tools that need to run in a specific order with specific parameters.
We primarily work with the Windows stack, but there is a good chance we will need to support Unix in the future.
|
Python, PowerShell, or Other?
| 1,834,979 | 2 | 32 | 44,901 | 0 |
python,powershell,scripting
|
One advantage to Python is the availability of third-party libraries and an extensive built-in standard library. You can do a lot of powerful operations quickly and easily with Python on a variety of operating systems and environments. That's one reason we use Python here at the office not only as a scripting language but for all of our database backend applications as well.
We also use it for XML and HTML scraping, using ElementTree and BeautifulSoup, which are very powerful and flexible Python-specific libraries for this sort of work.
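For instance, a tiny ElementTree sketch over a hypothetical feed.xml:
import xml.etree.ElementTree as ET

tree = ET.parse("feed.xml")      # placeholder input file
for item in tree.iter("item"):   # iter() is available from Python 2.7 on
    print(item.findtext("title"))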
| 0 | 1 | 0 | 1 |
2009-12-02T18:26:00.000
| 8 | 0.049958 | false | 1,834,850 | 1 | 0 | 0 | 6 |
What are the advantages of Python, PowerShell, and other scripting environments? We would like to standardize our scripting and are currently using bat and cmd files as the standard. I think Python would be a better option than these, but am also researching PowerShell and other scripting tools.
The scripts would be used to trigger processes such as wget etc to call web services, or other applications/tools that need to run in a specific order with specific parameters.
We primarily work with the Windows stack, but there is a good chance we will need to support Unix in the future.
|
Need a zip of Python 2.6 for windows
| 1,835,980 | 8 | 1 | 1,012 | 0 |
python,zip
|
How would it overtake 2.5? You can install both in parallel, just make sure that you unselect the option to "Register Extensions" during the install of 2.6.
I have several Python installations on my PC in parallel, one of them my "standard" one that I expect to run when I doubleclick on a .py file, and the other one to invoke manually if I need it.
I have found, though, that sometimes file associations are lost completely after installing a new version without the "Register Extensions" option set. In that case just run a "repair installation" with your preferred standard version, and you should be good to go.
| 0 | 1 | 0 | 0 |
2009-12-02T21:27:00.000
| 2 | 1.2 | true | 1,835,930 | 1 | 0 | 0 | 2 |
Not the source code; that's the only thing I seem to find. I can't install Python 2.6 because it would take over 2.5 and cause a major mess on my PC.
|
Need a zip of Python 2.6 for windows
| 1,836,803 | 2 | 1 | 1,012 | 0 |
python,zip
|
I have Pythons 2.3, 2.4, 2.5, 2.6, and 3.1 all installed on my PC. Download the .msi from python.org, and install it.
| 0 | 1 | 0 | 0 |
2009-12-02T21:27:00.000
| 2 | 0.197375 | false | 1,835,930 | 1 | 0 | 0 | 2 |
Not the source code; that's the only thing I seem to find. I can't install Python 2.6 because it would take over 2.5 and cause a major mess on my PC.
|
python web app logging through pipe? (performance concerned)
| 1,839,366 | 1 | 0 | 514 | 0 |
python,logging,pipe
|
Pipes are one of the fastest I/O mechanisms available. It's just a shared buffer. Nothing more. If the receiving end of your pipe is totally overwhelmed, you may have an issue. But you have no evidence of that right now.
If you have 10's of processes started by FastCGI, each can have their own independent log file. That's the ideal situation: use Python logging -- make each process have a unique log file.
In the rare event that you need to examine all log files, cat them together for analysis.
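A sketch of giving each FastCGI worker its own log file, keyed on its pid (the paths and format are placeholders):
import logging
import os

logger = logging.getLogger("webapp")
handler = logging.FileHandler("/var/log/webapp/app-%d.log" % os.getpid())
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("request handled")  # example call inside a request handler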
| 0 | 1 | 0 | 1 |
2009-12-03T11:28:00.000
| 1 | 1.2 | true | 1,839,348 | 0 | 0 | 1 | 1 |
I'm writing a web app using python with web.py, and I want to implement my own logging system. I'd like to log detailed information about each request that come to python (static files are handled by web servers).
Currently I'm thinking about writing the logs to a pipe. On the other side, there should be cronolog.
My main concern is whether the performance will be good. How do the time and resources consumed by piping the logs compare to the normal processing of a request (fewer than 5 database queries, and page generation from templates)?
Or are there other better approaches? I don't want to write the log file in python because tens of processes will be started by fastcgi.
|
Hashing Multiple Files
| 1,847,365 | 1 | 3 | 4,430 | 0 |
python,perl,bash,hash,batch-processing
|
Hm, interesting problem.
Try the following (the mktest function is just for testing -- TDD for bash! :)
Edit:
Added support for whirlpool hashes.
code cleanup
better quoting of filenames
changed array-syntax for test part-- should now work with most korn-like shells. Note that pdksh does not support :-based parameter expansion (or rather
it means something else)
Note also that when in md5-mode it fails for filenames with whirlpool-like hashes, and
possibly vice-versa.
#!/usr/bin/env bash
#Tested with:
# GNU bash, version 4.0.28(1)-release (x86_64-pc-linux-gnu)
# ksh (AT&T Research) 93s+ 2008-01-31
# mksh @(#)MIRBSD KSH R39 2009/08/01 Debian 39.1-4
# Does not work with pdksh, dash
DEFAULT_SUM="md5"
#Takes a parameter, as root path
# as well as an optional parameter, the hash function to use (md5 or wp for whirlpool).
main()
{
case $2 in
"wp")
export SUM="wp"
;;
"md5")
export SUM="md5"
;;
*)
export SUM=$DEFAULT_SUM
;;
esac
# For all visible files in all visible subfolders, move the file
# to a name including the correct hash:
find "$1" -type f -not -regex '.*/\..*' -exec $0 hashmove '{}' \;
}
# Given a file named in $1 with full path, calculate its hash.
# Output the filename, with the hash inserted before the extension
# (if any) -- or: replace an existing hash with the new one,
# if a hash already exists.
hashname_md5()
{
pathname="$1"
full_hash=`md5sum "$pathname"`
hash=${full_hash:0:32}
filename=`basename "$pathname"`
prefix=${filename%%.*}
suffix=${filename#$prefix}
#If the suffix starts with something that looks like an md5sum,
#remove it:
suffix=`echo $suffix|sed -r 's/\.[a-z0-9]{32}//'`
echo "$prefix.$hash$suffix"
}
# Same as hashname_md5 -- but uses whirlpool hash.
hashname_wp()
{
pathname="$1"
hash=`whirlpool "$pathname"`
filename=`basename "$pathname"`
prefix=${filename%%.*}
suffix=${filename#$prefix}
#If the suffix starts with something that looks like an md5sum,
#remove it:
suffix=`echo $suffix|sed -r 's/\.[a-z0-9]{128}//'`
echo "$prefix.$hash$suffix"
}
#Given a filepath $1, move/rename it to a name including the filehash.
# Try to replace an existing hash, and do not move the file if no
# update is needed.
hashmove()
{
pathname="$1"
filename=`basename "$pathname"`
path="${pathname%%/$filename}"
case $SUM in
"wp")
hashname=`hashname_wp "$pathname"`
;;
"md5")
hashname=`hashname_md5 "$pathname"`
;;
*)
echo "Unknown hash requested"
exit 1
;;
esac
if [[ "$filename" != "$hashname" ]]
then
echo "renaming: $pathname => $path/$hashname"
mv "$pathname" "$path/$hashname"
else
echo "$pathname up to date"
fi
}
# Create som testdata under /tmp
mktest()
{
root_dir=$(tempfile)
rm "$root_dir"
mkdir "$root_dir"
i=0
test_files[$((i++))]='test'
test_files[$((i++))]='testfile, no extention or spaces'
test_files[$((i++))]='.hidden'
test_files[$((i++))]='a hidden file'
test_files[$((i++))]='test space'
test_files[$((i++))]='testfile, no extention, spaces in name'
test_files[$((i++))]='test.txt'
test_files[$((i++))]='testfile, extention, no spaces in name'
test_files[$((i++))]='test.ab8e460eac3599549cfaa23a848635aa.txt'
test_files[$((i++))]='testfile, With (wrong) md5sum, no spaces in name'
test_files[$((i++))]='test spaced.ab8e460eac3599549cfaa23a848635aa.txt'
test_files[$((i++))]='testfile, With (wrong) md5sum, spaces in name'
test_files[$((i++))]='test.8072ec03e95a26bb07d6e163c93593283fee032db7265a29e2430004eefda22ce096be3fa189e8988c6ad77a3154af76f582d7e84e3f319b798d369352a63c3d.txt'
test_files[$((i++))]='testfile, With (wrong) whirlpoolhash, no spaces in name'
test_files[$((i++))]='test spaced.8072ec03e95a26bb07d6e163c93593283fee032db7265a29e2430004eefda22ce096be3fa189e8988c6ad77a3154af76f582d7e84e3f319b798d369352a63c3d.txt'
test_files[$((i++))]='testfile, With (wrong) whirlpoolhash, spaces in name'
test_files[$((i++))]='test space.txt'
test_files[$((i++))]='testfile, extention, spaces in name'
test_files[$((i++))]='test multi-space .txt'
test_files[$((i++))]='testfile, extention, multiple consequtive spaces in name'
test_files[$((i++))]='test space.h'
test_files[$((i++))]='testfile, short extention, spaces in name'
test_files[$((i++))]='test space.reallylong'
test_files[$((i++))]='testfile, long extention, spaces in name'
test_files[$((i++))]='test space.reallyreallyreallylong.tst'
test_files[$((i++))]='testfile, long extention, double extention, might look like hash, spaces in name'
test_files[$((i++))]='utf8test1 - æeiaæå.txt'
test_files[$((i++))]='testfile, extention, utf8 characters, spaces in name'
test_files[$((i++))]='utf8test1 - 漢字.txt'
test_files[$((i++))]='testfile, extention, Japanese utf8 characters, spaces in name'
for s in . sub1 sub2 sub1/sub3 .hidden_dir
do
#note -p not needed as we create dirs top-down
#fails for "." -- but the hack allows us to use a single loop
#for creating testdata in all dirs
mkdir $root_dir/$s
dir=$root_dir/$s
i=0
while [[ $i -lt ${#test_files[*]} ]]
do
filename=${test_files[$((i++))]}
echo ${test_files[$((i++))]} > "$dir/$filename"
done
done
echo "$root_dir"
}
# Run test, given a hash-type as first argument
runtest()
{
sum=$1
root_dir=$(mktest)
echo "created dir: $root_dir"
echo "Running first test with hashtype $sum:"
echo
main $root_dir $sum
echo
echo "Running second test:"
echo
main $root_dir $sum
echo "Updating all files:"
find $root_dir -type f | while read f
do
echo "more content" >> "$f"
done
echo
echo "Running final test:"
echo
main $root_dir $sum
#cleanup:
rm -r $root_dir
}
# Test md5 and whirlpool hashes on generated data.
runtests()
{
runtest md5
runtest wp
}
# In order to be able to call the script recursively, without splitting off
# functions to separate files:
case "$1" in
'test')
runtests
;;
'hashname')
hashname "$2"
;;
'hashmove')
hashmove "$2"
;;
'run')
main "$2" "$3"
;;
*)
echo "Use with: $0 test - or if you just want to try it on a folder:"
echo " $0 run path (implies md5)"
echo " $0 run md5 path"
echo " $0 run wp path"
;;
esac
| 0 | 1 | 0 | 1 |
2009-12-03T18:00:00.000
| 13 | 0.015383 | false | 1,841,737 | 1 | 0 | 0 | 1 |
Problem Specification:
Given a directory, I want to iterate through the directory and its non-hidden sub-directories,
and add a whirlpool hash to each non-hidden file's name.
If the script is re-run it would would replace an old hash with a new one.
<filename>.<extension> ==> <filename>.<a-whirlpool-hash>.<extension>
<filename>.<old-hash>.<extension> ==> <filename>.<new-hash>.<extension>
Question:
a) How would you do this?
b) Out of all the methods available to you, what makes your method the most suitable?
Verdict:
Thanks all, I have chosen SeigeX's answer for its speed and portability.
It is empirically quicker than the other bash variants,
and it worked without alteration on my Mac OS X machine.
|
Python Performance on Windows
| 1,845,290 | 1 | 13 | 9,943 | 0 |
python,windows,performance,macos,mercurial
|
I run Python locally on Windows XP and 7 as well as OSX on my Macbook. I've seen no noticable performance differences in the command line interpreter, wx widget apps run the same, and Django apps also perform virtually identically.
One thing I noticed at work was that the Kaspersky virus scanner tended to slow the python interpreter WAY down. It would take 3-5 seconds for the python prompt to properly appear and 7-10 seconds for Django's test server to fully load. Properly disabling its active scanning brought the start up times back to 0 seconds.
| 0 | 1 | 0 | 0 |
2009-12-03T20:44:00.000
| 6 | 0.033321 | false | 1,842,798 | 1 | 0 | 0 | 5 |
Is Python generally slower on Windows vs. a *nix machine? Python seems to blaze on my Mac OS X machine, whereas it seems to run slower on my Windows Vista machine. The machines are similar in processing power and the Vista machine has 1 GB more memory.
I particularly notice this in Mercurial but I figure this may simply be how Mercurial is packaged on windows.
|
Python Performance on Windows
| 1,845,271 | 1 | 13 | 9,943 | 0 |
python,windows,performance,macos,mercurial
|
Maybe Python depends on opening a lot of files (importing different modules),
and Windows doesn't handle opening files as efficiently as Linux does.
Or maybe Linux has more utilities that depend on Python, so Python scripts/modules are more likely to already be buffered in the system cache.
| 0 | 1 | 0 | 0 |
2009-12-03T20:44:00.000
| 6 | 0.033321 | false | 1,842,798 | 1 | 0 | 0 | 5 |
Is Python generally slower on Windows vs. a *nix machine? Python seems to blaze on my Mac OS X machine, whereas it seems to run slower on my Windows Vista machine. The machines are similar in processing power and the Vista machine has 1 GB more memory.
I particularly notice this in Mercurial but I figure this may simply be how Mercurial is packaged on windows.
|
Python Performance on Windows
| 37,372,277 | 0 | 13 | 9,943 | 0 |
python,windows,performance,macos,mercurial
|
Interestingly I ran a direct comparison of a popular Python app on a Windows 10 x64 Machine (low powered admittedly) and a Ubuntu 14.04 VM running on the same machine.
I have not tested load speeds etc, but am just looking at processor usage between the two. To make the test fair, both were fresh installs and I duplicated a part of my media library and applied the same config in both scenarios. Each test was run independently.
On Windows Python was using 20% of my processor power and it triggered System Compressed Memory to run up at 40% (this is an old machine with 6GB or RAM).
With the VM on Ubuntu (linked to my windows file system) the processor usage is about 5% with compressed memory down to about 20%.
This is a huge difference. My trigger for running this test was that the app using python was running my CPU up to 100% and failing to operate. I have now been running it in the VM for 2 weeks and my processor usage is down to 65-70% on average. So both on a long and short term test, and taking into account the overhead of running a VM and second operating system, this Python app is significantly faster on Linux. I can also confirm that the Python app responds better, as does everything else on my machine.
Now this could be very application specific, but it is at minimum interesting.
The PC is an old AMD II X2 X265 Processor, 6GB of RAM, SSD HD (which Python ran from but the VM used a regular 5200rpm HD which gets used for a ton of other stuff including recording of 2 CCTV cameras).
| 0 | 1 | 0 | 0 |
2009-12-03T20:44:00.000
| 6 | 0 | false | 1,842,798 | 1 | 0 | 0 | 5 |
Is Python generally slower on Windows vs. a *nix machine? Python seems to blaze on my Mac OS X machine, whereas it seems to run slower on my Windows Vista machine. The machines are similar in processing power and the Vista machine has 1 GB more memory.
I particularly notice this in Mercurial but I figure this may simply be how Mercurial is packaged on windows.
|
Python Performance on Windows
| 1,843,044 | 1 | 13 | 9,943 | 0 |
python,windows,performance,macos,mercurial
|
No real numbers here but it certainly feels like the start up time is slower on Windows platforms. I regularly switch between Ubuntu at home and Windows 7 at work and it's an order of magnitude faster starting up on Ubuntu, despite my work machine being at least 4x the speed.
As for runtime performance, it feels about the same for "quiet" applications. If there are any GUI operations using Tk on Windows, they are definitely slower. Any console applications on windows are slower, but this is most likely due to the Windows cmd rendering being slow more than python running slowly.
| 0 | 1 | 0 | 0 |
2009-12-03T20:44:00.000
| 6 | 0.033321 | false | 1,842,798 | 1 | 0 | 0 | 5 |
Is Python generally slower on Windows vs. a *nix machine? Python seems to blaze on my Mac OS X machine, whereas it seems to run slower on my Windows Vista machine. The machines are similar in processing power and the Vista machine has 1 GB more memory.
I particularly notice this in Mercurial but I figure this may simply be how Mercurial is packaged on windows.
|
Python Performance on Windows
| 1,846,141 | 0 | 13 | 9,943 | 0 |
python,windows,performance,macos,mercurial
|
With the OS and network libraries, I can confirm slower performance on Windows, at least for versions <= 2.6.
I wrote a CLI podcast-fetcher script which ran great on Ubuntu, but then wouldn't download anything faster than about 80 kB/s (where ~1.6 MB/s is my usual max) on either XP or 7.
I could partially correct this by tweaking the buffer size for download streams, but there was definitely a major bottleneck on Windows, either over the network or IO, that simply wasn't a problem on Linux.
Based on this, it seems that system and OS-interfacing tasks are better optimized for *nixes than they are for Windows.
| 0 | 1 | 0 | 0 |
2009-12-03T20:44:00.000
| 6 | 0 | false | 1,842,798 | 1 | 0 | 0 | 5 |
Is Python generally slower on Windows vs. a *nix machine? Python seems to blaze on my Mac OS X machine, whereas it seems to run slower on my Windows Vista machine. The machines are similar in processing power and the Vista machine has 1 GB more memory.
I particularly notice this in Mercurial but I figure this may simply be how Mercurial is packaged on windows.
|
Is pickle file of python cross-platform?
| 1,850,806 | 36 | 21 | 24,001 | 0 |
python,file-io,pickle
|
Python's pickle is perfectly cross-platform.
This is likely due to EOL (End-Of-Line) differences between Windows and Linux. Make sure to open your pickle files in binary mode both when writing them and when reading them, using open()'s "wb" and "rb" modes respectively.
Note: Passing pickles between different versions of Python can cause trouble, so try to have the same version on both platforms.
| 0 | 1 | 0 | 0 |
2009-12-04T20:39:00.000
| 6 | 1.2 | true | 1,849,523 | 1 | 0 | 0 | 2 |
I have created a small Python script. I saved the pickle file on Linux, then used it on Windows, then used it back on Linux, but now that file is not working on Linux even though it works perfectly on Windows.
Is it that Python is cross-platform but the pickle file is not?
Is there any solution to this?
|
Is pickle file of python cross-platform?
| 1,849,549 | 1 | 21 | 24,001 | 0 |
python,file-io,pickle
|
You could use json instead of pickle. If it can save your data, you know it's cross platform.
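A quick sketch of the swap; the object is a placeholder, and note that json only handles basic types:
import json

data = {"example": 123}  # placeholder: must be json-serializable types
with open("data.json", "w") as f:
    json.dump(data, f)
with open("data.json") as f:
    data = json.load(f)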
| 0 | 1 | 0 | 0 |
2009-12-04T20:39:00.000
| 6 | 0.033321 | false | 1,849,523 | 1 | 0 | 0 | 2 |
I have created a small Python script. I saved the pickle file on Linux, then used it on Windows, then used it back on Linux, but now that file is not working on Linux even though it works perfectly on Windows.
Is it that Python is cross-platform but the pickle file is not?
Is there any solution to this?
|
How to auto-run a script
| 1,854,777 | -3 | 4 | 35,644 | 0 |
python,autorun
|
You want the script to download the weather information online and output the clothes based on your predefined rules?
If this is the case, use urllib to download the page and do some ad hoc parsing over the downloaded html page to get the weather information. Then write your logic using nested IF THEN ELSE blocks.
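A very rough sketch of that idea; the URL and the keyword test are made up:
import urllib

html = urllib.urlopen("http://example.com/weather").read()  # hypothetical page
if "rain" in html.lower():
    print("Wear the rain slicker")
else:
    print("A heavy jacket should do")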
| 0 | 1 | 0 | 0 |
2009-12-06T08:10:00.000
| 5 | -0.119427 | false | 1,854,718 | 0 | 0 | 0 | 3 |
I created a script that will tell me what to wear in the morning based on the weather (i.e. rain slicker if it will rain, heavy jacket if it will be cold, etc). I have fairly basic programming experience with python and the script works perfectly, but I want to be able to create a file that I can just double-click from my desktop and the script will automatically run.
My goal is to be able to simply double click [something] in the morning and it will automatically run the script and thus tell me what to wear. How could I go about doing this?
System Specifications:
python
Mac OSX
|
How to auto-run a script
| 34,125,858 | 0 | 4 | 35,644 | 0 |
python,autorun
|
Use a batch file to make it automatic.
Example:
1. Open Notepad and type the following.
This one's for Windows; it might give you a hint:
:start
C:\Python34\python.exe C:\path\to\your_script.py
:end
(The first path is your Python interpreter; the second is the location of your *.py file.)
Save this with a *.bat extension.
That's it. You can configure more in this batch file; batch is the usual automation for day-to-day scripts.
| 0 | 1 | 0 | 0 |
2009-12-06T08:10:00.000
| 5 | 0 | false | 1,854,718 | 0 | 0 | 0 | 3 |
I created a script that will tell me what to wear in the morning based on the weather (i.e. rain slicker if it will rain, heavy jacket if it will be cold, etc). I have fairly basic programming experience with python and the script works perfectly, but I want to be able to create a file that I can just double-click from my desktop and the script will automatically run.
My goal is to be able to simply double click [something] in the morning and it will automatically run the script and thus tell me what to wear. How could I go about doing this?
System Specifications:
python
Mac OSX
|
How to auto-run a script
| 51,091,871 | -2 | 4 | 35,644 | 0 |
python,autorun
|
On a Linux/Unix-based OS, add a #!/usr/bin/python3 line at the top of your .py script file if you have Python version 3, or change it to the version installed on the machine.
Further, make the file executable with:
sudo chmod +x <fileName>
For Windows, add the Windows Python path and make the file executable.
| 0 | 1 | 0 | 0 |
2009-12-06T08:10:00.000
| 5 | -0.07983 | false | 1,854,718 | 0 | 0 | 0 | 3 |
I created a script that will tell me what to wear in the morning based on the weather (i.e. rain slicker if it will rain, heavy jacket if it will be cold, etc). I have fairly basic programming experience with python and the script works perfectly, but I want to be able to create a file that I can just double-click from my desktop and the script will automatically run.
My goal is to be able to simply double click [something] in the morning and it will automatically run the script and thus tell me what to wear. How could I go about doing this?
System Specifications:
python
Mac OSX
|
Google App Engine Application Extremely slow
| 2,908,765 | 3 | 17 | 8,372 | 0 |
python,django,google-app-engine
|
I used pingdom for obvious reasons: no cold starts is a bonus. Of course the customers will soon come flocking and it will be a non-issue.
2009-12-06T08:58:00.000
| 8 | 0.07486 | false | 1,854,821 | 0 | 0 | 1 | 5 |
I created a Hello World website in Google App Engine. It is using Django 1.1 without any patch.
Even though it is just a very simple web page, it takes a long time and often times out.
Any suggestions to solve this?
Note: It is responding fast after the first call.
|
Google App Engine Application Extremely slow
| 1,856,432 | 3 | 17 | 8,372 | 0 |
python,django,google-app-engine
|
I encountered the same with a Pylons-based app. I serve the initial page as static, and have a dummy ajax call in it to bring the app up before the user types in credentials. It is usually enough to avoid a lengthy response... Just an idea that you might use before you actually have a million users ;).
| 0 | 1 | 0 | 0 |
2009-12-06T08:58:00.000
| 8 | 0.07486 | false | 1,854,821 | 0 | 0 | 1 | 5 |
I created a Hello World website in Google App Engine. It is using Django 1.1 without any patch.
Even though it is just a very simple web page, it takes a long time and often times out.
Any suggestions to solve this?
Note: It is responding fast after the first call.
|
Google App Engine Application Extremely slow
| 1,854,829 | 4 | 17 | 8,372 | 0 |
python,django,google-app-engine
|
If it's responding quickly after the first request, it's probably just a case of getting the relevant process up and running. Admittedly it's slightly surprising that it takes so long that it times out. Is this after you've updated the application and verified that the AppEngine dashboard shows it as being ready?
"First hit slowness" is quite common in many web frameworks. It's a bit of a pain during development, but not a problem for production.
| 0 | 1 | 0 | 0 |
2009-12-06T08:58:00.000
| 8 | 0.099668 | false | 1,854,821 | 0 | 0 | 1 | 5 |
I created a Hello World website in Google App Engine. It is using Django 1.1 without any patch.
Even though it is just a very simple web page, it takes a long time and often times out.
Any suggestions to solve this?
Note: It is responding fast after the first call.
|
Google App Engine Application Extremely slow
| 1,854,875 | 14 | 17 | 8,372 | 0 |
python,django,google-app-engine
|
This is a horrible suggestion but I'll make it anyway:
Build a little client application or just use wget with cron to periodically access your app, maybe once every 5 minutes or so. That should keep Google from putting it into a dormant state.
I say this is a horrible suggestion because it's a waste of resources and an abuse of Google's free service. I'd expect you to do this only during a short testing/startup phase.
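For instance, a throwaway keep-alive client in Python instead of wget; the URL and interval are placeholders:
import time
import urllib

APP_URL = "http://yourapp.appspot.com/"  # placeholder application URL

while True:
    urllib.urlopen(APP_URL).read()  # touch the app so it stays warm
    time.sleep(300)                 # every 5 minutes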
| 0 | 1 | 0 | 0 |
2009-12-06T08:58:00.000
| 8 | 1.2 | true | 1,854,821 | 0 | 0 | 1 | 5 |
I created a Hello World website in Google App Engine. It is using Django 1.1 without any patch.
Even though it is just a very simple web page, it takes a long time and often times out.
Any suggestions to solve this?
Note: It is responding fast after the first call.
|
Google App Engine Application Extremely slow
| 1,856,888 | 4 | 17 | 8,372 | 0 |
python,django,google-app-engine
|
One more tip which might increase the response time.
Enabling billing does increase the quotas and, in my personal experience, increases the overall responsiveness of an application as well, probably because of the higher priority Google gives billing-enabled applications. For instance, an app with billing disabled can send up to 5-10 emails/request; an app with billing enabled easily copes with 200 emails/request.
Just be sure to set low billing levels - you never know when Slashdot, Digg or HackerNews notices your site :)
| 0 | 1 | 0 | 0 |
2009-12-06T08:58:00.000
| 8 | 0.099668 | false | 1,854,821 | 0 | 0 | 1 | 5 |
I created a Hello World website in Google App Engine. It is using Django 1.1 without any patch.
Even though it is just a very simple web page, it takes a long time and often times out.
Any suggestions to solve this?
Note: It is responding fast after the first call.
|
Proper way to implement a Direct Connect client in Twisted?
| 1,857,145 | 3 | 3 | 1,176 | 0 |
python,twisted,p2p
|
Without knowing all the details of the protocol, I would still recommend using a single reactor -- a reactor scales quite well (especially advanced ones such as PollReactor) and this way you will avoid the overhead connected with threads (that's how Twisted and other async systems get their fundamental performance boost, after all -- by avoiding such overhead). In practice, threads in Twisted are useful mainly when you need to interface to a library whose functions could block on you.
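For the blocking-library case mentioned at the end, Twisted offers deferToThread; a small sketch with a hypothetical blocking call:
from twisted.internet import reactor, threads

def blocking_lookup():
    return "result"  # hypothetical call that would otherwise block the reactor

def done(result):
    print(result)
    reactor.stop()

d = threads.deferToThread(blocking_lookup)  # runs in the reactor's thread pool
d.addCallback(done)
reactor.run()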
| 0 | 1 | 1 | 0 |
2009-12-06T22:10:00.000
| 1 | 1.2 | true | 1,856,786 | 0 | 0 | 0 | 1 |
I'm working on writing a Python client for Direct Connect P2P networks. Essentially, it works by connecting to a central server, and responding to other users who are searching for files.
Occasionally, another client will ask us to connect to them, and they might begin downloading a file from us. This is a direct connection to the other client, and doesn't go through the central server.
What is the best way to handle these connections to other clients? I'm currently using one Twisted reactor to connect to the server, but is it better have multiple reactors, one per client, with each one running in a different thread? Or would it be better to have a completely separate Python script that performs the connection to the client?
If there's some other solution that I don't know about, I'd love to hear it. I'm new to programming with Twisted, so I'm open to suggestions and other resources.
Thanks!
|
Using AppEngine XMPP for Client Notifications
| 1,859,664 | 0 | 1 | 1,077 | 0 |
python,web-services,google-app-engine,xmpp
|
In that situation, I would perform ajax calls every 5 minutes, for example, to check it.
It's easy to implement, and the data exchanged can be kept to a minimum (taking advantage of the fast query/response behavior of App Engine).
Regards.
| 0 | 1 | 0 | 0 |
2009-12-07T12:12:00.000
| 3 | 0 | false | 1,859,634 | 0 | 0 | 1 | 2 |
I've been looking for a way to tell clients about expired objects and AppEngine's XMPP implementation seems really interesting because it's scalable, should be reliable and can contain up to 100kb of data.
But as I understand it, before a client can listen to messages, he should have a gmail account. That's very impractical.
Is there maybe a way to make temporary readonly XMPP accounts to use with this?
|
Using AppEngine XMPP for Client Notifications
| 1,859,647 | 1 | 1 | 1,077 | 0 |
python,web-services,google-app-engine,xmpp
|
No, this isn't true: you can have the AppEngine robot as a contact on any Jabber/XMPP-based network.
Unless you are talking about the need for a GMAIL account to create an AppEngine robot... in which case YES, you need to have a Google account.
| 0 | 1 | 0 | 0 |
2009-12-07T12:12:00.000
| 3 | 1.2 | true | 1,859,634 | 0 | 0 | 1 | 2 |
I've been looking for a way to tell clients about expired objects and AppEngine's XMPP implementation seems really interesting because it's scalable, should be reliable and can contain up to 100kb of data.
But as I understand it, before a client can listen to messages, he should have a gmail account. That's very impractical.
Is there maybe a way to make temporary readonly XMPP accounts to use with this?
|
How do I determine an open file's size in Python?
| 1,867,399 | 2 | 12 | 11,581 | 0 |
python,linux,file,filesystems,ext2
|
Most reliable would be to create a wrapping class that checks the file's size when you open it, tracks write and seek operations, counts the current size based on those operations, and prevents the file from exceeding the size limit.
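A minimal sketch of such a wrapper, using tell() so buffered-but-unflushed writes are still counted (the class name and default limit are assumptions):
class SizeLimitedFile(object):
    """Refuses writes that would push the file past max_bytes."""
    def __init__(self, f, max_bytes=2 * 1024 ** 3 - 1):
        self.f = f
        self.max_bytes = max_bytes
    def write(self, data):
        # tell() reflects buffered writes too, so no flush is needed here
        if self.f.tell() + len(data) > self.max_bytes:
            raise IOError("write would exceed %d bytes" % self.max_bytes)
        self.f.write(data)
    def __getattr__(self, name):
        return getattr(self.f, name)  # delegate everything else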
| 0 | 1 | 0 | 0 |
2009-12-08T14:33:00.000
| 7 | 0.057081 | false | 1,867,357 | 0 | 0 | 0 | 1 |
There's a file that I would like to make sure does not grow larger than 2 GB (as it must run on a system that uses ext 2). What's a good way to check a file's size bearing in mind that I will be writing to this file in between checks? In particular, do I need to worry about buffered, unflushed changes that haven't been written to disk yet?
|
Determine if Python is running inside virtualenv
| 57,109,196 | 5 | 397 | 237,037 | 0 |
python,virtualenv
|
The easiest way is to just run which python; if you are in a virtualenv, it will point to the virtualenv's Python instead of the global one.
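From inside Python itself, a sketch that checks the interpreter prefixes (real_prefix is set by virtualenv; base_prefix covers the stdlib venv module):
import sys

def in_virtualenv():
    return (hasattr(sys, "real_prefix") or
            (hasattr(sys, "base_prefix") and sys.base_prefix != sys.prefix))

print(in_virtualenv())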
| 0 | 1 | 0 | 0 |
2009-12-09T04:18:00.000
| 15 | 0.066568 | false | 1,871,549 | 1 | 0 | 0 | 3 |
Is it possible to determine if the current script is running inside a virtualenv environment?
|