Dataset columns:
Q_CreationDate: string, length 23
Title: string, length 11 to 149
Question: string, length 25 to 6.53k
Answer: string, length 15 to 5.1k
Score: float64, -1 to 1.2
Is_accepted: bool, 2 classes
N_answers: int64, 1 to 17
Q_Id: int64, 0 to 6.76k
2009-02-05 21:19:20.087
How to clear the interpreter console?
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
OK, so this is a much less technical answer, but I'm using the Python plugin for Notepad++ and it turns out you can just clear the console manually by right-clicking on it and clicking "clear". Hope this helps someone out there!
0.013605
false
7
94
2009-02-05 21:19:20.087
How to clear the interpreter console?
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
Use IDLE. It has many handy features; Ctrl+F6, for example, resets the console. Closing and reopening the console are also good ways to clear it.
0.054368
false
7
94
2009-02-05 21:19:20.087
How to clear the interpreter console?
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
Just use this: print '\n' * 1000. (A system-call variant is sketched below.)
0.027204
false
7
94
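A minimal sketch of the system-call approach the question mentions, wrapped in a helper; note this shells out to the OS rather than commanding the interpreter itself:

    import os

    def clear():
        # 'cls' on Windows, 'clear' on POSIX terminals.
        os.system('cls' if os.name == 'nt' else 'clear')

At the >>> prompt you then just call clear().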
2009-02-07 00:27:40.520
how to search for specific file type with yahoo search API?
Does anyone know if there is some parameter available for programmatic search on yahoo allowing to restrict results so only links to files of specific type will be returned (like PDF for example)? It's possible to do that in GUI, but how to make it happen through API? I'd very much appreciate a sample code in Python, but any other solutions might be helpful as well.
Thank you. I found that something like this works OK (the file type is the first argument, and the query is the rest):

    format = sys.argv[1]
    query = " ".join(sys.argv[2:])
    srch = create_search("Web", app_id, query=query, format=format)
1.2
true
1
95
2009-02-07 16:30:31.217
Insert Command into Bash Shell
Is there any way to inject a command into a bash prompt in Linux? I am working on a command history app - like the Ctrl+R lookup but different. I am using python for this. I will show a list of commands from history based on the user's search term - if the user presses enter, the app will execute the command and print the results. So far, so good. If the user chooses a command and then press the right or left key, I want to insert the command into the prompt - so that the user can edit the command before executing it. If you are on Linux, just fire up a bash console, press Ctrl+r, type cd(or something), and then press the right arrow key - the selected command will be shown at the prompt. This is the functionality I am looking for - but I want to know how to do that from within python.
You can do this, but only if the shell runs as a subprocess of your Python program; you can't feed content into the stdin of your parent process. (If you could, UNIX would have a host of related security issues when folks run processes with fewer privileges than the calling shell!) If you're familiar with how Expect allows passthrough to interactive subprocesses (with specific key sequences from the user or strings received from the child process triggering matches and sending control back to your program), the same thing can be done from Python with pexpect. Alternately, as another post mentioned, the curses module provides full control over the drawing of terminal displays -- which you'll want if this history menu is happening within the window rather than in a graphical (X11/win32) pop-up.
0.296905
false
1
96
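A minimal sketch of the pexpect passthrough idea from the answer above; bash and the injected command are illustrative:

    import pexpect

    # Spawn bash as a child of our Python program (required: we control
    # the child's stdin, not our parent shell's).
    child = pexpect.spawn('/bin/bash')
    child.send('cd /tmp')   # pre-type a command without pressing Enter
    child.interact()        # hand the terminal to the user to edit/run it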
2009-02-07 23:18:03.840
Python, SQLite and threading
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I'd do a connection per thread to the database will I also be able to create/use an in memory database?
Depending on the application the DB could be a real overhead. If we are talking about volatile data, maybe you could skip the communication via DB completely and share the data between the data gathering process and the data serving process(es) via IPC. This is not an option if the data has to be persisted, of course.
0
false
4
97
2009-02-07 23:18:03.840
Python, SQLite and threading
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I'd do a connection per thread to the database will I also be able to create/use an in memory database?
"...create several threads that will gather data at a specified interval and cache that data locally into a sqlite database. Then in the main thread start a CherryPy app that will query that sqlite db and serve the data." Don't waste a lot of time on threads. The things you're describing are simply OS processes. Just start ordinary processes to do gathering and run Cherry Py. You have no real use for concurrent threads in a single process for this. Gathering data at a specified interval -- when done with simple OS processes -- can be scheduled by the OS very simply. Cron, for example, does a great job of this. A CherryPy App, also, is an OS process, not a single thread of some larger process. Just use processes -- threads won't help you.
0.067922
false
4
97
2009-02-07 23:18:03.840
Python, SQLite and threading
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I'd do a connection per thread to the database will I also be able to create/use an in memory database?
Short answer: don't use sqlite3 in a threaded application. SQLite databases scale well for size, but rather terribly for concurrency; you will be plagued with "Database is locked" errors. If you do, you will need a connection per thread, and you have to ensure that these connections clean up after themselves. This is traditionally handled using thread-local sessions, and is done rather well (for example) by SQLAlchemy's ScopedSession. I would use this if I were you, even if you aren't using the SQLAlchemy ORM features. (A sketch follows below.)
1.2
true
4
97
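A minimal sketch of those thread-local sessions with SQLAlchemy's scoped_session; the engine URL and the query are illustrative:

    from sqlalchemy import create_engine, text
    from sqlalchemy.orm import scoped_session, sessionmaker

    engine = create_engine('sqlite:///cache.db')
    Session = scoped_session(sessionmaker(bind=engine))

    def worker():
        session = Session()              # each thread gets its own session
        try:
            session.execute(text("SELECT 1"))
            session.commit()
        finally:
            Session.remove()             # dispose of the thread-local session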
2009-02-07 23:18:03.840
Python, SQLite and threading
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I'd do a connection per thread to the database will I also be able to create/use an in memory database?
Depending on the data rate sqlite could be exactly the correct way to do this. The entire database is locked for each write so you aren't going to scale to 1000s of simultaneous writes per second. But if you only have a few it is the safest way of assuring you don't overwrite each other.
0
false
4
97
2009-02-09 07:05:27.367
How to modify existing panels in Maya using MEL or Python?
I've been writing tools in Maya for years using MEL and Python. I'd consider myself an expert in custom window/gui design in Maya except for one area; modifying existing panels and editors. Typically, I'm building tools that need totally custom UIs, so its customary for me to build them from scratch. However, recently I've found myself wanting to add some additional functionality to the layer editor in Maya. I've seen tutorials that explain how to do this, but now that I want to do it, I can't find any. Links to tutorials or a brief code snippet to get me started would be great. I just need to know how to find the layer editor/panel and, say, add a button or text field to it.
Have you tried searching for the UI item names in the MEL files under the Maya installation directory? The layer editor should be in one of the bundled MEL scripts, and from there you can just modify it.
1.2
true
1
98
2009-02-11 09:19:22.170
Outputting data a row at a time from mysql using sqlalchemy
I want to fetch data from a MySQL database using SQLAlchemy and use the data in a different class. Basically, I fetch a row at a time, use the data, fetch another row, use the data, and so on. I am running into some problems doing this. Basically, how do I output data a row at a time from MySQL? I have looked into the tutorials, but they are not helping much.
Exactly what problems are you running into? You can simply iterate over the ResultProxy object:

    for row in conn_or_sess_or_engine.execute(selectable_obj_or_SQLstring):
        do_something_with(row)
0.201295
false
1
99
2009-02-11 16:11:44.397
Standard python interpreter has a vi command mode?
I was working a bit in the python interpreter (python 2.4 on RHEL 5.3), and suddenly found myself in what seems to be a 'vi command mode'. That is, I can edit previous commands with typical vi key bindings, going left with h, deleting with x... I love it - the only thing is, I don't know how I got here (perhaps it's through one of the modules I've imported: pylab/matplotlib?). Can anyone shed some light on how to enable this mode in the interpreter?
For Mac OS X 10.10.3 with Python 2.7, vi mode can be configured by placing bind -v in ~/.editrc. The last few paragraphs of the man page hint at this.
0.545705
false
1
100
2009-02-12 20:41:14.033
Granularity of Paradigm Mixing
When using a multi-paradigm language such as Python, C++, D, or Ruby, how much do you mix paradigms within a single application? Within a single module? Do you believe that mixing the functional, procedural and OO paradigms at a fine granularity leads to clearer, more concise code because you're using the right tool for every subproblem, or an inconsistent mess because you're doing similar things 3 different ways?
Different problems require different solutions, but it helps if you solve things the same way in the same layer, and varying too wildly will just confuse you and everyone else in the project. For C++, I've found that statically typed OOP (use zope.interface in Python) works well for the higher-level parts (connecting, updating, signaling, etc.) and that functional style solves many lower-level problems (parsing, nuts-'n'-bolts data processing, etc.) more nicely. And usually, a dynamically typed scripting system is good for selecting and configuring the specific app, game level, whatnot. This may be the language itself (i.e. Python) or something else (an XML-script engine plus the necessary system for dynamic links in C++).
0
false
4
101
2009-02-12 20:41:14.033
Granularity of Paradigm Mixing
When using a multi-paradigm language such as Python, C++, D, or Ruby, how much do you mix paradigms within a single application? Within a single module? Do you believe that mixing the functional, procedural and OO paradigms at a fine granularity leads to clearer, more concise code because you're using the right tool for every subproblem, or an inconsistent mess because you're doing similar things 3 different ways?
Mixing paradigms has the advantage of letting you express solutions in the most natural and easy way, which is a very good thing when it helps keep your program logic smaller. For example, filtering a list by some criterion is several times simpler to express with a functional solution compared to a traditional loop. On the other hand, to benefit from mixing two or more paradigms, a programmer should be reasonably fluent with all of them. So this is a powerful tool that should be used with care.
0.101688
false
4
101
2009-02-12 20:41:14.033
Granularity of Paradigm Mixing
When using a multi-paradigm language such as Python, C++, D, or Ruby, how much do you mix paradigms within a single application? Within a single module? Do you believe that mixing the functional, procedural and OO paradigms at a fine granularity leads to clearer, more concise code because you're using the right tool for every subproblem, or an inconsistent mess because you're doing similar things 3 different ways?
I am not sure that I ever think about it like this. Once you start "thinking in Ruby" the multi-paradigms just merge into ... well, Ruby. Ruby is object-oriented, but I find that other things, such as the functional aspect, tend to mean that some of the "traditional" design patterns present in OO languages are just simply not relevant. The iterator is a classic example ... iteration is something that is handled elegantly in Ruby, and the heavy-weight OO iteration patterns no longer really apply. This seems to be true throughout the language.
0.201295
false
4
101
2009-02-12 20:41:14.033
Granularity of Paradigm Mixing
When using a multi-paradigm language such as Python, C++, D, or Ruby, how much do you mix paradigms within a single application? Within a single module? Do you believe that mixing the functional, procedural and OO paradigms at a fine granularity leads to clearer, more concise code because you're using the right tool for every subproblem, or an inconsistent mess because you're doing similar things 3 different ways?
Different paradigms mix in different ways. For example, using OOP doesn't eliminate the use of subroutines and procedural code from an outside library. It merely moves the procedures around into a different place. It is impossible to program purely in one paradigm. You may think you have a single one in mind when you program, but that's your illusion. Your resultant code will land along the borders and within the bounds of many paradigms.
0.201295
false
4
101
2009-02-17 13:16:25.157
Python list serialization - fastest method
I need to load (de-serialize) a pre-computed list of integers from a file in a Python script (into a Python list). The list is large (up to millions of items), and I can choose the format I store it in, as long as loading is fastest. Which is the fastest method, and why?

- Using import on a .py file that just contains the list assigned to a variable
- Using cPickle's load
- Some other method (perhaps numpy?)

Also, how can one benchmark such things reliably? Addendum: measuring this reliably is difficult, because import is cached so it can't be executed multiple times in a test. The loading with pickle also gets faster after the first time, probably because of page precaching by the OS. Loading 1 million numbers with cPickle takes 1.1 sec the first time, and 0.2 sec on subsequent executions of the script. Intuitively I feel cPickle should be faster, but I'd appreciate numbers (this is quite a challenge to measure, I think). And yes, it's important for me that this performs quickly. Thanks
cPickle will be the fastest since the data is saved in binary and no real Python code has to be parsed. Other advantages are that it is arguably safer than import (it does not execute arbitrary top-level code, though unpickling untrusted data is still unsafe) and that you have no problems with setting $PYTHONPATH correctly.
0.067922
false
1
102
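A rough timing sketch for the cPickle option, assuming Python 2 as in the question; taking the minimum of several runs dampens the page-cache effect the asker noticed:

    import time
    import cPickle

    # Write one million ints once.
    with open('numbers.pkl', 'wb') as f:
        cPickle.dump(range(1000000), f, cPickle.HIGHEST_PROTOCOL)

    # Time several loads and keep the best.
    timings = []
    for _ in range(5):
        start = time.time()
        with open('numbers.pkl', 'rb') as f:
            data = cPickle.load(f)
        timings.append(time.time() - start)
    print min(timings)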
2009-02-17 14:50:22.530
How to re-use a reusable app in Django
I am trying to create my first site in Django and as I'm looking for example apps out there to draw inspiration from, I constantly stumble upon a term called "reusable apps". I understand the concept of an app that is reusable easy enough, but the means of reusing an app in Django are quite lost for me. Few questions that are bugging me in the whole business are: What is the preferred way to re-use an existing Django app? Where do I put it and how do I reference it? From what I understand, the recommendation is to put it on your "PYTHONPATH", but that breaks as soon as I need to deploy my app to a remote location that I have limited access to (e.g. on a hosting service). So, if I develop my site on my local computer and intend to deploy it on an ISP where I only have ftp access, how do I re-use 3rd party Django apps so that if I deploy my site, the site keeps working (e.g. the only thing I can count on is that the service provider has Python 2.5 and Django 1.x installed)? How do I organize my Django project so that I could easily deploy it along with all of the reusable apps I want to use?
An old question, but here's what I do: if you're using a version control system (VCS), I suggest putting all of the reusable apps and libraries (including Django) that your software needs in the VCS. If you don't want to put them directly under your project root, you can modify settings.py to add their location to sys.path (sketched below). After that, deployment is as simple as cloning or checking out the VCS repository to wherever you want to use it. This has two added benefits:

- No version mismatches: your software always uses the version that you tested it with, and not the version that was available at the time of deployment.
- If multiple people work on the project, nobody else has to deal with installing the dependencies.

When it's time to update a component's version, update it in your VCS and then propagate the update to your deployments via it.
0.386912
false
1
103
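A minimal sketch of that sys.path tweak at the top of settings.py, assuming the vendored apps live in a vendor/ directory beside it:

    # settings.py (top of file)
    import os
    import sys

    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
    # Make vendored reusable apps importable without system-wide installs.
    sys.path.insert(0, os.path.join(PROJECT_ROOT, 'vendor'))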
2009-02-18 15:25:25.633
How to convert an integer to the shortest url-safe string in Python?
I want the shortest possible way of representing an integer in a URL. For example, 11234 can be shortened to '2be2' using hexadecimal. Since base64 is a 64-character encoding, it should be possible to represent an integer in base64 using even fewer characters than hexadecimal. The problem is I can't figure out the cleanest way to convert an integer to base64 (and back again) using Python. The base64 module has methods for dealing with bytestrings - so maybe one solution would be to convert an integer to its binary representation as a Python string... but I'm not sure how to do that either.
If you are looking for a way to shorten the integer representation using base64, I think you need to look elsewhere. When you encode something with base64 it doesn't get shorter, in fact it gets longer. E.g. 11234 encoded with base64 would yield MTEyMzQ= When using base64 you have overlooked the fact that you are not converting just the digits (0-9) to a 64 character encoding. You are converting 3 bytes into 4 bytes so you are guaranteed your base64 encoded string would be 33.33% longer.
0.054368
false
2
104
2009-02-18 15:25:25.633
How to convert an integer to the shortest url-safe string in Python?
I want the shortest possible way of representing an integer in a URL. For example, 11234 can be shortened to '2be2' using hexadecimal. Since base64 is a 64-character encoding, it should be possible to represent an integer in base64 using even fewer characters than hexadecimal. The problem is I can't figure out the cleanest way to convert an integer to base64 (and back again) using Python. The base64 module has methods for dealing with bytestrings - so maybe one solution would be to convert an integer to its binary representation as a Python string... but I'm not sure how to do that either.
Base64 takes 4 bytes/characters to encode 3 bytes and can only encode multiples of 3 bytes (and adds padding otherwise). So representing 4 bytes (your average int) in Base64 would take 8 bytes. Encoding the same 4 bytes in hex would also take 8 bytes. So you wouldn't gain anything for a single int.
0.108416
false
2
104
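An illustrative sketch of going via big-endian bytes and URL-safe base64; the helper names are made up, and note that (as both answers predict) the result is no shorter than hex for a 4-byte int:

    import base64
    import struct

    def int_to_b64(n):
        raw = struct.pack('>I', n)              # 4-byte unsigned big-endian
        return base64.urlsafe_b64encode(raw).rstrip('=')

    def b64_to_int(s):
        raw = base64.urlsafe_b64decode(s + '=' * (-len(s) % 4))
        return struct.unpack('>I', raw)[0]

    print int_to_b64(11234)   # 'AAAr4g' -- the leading zero bytes make it
                              # longer than the hex '2be2'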
2009-02-19 19:50:27.033
Python: Am I missing something?
I'm in the process of learning Python while implementing build scripts and such. And for the moment everything is working fine in that the scripts do what they need to do. But I keep having the feeling I'm missing something, such as "The Python Way". I know build scripts and glue scripts are not really the most exciting development work and may hardly be a candidate for revealing the true power of Python but I'd still like the opportunity to have my mind blown. I develop mostly in C# and I find that my Python code looks awfully similar in structure and style to a lot of my C# code. In other words I feel like I'm thinking in C# but writing in Python. Am I really missing something? (Note: I realize this isn't so much a programming question and it's quite broad and there may not be a definitive answer so mod me down into oblivion if you have to.)
I would suggest finding a personal python guru. Show them some of your code and have them review/rewrite it into idiomatic python. Thus will you be enlightened.
0
false
5
105
2009-02-19 19:50:27.033
Python: Am I missing something?
I'm in the process of learning Python while implementing build scripts and such. And for the moment everything is working fine in that the scripts do what they need to do. But I keep having the feeling I'm missing something, such as "The Python Way". I know build scripts and glue scripts are not really the most exciting development work and may hardly be a candidate for revealing the true power of Python but I'd still like the opportunity to have my mind blown. I develop mostly in C# and I find that my Python code looks awfully similar in structure and style to a lot of my C# code. In other words I feel like I'm thinking in C# but writing in Python. Am I really missing something? (Note: I realize this isn't so much a programming question and it's quite broad and there may not be a definitive answer so mod me down into oblivion if you have to.)
Think of it like this: if you are writing too much for little work, something is wrong; this is not Pythonic. Most Python code you will write is very simple and direct. Usually you don't need much work for anything simple. If you are writing too much, stop and think whether there is a better way. (And this is how I learned many things in Python!)
0.101688
false
5
105
2009-02-19 19:50:27.033
Python: Am I missing something?
I'm in the process of learning Python while implementing build scripts and such. And for the moment everything is working fine in that the scripts do what they need to do. But I keep having the feeling I'm missing something, such as "The Python Way". I know build scripts and glue scripts are not really the most exciting development work and may hardly be a candidate for revealing the true power of Python but I'd still like the opportunity to have my mind blown. I develop mostly in C# and I find that my Python code looks awfully similar in structure and style to a lot of my C# code. In other words I feel like I'm thinking in C# but writing in Python. Am I really missing something? (Note: I realize this isn't so much a programming question and it's quite broad and there may not be a definitive answer so mod me down into oblivion if you have to.)
Write some Python code and post it on SO for review and feedback on whether it is Pythonic.
0.050976
false
5
105
2009-02-19 19:50:27.033
Python: Am I missing something?
I'm in the process of learning Python while implementing build scripts and such. And for the moment everything is working fine in that the scripts do what they need to do. But I keep having the feeling I'm missing something, such as "The Python Way". I know build scripts and glue scripts are not really the most exciting development work and may hardly be a candidate for revealing the true power of Python but I'd still like the opportunity to have my mind blown. I develop mostly in C# and I find that my Python code looks awfully similar in structure and style to a lot of my C# code. In other words I feel like I'm thinking in C# but writing in Python. Am I really missing something? (Note: I realize this isn't so much a programming question and it's quite broad and there may not be a definitive answer so mod me down into oblivion if you have to.)
To echo TLHOLADAY, read the standard library. That's where the "pythonic" stuff is. If you're not getting a good feel there, then read the source for SQLAlchemy or Django or your project of choice.
0
false
5
105
2009-02-19 19:50:27.033
Python: Am I missing something?
I'm in the process of learning Python while implementing build scripts and such. And for the moment everything is working fine in that the scripts do what they need to do. But I keep having the feeling I'm missing something, such as "The Python Way". I know build scripts and glue scripts are not really the most exciting development work and may hardly be a candidate for revealing the true power of Python but I'd still like the opportunity to have my mind blown. I develop mostly in C# and I find that my Python code looks awfully similar in structure and style to a lot of my C# code. In other words I feel like I'm thinking in C# but writing in Python. Am I really missing something? (Note: I realize this isn't so much a programming question and it's quite broad and there may not be a definitive answer so mod me down into oblivion if you have to.)
To add to the answers of Andrew Hare and Baishampayan Ghose... To learn the idiom of any language must involve reading code written in that idiom. I'm still learning the Python idiom, but I've been through this with other languages. I can read about list comprehensions, but the lightbulb only really comes on when you see such things in use and say, "Wow! That's awesome! Two lines of code and it's crystal clear!" So go find some pythonic code that you find interesting and start reading it and understanding it. The knowledge will stay in your head better if you see everything in the context of a working program.
0.076382
false
5
105
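For instance, the kind of two-line clarity that answer alludes to; the data here is made up:

    # A filtering loop collapsed into one readable list comprehension.
    prices = [9.99, 102.50, 48.00, 260.00]
    expensive = [p for p in prices if p > 50]
    print expensive   # [102.5, 260.0]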
2009-02-20 16:04:25.480
Statistics with numpy
I am working on some plots and statistics for work and I am not sure how I can do some statistics using numpy: I have a list of prices and another one of basePrices. And I want to know how many prices are X percent above basePrice, and how many are Y percent above basePrice. Is there a simple way to do that using numpy?
In addition to df's answer, if you want to know the specific prices that are above the base prices, you can do:

    prices[prices > (1.10 * base_prices)]
0.101688
false
1
106
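A small sketch of the counting part of the question, assuming prices and base_prices are numpy arrays; the numbers are illustrative:

    import numpy as np

    prices = np.array([10.5, 21.0, 5.0, 8.0])
    base_prices = np.array([10.0, 20.0, 5.5, 7.0])

    x = 0.05  # 5 percent
    # The comparison is elementwise; summing the boolean array counts the Trues.
    count = np.sum(prices > (1 + x) * base_prices)
    print count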
2009-02-21 19:39:59.540
Set up a scheduled job?
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a cron job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
I ran into something similar to your problem today. I didn't want to have it handled by the server through cron (and most of the libs were just cron helpers in the end), so I've created a scheduling module and attached it to the init. It's not the best approach, but it helps me to have all the code in a single place, with its execution related to the main app. (A sketch of the idea follows below.)
0.034
false
1
107
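A minimal sketch of such an in-app scheduler using a stdlib timer thread; the interval and the job are illustrative:

    import threading

    def update_database():
        # placeholder for the periodic calculations/updates
        pass

    def schedule(interval, job):
        def runner():
            job()
            schedule(interval, job)   # re-arm after each run
        timer = threading.Timer(interval, runner)
        timer.daemon = True           # die with the main app
        timer.start()

    schedule(3600, update_database)   # run hourly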
2009-02-22 00:58:37.450
How do YOU deploy your WSGI application? (and why it is the best way)
Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this. So how can it be done?

1. Apache mod-wsgi (the other mod-wsgi's seem to not be worth it)
2. Pure Python web server, e.g. paste, cherrypy, Spawning, Twisted.web
3. As 2, but with a reverse proxy from nginx, apache2, etc., with good static file handling
4. Conversion to another protocol such as FCGI with a bridge (e.g. Flup), running in a conventional web server
5. More?

I want to know how you do it, and why it is the best way to do it. I would absolutely love you to bore me with details about the whats and the whys, application-specific stuff, etc. I will upvote any non-insane answer.
Apache + mod_wsgi. Simple, clean (only four lines of webserver config), and easy for other sysadmins to get their head around.
0.04532
false
4
108
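For reference, a minimal sketch of the Python side of such an Apache + mod_wsgi deployment; mod_wsgi looks for a module-level callable named application, and the greeting is illustrative:

    # wsgi.py -- pointed to by the WSGIScriptAlias line in the Apache config.
    def application(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello from mod_wsgi\n']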
2009-02-22 00:58:37.450
How do YOU deploy your WSGI application? (and why it is the best way)
Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this. So how can it be done?

1. Apache mod-wsgi (the other mod-wsgi's seem to not be worth it)
2. Pure Python web server, e.g. paste, cherrypy, Spawning, Twisted.web
3. As 2, but with a reverse proxy from nginx, apache2, etc., with good static file handling
4. Conversion to another protocol such as FCGI with a bridge (e.g. Flup), running in a conventional web server
5. More?

I want to know how you do it, and why it is the best way to do it. I would absolutely love you to bore me with details about the whats and the whys, application-specific stuff, etc. I will upvote any non-insane answer.
Apache httpd + mod_fcgid using web.py (which is a wsgi application). Works like a charm.
0.135221
false
4
108
2009-02-22 00:58:37.450
How do YOU deploy your WSGI application? (and why it is the best way)
Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this. So how can it be done?

1. Apache mod-wsgi (the other mod-wsgi's seem to not be worth it)
2. Pure Python web server, e.g. paste, cherrypy, Spawning, Twisted.web
3. As 2, but with a reverse proxy from nginx, apache2, etc., with good static file handling
4. Conversion to another protocol such as FCGI with a bridge (e.g. Flup), running in a conventional web server
5. More?

I want to know how you do it, and why it is the best way to do it. I would absolutely love you to bore me with details about the whats and the whys, application-specific stuff, etc. I will upvote any non-insane answer.
We are using pure Paste for some of our web services. It is easy to deploy (with our internal deployment mechanism; we're not using Paste Deploy or anything like that) and it is nice to minimize the difference between production systems and what's running on developers' workstations. Caveat: we don't expect low latency out of Paste itself because of the heavyweight nature of our requests. In some crude benchmarking we did we weren't getting fantastic results; it just ended up being moot due to the expense of our typical request handler. So far it has worked fine. Static data has been handled by completely separate (and somewhat "organically" grown) stacks, including the use of S3, Akamai, Apache and IIS, in various ways.
0.04532
false
4
108
2009-02-22 00:58:37.450
How do YOU deploy your WSGI application? (and why it is the best way)
Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this. So how can it be done?

1. Apache mod-wsgi (the other mod-wsgi's seem to not be worth it)
2. Pure Python web server, e.g. paste, cherrypy, Spawning, Twisted.web
3. As 2, but with a reverse proxy from nginx, apache2, etc., with good static file handling
4. Conversion to another protocol such as FCGI with a bridge (e.g. Flup), running in a conventional web server
5. More?

I want to know how you do it, and why it is the best way to do it. I would absolutely love you to bore me with details about the whats and the whys, application-specific stuff, etc. I will upvote any non-insane answer.
Nginx reverse proxy and static file sharing + XSendfile + uploadprogress_module. Nothing beats it for the purpose. On the WSGI side either Apache + mod_wsgi or cherrypy server. I like to use cherrypy wsgi server for applications on servers with less memory and less requests. Reasoning: I've done benchmarks with different tools for different popular solutions. I have more experience with lower level TCP/IP than web development, especially http implementations. I'm more confident that I can recognize a good http server than I can recognize a good web framework. I know Twisted much more than Django or Pylons. The http stack in Twisted is still not up to this but it will be there.
0.265586
false
4
108
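A sketch of the standalone CherryPy WSGI server mentioned in the last answer, using the CherryPy 3-era API; the host, port, and app are illustrative:

    from cherrypy import wsgiserver

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello, WSGI\n']

    server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8080), app)
    try:
        server.start()
    except KeyboardInterrupt:
        server.stop()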
2009-02-24 09:15:24.853
Python web programming
Good morning. As the title indicates, I've got some questions about using Python for web development.

1. What is the best setup for a development environment? More specifically, what webserver to use, and how to bind Python with it. Preferably, I'd like it to be implementable in both *nix and win environments. My major concern when I last tried apache + mod_python + CherryPy was having to reload the webserver to see the changes. Is it considered normal? For some reason cherrypy's autoreload didn't work at all.
2. What is the best setup to deploy a working Python app to production, and why? I'm now using lighttpd for my PHP web apps, but how would it do for Python compared to nginx, for example?
3. Is it worth diving straight in with a framework, or rolling something simple of my own? I see that Django has got quite a lot of fans, but I'm thinking it would be overkill for my needs, so I've started looking into CherryPy.
4. How exactly are Python apps served if I have to reload httpd to see the changes? Something like a permanent process spawning child processes, with all the major file includes happening on server start and then just lazy loading needed resources?
5. Python supports multithreading; do I need to look into using that for a benefit when developing web apps? What would be that benefit, and in what situations?

Big thanks!
What is the best setup for a development environment? Doesn't much matter. We use Django, which runs in Windows and Unix nicely. For production, we use Apache on Red Hat.

Is having to reload the webserver to see the changes considered normal? Yes. Not clear why you'd want anything different. Web application software shouldn't be dynamic; content yes, software no. In Django, we develop without using a web server of any kind on our desktop. The Django "runserver" command reloads the application under most circumstances. For development, this works great. The only times it won't reload are when we've damaged things so badly that the app doesn't run properly.

What is the best setup to deploy a working Python app to production, and why? "Best" is undefined in this context, so please provide some qualification for "best" (e.g., "fastest", "cheapest", "bluest").

Is it worth diving straight in with a framework, or rolling something simple of my own? Don't waste time rolling your own. We use Django because of the built-in admin page that we don't have to write or maintain. Saves mountains of work.

How exactly are Python apps served if I have to reload httpd to see the changes? Two methods. Daemon: mod_wsgi or mod_fastcgi have a Python daemon process to which they connect; change your software and restart the daemon. Embedded: mod_wsgi or mod_python have an embedded mode in which the Python interpreter runs inside the mod, inside Apache; you have to restart httpd to restart that embedded interpreter.

Do I need to look into using multithreading? Yes and no. Yes, you do need to be aware of this. No, you don't need to do very much. Apache, mod_wsgi, and Django should handle this for you.
0.573727
false
2
109
2009-02-24 09:15:24.853
Python web programming
Good morning. As the title indicates, I've got some questions about using Python for web development.

1. What is the best setup for a development environment? More specifically, what webserver to use, and how to bind Python with it. Preferably, I'd like it to be implementable in both *nix and win environments. My major concern when I last tried apache + mod_python + CherryPy was having to reload the webserver to see the changes. Is it considered normal? For some reason cherrypy's autoreload didn't work at all.
2. What is the best setup to deploy a working Python app to production, and why? I'm now using lighttpd for my PHP web apps, but how would it do for Python compared to nginx, for example?
3. Is it worth diving straight in with a framework, or rolling something simple of my own? I see that Django has got quite a lot of fans, but I'm thinking it would be overkill for my needs, so I've started looking into CherryPy.
4. How exactly are Python apps served if I have to reload httpd to see the changes? Something like a permanent process spawning child processes, with all the major file includes happening on server start and then just lazy loading needed resources?
5. Python supports multithreading; do I need to look into using that for a benefit when developing web apps? What would be that benefit, and in what situations?

Big thanks!
When you use mod_python on a threaded Apache server (the default on Windows), CherryPy runs in the same process as Apache. In that case, you almost certainly don't want CP to restart the process. Solution: use mod_rewrite or mod_proxy so that CherryPy runs in its own process. Then you can autoreload to your heart's content. :)
0.081452
false
2
109
2009-02-25 15:39:46.823
What are good ways to upload bulk .csv data into a webapp using Django/Python?
I have a very basic CSV file upload module working to bulk upload my user's data into my site. I process the CSV file in the backend with a python script that runs on crontab and then email the user the results of the bulk upload. This process works ok operationally, but my issue is with the format of the csv file. Are there good tools or even basic rules on how to accept different formats of the csv file? The user may have a different order of data columns, slightly different names for the column headers (I want the email column to be entitled "Email", but it may say "Primary Email", "Email Address"), or missing additional data columns. Any good examples of CSV upload functionality that is very permissive and user friendly? Also, how do I tell the user to export as CSV data? I'm importing address book information, so this data often comes from Outlook, Thunderbird, other software packages that have address books. Are there other popular data formats that I should accept?
Look at the csv module in the stdlib. It contains presets for popular CSV dialects, like the one produced by Excel. The reader classes support field mapping, and if the file contains a column header the result does not depend on column order. For more complex logic, like looking up several alternative names for a field, you'll need to write your own implementation (sketched below).
0.067922
false
2
110
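A small sketch of that approach with csv.DictReader, which keys each row by header name so column order stops mattering; the alias table and filename are illustrative:

    import csv

    # Map header variants onto canonical field names.
    ALIASES = {
        'email': 'email',
        'primary email': 'email',
        'email address': 'email',
    }

    with open('contacts.csv', 'rb') as f:   # 'rb' for the Python 2 csv module
        for row in csv.DictReader(f):
            record = {}
            for header, value in row.items():
                key = ALIASES.get(header.strip().lower())
                if key:
                    record[key] = value
            print record.get('email')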
2009-02-25 15:39:46.823
What are good ways to upload bulk .csv data into a webapp using Django/Python?
I have a very basic CSV file upload module working to bulk upload my user's data into my site. I process the CSV file in the backend with a python script that runs on crontab and then email the user the results of the bulk upload. This process works ok operationally, but my issue is with the format of the csv file. Are there good tools or even basic rules on how to accept different formats of the csv file? The user may have a different order of data columns, slightly different names for the column headers (I want the email column to be entitled "Email", but it may say "Primary Email", "Email Address"), or missing additional data columns. Any good examples of CSV upload functionality that is very permissive and user friendly? Also, how do I tell the user to export as CSV data? I'm importing address book information, so this data often comes from Outlook, Thunderbird, other software packages that have address books. Are there other popular data formats that I should accept?
If you copy an Excel table to the clipboard and then paste the result into Notepad, you'll notice that it's tab-separated. I once used that to do bulk imports from most table editors, by copy-pasting data from the editor into a textarea on an HTML page. You can use the textarea's background as a hint for the number of columns, and place your headers at the top to suggest the order to the user. JavaScript processes the pasted data and displays it to the user immediately, with simple prevalidation, making it easy to fix an error and re-paste. When the import button is clicked, the data is validated again and the import results are displayed. Unfortunately, I never heard any feedback about whether this was easy to use or not. Anyway, I still see it as an option when implementing bulk import.
0.067922
false
2
110
2009-02-26 06:44:48.240
Python - How to check if a file is used by another application?
I want to open a file which is periodically written to by another application. This application cannot be modified. I'd therefore like to only open the file when I know it is not being written to by another application. Is there a pythonic way to do this? Otherwise, how do I achieve this in Unix and Windows? edit: I'll try and clarify. Is there a way to check if the current file has been opened by another application? I'd like to start with this question. Whether those other applications read/write is irrelevant for now. I realize it is probably OS dependent, so this may not really be python related right now.
One thing I've done is have Python very temporarily rename the file. If we're able to rename it, then no other process is using it. I only tested this on Windows.
0
false
1
111
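A sketch of that rename heuristic; as the answer says, it was only observed to work on Windows, and the helper name is made up:

    import os

    def appears_in_use(path):
        # On Windows, renaming a file that another process holds open
        # raises an error; rename it back immediately if it succeeds.
        probe = path + '.inuse_probe'
        try:
            os.rename(path, probe)
            os.rename(probe, path)
            return False
        except OSError:
            return True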
2009-02-28 19:55:21.193
Calling function defined in exe
I need to know a way to call a function defined in an exe from a Python script. I know how to call the entire exe from a py file.
Not sure if this is Windows-specific, but you can treat an exe like a DLL (if its functions are exported), and then it can be used by other programs. (A ctypes sketch follows below.)
0.135221
false
2
112
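A heavily hedged sketch of that idea with ctypes on Windows; the exe name and the exported function are hypothetical, and this only works if the exe really exports the symbol:

    import ctypes

    # Load the executable as if it were a DLL (Windows LoadLibrary).
    lib = ctypes.WinDLL('thing.exe')          # hypothetical exe
    lib.some_function.restype = ctypes.c_int  # hypothetical export
    print lib.some_function(42)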
2009-02-28 19:55:21.193
Calling function defined in exe
I need to know a way to call a function defined in an exe from a Python script. I know how to call the entire exe from a py file.
Unless the said executable takes command line arguments which will specify which function to use, I don't think this is possible. With that being said, if you created the EXE, command line arguments are a good way to implement the functionality you're looking for.
0
false
2
112
2009-02-28 20:54:02.567
External classes in Python
I'm just beginning Python, and I'd like to use an external RSS class. Where do I put that class and how do I import it? I'd like to eventually be able to share python programs.
If you want to store your RSS class in a different place, use sys.path.append("<directory>"), put the module in that directory, and then use import module or from module import *. (A sketch follows below.)
0
false
1
113
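A concrete sketch of that, with an illustrative directory and a hypothetical module name:

    import sys

    # Make a directory of your own modules importable.
    sys.path.append('/home/me/pylibs')   # illustrative path

    import rss   # hypothetical module at /home/me/pylibs/rss.py
    feed = rss.RSS()                     # hypothetical class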
2009-03-01 03:40:26.020
How can I manually register distributions with pkg_resources?
I'm trying to get a package installed on Google App Engine. The package relies rather extensively on pkg_resources, but there's no way to run setup.py on App Engine. There's no platform-specific code in the source, however, so it's no problem to just zip up the source and include those in the system path. And I've gotten a version of pkg_resources installed and working as well. The only problem is getting the package actually registered with pkg_resources so when it calls iter_entry_points it can find the appropriate plugins. What methods do I need to call to register modules on sys.path with all the appropriate metadata, and how do I figure out what that metadata needs to be?
On your local development system, run python setup.py bdist_egg, which will create a Zip archive with the necessary metadata included. Add it to your sys.path, and it should work properly.
0
false
2
114
2009-03-01 03:40:26.020
How can I manually register distributions with pkg_resources?
I'm trying to get a package installed on Google App Engine. The package relies rather extensively on pkg_resources, but there's no way to run setup.py on App Engine. There's no platform-specific code in the source, however, so it's no problem to just zip up the source and include those in the system path. And I've gotten a version of pkg_resources installed and working as well. The only problem is getting the package actually registered with pkg_resources so when it calls iter_entry_points it can find the appropriate plugins. What methods do I need to call to register modules on sys.path with all the appropriate metadata, and how do I figure out what that metadata needs to be?
Create a setup.py for the package just as you would normally, and then use "setup.py sdist --formats=zip" to build your source zip. The built source zip will include an .egg-info metadata directory, which will then be findable by pkg_resources. Alternately, you can use bdist_egg for all your packages.
1.2
true
2
114
2009-03-02 20:24:41.853
How do you get default headers in a urllib2 Request?
I have a Python web client that uses urllib2. It is easy enough to add HTTP headers to my outgoing requests. I just create a dictionary of the headers I want to add, and pass it to the Request initializer. However, other "standard" HTTP headers get added to the request as well as the custom ones I explicitly add. When I sniff the request using Wireshark, I see headers besides the ones I add myself. My question is: how do I get access to these headers? I want to log every request (including the full set of HTTP headers), and can't figure out how. Any pointers? In a nutshell: how do I get all the outgoing headers from an HTTP request created by urllib2?
See urllib2.py: do_request (lines 1044 and 1067) and do_open (line 1073). Line 293 sets the defaults: self.addheaders = [('User-agent', client_version)] (only 'User-agent' is added).
0
false
1
115
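One way to see every outgoing header, sketched with urllib2's debug hook; the URL is illustrative:

    import urllib2

    # debuglevel=1 makes the underlying httplib print the full request,
    # including the headers urllib2 adds on its own.
    opener = urllib2.build_opener(urllib2.HTTPHandler(debuglevel=1))
    opener.open('http://example.com/')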
2009-03-05 03:18:42.560
Discovering public IP programmatically
I'm behind a router. I need a simple command to discover my public IP (instead of googling "what's my IP" and clicking one of the results). Are there any standard protocols for this? I've heard about STUN, but I don't know how I can use it. P.S. I'm planning on writing a short python script to do it
Here are a few public services that support IPv4 and IPv6:

curl http://icanhazip.com
curl http://www.trackip.net/ip
curl https://ipapi.co/ip
curl http://api6.ipify.org
curl http://www.cloudflare.com/cdn-cgi/trace
curl http://checkip.dns.he.net

The following seem to support only IPv4 at this time:

curl http://bot.whatismyipaddress.com
curl http://checkip.dyndns.org
curl http://ifconfig.me
curl http://ip-api.com
curl http://api.infoip.io/ip

It's easy to make an HTTP call programmatically, so all should be relatively easy to use, and you can try multiple different URLs in case one fails.
0.025505
false
3
116
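Since the asker plans a Python script, a minimal sketch using one of the services above (any service that returns the bare address in the body will do):

    import urllib2

    ip = urllib2.urlopen('http://icanhazip.com').read().strip()
    print ip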
2009-03-05 03:18:42.560
Discovering public IP programmatically
I'm behind a router. I need a simple command to discover my public IP (instead of googling "what's my IP" and clicking one of the results). Are there any standard protocols for this? I've heard about STUN, but I don't know how I can use it. P.S. I'm planning on writing a short python script to do it
Your simplest way may be to ask some server on the outside of your network. One thing to keep in mind is that different destinations may see a different address for you. The router may be multihomed. And really that's just where problems begin.
0.076382
false
3
116
2009-03-05 03:18:42.560
Discovering public IP programmatically
I'm behind a router. I need a simple command to discover my public IP (instead of googling "what's my IP" and clicking one of the results). Are there any standard protocols for this? I've heard about STUN, but I don't know how I can use it. P.S. I'm planning on writing a short python script to do it
If the network has a UPnP server running on the gateway, you are able to talk to the gateway and ask it for your outside IP address.
0.076382
false
3
116
2009-03-06 16:11:46.483
Tools to ease executing raw SQL with Django ORM
I often need to execute custom sql queries in django, and manually converting query results into objects every time is kinda painful. I wonder how fellow Slackers deal with this. Maybe someone had written some kind of a library to help dealing with custom SQL in Django?
Since the issue is "manually converting query results into objects," the simplest solution is often to see if your custom SQL can fit into an ORM .extra() call rather than being a pure-SQL query. Often it can, and then you let the ORM do all the work of building up objects as usual.
1.2
true
1
117
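An illustrative sketch of that .extra() approach; the Entry model and the table and column names are made up:

    from myapp.models import Entry   # hypothetical model

    # Extra SELECT and WHERE fragments ride along with the ORM query,
    # and Django still builds Entry objects as usual.
    entries = Entry.objects.extra(
        select={'comment_count':
                'SELECT COUNT(*) FROM comments '
                'WHERE comments.entry_id = myapp_entry.id'},
        where=['is_published = 1'],
    )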
2009-03-08 04:44:41.143
For my app, how many threads would be optimal?
I have a simple Python web crawler. It uses SQLite to store its output and also to keep a queue. I want to make the crawler multi-threaded so that it can crawl several pages at a time. I figured I would make a thread and just run several instances of the class at once, so they all run concurrently. But the question is, how many should I run at once? Should I stick to two? Can I go higher? What would be a reasonable limit for a number of threads? Keep in mind that each thread goes out to a web page, downloads the html, runs a few regex searches through it, stores the info it finds in a SQLite db, and then pops the next url off the queue.
It's usually simpler to make multiple concurrent processes. Simply use subprocess to create as many Popens as you feel necessary to run concurrently. There's no "optimal" number. Generally, when you run just one crawler, your PC spends a lot of time waiting. How much? Hard to say. When you're running some small number of concurrent crawlers, you'll see that they take about the same amount of time as one. Your CPU switches among the various processes, filling up the wait time on one with work on the others. When you run some larger number, you see that the overall elapsed time is longer because there's now more to do than your CPU can manage. So the overall process takes longer. You can create a graph that shows how the process scales. Based on this you can balance the number of processes and your desirable elapsed time. Think of it this way: 1 crawler does its job in 1 minute. 100 pages done serially could take 100 minutes. 100 crawlers concurrently might take one hour. Let's say that 25 crawlers finish the job in 50 minutes. You don't know what's optimal until you run various combinations and compare the results.
0.173164
false
3
118
2009-03-08 04:44:41.143
For my app, how many threads would be optimal?
I have a simple Python web crawler. It uses SQLite to store its output and also to keep a queue. I want to make the crawler multi-threaded so that it can crawl several pages at a time. I figured I would make a thread and just run several instances of the class at once, so they all run concurrently. But the question is, how many should I run at once? Should I stick to two? Can I go higher? What would be a reasonable limit for a number of threads? Keep in mind that each thread goes out to a web page, downloads the html, runs a few regex searches through it, stores the info it finds in a SQLite db, and then pops the next url off the queue.
One thing you should keep in mind is that some servers may interpret too many concurrent requests from the same IP address as a DoS attack and abort connections or return error pages for requests that would otherwise succeed. So it might be a good idea to limit the number of concurrent requests to the same server to a relatively low number (5 should be on the safe side).
0.058243
false
3
118
2009-03-08 04:44:41.143
For my app, how many threads would be optimal?
I have a simple Python web crawler. It uses SQLite to store its output and also to keep a queue. I want to make the crawler multi-threaded so that it can crawl several pages at a time. I figured I would make a thread and just run several instances of the class at once, so they all run concurrently. But the question is, how many should I run at once? Should I stick to two? Can I go higher? What would be a reasonable limit for a number of threads? Keep in mind that each thread goes out to a web page, downloads the html, runs a few regex searches through it, stores the info it finds in a SQLite db, and then pops the next url off the queue.
You will probably find your application is bandwidth-limited, not CPU- or I/O-limited. As such, add as many as you like until performance begins to degrade. You may come up against other limits depending on your network setup. If you're behind an ADSL router, for example, there will be a limit on the number of concurrent NAT sessions, which may impact making too many HTTP requests at once. Make too many and your provider may treat you as being infected by a virus or the like. There's also the issue of how many requests the server you're crawling can handle and how much of a load you want to put on it. I wrote a crawler once that used just one thread. It took about a day to process all the information I wanted, at about one page every two seconds. I could've done it faster, but I figured this was less of a burden for the server. So really there's no hard and fast answer. Assuming a 1-5 megabit connection, I'd say you could easily have up to 20-30 threads without any problems.
1.2
true
3
118
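A minimal sketch of the fixed-size worker pool under discussion, with Python 2 module names; the worker count and seed URL are illustrative:

    import threading
    import urllib2
    from Queue import Queue

    NUM_WORKERS = 10   # tune against the bandwidth/NAT caveats above

    queue = Queue()

    def worker():
        while True:
            url = queue.get()
            try:
                html = urllib2.urlopen(url).read()
                # ... regex searches and SQLite writes would go here ...
            finally:
                queue.task_done()

    for _ in range(NUM_WORKERS):
        t = threading.Thread(target=worker)
        t.daemon = True
        t.start()

    queue.put('http://example.com/')   # seed URL
    queue.join()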
2009-03-09 09:57:36.500
Is there a way to configure the Application Pool's "Idle timeout" in web.config?
I know one can set the session timeout. But, if the application itself has received no requests for a given period of time, IIS shuts down the application. This behavior is configurable in the IIS management console, and I know how to do this. Still, I wonder if it is possible to configure this in web.config.
Not in IIS 6. In IIS 6, Application Pools are controlled by Worker Processes, which map to a Request Queue handled by HTTP.sys. HTTP.sys handles the communication with the WWW Server to determine when to start and stop Worker Processes. Since IIS 6 was created before .Net, there are no communication hooks between .Net and the low-level http handlers. ASP.net is implemented as an ISAPI filter, which is loaded by the Worker Process itself. You have a chicken-before-the-egg issue if you are looking at the web.config controlling a worker process. This is primarily why MS did the major re-write of IIS 7, which integrates .Net through the entire request life-cycle, not just the ISAPI filter portion.
1.2
true
1
119
2009-03-09 15:48:55.827
How do I find the Windows common application data folder using Python?
I would like my application to store some data for access by all users. Using Python, how can I find where the data should go?
You can access all of your OS environment variables using the os.environ dictionary in the os module. Choosing which key to use from that dictionary could be tricky, though. In particular, you should remain aware of internationalized (i.e., non-English) versions of Windows when using these paths. os.environ['ALLUSERSPROFILE'] should give you the root directory for all users on the computer, but after that be careful not to hard code subdirectory names like "Application Data," because these directories don't exist on non-English versions of Windows. For that matter, you may want to do some research on what versions of Windows you can expect to have the ALLUSERSPROFILE environment variable set (I don't know myself -- it may be universal). My XP machine here has a COMMONAPPDATA environment variable which points to the All Users\Application Data folder, but my Win2K3 system does not have this environment variable.
0.201295
false
1
120
2009-03-09 21:58:14.723
Python equivalent to java.util.SortedSet?
Does anybody know if Python has an equivalent to Java's SortedSet interface? Here's what I'm looking for: let's say I have an object of type foo, and I know how to compare two objects of type foo to see whether foo1 is "greater than" or "less than" foo2. I want a way of storing many objects of type foo in a list L, so that whenever I traverse the list L, I get the objects in order, according to the comparison method I define. Edit: I guess I could use a dictionary or a list and sort() it every time I modify it, but is this the best way?
Do you have the possibility of using Jython? I just mention it because using TreeMap, TreeSet, etc. is trivial there. Also, if you're coming from a Java background and want to head in a Pythonic direction, Jython is wonderful for making the transition easier, though I recognise that using TreeSet in this case would not be part of such a "transition". For Jython superusers I have a question myself: the blist package can't be imported because it relies on a compiled C extension. But would there be any advantage to using blist instead of TreeSet? Can we generally assume the JVM uses algorithms which are essentially as good as CPython's?
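In plain CPython, a common stand-in for a SortedSet is a list kept ordered with the standard bisect module; a minimal sketch (foos is a hypothetical source of objects whose ordering comes from their comparison methods):

    import bisect

    items = []
    for foo in foos:
        if foo not in items:  # skip duplicates, set-style (linear scan; fine for a sketch)
            bisect.insort(items, foo)  # insert while keeping the list sorted
    # iterating over items now yields the objects in order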
0
false
1
121
2009-03-11 02:32:55.053
Viewing all defined variables
I'm currently working on a computation in the Python shell. What I want is a MATLAB-style listing where you can see all the variables that have been defined up to a point (so I know which names I've used, their values and such). Is there a way to do this, and how?
globals(), locals(), vars(), and dir() may all help you in what you want.
0.573727
false
3
122
2009-03-11 02:32:55.053
Viewing all defined variables
I'm currently working on a computation in the Python shell. What I want is a MATLAB-style listing where you can see all the variables that have been defined up to a point (so I know which names I've used, their values and such). Is there a way to do this, and how?
In my Python 2.7 setup, the same whos command that exists in MATLAB is available (it is provided by the IPython interactive shell; the plain interpreter doesn't have it). It shows the same details as the MATLAB analog (variable name, type, and value/data). Note that whos lists all variables in the "interactive namespace".
0.201295
false
3
122
2009-03-11 02:32:55.053
Viewing all defined variables
I'm currently working on a computation in the Python shell. What I want is a MATLAB-style listing where you can see all the variables that have been defined up to a point (so I know which names I've used, their values and such). Is there a way to do this, and how?
A few things you could use: dir() will give you the list of in-scope names, globals() will give you a dictionary of global variables, and locals() will give you a dictionary of local variables.
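A quick example of the difference:

    x = 1

    def f():
        y = 2
        print locals()    # {'y': 2} -- only names local to f

    print dir()           # names defined so far, e.g. ['__builtins__', ..., 'f', 'x']
    print globals()['x']  # 1 -- globals() maps names to their values
    f()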
1
false
3
122
2009-03-13 00:09:14.370
Email integration
I was wondering if someone could help me out. In some web application, the app will send out emails, say when a new message has been posted. Then instead of signing into the application to post a reply you can just simply reply to the email and it will automatically update the web app with your response. My question is, how is this done and what is it called? Thanks
Generally: 1) Set up a dedicated email account for the purpose. 2) Have a program monitor the mailbox (let's say fetchmail, since that's what I do). 3) When an email arrives at the account, fetchmail downloads the email, writes it to disk, and calls a script or program you have written with the email file as an argument. 4) Your script or program parses the email and takes an appropriate action. The part that's usually mysterious to people is the fetchmail part (#2). Specifically on mail servers (if you control the mail server enough to redirect emails to scripts): 1-3) Configure an address to be piped to a script you have written. 4) Same as above.
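For step 4, a minimal sketch of the script fetchmail (or the mail server) would invoke, using the standard email module (the reply-handling function at the end is hypothetical):

    import sys
    import email

    msg = email.message_from_file(sys.stdin)  # or open(sys.argv[1]) if passed a file
    sender = msg['From']
    subject = msg['Subject']

    # grab the first text/plain part as the body
    body = ''
    for part in msg.walk():
        if part.get_content_type() == 'text/plain':
            body = part.get_payload(decode=True)
            break

    post_reply(sender, subject, body)  # hypothetical: update the web app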
0.443188
false
1
123
2009-03-16 08:22:52.843
Directory checksum with python?
So I'm in the middle of developing a web-based filesystem abstraction layer. Just like a file browser, except it has some extra features like freaky permissions etc. I would like users to be notified somehow about directory changes. So, i.e. when someone uploads a new file via FTP, certain users should get a proper message. It is not required for the message to be extra detailed; I don't really need to show the exact resource changed. The parent directory name should be enough. What approach would you recommend?
A simple approach would be to monitor/check the last modification date of the working directory (using os.stat() for example). Whenever a file in a directory is modified, the working directory's (the directory the file is in) last modification date changes as well. At least this works on the filesystems I am working on (ufs, ext3). I'm not sure if all filesystems do it this way.
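A minimal polling sketch along those lines (notify() is a hypothetical callback that messages the interested users):

    import os
    import time

    def watch(path, interval=5):
        last = os.stat(path).st_mtime
        while True:
            time.sleep(interval)
            current = os.stat(path).st_mtime
            if current != last:  # something in the directory changed
                last = current
                notify(path)  # hypothetical: send the "directory changed" message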
0
false
1
124
2009-03-17 01:33:18.047
How to unlock an sqlite3 db?
Help! My Django-based scripts have locked my SQLite db. Does anyone know how to fix this?
Your database is locked because you have a transaction running somewhere. Stop all your Django apps. If necessary, reboot. It's also remotely possible that you crashed a SQLite client in the middle of a transaction and the file lock was left in place.
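If the lock is merely transient contention rather than a crashed client, raising the busy timeout on the connection can also help; a sketch:

    import sqlite3

    # wait up to 30 seconds for a competing transaction to finish
    # before raising "database is locked" (the default is 5 seconds)
    conn = sqlite3.connect('/path/to/db.sqlite3', timeout=30)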
1.2
true
1
125
2009-03-23 18:57:43.417
Examples for string find in Python
I am trying to find some examples but no luck. Does anyone know of some examples on the net? I would like to know what it returns when it can't find, and how to specify from start to end, which I guess is going to be 0, -1.
If x is a string and you search for y, which is also a string, there are two cases. Case 1: y occurs in x, so x.find(y) returns the index (the position) of the first occurrence of y in x. Case 2: y does not occur in x, so x.find(y) returns -1, meaning y was not found.
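A few concrete examples, including the optional start and end arguments (end is exclusive):

    s = 'hello world'
    print s.find('world')    # 6
    print s.find('x')        # -1, not found
    print s.find('o')        # 4, first occurrence
    print s.find('o', 5)     # 7, search starts at index 5
    print s.find('o', 0, 4)  # -1, index 4 is excluded from the search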
0
false
1
126
2009-03-24 11:34:34.207
How do I notify a process of an SQLite database change done in a different process?
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes. The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database. An "editor" process is any editor for that database: it changes the database constantly. Now I want the player to reflect the editing changes quickly. I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes. I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size. I am thinking about using a log table and triggers, but I wonder if there is a simpler method.
I think in that case, I would make a process to manage the database reads/writes. Each editor that wants to make some modifications to the database makes a call to this process, be it through IPC or the network, or whatever method. This process can then notify the player of a change in the database. The player, when it wants to retrieve some data, should make a request for the data it wants to the process managing the database. (Or the db process tells it what it needs when it notifies it of a change, so no request from the player is needed.) Doing this has the advantage of having only one process accessing the SQLite DB, so there are no locking or concurrency issues on the database.
0.101688
false
5
127
2009-03-24 11:34:34.207
How do I notify a process of an SQLite database change done in a different process?
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes. The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database. An "editor" process is any editor for that database: it changes the database constantly. Now I want the player to reflect the editing changes quickly. I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes. I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size. I am thinking about using a log table and triggers, but I wonder if there is a simpler method.
A relational database is not your best first choice for this. Why? You want all of your editors to pass changes to your player. Your player is -- effectively -- a server for all those editors. Your player needs multiple open connections. It must listen to all those connections for changes. It must display those changes. If the changes are really large, you can move to a hybrid solution where the editors persist the changes and notify the player. Either way, the editors must notify the player that they have a change. It's much, much simpler than the player trying to discover changes in a database. A better design is a server which accepts messages from the editors, persists them, and notifies the player. This server is neither editor nor player, but merely a broker that assures that all the messages are handled. It accepts connections from editors and players. It manages the database. There are two implementations. Server IS the player. Server is separate from the player. The design of the server doesn't change -- only the protocol. When the server is the player, the server calls the player objects directly. When the server is separate from the player, the server writes to the player's socket. When the player is part of the server, player objects are invoked directly when a message is received from an editor. When the player is separate, a small reader collects the messages from a socket and calls the player objects. The player connects to the server and then waits for a stream of information. This can either be input from the editors or references to data that the server persisted in the database. If your message traffic is small enough that network latency is not a problem, the editor sends all the data to the server/player. If message traffic is too large, then the editor writes to a database and sends a message with just a database FK to the server/player. Please clarify "If the editor crashes while notifying, the player is permanently messed up" in your question. This sounds like a poor design for the player service. It can't be "permanently messed up" unless it's not getting state from the various editors. If it is getting state from the editors (but attempting to mirror that state, for example) then you should consider a design where the player simply gets state from the editor and cannot get "permanently messed up".
1.2
true
5
127
2009-03-24 11:34:34.207
How do I notify a process of an SQLite database change done in a different process?
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes. The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database. An "editor" process is any editor for that database: it changes the database constantly. Now I want the player to reflect the editing changes quickly. I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes. I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size. I am thinking about using a log table and triggers, but I wonder if there is a simpler method.
Just open a socket between the two processes and have the editor tell all the players about the update.
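A bare-bones sketch of that notification channel (the port number and message format are arbitrary):

    import socket

    # editor side: fire-and-forget notification after committing a change
    def notify_player():
        s = socket.create_connection(('localhost', 9999))
        s.sendall('changed\n')
        s.close()

    # player side: block until any editor pokes us, then re-read the DB
    def wait_for_changes():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(('localhost', 9999))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            conn.close()
            reload_view()  # hypothetical: re-query SQLite and update the mix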
0.101688
false
5
127
2009-03-24 11:34:34.207
How do I notify a process of an SQLite database change done in a different process?
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes. The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database. An "editor" process is any editor for that database: it changes the database constantly. Now I want the player to reflect the editing changes quickly. I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes. I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size. I am thinking about using a log table and triggers, but I wonder if there is a simpler method.
How many editor processes (why processes?), and how often do you expect updates? This doesn't sound like a good design, especially not considering SQLite really isn't too happy about multiple concurrent accesses to the database. If multiple processes make sense and you want persistence, it would probably be smarter to have the editors notify your player via sockets, pipes, shared memory or the like, and then have the player (aka server process) do the persisting.
0.050976
false
5
127
2009-03-24 11:34:34.207
How do I notify a process of an SQLite database change done in a different process?
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes. The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database. An "editor" process is any editor for that database: it changes the database constantly. Now I want the player to reflect the editing changes quickly. I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes. I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size. I am thinking about using a log table and triggers, but I wonder if there is a simpler method.
If it's on the same machine, the simplest way would be to have a named pipe, with the "player" doing a blocking read() and the "editors" putting a token in the pipe whenever they modify the DB.
0.101688
false
5
127
2009-03-25 04:23:52.267
How do I tell which Python interpreter I'm using?
I am using Python 2.5.2. How can I tell whether it is CPython or IronPython or Jython? Another question: how can I use a DLL developed in VB.NET in my project?
If you are typing "python" to launch it, it is probably CPython. IronPython's executable name is "ipy".
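On Python 2.6 and later you can also ask the interpreter directly; for older versions, sys.platform is a reasonable fallback ('cli' on IronPython, starting with 'java' on Jython). A sketch:

    import platform
    print platform.python_implementation()  # 'CPython', 'IronPython' or 'Jython'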
0.3154
false
1
128
2009-03-26 01:19:41.687
How to re import an updated package while in Python Interpreter?
I often test my module in the Python interpreter, and when I see an error, I quickly update the .py file. But how do I make the change show up in the interpreter? So far, I have been exiting and re-entering the interpreter because re-importing the file is not working for me.
"So far, I have been exiting and re-entering the interpreter because re-importing the file is not working for me." Yes, just saying import again gives you the existing copy of the module from sys.modules. You can say reload(module) to update sys.modules and get a new copy of that single module, but if any other modules have a reference to the original module or any object from the original module, they will keep their old references and Very Confusing Things will happen. So if you've got a module a, which depends on module b, and b changes, you have to ‘reload b' followed by ‘reload a'. If you've got two modules which depend on each other, which is extremely common when those modules are part of the same package, you can't reload them both: if you reload p.a it'll get a reference to the old p.b, and vice versa. The only way to do it is to unload them both at once by deleting their items from sys.modules, before importing them again. This is icky and has some practical pitfalls to do with module entries being None as a failed-relative-import marker. And if you've got a module which passes references to its objects to system modules — for example it registers a codec, or adds a warnings handler — you're stuck; you can't reload the system module without confusing the rest of the Python environment. In summary: for all but the simplest case of one self-contained module being loaded by one standalone script, reload() is very tricky to get right; if, as you imply, you are using a ‘package', you will probably be better off continuing to cycle the interpreter.
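For the simple single-module case, the cycle looks like this (module names are placeholders):

    import mymodule      # first import
    # ... edit mymodule.py in your editor ...
    reload(mymodule)     # re-executes the module and updates sys.modules

    # with a dependency chain (a imports b), reload bottom-up:
    import a, b          # hypothetical modules where a depends on b
    reload(b)
    reload(a)            # a now picks up the fresh b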
0.998005
false
1
129
2009-03-26 13:01:52.357
In Python, how do I take a list and reduce it to a list of duplicates?
I have a list of strings that should be unique. I want to be able to check for duplicates quickly. Specifically, I'd like to be able to take the original list and produce a new list containing any repeated items. I don't care how many times the items are repeated, so it doesn't have to have a word twice if there are two duplicates. Unfortunately, I can't think of a way to do this that wouldn't be clunky. Any suggestions? EDIT: Thanks for the answers and I thought I'd make a clarification. I'm not concerned with having a list of uniques for its own sake. I'm generating the list based off of text files and I want to know what the duplicates are so I can go in the text files and remove them if any show up.
The solutions based on 'set' have a small drawback: they only work for hashable objects. The solution based on itertools.groupby, on the other hand, works for all comparable objects (e.g. dictionaries and lists), provided the input is sorted first, since groupby only groups adjacent equal items.
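For the hashable case, a single pass with two sets is a compact way to get just the repeated items; a sketch:

    def duplicates(items):
        seen, dups = set(), set()
        for item in items:
            if item in seen:
                dups.add(item)  # second (or later) sighting
            else:
                seen.add(item)
        return list(dups)

    print duplicates(['a', 'b', 'a', 'c', 'b'])  # ['a', 'b'] (order not guaranteed)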
0
false
1
130
2009-04-01 05:44:36.447
How do I schedule a process' termination?
I need to run a process, wait a few hours, kill it, and start it again. Is there an easy way that I can accomplish this with Python or Bash? I can run it in the background but how do I identify it to use kill on it?
One idea: Save the process's PID (returned to the parent by fork()) to a file, then either schedule a cron job to kill it or kill it manually, reading the PID from the file. Another option: Create a shell script wrapper that automatically kills and restarts the process. Same as above, but you can keep the PID in memory, sleep for as long as you need, kill the process, then loop.
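A sketch of the wrapper approach in Python 2.6+, which sidesteps PID files entirely by keeping the Popen handle in memory (the command and interval are placeholders):

    import subprocess
    import time

    while True:
        proc = subprocess.Popen(['myserver', '--some-flag'])  # hypothetical command
        time.sleep(4 * 60 * 60)  # let it run for four hours
        proc.terminate()         # send SIGTERM (use proc.kill() if it ignores that)
        proc.wait()              # reap the process before restarting it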
0
false
1
131
2009-04-02 06:35:38.017
Python SAX parser says XML file is not well-formed
I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it? Edit: I think I've found the problem. My character data contains "&lt" and "&gt" characters, presumably from html tags. After being parsed, these are converted to "<" and ">" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening?
I would suggest putting those tags back in and making sure it still works. Then, if you want to take them out, do it one at a time until it breaks. However, I question the wisdom of taking them out. If it's your XML file, you should understand it better. If it's a third-party XML file, you really shouldn't be fiddling with it (until you understand it better :-).
0.201295
false
3
132
2009-04-02 06:35:38.017
Python SAX parser says XML file is not well-formed
I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it? Edit: I think I've found the problem. My character data contains "&lt" and "&gt" characters, presumably from html tags. After being parsed, these are converted to "<" and ">" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening?
I would second the recommendation to try parsing it with another XML parser. That should give an indication as to whether it's the document that's wrong, or the parser. Also, the actual error message might be useful. One fairly common problem, for example, is that the XML declaration (if one is used; it's optional) must be the very first thing in the file -- not even whitespace is allowed before it.
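To see exactly where the parser gives up, let the SAXParseException tell you; a sketch:

    import xml.sax

    try:
        # a bare ContentHandler is enough just to exercise the parser
        xml.sax.parse('file.xml', xml.sax.ContentHandler())
    except xml.sax.SAXParseException as e:
        print 'line %d, column %d: %s' % (
            e.getLineNumber(), e.getColumnNumber(), e.getMessage())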
0
false
3
132
2009-04-02 06:35:38.017
Python SAX parser says XML file is not well-formed
I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it? Edit: I think I've found the problem. My character data contains "&lt" and "&gt" characters, presumably from html tags. After being parsed, these are converted to "<" and ">" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening?
You could load it into Firefox, if you don't have an XML editor. Firefox shows you the error.
0
false
3
132
2009-04-02 11:54:59.983
Import an existing python project to XCode
I've got a Python project I've been working on in the terminal with vim etc. I've read that Xcode supports Python development and that it supports SVN (which I am using), but I can't find documentation on how to start a new Xcode project from an existing code repository. Other developers are working on the project without using Xcode - they won't mind if I add a project file or something, but they will mind if I have to reorganise the whole thing.
There are no special facilities for working with non-Cocoa Python projects with Xcode. Therefore, you probably just want to create a project with the "Empty Project" template (under "Other") and just drag in your source code. For convenience, you may want to set up an executable in the project. You can do this by ctrl/right-clicking in the project source list and choosing "Add" > "New Custom Executable...". You can also add a target, although I'm not sure what this would buy you.
0.101688
false
1
133
2009-04-05 16:07:30.077
How can you make a vote-up-down button like in Stackoverflow?
Problems: how to make Ajax buttons (upward and downward arrows) such that the number can increase or decrease; how to save the action of a user to a variable NumberOfVotesOfQuestionID. I am not sure whether I should use a database or not for the variable. However, I know that there is also an easier way to save the number of votes. How can you solve these problems? [edit] The server-side programming language is Python.
You create the buttons, which can be links or images or whatever. Now hook a JavaScript function up to each button's click event. On clicking, the function fires and sends a request to the server code that says, more or less, +1 or -1. Server code takes over. This will vary wildly depending on what framework you use (or don't) and a bunch of other things. Code connects to the database and runs a query to +1 or -1 the score. How this happens will vary wildly depending on your database design, but it'll be something like UPDATE posts SET score=score+1 WHERE score_id={{insert id here}};. Depending on what the database says, the server returns a success code or a failure code as the AJAX request response. The response gets sent to AJAX, asynchronously. The JS response function updates the score if it's a success code, and displays an error if it's a failure. You can store the count in a variable, but this is complicated and depends on how well you know the semantics of your code's runtime environment. It eventually needs to be pushed to persistent storage anyway, so using the database 100% is a good initial solution. When the time comes to optimize performance, there is plenty of software in the world for caching database queries, so it's not that big a deal.
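A sketch of the server-side piece in Python (the table and column names are made up, and the web framework wiring is omitted):

    import sqlite3
    import json

    def vote(post_id, delta):
        """Apply a +1 or -1 vote and return the new score as JSON."""
        conn = sqlite3.connect('site.db')
        conn.execute('UPDATE posts SET score = score + ? WHERE id = ?',
                     (delta, post_id))
        conn.commit()
        row = conn.execute('SELECT score FROM posts WHERE id = ?',
                           (post_id,)).fetchone()
        conn.close()
        if row is None:
            return json.dumps({'ok': False, 'error': 'no such post'})
        return json.dumps({'ok': True, 'score': row[0]})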
0.296905
false
1
134
2009-04-05 23:34:32.857
Help Me Figure Out A Random Scheduling Algorithm using Python and PostgreSQL
I am trying to do the schedule for the upcoming season for my simulation baseball team. I have an existing Postgresql database that contains the old schedule. There are 648 rows in the database: 27 weeks of series for 24 teams. The problem is that the schedule has gotten predictable and allows teams to know in advance about weak parts of their schedule. What I want to do is take the existing schedule and randomize it. That way teams are still playing each other the proper number of times but not in the same order as before. There is one rule that has been tripping me up: each team can only play one home and one road series PER week. I had been fooling around with SELECT statements based on ORDER BY RANDOM() but I haven't figured out how to make sure a team only has one home and one road series per week. Now, I could do this in PHP (which is the language I am most comfortable with) but I am trying to make the shift to Python so I'm not sure how to get this done in Python. I know that Python doesn't seem to handle two dimensional arrays very well. Any help would be greatly appreciated.
Have you considered keeping your same "schedule", and just shuffling the teams? Generating a schedule where everyone plays each other the proper number of times is possible, but if you already have such a schedule then it's much easier to just shuffle the teams. You could keep your current table, but replace each team in it with an id (0-23, or A-X, or whatever), then randomly generate into another table where you assign each team to each id (0 = TeamJoe, 1 = TeamBob, etc). Then when it's time to shuffle again next year, just regenerate that mapping table. Not sure if this answers the question the way you want, but is probably what I would go with (and is actually how I do it on my fantasy football website).
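A sketch of regenerating that mapping table with the standard random module (team names are placeholders):

    import random

    teams = ['TeamJoe', 'TeamBob', 'TeamSue']  # ... all 24 teams
    shuffled = teams[:]
    random.shuffle(shuffled)

    # id -> team; the schedule keeps referring to the stable ids 0..23
    mapping = dict(enumerate(shuffled))
    print mapping  # e.g. {0: 'TeamSue', 1: 'TeamJoe', 2: 'TeamBob'}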
0.265586
false
2
135
2009-04-05 23:34:32.857
Help Me Figure Out A Random Scheduling Algorithm using Python and PostgreSQL
I am trying to do the schedule for the upcoming season for my simulation baseball team. I have an existing Postgresql database that contains the old schedule. There are 648 rows in the database: 27 weeks of series for 24 teams. The problem is that the schedule has gotten predictable and allows teams to know in advance about weak parts of their schedule. What I want to do is take the existing schedule and randomize it. That way teams are still playing each other the proper number of times but not in the same order as before. There is one rule that has been tripping me up: each team can only play one home and one road series PER week. I had been fooling around with SELECT statements based on ORDER BY RANDOM() but I haven't figured out how to make sure a team only has one home and one road series per week. Now, I could do this in PHP (which is the language I am most comfortable with) but I am trying to make the shift to Python so I'm not sure how to get this done in Python. I know that Python doesn't seem to handle two dimensional arrays very well. Any help would be greatly appreciated.
I'm not sure I fully understand the problem, but here is how I would do it: 1. Create a complete list of matches that need to happen. 2. Iterate over the weeks, selecting which matches happen in each week. You can use Python lists to represent the matches that still need to happen and, for each week, the matches happening that week. In step 2, selecting a match to happen would work this way: a. Use random.choice to select a random match to happen. b. Determine which team has the home series for this match, using random.choice([1,2]) (if it could be a home series for either team). c. Temporarily remove all matches that get blocked by this selection. A match is blocked if one of its teams already has two matches in the week, or if both teams already have a home match in this week, or if both teams already have a road match in this week. d. When there are no available matches left for a week, proceed to the next week, re-adding all the matches that were blocked for the previous week.
0.135221
false
2
135
2009-04-07 00:39:25.627
Import XML into SQL database
I'm working with a 20 gig XML file that I would like to import into a SQL database (preferably MySQL, since that is what I am familiar with). This seems like it would be a common task, but after Googling around a bit I haven't been able to figure out how to do it. What is the best way to do this? I know this ability is built into MySQL 6.0, but that is not an option right now because it is an alpha development release. Also, if I have to do any scripting I would prefer to use Python because that's what I am most familiar with. Thanks.
It may be a common task, but maybe 20GB isn't as common with MySQL as it is with SQL Server. I've done this using SQL Server Integration Services and a bit of custom code. Whether you need either of those depends on what you need to do with 20GB of XML in a database. Is it going to be a single column of a single row of a table? One row per child element? SQL Server has an XML datatype if you simply want to store the XML as XML. This type allows you to do queries using XQuery, allows you to create XML indexes over the XML, and allows the XML column to be "strongly-typed" by referring it to a set of XML schemas, which you store in the database.
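If you do end up scripting it in Python, iterparse lets you stream the 20GB file without loading it into memory; a sketch (the element/table names and the MySQLdb connection details are assumptions):

    import xml.etree.cElementTree as ET
    import MySQLdb

    db = MySQLdb.connect(db='mydb', user='me', passwd='secret')
    cur = db.cursor()

    for event, elem in ET.iterparse('huge.xml'):
        if elem.tag == 'record':  # hypothetical repeating element
            cur.execute('INSERT INTO records (name, value) VALUES (%s, %s)',
                        (elem.findtext('name'), elem.findtext('value')))
            elem.clear()  # crucial: free the element, or memory still grows

    db.commit()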
0
false
1
136
2009-04-07 08:36:07.227
Python distutils, how to get a compiler that is going to be used?
For example, I may use python setup.py build --compiler=msvc or python setup.py build --compiler=mingw32 or just python setup.py build, in which case the default compiler (say, bcpp) will be used. How can I get the compiler name inside my setup.py (e. g. msvc, mingw32 and bcpp, respectively)? UPD.: I don't need the default compiler, I need the one that is actually going to be used, which is not necessarily the default one. So far I haven't found a better way than to parse sys.argv to see if there's a --compiler... string there.
You can subclass the distutils.command.build_ext.build_ext command. Once build_ext.finalize_options() method has been called, the compiler type is stored in self.compiler.compiler_type as a string (the same as the one passed to the build_ext's --compiler option, e.g. 'mingw32', 'gcc', etc...).
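A sketch of that subclass wired into setup():

    from distutils.core import setup, Extension
    from distutils.command.build_ext import build_ext

    class build_ext_probe(build_ext):
        def build_extensions(self):
            # by the time build_extensions() runs, run() has created the
            # CCompiler instance, so compiler_type is e.g. 'msvc' or 'mingw32'
            print 'building with:', self.compiler.compiler_type
            build_ext.build_extensions(self)

    setup(name='example',
          ext_modules=[Extension('example', ['example.c'])],
          cmdclass={'build_ext': build_ext_probe})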
0.496174
false
2
137
2009-04-07 08:36:07.227
Python distutils, how to get a compiler that is going to be used?
For example, I may use python setup.py build --compiler=msvc or python setup.py build --compiler=mingw32 or just python setup.py build, in which case the default compiler (say, bcpp) will be used. How can I get the compiler name inside my setup.py (e. g. msvc, mingw32 and bcpp, respectively)? UPD.: I don't need the default compiler, I need the one that is actually going to be used, which is not necessarily the default one. So far I haven't found a better way than to parse sys.argv to see if there's a --compiler... string there.
import distutils.ccompiler
compiler_name = distutils.ccompiler.get_default_compiler()
-0.067922
false
2
137
2009-04-13 16:20:47.457
Django models - how to filter out duplicate values by PK after the fact?
I build a list of Django model objects by making several queries. Then I want to remove any duplicates, (all of these objects are of the same type with an auto_increment int PK), but I can't use set() because they aren't hashable. Is there a quick and easy way to do this? I'm considering using a dict instead of a list with the id as the key.
If the order doesn't matter, use a dict.
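A sketch of the dict-keyed-by-pk approach (the last duplicate wins, which is harmless since duplicates are the same row):

    def unique_by_pk(objects):
        return dict((obj.pk, obj) for obj in objects).values()

    merged = unique_by_pk(list(qs1) + list(qs2))  # qs1/qs2: your hypothetical queries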
0
false
1
138
2009-04-15 01:05:26.610
How to visualize IP addresses as they change in python?
I've written a little script that collects my external IP address every time I open a new terminal window and appends it, as well as the current time, to a text file. I'm looking for ideas on a way to visualize when/how often my IP address changes. I bounce between home and campus and could separate them using the script, but it would be nice to visualize them separately. I frequently use matplotlib. Any ideas?
There's a section in the matplotlib user guide about drawing bars on a chart to represent ranges. I've never done that myself but it seems appropriate for what you're looking for.
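A minimal sketch using Axes.broken_barh, with made-up (start, duration) spans in hours for the two locations:

    import matplotlib.pyplot as plt

    home = [(0, 5), (9, 3)]      # hypothetical: (start_hour, duration) per IP lease
    campus = [(5, 4), (12, 6)]

    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.broken_barh(home, (10, 8), facecolors='blue')    # one row of bars per location
    ax.broken_barh(campus, (20, 8), facecolors='green')
    ax.set_yticks([14, 24])
    ax.set_yticklabels(['home', 'campus'])
    ax.set_xlabel('hours since first sample')
    plt.show()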
0
false
1
139
2009-04-15 21:13:47.127
Need to build (or otherwise obtain) python-devel 2.3 and add to LD_LIBRARY_PATH
I am supporting an application with a hard dependency on python-devel 2.3.7. The application runs the python interpreter embedded, attempting to load libpython2.3.so - but since the local machine has libpython2.4.so under /usr/lib64, the application is failing. I see that there are RPMs for python-devel (but not version 2.3.x). Another wrinkle is that I don't want to overwrite the existing python under /usr/lib (I don't have su anyway). What I want to do is place the older library somewhere in my home directory (i.e. /home/noahz/lib) and use PATH and LD_LIBRARY_PATH to point to the older version for this application. What I'm trying to find out (but can't seem to craft the right google search for) is: 1) Where do I download python-devel-2.3 or libpython2.3.so.1.0 (if either is available)? 2a) If I can't download python-devel-2.3, how do I build libpython2.3.so from source (I've already downloaded Python-2.3.tgz)? 2b) Is building libpython2.3.so.1.0 from source and pointing to it with LD_LIBRARY_PATH good enough, or am I going to run into other problems (other dependencies)? 3) In general, am I approaching this problem the right way? ADDITIONAL INFO: I attempted to symlink (ln -s) to the later version. This caused the app to fail silently. Distro is Red Hat Enterprise Linux 5 (RHEL5) - for x86_64
You can use the python RPM's linked to from the python home page ChristopheD mentioned. You can extract the RPM's using cpio, as they are just specialized cpio archives. Your method of extracting them to your home directory and setting LD_LIBRARY_PATH and PATH should work; I use this all the time for hand-built newer versions of projects I also have installed. Don't focus on the -devel package though; you need the main package. You can unpack the -devel one as well, but the only thing you'll actually use from it is the libpython2.3.so symlink that points to the actual library, and you can just as well create this by hand. Whether this is the right approach depends on what you are trying to do. If all you're trying to do is to get this one application to run for you personally, then this hack sounds fine. If you wanted to actually distribute something to other people for running this application, and you have no way of fixing the actual application, you should consider building an rpm of the older python version that doesn't conflict with the system-installed one.
1.2
true
1
140
2009-04-18 22:02:45.750
How do I modify sys.path from .htaccess to allow mod_python to see Django?
The host I'm considering for hosting a Django site has mod_python installed, but does not have Django. Django's INSTALL file indicates that I can simply copy the django directory to Python's site-packages directory to install Django, so I suspect that it might be possible to configure Python / mod_python to look for it elsewhere (namely my user space) by modifying sys.path, but I don't know how to change it from .htaccess or mod_python. How do I modify sys.path from .htaccess to allow mod_python to see Django? P.S. I can only access the site via FTP (i.e. no shell access). I realize that it sounds like I should just switch hosts, but there are compelling reasons for me to make this work so I'm at least going to try.
You're using mod_python wrong. It was never intended to serve python web applications. You should be using WSGI for this... or at least FastCGI.
0.135221
false
1
141
2009-04-19 18:00:40.033
Convert timedelta to years?
I need to check if some number of years have passed since some date. Currently I've got timedelta from the datetime module and I don't know how to convert it to years.
How exact do you need it to be? td.days / 365.25 will get you pretty close, if you're worried about leap years.
0.096877
false
4
142
2009-04-19 18:00:40.033
Convert timedelta to years?
I need to check if some number of years have passed since some date. Currently I've got timedelta from the datetime module and I don't know how to convert it to years.
Get the number of days, then divide by 365.2425 (the mean Gregorian year) for years. Divide by 30.436875 (the mean Gregorian month) for months.
0.135221
false
4
142
2009-04-19 18:00:40.033
Convert timedelta to years?
I need to check if some number of years have passed since some date. Currently I've got timedelta from the datetime module and I don't know how to convert it to years.
In the end, what you have is a maths issue. If every 4 years we get an extra day, then divide the timedelta's days not by 365 but by the average year length, (365*4 + 1) / 4 days. That gives years = td.days / ((365*4 + 1) / 4.0), which is the same as years = td.days * 4 / (365*4 + 1).
0.019434
false
4
142
2009-04-19 18:00:40.033
Convert timedelta to years?
I need to check if some number of years have passed since some date. Currently I've got timedelta from the datetime module and I don't know how to convert it to years.
If you're trying to check if someone is 18 years of age, using timedelta will not work correctly on some edge cases because of leap years. For example, someone born on January 1, 2000, will turn 18 exactly 6575 days later on January 1, 2018 (5 leap years included), but someone born on January 1, 2001, will turn 18 exactly 6574 days later on January 1, 2019 (4 leap years included). Thus, if someone is exactly 6574 days old, you can't determine whether they are 17 or 18 without knowing a little more about their birthdate. The correct way to do this is to calculate the age directly from the dates, by subtracting the two years, and then subtracting one if the current month/day precedes the birth month/day.
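The date-based calculation is a one-liner once you have two date objects; a sketch:

    from datetime import date

    def age(born, today=None):
        today = today or date.today()
        # subtract one if the birthday hasn't happened yet this year
        return today.year - born.year - ((today.month, today.day) <
                                         (born.month, born.day))

    print age(date(2000, 1, 1), date(2018, 1, 1))  # 18
    print age(date(2001, 1, 2), date(2019, 1, 1))  # 17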
0.99126
false
4
142
2009-04-19 19:51:07.787
Amazon S3 permissions
Trying to understand S3... How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query-string authentication requires an expiration date and that won't work for me; is there another way to do this?
You will have to build the whole access logic to S3 in your applications
0.101688
false
3
143
2009-04-19 19:51:07.787
Amazon S3 permissions
Trying to understand S3... How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query-string authentication requires an expiration date and that won't work for me; is there another way to do this?
There are various ways to control access to S3 objects: Use the query-string auth - but as you noted this does require an expiration date. You could make it far in the future, which has been good enough for most things I have done. Use the S3 ACLs - but this requires the user to have an AWS account and authenticate with AWS to access the S3 object. This is probably not what you are looking for. Proxy the access to the S3 object through your application, which implements your access control logic. This will bring all the bandwidth through your box. You can set up an EC2 instance with your proxy logic - this keeps the bandwidth closer to S3 and can reduce latency in certain situations. The difference between this and #3 could be minimal, but it depends on your particular situation.
0.998178
false
3
143
2009-04-19 19:51:07.787
Amazon S3 permissions
Trying to understand S3... How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query-string authentication requires an expiration date and that won't work for me; is there another way to do this?
1) Have the user hit your server. 2) Have the server set up query-string authentication with a short expiration (minutes? hours?). 3) Have your server redirect to the URL from step 2.
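With the boto library, step 2 is one call; a sketch (the bucket/key names and the 5-minute window are assumptions):

    from boto.s3.connection import S3Connection

    conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')
    # signed URL that stops working after 300 seconds
    url = conn.generate_url(300, 'GET', bucket='mybucket', key='private/file.pdf')
    # now issue an HTTP redirect to `url` from your web app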
0.673066
false
3
143
2009-04-20 19:27:37.197
How do I load entry-points for a defined set of eggs with Python setuptools?
I would like to use the entry point functionality in setuptools. There are a number of occasions where I would like to tightly control the list of eggs that are run, and thence the extensions that contribute to a set of entry points: egg integration testing, where I want to run multiple test suites on different combinations of eggs. scanning a single directory of eggs/plugins so as to run two different instances of the same program, but with different eggs. development time, where I am developing one or more egg, and would like to run the program as part of the normal edit-run cycle. I have looked through the setuptools documentation, and while it doesn't say that this is not possible, I must have missed something saying how to do it. What is the best way to approach deploying plugins differently to the default system-wide discovery?
We're solving something similar: the ability to use setup.py develop when you're a mere user without access to the global site-packages. So far, we've solved it with virtualenv. I'd say it will help in your case too: have a minimal system-wide install (or explicitly exclude it), create a virtual environment with the eggs you want, and test there. (Or, for integration tests, create a clean environment, install the egg, and test that all dependencies are installed.) For 2, I'm not sure, but it should work too, with multiple virtualenvs. For 3, setup.py develop is the way to go.
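Independent of the virtualenv approach, you can also scope entry-point discovery at runtime by building a WorkingSet over just the eggs you choose; a sketch (the paths and group name are hypothetical):

    import pkg_resources

    ws = pkg_resources.WorkingSet([
        '/plugins/alpha-1.0.egg',
        '/plugins/beta-0.3.egg',
    ])  # only these distributions are searched, not sys.path

    for ep in ws.iter_entry_points('myapp.plugins'):
        plugin = ep.load()
        plugin()  # or however your plugin API is invoked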
0
false
1
144
2009-04-21 14:43:48.383
Free word list for use programmatically?
A friend of mine was talking about a word game she liked to play where you try to convert one word to another (with the same number of letters) by switching one letter at a time, where each iteration produces a real word. Example: MOON --> WOLF GOON GOOF GOLF WOLF I figured it'd be a fun little project to write a program to generate solutions, and, taking it further, given 2 words, determine whether a solution exists and the number of iterations in the optimal solution. Problem is I'm having trouble finding free word lists that I can easily access programmatically. I'm also thinking about using this as an excuse to learn Python, so it'd be great if anyone knows of free word lists and pointers on how to parse and access them from Python. The algorithm for figuring out the optimal path I'll work on my own.
Most Unix systems (including OS X) have a file /usr/share/dict/words.
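Reading it into a set of same-length words is about all the parsing you need; a sketch:

    words = set()
    for line in open('/usr/share/dict/words'):
        word = line.strip().lower()
        if len(word) == 4:  # e.g. only MOON/WOLF-sized words
            words.add(word)

    print 'wolf' in words  # True on most systems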
0.999839
false
1
145
2009-04-23 12:20:12.113
Python3 Http Web Server: virtual hosts
I am writing an rather simple http web server in python3. The web server needs to be simple - only basic reading from config files, etc. I am using only standard libraries and for now it works rather ok. There is only one requirement for this project, which I can't implement on my own - virtual hosts. I need to have at least two virtual hosts, defined in config files. The problem is, that I can't find a way how can I implement them in python. Does anyone have any guides, articles, maybe some simple implementation how can this be done? I would be grateful for any help.
Virtual hosts work by obeying the Host: header in the HTTP request. Just read the headers of the request and take action based on the value of the Host: header.
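A minimal Python 3 sketch of Host-based dispatch with the standard library (the hostnames and document roots are placeholders):

    from http.server import HTTPServer, BaseHTTPRequestHandler

    VHOSTS = {'a.example.com': '/srv/site-a',
              'b.example.com': '/srv/site-b'}

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # strip any ':port' suffix before looking up the host
            host = (self.headers.get('Host') or '').split(':')[0]
            root = VHOSTS.get(host)
            if root is None:
                self.send_error(404, 'Unknown virtual host')
                return
            self.send_response(200)
            self.send_header('Content-Type', 'text/plain')
            self.end_headers()
            self.wfile.write(('would serve from %s\n' % root).encode('ascii'))

    HTTPServer(('', 8080), Handler).serve_forever()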
0.999909
false
1
146
2009-04-23 19:08:27.440
How to connect to a WCF Service with IronPython
Has anyone done this? I've tried generating a c# proxy class and connecting through it, but I cannot figure out how to get IronPython to use the generated app.config file that defines the endpoint. It tries to connect, but I just get an error about no default endpoint. I would ideally like to make the connection using only IronPython code and not use the proxy class, if possible. The binding for the service I am trying to connect to is a NetTcpBinding if that makes any difference.
Is your WCF service interface available in a shared assembly? If so, you could look at using the ChannelFactory to create your client proxy dynamically (instead of using the generated C# proxy). With that method you can supply all the details of the endpoint when you create the ChannelFactory and you won't require any configuration in your .config file.
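A sketch of what that looks like in IronPython, assuming the service contract IMyService lives in a shared assembly MyContracts.dll (the names and address are made up):

    import clr
    clr.AddReference('System.ServiceModel')
    clr.AddReference('MyContracts')  # hypothetical assembly holding the contract

    from System.ServiceModel import ChannelFactory, NetTcpBinding, EndpointAddress
    from MyContracts import IMyService

    # all endpoint details supplied in code; no .config file needed
    factory = ChannelFactory[IMyService](
        NetTcpBinding(),
        EndpointAddress('net.tcp://server:8000/MyService'))
    proxy = factory.CreateChannel()
    print proxy.SomeOperation()  # hypothetical contract method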
0
false
1
147
2009-04-24 14:45:27.703
How to design an email system?
I am working for a company that provides customer support to its clients. I am trying to design a system that would send emails automatically to clients when some event occurs. The system would consist of a backend part and a web interface part. The backend will handle the communication with the web interface (which will be only for internal use, to change the email templates) and, most importantly, it will check some database tables and, based on those results, send emails... lots of them. Now, I am wondering how this can be designed so it is scalable and provides the necessary performance, as it will probably have to handle a few thousand emails per hour (this should be the peak). I am mostly interested in how this kind of architecture should be designed so it can easily be scaled in the future if needed. Python will be used on the backend, with Postgres, and probably whatever comes first between a Python web framework and GWT on the frontend (which seems the simplest task).
A few thousand emails per hour isn't really that much, as long as your outgoing mail server is willing to accept them in a timely manner. I would send them using a local MTA, like Postfix or Exim (which would then send them through your outgoing relay if required). That service is then responsible for the mail queues, retries, bounces, etc. If you're looking for more "mailing list" features, try adding Mailman into the mix. It's written in Python, and you've probably seen it, as it runs tons of internet mailing lists.
0.296905
false
2
148
2009-04-24 14:45:27.703
How to design an email system?
I am working for a company that provides customer support to its clients. I am trying to design a system that would send emails automatically to clients when some event occurs. The system would consist of a backend part and a web interface part. The backend will handle the communication with the web interface (which will be only for internal use, to change the email templates) and, most importantly, it will check some database tables and, based on those results, send emails... lots of them. Now, I am wondering how this can be designed so it is scalable and provides the necessary performance, as it will probably have to handle a few thousand emails per hour (this should be the peak). I am mostly interested in how this kind of architecture should be designed so it can easily be scaled in the future if needed. Python will be used on the backend, with Postgres, and probably whatever comes first between a Python web framework and GWT on the frontend (which seems the simplest task).
You might want to try Twisted Mail for implementing your own backend in pure Python.
0
false
2
148
2009-04-24 14:46:45.890
Basic MVT issue in Django
I have a Django website as follows: site has several views each view has its own template to show its data each template extends a base template base template is the base of the site, has all the JS/CSS and the basic layout So up until now it's all good. So now we have the master head of the site (which exists in the base template), and it is common to all the views. But now I want to make it dynamic, and add some dynamic data to it. On which view do I do this? All my views are basically render_to_response('viewtemplate.html', someContext). So how do add a common view to a base template? Obviously I will not duplicate the common code to each separate view... I think I'm missing something fundamental in the MVT basis of Django.
Context processors and RequestContext (see Tyler's answer) are the way to go for data that is used on every page load. For data that you may need on various views, but not all (especially data that isn't really related to the primary purpose of the view, but appears in something like a navigation sidebar), it often makes most sense to define a custom template tag for retrieving the data.
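A sketch of the context-processor route for the dynamic header data (the names are placeholders, and this follows the Django-1.0-era settings layout):

    # myapp/context_processors.py
    def header_data(request):
        # whatever the base template's masthead needs, computed per request
        return {'unread_count': get_unread_count(request.user)}  # hypothetical

    # settings.py: append 'myapp.context_processors.header_data'
    # to TEMPLATE_CONTEXT_PROCESSORS, then render views with RequestContext:
    #   render_to_response('viewtemplate.html', someContext,
    #                      context_instance=RequestContext(request))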
0.265586
false
1
149
2009-04-25 20:35:44.357
What is the best way to redirect email to a Python script?
I'd like to provide a functionality for users of my website to get assigned an email address upon registration (such as firstname.lastname@mydomain.com) but I don't really think it is feasible to actually support all these emails account normally through a webmail program. I am also not sure if my webhost would be cool with it. What I'd really want is to be able to have a seamless integration of this email into the bigger system that the website is, as it is mostly going to be for intra-site messaging but we want to allow users to put actual email addresses. So what I would like to do instead is have a catch-all account under mydomain and have this email look at incoming mail, see who it was meant to be sent to, and add a message for the user in the system. So, the questions are: 1) Is this the right approach? How expensive would it be to get a host that would allow me to just assign emails to at will to my domain? I am currently using WebFaction's shared hosting. 2) If it is an okay approach, what is the best way to route this catch all account to my python script? I have read about .forward but I am not very good at UNIX stuff. Once I figure that out, how would I get the script to be in the "Django environment" so I can use Django's model functionality to add the new messages to the user? 3) Is there anything Django can do to make this any easier? 4) Are there any tools in Python to help me parse the email address? How do they work?
but I don't really think it is feasible to actually support all these emails account normally through a webmail program I think that your base assumption here is incorrect. You see, most 'webmail' programs are just frontends (or clients) to the backend mail system (postfix etc). You will need to see how your webhost is set up. There is no reason why you can not create these accounts programmatically and then let them use a normal webmail interface like SquirrelMail or RoundCube. For instance, my webhost (bluehost) allows me 2500 email accounts - I am not sure how many yours allows - but I can upgrade to unlimited for a few extra dollars a month. I think that using the builtin email handling facility is a more robust way to go.
0
false
1
150
2009-04-27 21:42:12.717
Is there an easy way to tell how much time is spent waiting for the Python GIL?
I have a long-running Python service and I'd like to know how much cumulative wall clock time has been spent by any runnable threads (i.e., threads that weren't blocked for some other reason) waiting for the GIL. Is there an easy way to do this? E.g., perhaps I could periodically dump some counter to its log file. My underlying motivation is to rule out the GIL as a source of mystery response latency from these long-running processes. There is no particular reason to suspect the GIL (other than that it would fit the symptoms), but other forms of logging haven't turned up anything yet, so, if it is easy, it would be nice to have this information.
I don't think there's an easy way. There's probably an awkward way, involving rebuilding Python to traverse the PyThreadState list and count the threads each time the lock is acquired, but I doubt it's worth the effort! I know this is a speculative question but if you are even moderately concerned about there being delays caused by threading it may be prudent to move to a multiprocessing model instead of a multithreading model. Since processes are both safer and more scalable in Python they are almost always the best choice if practical.
0.545705
false
1
151
2009-04-28 12:56:05.873
Django caching - can it be done pre-emptively?
I have a Django view, which receives part of its data from an external website, which I parse using urllib2/BeautifulSoup. This operation is rather expensive so I cache it using the low-level cache API, for ~5 minutes. However, each user which accesses the site after the cached data expires will receive a significant delay of a few seconds while I go to the external site to parse the new data. Is there any way to load the new data lazily so that no user will ever get that kind of delay? Or is this unavoidable? Please note that I am on a shared hosting server, so keep that in mind with your answers. EDIT: thanks for the help so far. However, I'm still unsure as to how I accomplish this with the python script I will be calling. A basic test I did shows that the django cache is not global. Meaning if I call it from an external script, it does not see the cache data going on in the framework. Suggestions? Another EDIT: coming to think of it, this is probably because I am still using local memory cache. I suspect that if I move the cache to memcached, DB, whatever, this will be solved.
"I'm still unsure as to how I accomplish this with the python script I will be calling. " The issue is that your "significant delay of a few seconds while I go to the external site to parse the new data" has nothing to do with Django cache at all. You can cache it everywhere, and when you go to reparse the external site, there's a delay. The trick is to NOT parse the external site while a user is waiting for their page. The trick is to parse the external site before a user asks for a page. Since you can't go back in time, you have to periodically parse the external site and leave the parsed results in a local file or a database or something. When a user makes a request you already have the results fetched and parsed, and all you're doing is presenting.
0.386912
false
2
152
2009-04-28 12:56:05.873
Django caching - can it be done pre-emptively?
I have a Django view, which receives part of its data from an external website, which I parse using urllib2/BeautifulSoup. This operation is rather expensive so I cache it using the low-level cache API, for ~5 minutes. However, each user which accesses the site after the cached data expires will receive a significant delay of a few seconds while I go to the external site to parse the new data. Is there any way to load the new data lazily so that no user will ever get that kind of delay? Or is this unavoidable? Please note that I am on a shared hosting server, so keep that in mind with your answers. EDIT: thanks for the help so far. However, I'm still unsure as to how I accomplish this with the python script I will be calling. A basic test I did shows that the django cache is not global. Meaning if I call it from an external script, it does not see the cache data going on in the framework. Suggestions? Another EDIT: coming to think of it, this is probably because I am still using local memory cache. I suspect that if I move the cache to memcached, DB, whatever, this will be solved.
I have no proof, but I've read that BeautifulSoup is slow and consumes a lot of memory. You may want to look at using the lxml module instead. lxml is supposed to be much faster and more efficient, and can do much more than BeautifulSoup. Of course, the parsing probably isn't your bottleneck here; the external I/O is. First off, use memcached! Then, one strategy that can be used is as follows: Your cached object, called A, is stored in the cache with a dynamic key (A_<timestamp>, for example). Another cached object holds the current key for A, called A_key. Your app would then get the key for A by first getting the value at A_key. A periodic process would populate the cache with the A_<timestamp> keys and, upon completion, change the value at A_key to the new key. Using this method, users won't have to wait every 5 minutes for the cache to be updated; they'll just get older versions until the update happens.
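A sketch of that two-key strategy with Django's cache API (the timeouts are arbitrary):

    import time
    from django.core.cache import cache

    def publish(data):
        key = 'A_%d' % int(time.time())
        cache.set(key, data, 60 * 60)    # generous lifetime; old copies just expire
        cache.set('A_key', key, 60 * 60)

    def fetch():
        key = cache.get('A_key')
        return cache.get(key) if key else None

    # a cron/periodic process calls publish(parse_external_site());
    # views only ever call fetch(), so no request waits on the reparse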
0.386912
false
2
152