Dataset schema:
Q_CreationDate: string (length 23)
Title: string (length 11 to 149)
Question: string (length 25 to 6.53k)
Answer: string (length 15 to 5.1k)
Score: float64 (range -1 to 1.2)
Is_accepted: bool (2 classes)
N_answers: int64 (range 1 to 17)
Q_Id: int64 (range 0 to 6.76k)
2009-09-29 04:23:55.360
Django slugified urls - how to handle collisions?
I'm currently working on a toy project in Django. Part of my app allows users to leave reviews. I'd like to take the title of the review and slugify it to create a URL. So, if a user writes a review called "The best thing ever!", the URL would be something like: www.example.com/reviews/the-best-thing-ever. That's all well and good, but what is the best way to handle the case where two users pick the same title? I don't want to require titles to be unique. I've thought about adding the review id in the URL somewhere, but I'd like to avoid that extra info for any URLs that don't collide. Any ideas?
One thing I never liked about the unique slug fields/methods is that if you have a lot of clashes for a single title, you'll end up running several queries to try to determine an available slug. I know you mentioned you don't want to show the id for non-clashing slugs, but as far as performance goes, I think it's the better route to take. To make the URL a little nicer looking, I also prefer to embed the id before the slug, so that a URL takes the form of www.example.com/reviews/1/the-best-thing-ever.
1.2
true
2
274
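A minimal sketch of the id-before-slug scheme described above; the Review model, view name, URL layout and template path are illustrative assumptions, not anything from the question:

```python
# views.py (sketch) - the id alone does the lookup; the slug is cosmetic.
from django.http import HttpResponsePermanentRedirect
from django.shortcuts import get_object_or_404, render

from reviews.models import Review  # hypothetical app and model

def review_detail(request, review_id, slug):
    review = get_object_or_404(Review, pk=review_id)
    if slug != review.slug:
        # Stale or mistyped slug: redirect to the canonical URL.
        return HttpResponsePermanentRedirect(
            "/reviews/%d/%s/" % (review.pk, review.slug))
    return render(request, "reviews/detail.html", {"review": review})
```

Since only the id is queried, a title clash never costs extra lookups.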
2009-09-29 04:23:55.360
Django slugified urls - how to handle collisions?
I'm currently working on a toy project in Django. Part of my app allows users to leave reviews. I'd like to take the title of the review and slugify it to create a URL. So, if a user writes a review called "The best thing ever!", the URL would be something like: www.example.com/reviews/the-best-thing-ever. That's all well and good, but what is the best way to handle the case where two users pick the same title? I don't want to require titles to be unique. I've thought about adding the review id in the URL somewhere, but I'd like to avoid that extra info for any URLs that don't collide. Any ideas?
I would just check in the form validation whether the slug is already used, and if so add something to it: either a number ("my-cool-idea_2") or the actual id.
0
false
2
274
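A rough sketch of that suggestion, assuming a Django model with a slug field (all names are illustrative):

```python
from django.template.defaultfilters import slugify

def unique_slug(model, title):
    # Append a counter until the slug is free; each retry costs a query.
    base = slugify(title)
    slug, n = base, 2
    while model.objects.filter(slug=slug).count():
        slug = "%s-%d" % (base, n)
        n += 1
    return slug
```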
2009-09-29 18:41:33.267
How to get a timestamp older than 1901
I'm trying to find a way to accurately count the number of seconds from Jan 1, 1850 to the present in a couple of languages (JavaScript, C++, and Python [don't even ask, I stopped asking these questions long ago]). The problem is that the platforms store timestamps as 32-bit signed integers, so I can't easily get a timestamp for dates before 1901 to subtract the present timestamp from. So how do I do what I want to do?
The portable, language-agnostic approach: Step 1: count the number of seconds between 01/01/1850 00:00 and 01/01/1901 00:00 and save that number somewhere (call it M). Step 2: use the available language functionality to count the number of seconds between 01/01/1901 00:00 and whatever other date and time you want. Step 3: return the result from Step 2 plus M. Remember to cast the result to a 64-bit (long) integer if necessary.
0.240117
false
2
275
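In Python, for instance, the standard datetime module already does this kind of calendar arithmetic without touching 32-bit timestamps; a minimal sketch:

```python
from datetime import datetime

EPOCH_1850 = datetime(1850, 1, 1)

def seconds_since_1850(when=None):
    # datetime does its own calendar math, so the 32-bit time_t
    # limit never comes into play.
    if when is None:
        when = datetime.utcnow()
    delta = when - EPOCH_1850
    return delta.days * 86400 + delta.seconds

# The 'M' from Step 1 of the answer above:
print(seconds_since_1850(datetime(1901, 1, 1)))
```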
2009-09-29 18:41:33.267
How to get a timestamp older than 1901
I'm trying to find a way to accurately count the number of seconds from Jan 1, 1850 to the present in a couple of languages (JavaScript, C++, and Python [don't even ask, I stopped asking these questions long ago]). The problem is that the platforms store timestamps as 32-bit signed integers, so I can't easily get a timestamp for dates before 1901 to subtract the present timestamp from. So how do I do what I want to do?
Under Win32, you can use SystemTimeToFileTime. A FILETIME is a 64-bit unsigned integer that counts the number of 100-nanosecond intervals since January 1, 1601 (UTC). You can convert both timestamps to FILETIME, combine each into a ULARGE_INTEGER ((t.dwHighDateTime << 32) + t.dwLowDateTime; note the high word must be shifted before adding), and do regular arithmetic to measure the interval.
0.081452
false
2
275
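A rough Windows-only Python sketch of the same idea via ctypes; the struct layout follows the Win32 SYSTEMTIME definition, and the example date is an assumption:

```python
import ctypes
from ctypes import wintypes

class SYSTEMTIME(ctypes.Structure):
    _fields_ = [("wYear", wintypes.WORD), ("wMonth", wintypes.WORD),
                ("wDayOfWeek", wintypes.WORD), ("wDay", wintypes.WORD),
                ("wHour", wintypes.WORD), ("wMinute", wintypes.WORD),
                ("wSecond", wintypes.WORD), ("wMilliseconds", wintypes.WORD)]

def filetime_ticks(st):
    # 100-nanosecond intervals since 1601-01-01 (UTC).
    ft = wintypes.FILETIME()
    ctypes.windll.kernel32.SystemTimeToFileTime(ctypes.byref(st),
                                                ctypes.byref(ft))
    # Parenthesize carefully: shift the high word before adding.
    return (ft.dwHighDateTime << 32) + ft.dwLowDateTime

st = SYSTEMTIME(1850, 1, 0, 1, 0, 0, 0, 0)   # 1850-01-01 00:00:00 UTC
print(filetime_ticks(st) // 10**7)           # whole seconds since 1601
```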
2009-09-29 19:41:26.707
Separate Admin/User authentication system in Django
I've recently started learning/using Django; I'm trying to figure out a way to have two separate authentication systems for administrators and users. Rather than create a whole new auth system, I'd like to leverage Django's built-in functionality (i.e. session management, the @login_required decorator, etc.). Specifically, I want two separate login tables - one for admins, one for users. The admin login table should be the default table that Django generates, with its default fields (i.e. id, username, email, is_staff, etc.). The user table, on the other hand, should have only 5 fields - id, email, password, first_name, last_name. Furthermore, I want to use Django's built-in session management for both login tables and the @login_required decorator for their respective views. Lastly, I want two separate and distinct login forms for admins and users. Does anyone have suggestions on how I can achieve my goal, or know of any articles/examples that could help me along?
Modify things slightly so that users have a category prefix on their username? You haven't given us much info on what you want to do, it's possible that your needs might be met by using the sites framework, or simply two separate django installs. If what you're trying to do is make the user login page and the admin login page separate, just use the built in framework as detailed in the docs to create a "user" login page and leave the admin one alone. If you're worried that users will somehow start editing admin login stuff, don't be, they won't unless you let them.
0
false
1
276
2009-09-30 17:32:09.643
With multiple Python installs, how does MacPorts know which one to install MySQLdb for?
I just upgraded the default Python 2.5 on Leopard to 2.6 via the installer on www.python.org. Upon doing so, the MySQLdb I had installed was no longer found. So I tried reinstalling it via port install py-mysql, and that succeeded, but MySQLdb was still not importable. So then I tried to switch the Python install to MacPorts' python26 with python_select python26, and that succeeded, but it doesn't appear to get precedence over the python.org install: $ which python /Library/Frameworks/Python.framework/Versions/2.6/bin/python when I would expect it to be something like /opt/local/bin/python. My path environment is: /Library/Frameworks/Python.framework/Versions/2.6/bin:/usr/local/mysql/bin/:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/local/mysql/bin:/Users/bsr/bin. Anyway, when I run port install py-mysql, how does it know which Python to install the MySQL library for?
You also need python_select (or is it select_python?) to change the default python used.
0.201295
false
1
277
2009-09-30 20:46:34.553
cProfile and Python: Finding the specific line number that code spends most time on
I'm using cProfile, pstats and Gprof2dot to profile a rather long Python script. The results tell me that the most time is spent calling a method in an object I've defined. However, what I would really like to know is exactly what line number within that function is eating up the time. Any ideas how to get this additional information? (By the way, I'm using Python 2.6 on OS X Snow Leopard, if that helps...)
cProfile does not track line numbers within a function; it only tracks the line number of where the function was defined. cProfile attempts to duplicate the behavior of profile (which is pure Python). profile uses pstats to store the data from running, and pstats only stores line numbers for function definitions, not for individual Python statements. If you need to figure out with finer granularity what is eating all your time, then you need to refactor your big function into several, smaller functions.
0.265586
false
1
278
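A minimal cProfile/pstats sketch showing the function-level granularity the answer describes; main() stands in for the script's real entry point:

```python
import cProfile
import pstats

def main():
    # Stand-in for the actual work being profiled.
    sum(i * i for i in range(10 ** 6))

cProfile.run("main()", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time
```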
2009-10-01 04:36:27.693
Run a task every hour on the hour with App Engine's cron API
I need to run a task every hour on the hour (00:00, 01:00, 02:00, ..., 23:00) every day of the week, but can't seem to find an example in App Engine's docs of how to do this. There is an example of running a task every hour, but this doesn't fit because the "start" of that hour depends on when you deploy the application. That is, if I deploy at 4:37 PM, the cron scripts will get executed at 5:37, 6:37, ... instead of 5:00, 6:00, ... So far the only way that looks like it would work is to have 24 different cron entries, one for each specific hour, set to run each day at that specific time. Does anyone know of anything that would let me use a schedule like "every hour at :00" or even "every day 00:00, 01:00, ... 23:00"?
Looking over the docs, I agree that your 24 cron entry idea is the only documented way that would work. Not ideal, but should work.
-0.101688
false
2
279
2009-10-01 04:36:27.693
Run a task every hour on the hour with App Engine's cron API
I need to run a task every hour on the hour (00:00, 01:00, 02:00, ..., 23:00) every day of the week, but can't seem to find an example in App Engine's docs of how to do this. There is an example of running a task every hour, but this doesn't fit because the "start" of that hour depends on when you deploy the application. That is, if I deploy at 4:37 PM, the cron scripts will get executed at 5:37, 6:37, ... instead of 5:00, 6:00, ... So far the only way that looks like it would work is to have 24 different cron entries, one for each specific hour, set to run each day at that specific time. Does anyone know of anything that would let me use a schedule like "every hour at :00" or even "every day 00:00, 01:00, ... 23:00"?
The docs say you can have 20 cron entries, so you can't have one for every hour of the day. You could run your task every minute and check if it is the first minute of the hour - exit otherwise.
0.101688
false
2
279
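A sketch of that every-minute trick; run_hourly_task is a hypothetical stand-in for the real work:

```python
from datetime import datetime

def run_hourly_task():
    pass  # hypothetical: the real hourly work goes here

def cron_handler():
    # The cron entry fires every minute; only act at the top of the hour.
    if datetime.utcnow().minute == 0:
        run_hourly_task()
```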
2009-10-01 23:32:12.717
How do I get fluent in Python?
Once you have learned the basic commands in Python, you are often able to solve most programming problems you face. But the way in which this is done is not really Pythonic. What is common is to use the classical C++ or Java mentality to solve problems. But Python is more than that: it incorporates functional programming, has many libraries available, is object-oriented, and has ways of its own. In short, there are often better, shorter, faster, more elegant ways to do the same thing. It is a little bit like learning a new language: first you learn the words and the grammar, but then you need to get fluent. Once you have learned the language, how do you get fluent in Python? How have you done it? What books helped most?
Read other people's code. Write some of your own code. Repeat for a year or two. Study the Python documentation and learn the built-in modules. Read Python in a Nutshell. Subscribe your RSS reader to the Python tag on Stack Overflow.
0.470104
false
3
280
2009-10-01 23:32:12.717
How do I get fluent in Python?
Once you have learned the basic commands in Python, you are often able to solve most programming problems you face. But the way in which this is done is not really Pythonic. What is common is to use the classical C++ or Java mentality to solve problems. But Python is more than that: it incorporates functional programming, has many libraries available, is object-oriented, and has ways of its own. In short, there are often better, shorter, faster, more elegant ways to do the same thing. It is a little bit like learning a new language: first you learn the words and the grammar, but then you need to get fluent. Once you have learned the language, how do you get fluent in Python? How have you done it? What books helped most?
I guess becoming fluent in any programming language is the same as becoming fluent in a spoken/written language. You do that by speaking and listening to the language, a lot. So my advice is to do some projects using python, and you will soon become fluent in it. You can complement this by reading other people's code who are more experienced in the language to see how they solve certain problems.
0.101688
false
3
280
2009-10-01 23:32:12.717
How do I get fluent in Python?
Once you have learned the basic commands in Python, you are often able to solve most programming problems you face. But the way in which this is done is not really Pythonic. What is common is to use the classical C++ or Java mentality to solve problems. But Python is more than that: it incorporates functional programming, has many libraries available, is object-oriented, and has ways of its own. In short, there are often better, shorter, faster, more elegant ways to do the same thing. It is a little bit like learning a new language: first you learn the words and the grammar, but then you need to get fluent. Once you have learned the language, how do you get fluent in Python? How have you done it? What books helped most?
The same way you get fluent in any language - program a lot. I'd recommend working on a project (hopefully something you'll actually use later). While working on the project, every time you need some basic piece of functionality, try writing it yourself, and then check online how other people did it. This both lets you learn how to actually get stuff done in Python and shows you the "Pythonic" counterparts to common coding cases.
0.151877
false
3
280
2009-10-03 02:45:36.913
Python Pypi: what is your process for releasing packages for different Python versions? (Linux)
I've got several eggs I maintain on PyPI, but up until now I've always focused on Python 2.5.x. I'd like to release my eggs under both Python 2.5 and Python 2.6 in an automated fashion, i.e. running tests, generating docs, preparing eggs, uploading to PyPI. How do you guys achieve this? A related question: how do I tag an egg as "version independent", i.e. working under all versions of Python?
You don't need to release eggs for anything other than Windows, and then only if your package uses C extensions and so has compiled parts. Otherwise you simply release one source distribution; that will be enough for all Python versions on all platforms. Running the tests for different versions in an automated way is tricky if you don't have a buildbot. But once you have run the tests with both 2.5 and 2.6, releasing is just a question of running python setup.py sdist register upload, and it doesn't matter what Python version you use to run that.
1.2
true
1
281
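For reference, a bare-bones setup.py for such a pure-Python source distribution (all metadata is placeholder):

```python
from distutils.core import setup

setup(
    name="mypackage",       # placeholder
    version="0.1",          # placeholder
    packages=["mypackage"],
)
```

With that in place, python setup.py sdist register upload publishes the same sdist for every Python version.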
2009-10-03 11:40:18.963
verifiedEmail AOL OpenID
I can't seem to fetch the verifiedEmail field when trying to log in to AOL's OpenID on my site. Every other provider that I know of supplies this property, but not AOL. I realize that AOL somehow uses an old OpenID version; given that, is it feasible to just assume that their e-mail ends in @aol.com? I'm using the RPXNow library with Python.
I believe that OpenID lets the user decide how much information to "share" during the login process. I can't say that I am an expert on the subject, but I know that my identity at myopenid.com lets me specify precisely what information to make available. Is it possible that the AOL default is to share nothing? If this is the case, then you may want to do an email authorization directly with the user if the OpenID provider doesn't seem to have the information. OpenID doesn't mandate that this information is available so I would assume that you will have to handle the case of it not being there in application code.
1.2
true
1
282
2009-10-04 05:35:02.623
Non Blocking Server in Python
Can someone please explain how to write non-blocking server code using the socket library alone? Thanks.
Why socket alone? It's so much simpler to use another standard library module, asyncore -- and if you can't, at the very least select! If you're constrained by your homework's condition to only use socket, then I hope you can at least add threading (or multiprocessing), otherwise you're seriously out of luck -- you can make sockets with timeout, but juggling timing-out sockets without the needed help from any of the other obvious standard library modules (to support either async or threaded serving) is a serious mess indeed-y...;-).
0.201295
false
1
283
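For what the select-based route looks like, here is a minimal non-blocking echo server sketch using only socket and select (the port number is arbitrary):

```python
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 8000))
server.listen(5)
server.setblocking(0)

sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [])
    for s in readable:
        if s is server:
            conn, addr = server.accept()   # new client
            conn.setblocking(0)
            sockets.append(conn)
        else:
            data = s.recv(4096)
            if data:
                s.send(data)               # echo back (may short-write)
            else:
                sockets.remove(s)          # client closed the connection
                s.close()
```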
2009-10-04 21:29:10.883
Coming from a Visual Studio background, what do you recommend I use to start my VERY FIRST Python project?
I'm locked in using C# and I don't like it one bit. I have to start branching out to better myself as a professional and as a person, so I've decided to start making things in my own time using Python. The problem is, I've basically programmed only in C#. What IDE should I use to make programs using Python? My goal is to make a sort of encyclopedic program for a game I'm playing right now, displaying hero information, names, stats, picture, etc. All of this information I'm going to parse from an XML file. My plan is for this application to be able to run under Windows, Linux and Mac (I'm under the impression that any code written in Python works 100% cross-platform, right?) Thanks a lot for your tremendous help brothers of SO. :P Edit: I guess I should clarify that I'm looking for an IDE that supports drag and drop GUI design. I'm used to using VS and I'm not really sure how you can do it any other way.
Eclipse + Pydev is currently the gold standard IDE for Python. It's cross platform and since it's a general purpose IDE it has support for just about every other programming activity you might want to consider. Eclipse is not bad for C++, and very mature for Java developers. It's quite amazing when you realize that all this great stuff costs nothing.
0
false
2
284
2009-10-04 21:29:10.883
Coming from a Visual Studio background, what do you recommend I use to start my VERY FIRST Python project?
I'm locked in using C# and I don't like it one bit. I have to start branching out to better myself as a professional and as a person, so I've decided to start making things in my own time using Python. The problem is, I've basically programmed only in C#. What IDE should I use to make programs using Python? My goal is to make a sort of encyclopedic program for a game I'm playing right now, displaying hero information, names, stats, picture, etc. All of this information I'm going to parse from an XML file. My plan is for this application to be able to run under Windows, Linux and Mac (I'm under the impression that any code written in Python works 100% cross-platform, right?) Thanks a lot for your tremendous help brothers of SO. :P Edit: I guess I should clarify that I'm looking for an IDE that supports drag and drop GUI design. I'm used to using VS and I'm not really sure how you can do it any other way.
Good IDEs for Python are Komodo or Eclipse with PyDev. But even Notepad++ or any other text editor will be enough to get you started, since you don't need to compile your code - just have a good editor. The benefit of the above IDEs is that you can use them to manage a large-scale project and debug your code. As for the cross-platform issue: as long as you don't use OS-specific libs (such as win32api), you are safe in being cross-platform. This seems like a very large project for a first time. Is it going to be web-based or desktop? That will greatly change your design and choice of Python libs.
0.037089
false
2
284
2009-10-05 14:52:40.987
Getting proper code completion for Python on Vim?
I've gotten omnicompletion with Pysmell to work before, but I can't seem to do it again. I tried following some steps online, but most, if not all, of them are too vague and assume too much that you know what you are doing to some extent. Can someone post a full, step-by-step tutorial on how to get code completion working properly, for complete Vim newbies (for dummies?)?
Try hitting Ctrl-p while typing mid-word. Ctrl-p inserts the most recent word that starts with the prefix you're typing and Ctrl-n inserts the next match. If you have several possibilities, you can hit ctrl-p more than once to substitute each candidate in order.
0
false
2
285
2009-10-05 14:52:40.987
Getting proper code completion for Python on Vim?
I've gotten omnicompletion with Pysmell to work before, but I can't seem to do it again. I tried following some steps online, but most, if not all, of them are too vague and assume too much that you know what you are doing to some extent. Can someone post a full, step-by-step tutorial on how to get code completion working properly, for complete Vim newbies (for dummies?)?
There's also Ctrl+n in insert mode which will autocomplete based on the words it has seen in any of the open buffers (even in other tabs).
0.386912
false
2
285
2009-10-05 18:28:55.523
Close Python when Parent is closed
I have a Python program (PP) that loads another program (AP) via COM, gets its window handle and sets it to be PP's parent. This works pretty well, except that I can't control the fact that AP still has its [X] button available in the top left corner. Since this is a pretty obvious place for the user to close when they are done with the program, I tried it, and it left PP running in the Task Manager, but not visible, with no possible way to kill it other than through Task Manager. Any ideas on how to handle this? I expect it to be rather common that the user closes in this manner. Thanks!
How's PP's control flow? If it's event-driven it could get appropriate events upon closure of that parent window or termination of that AP process; otherwise it could "poll" to check if the window or process are still around.
0.201295
false
1
286
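If polling is acceptable, a minimal pywin32 sketch; it assumes PP already holds the AP window handle:

```python
import sys
import time
import win32gui  # pywin32

def exit_when_parent_closes(hwnd, poll_seconds=1.0):
    # Poll the AP window handle; once it is gone, shut PP down too.
    while win32gui.IsWindow(hwnd):
        time.sleep(poll_seconds)
    sys.exit(0)
```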
2009-10-06 05:13:48.790
Werkzeug in General, and in Python 3.1
I've been looking really hard at all of the ways one can develop web applications using Python. For reference, we are using RHEL 64-bit, Apache, mod_wsgi. History: PHP + MySQL years ago; PHP + Python 2.x + MySQL recently and currently; Python + PostgreSQL in the works. We use a great library for communicating between PHP and Python (interface in PHP, backend in Python). However, with a larger project starting soon, going 100% Python may be very advantageous. We typically prefer not to have a monolithic framework dictating how things are done; a collection of useful helpers and utilities is much preferred (be it PHP or Python). Question 1: In reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) on why they use it, why they like it, and anything to watch out for. Question 2: Is there a version of Werkzeug that supports Python 3.1.1? I've successfully installed mod_wsgi on Apache 2.2 with Python 3.1.1. If there is no such version, what would it take to upgrade it to work on Python 3.1? Note: I've run 2to3 on the Werkzeug source code, and it does python-compile without errors. Edit: The project that we are starting is not slated to be finished until nearly a year from now, at which point I'm guessing Python 3.x will be a lot more mainstream. Furthermore, considering that we are running the app (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more or less already there? Thoughts appreciated!
mod_wsgi for Python 3.x is also not ready. There is no satisfactory definition of WSGI for Python 3.x yet; the WEB-SIG are still bashing out the issues. mod_wsgi targets a guess at what might be in it, but there are very likely to be changes to both the spec and to standard libraries. Any web application you write today in Python 3.1 is likely to break in the future. It's a bit of a shambles. Today, for webapps you can only realistically use Python 2.x.
0.386912
false
3
287
2009-10-06 05:13:48.790
Werkzeug in General, and in Python 3.1
I've been looking really hard at all of the ways one can develop web applications using Python. For reference, we are using RHEL 64-bit, Apache, mod_wsgi. History: PHP + MySQL years ago; PHP + Python 2.x + MySQL recently and currently; Python + PostgreSQL in the works. We use a great library for communicating between PHP and Python (interface in PHP, backend in Python). However, with a larger project starting soon, going 100% Python may be very advantageous. We typically prefer not to have a monolithic framework dictating how things are done; a collection of useful helpers and utilities is much preferred (be it PHP or Python). Question 1: In reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) on why they use it, why they like it, and anything to watch out for. Question 2: Is there a version of Werkzeug that supports Python 3.1.1? I've successfully installed mod_wsgi on Apache 2.2 with Python 3.1.1. If there is no such version, what would it take to upgrade it to work on Python 3.1? Note: I've run 2to3 on the Werkzeug source code, and it does python-compile without errors. Edit: The project that we are starting is not slated to be finished until nearly a year from now, at which point I'm guessing Python 3.x will be a lot more mainstream. Furthermore, considering that we are running the app (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more or less already there? Thoughts appreciated!
I haven't used Werkzeug, so I can only answer question 2: No, Werkzeug does not work on Python 3. In fact, very little works on Python 3 as of today. Porting is not difficult, but you can't port until all your third-party libraries have been ported, so progress is slow. One big stopper has been setuptools, which is a very popular package to use. Setuptools is unmaintained, but there is a maintained fork called Distribute. Distribute was released with Python 3 support just a week or two ago. I hope package support for Python 3 will pick up now. But it will still be a long time, at least months probably a year or so, before any major project like Werkzeug will be ported to Python 3.
0.135221
false
3
287
2009-10-06 05:13:48.790
Werkzeug in General, and in Python 3.1
I've been looking really hard at all of the ways one can develop web applications using Python. For reference, we are using RHEL 64-bit, Apache, mod_wsgi. History: PHP + MySQL years ago; PHP + Python 2.x + MySQL recently and currently; Python + PostgreSQL in the works. We use a great library for communicating between PHP and Python (interface in PHP, backend in Python). However, with a larger project starting soon, going 100% Python may be very advantageous. We typically prefer not to have a monolithic framework dictating how things are done; a collection of useful helpers and utilities is much preferred (be it PHP or Python). Question 1: In reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) on why they use it, why they like it, and anything to watch out for. Question 2: Is there a version of Werkzeug that supports Python 3.1.1? I've successfully installed mod_wsgi on Apache 2.2 with Python 3.1.1. If there is no such version, what would it take to upgrade it to work on Python 3.1? Note: I've run 2to3 on the Werkzeug source code, and it does python-compile without errors. Edit: The project that we are starting is not slated to be finished until nearly a year from now, at which point I'm guessing Python 3.x will be a lot more mainstream. Furthermore, considering that we are running the app (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more or less already there? Thoughts appreciated!
I can only answer question one: I started using it for some small web stuff but then moved on to rework larger apps with it. Why Werkzeug? The modular concept is really helpful. You can hook in modules as you like, make stuff easily context-aware, and you get good request file handling for free, which can cope with 300MB+ files by not storing them in memory. Disadvantages... well, sometimes modularity needs some upfront thought (Django, for example, gives you everything all at once; stripping stuff out is hard to do there, though), but for me it works fine.
0.135221
false
3
287
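To give a feel for Werkzeug's style, a tiny complete application (API as of the 0.x-era releases; treat it as a sketch):

```python
from werkzeug.wrappers import Request, Response

@Request.application
def application(request):
    name = request.args.get("name", "world")
    return Response("Hello, %s!" % name, mimetype="text/plain")

if __name__ == "__main__":
    from werkzeug.serving import run_simple
    run_simple("localhost", 4000, application)  # dev server only
```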
2009-10-06 10:15:14.313
How do I display images at different times on webpage
I'm supposed to display images at certain times of the day on a webpage. Please can anyone tell me how to go about it?
If you need to change the image before a page refresh, you could use a jQuery AJAX call to get the correct image. jQuery has some interval functionality which would allow this.
0
false
2
288
2009-10-06 10:15:14.313
How do I display images at different times on webpage
I'm supposed to display images at certain times of the day on a webpage. Please can anyone tell me how to go about it?
You could make a Date object in JavaScript. Check the current time and, depending on the time, set the img src to whatever image you want for that time of day :) or hide the image through myimg.style.visibility = "hidden" if you don't want to display an image at that moment.
0
false
2
288
2009-10-07 05:07:44.390
how to create new file using python
How can I create a new file in the /var/log directory using Python on OS X Leopard? I tried to do it using the os.open function, but I get "permission denied". Thanks in advance.
It probably failed because /var/log has user set to root and group set to wheel. Try running your python code as root and it will probably work.
0.135221
false
1
289
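A minimal sketch, assuming the script is run with root privileges (e.g. via sudo); the file name is hypothetical:

```python
import os

path = "/var/log/myapp.log"  # hypothetical log file
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"started\n")
os.close(fd)
```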
2009-10-07 06:41:49.477
Debugging a scripting language like ruby
I am basically from the world of C language programming, now delving into the world of scripting languages like Ruby and Python. I am wondering how to do debugging. At present the steps I follow are: I complete a large script, comment everything but the portion I want to check, and execute the script. Though it works, I am not able to debug the way I would in, say, a VC++ environment or something like that. My question is: is there any better way of debugging? Note: I guess it may be a repeated question; if so, please point me to the answer.
"My question is, is there any better way of debugging?" Yes. Your approach - 1. complete a large script, 2. comment everything but the portion you want to check, 3. execute the script - is not really the best way to write any software in any language (sorry, but that's the truth). Do not write a large anything. Ever. Do this instead. 1. Decompose your problem into classes of objects. 2. For each class, write the class: 2a. outline the class, focusing on the external interface, not the implementation details; 2b. write tests to prove that the interface works; 2c. run the tests - they'll fail, since you only outlined the class; 2d. fix the class until it passes the tests; 2e. at some point, you'll realize your class designs aren't optimal - refactor your design, ensuring your tests still pass. 3. Now write your final script. It should be short, since all the classes have already been tested: 3a. outline the script (indeed, you can usually just write it); 3b. write some test cases that prove the script works; 3c. run the tests - they may pass, and then you're done; 3d. if the tests don't pass, fix things until they do. Write many small things. It works out much better in the long run than writing a large thing and commenting parts of it out.
0.074076
false
4
290
2009-10-07 06:41:49.477
Debugging a scripting language like ruby
I am basically from the world of C language programming, now delving into the world of scripting languages like Ruby and Python. I am wondering how to do debugging. At present the steps I follow are: I complete a large script, comment everything but the portion I want to check, and execute the script. Though it works, I am not able to debug the way I would in, say, a VC++ environment or something like that. My question is: is there any better way of debugging? Note: I guess it may be a repeated question; if so, please point me to the answer.
The debugging method you described is perfect for a static language like C++, but given that the language is so different, the coding methods are similarly different. One very important thing in a dynamic language such as Python or Ruby is the interactive toplevel (what you get by typing, say, python on the command line). This means that running a part of your program is very easy. Even if you've written a large program before testing (which is a bad idea), it is hopefully separated into many functions. So, open up your interactive toplevel, do an import thing (for whatever thing happens to be), and then you can easily start testing your functions one by one, just calling them in the toplevel. Of course, for a more mature project, you probably want to write out an actual test suite, and most languages have a method to do that (in Python, this is doctest and nose; I don't know about other languages). At first, though, when you're writing something not particularly formal, just remember a few simple rules of debugging dynamic languages: start small - don't write large programs and then test them; test each function as you write it, at least cursorily. Use the toplevel: running small pieces of code in a language like Python is extremely lightweight - fire up the toplevel and run it. Compare that with writing a complete program and then compile-running it in, say, C++. Use the fact that you can quickly check the correctness of any function. Debuggers are handy, but often so are print statements. If you're only running a single function, debugging with print statements isn't that inconvenient, and it also frees you from dragging along an IDE.
0
false
4
290
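A minimal doctest example of the test-as-you-go style described above; the tests live right in the docstring:

```python
def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add("a", "b")
    'ab'
    """
    return a + b

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # prints nothing when every example passes
```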
2009-10-07 06:41:49.477
Debugging a scripting language like ruby
I am basically from the world of C language programming, now delving into the world of scripting languages like Ruby and Python. I am wondering how to do debugging. At present the steps I follow are: I complete a large script, comment everything but the portion I want to check, and execute the script. Though it works, I am not able to debug the way I would in, say, a VC++ environment or something like that. My question is: is there any better way of debugging? Note: I guess it may be a repeated question; if so, please point me to the answer.
Your sequence seems entirely backwards to me. Here's how I do it: I write a test for the functionality I want. I start writing the script, executing bits and verifying test results. I review what I'd done to document and publish. Specifically, I execute before I complete. It's way too late by then. There are debuggers, of course, but with good tests and good design, I've almost never needed one.
1.2
true
4
290
2009-10-07 06:41:49.477
Debugging a scripting language like ruby
I am basically from the world of C language programming, now delving into the world of scripting languages like Ruby and Python. I am wondering how to do debugging. At present the steps I follow are: I complete a large script, comment everything but the portion I want to check, and execute the script. Though it works, I am not able to debug the way I would in, say, a VC++ environment or something like that. My question is: is there any better way of debugging? Note: I guess it may be a repeated question; if so, please point me to the answer.
Script languages have no differences compared with other languages in the sense that you still have to break your problems into manageable pieces -- that is, functions. So, instead of testing the whole script after finishing the whole script, I prefer to test those small functions before integrating them. TDD always helps.
0.074076
false
4
290
2009-10-07 11:08:36.103
Incrementally building a numpy array and measuring memory usage
I have a series of large text files (up to 1 gig) that are output from an experiment and need to be analysed in Python. They would be best loaded into a 2D numpy array, which presents the first question: as the number of rows is unknown at the beginning of the loading, how can a very large numpy array be built most efficiently, row by row? Simply adding each row to the array would be inefficient in memory terms, as two large arrays would momentarily co-exist. The same problem would seem to occur if you use numpy.append. The stack functions are promising, but ideally I would want to grow the array in place. This leads to the second question: what is the best way to observe the memory usage of a Python program that heavily uses numpy arrays? To study the above problem, I've used the usual memory profiling tools - heapy and pympler - but am only getting the size of the outer array objects (80 bytes) and not the data they contain. Aside from a crude measurement of how much memory the Python process is using, how can I get at the "full" size of the arrays as they grow? Local details: OS X 10.6, Python 2.6, but general solutions are welcome.
One possible option is to do a single pass through the file first to count the number of rows, without loading them. The other option is to double your table size each time, which has two benefits: you will only re-alloc memory log(n) times, where n is the number of rows, and you need only 50% more RAM than your largest table size. If you take the dynamic route, you could measure the length of the first row in bytes, then guess the number of rows by calculating (num bytes in file / num bytes in first row), and start with a table of that size.
0.101688
false
1
291
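A sketch of the doubling strategy, assuming the rows arrive from an iterator and have a known column count:

```python
import numpy as np

def load_rows(row_iter, n_cols):
    # Grow capacity geometrically: O(log n) reallocations overall.
    capacity = 1024
    data = np.empty((capacity, n_cols), dtype=np.float64)
    n = 0
    for row in row_iter:
        if n == capacity:
            capacity *= 2
            data = np.resize(data, (capacity, n_cols))
        data[n] = row
        n += 1
    return data[:n]  # view of the filled part
```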
2009-10-07 15:20:19.907
Reading HKEY CURRENT USER from the registry in Python, specifying the user
In my application I run subprocesses under several different user accounts. I need to be able to read some of the information written to the registry by these subprocesses. Each one is writing to HKEY_CURRENT_USER, and I know the user account name that they are running under. In Python, how can I read values from HKEY_CURRENT_USER for a specific user? I assume I need to somehow load the registry values under the user's name, and then read them from there, but how? edit: Just to make sure it's clear, my Python program is running as Administrator, and I have accounts "user1", "user2", and "user3", which each have information in their own HKEY_CURRENT_USER. As Administrator, how do I read user1's HKEY_CURRENT_USER data?
HKEY_CURRENT_USER maps to a HKEY_USERS\{id} key. Try finding the id by matching the HKEY_USERS\{id}\Volatile Environment\USERNAME value to the username of the user (by enumerating/iterating over the {id}s that are present on the system). When you find the match, just use HKEY_USERS\{id} as if it were HKEY_CURRENT_USER.
0.265586
false
1
292
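A sketch of that lookup with the standard winreg module (_winreg on Python 2); it assumes the target user is logged on, since Volatile Environment only exists for live sessions:

```python
import winreg  # _winreg on Python 2

def hive_for_user(username):
    # Scan HKEY_USERS for the SID whose Volatile Environment
    # names the account we're after.
    i = 0
    while True:
        try:
            sid = winreg.EnumKey(winreg.HKEY_USERS, i)
        except OSError:
            return None  # ran out of subkeys without a match
        i += 1
        try:
            key = winreg.OpenKey(winreg.HKEY_USERS,
                                 sid + r"\Volatile Environment")
            name, _ = winreg.QueryValueEx(key, "USERNAME")
        except OSError:
            continue  # no Volatile Environment under this SID
        if name.lower() == username.lower():
            return winreg.OpenKey(winreg.HKEY_USERS, sid)
```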
2009-10-07 16:28:08.397
the fastest way to create checksum for large files in python
I need to transfer large files across a network and need to create checksums for them on an hourly basis, so the speed of generating the checksum is critical for me. Somehow I can't make zlib.crc32 and zlib.adler32 work with files larger than 4GB on a Windows XP Pro 64-bit machine. I suspect I've hit the 32-bit limitation here? Using hashlib.md5 I can get a result, but the problem is the speed: it takes roughly 5 minutes to generate an MD5 for a 4.8GB file, and Task Manager shows that the process is using only one core. My questions are: is there a way to make CRC work on a large file? I prefer to use CRC over MD5. If not, is there a way to speed up md5.hexdigest()/md5.digest() - or, in this case, any hashlib hexdigest/digest? Maybe splitting it into a multi-threaded process? How do I do that? PS: I'm working on something similar to an "Asset Management" system, kind of like SVN, but the assets consist of large compressed image files. The files have tiny incremental changes. The hashing/checksum is needed for detecting changes and for error detection.
It's an algorithm-selection problem rather than a library/language-selection problem! There appear to be two points to consider primarily: how much would the disk I/O affect the overall performance, and what is the expected reliability of the error-detection feature? Apparently, the answer to the second question is something like 'some false negatives allowed', since the reliability of any 32-bit hash, relative to a 4GB message, even in a moderately noisy channel, is not going to be virtually absolute. Assuming that I/O can be improved through multithreading, we may choose a hash that doesn't require a sequential scan of the complete message. Instead, we can maybe work the file in parallel, hashing individual sections and either combining the hash values or appending them, to form a longer, more reliable error-detection device. The next step could be to formalize this handling of files as ordered sections, and to transmit them as such (to be re-glued together at the recipient's end). This approach, along with additional information about the way the files are produced (for example, they may be modified exclusively by append, like log files), may even allow limiting the amount of hash calculation required. The added complexity of this approach needs to be weighed against the desire to have zippy-fast CRC calculation. Side note: Adler32 is not limited to message sizes below a particular threshold; it may just be a limit of the zlib API. (BTW, the reference I found about zlib.adler32 used a buffer, and well... that approach is to be avoided in the context of our huge messages, in favor of streamed processing: read a little from the file, calculate, repeat...)
0.327599
false
5
293
2009-10-07 16:28:08.397
the fastest way to create checksum for large files in python
I need to transfer large files across a network and need to create checksums for them on an hourly basis, so the speed of generating the checksum is critical for me. Somehow I can't make zlib.crc32 and zlib.adler32 work with files larger than 4GB on a Windows XP Pro 64-bit machine. I suspect I've hit the 32-bit limitation here? Using hashlib.md5 I can get a result, but the problem is the speed: it takes roughly 5 minutes to generate an MD5 for a 4.8GB file, and Task Manager shows that the process is using only one core. My questions are: is there a way to make CRC work on a large file? I prefer to use CRC over MD5. If not, is there a way to speed up md5.hexdigest()/md5.digest() - or, in this case, any hashlib hexdigest/digest? Maybe splitting it into a multi-threaded process? How do I do that? PS: I'm working on something similar to an "Asset Management" system, kind of like SVN, but the assets consist of large compressed image files. The files have tiny incremental changes. The hashing/checksum is needed for detecting changes and for error detection.
You cannot possibly use more than one core to calculate the MD5 hash of a large file because of the very nature of MD5: it expects a message to be broken up in chunks and fed into the hashing function in strict sequence. However, you can use one thread to read the file into an internal queue, and then calculate the hash in a separate thread. I do not think, though, that this will give you any significant performance boost. The fact that it takes so long to process a big file might be due to "unbuffered" reads. Try reading, say, 16 KB at a time and then feeding the content in chunks to the hashing function.
0.067922
false
5
293
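The buffered-read advice as a sketch; the 64 KB chunk size is an assumption worth tuning:

```python
import hashlib

def md5_of_file(path, chunk_size=64 * 1024):
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read fixed-size chunks so the whole file never sits in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```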
2009-10-07 16:28:08.397
the fastest way to create checksum for large files in python
I need to transfer large files across a network and need to create checksums for them on an hourly basis, so the speed of generating the checksum is critical for me. Somehow I can't make zlib.crc32 and zlib.adler32 work with files larger than 4GB on a Windows XP Pro 64-bit machine. I suspect I've hit the 32-bit limitation here? Using hashlib.md5 I can get a result, but the problem is the speed: it takes roughly 5 minutes to generate an MD5 for a 4.8GB file, and Task Manager shows that the process is using only one core. My questions are: is there a way to make CRC work on a large file? I prefer to use CRC over MD5. If not, is there a way to speed up md5.hexdigest()/md5.digest() - or, in this case, any hashlib hexdigest/digest? Maybe splitting it into a multi-threaded process? How do I do that? PS: I'm working on something similar to an "Asset Management" system, kind of like SVN, but the assets consist of large compressed image files. The files have tiny incremental changes. The hashing/checksum is needed for detecting changes and for error detection.
md5 itself can't be run in parallel. However, you can md5 the file in sections (in parallel) and then take an md5 of the list of hashes. However, that assumes the hashing is not IO-limited, which I suspect it is. As Anton Gogolev suggests, make sure you're reading the file efficiently (in large power-of-2 chunks). Once you've done that, make sure the file isn't fragmented. Also, a hash such as sha256 should be selected rather than md5 for new projects. Are the zlib checksums much faster than md5 for 4GB files?
0.067922
false
5
293
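A sketch of the sections idea (sequential here; the per-section hashing is what could be farmed out to threads or processes):

```python
import hashlib

def section_digests(path, section_size=1 << 20):
    # Hash each fixed-size section separately, then hash the digests.
    # A changed section can later be identified individually.
    digests = []
    with open(path, "rb") as f:
        while True:
            section = f.read(section_size)
            if not section:
                break
            digests.append(hashlib.md5(section).digest())
    combined = hashlib.md5(b"".join(digests)).hexdigest()
    return combined, digests
```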
2009-10-07 16:28:08.397
the fastest way to create checksum for large files in python
I need to transfer large files across a network and need to create checksums for them on an hourly basis, so the speed of generating the checksum is critical for me. Somehow I can't make zlib.crc32 and zlib.adler32 work with files larger than 4GB on a Windows XP Pro 64-bit machine. I suspect I've hit the 32-bit limitation here? Using hashlib.md5 I can get a result, but the problem is the speed: it takes roughly 5 minutes to generate an MD5 for a 4.8GB file, and Task Manager shows that the process is using only one core. My questions are: is there a way to make CRC work on a large file? I prefer to use CRC over MD5. If not, is there a way to speed up md5.hexdigest()/md5.digest() - or, in this case, any hashlib hexdigest/digest? Maybe splitting it into a multi-threaded process? How do I do that? PS: I'm working on something similar to an "Asset Management" system, kind of like SVN, but the assets consist of large compressed image files. The files have tiny incremental changes. The hashing/checksum is needed for detecting changes and for error detection.
You might be hitting a size limit for files in XP. The 64-bit gives you more addressing space (removing the 2GB (or so) addressing space per application), but probably does nothing for the file size problem.
0.135221
false
5
293
2009-10-07 16:28:08.397
the fastest way to create checksum for large files in python
I need to transfer large files across a network and need to create checksums for them on an hourly basis, so the speed of generating the checksum is critical for me. Somehow I can't make zlib.crc32 and zlib.adler32 work with files larger than 4GB on a Windows XP Pro 64-bit machine. I suspect I've hit the 32-bit limitation here? Using hashlib.md5 I can get a result, but the problem is the speed: it takes roughly 5 minutes to generate an MD5 for a 4.8GB file, and Task Manager shows that the process is using only one core. My questions are: is there a way to make CRC work on a large file? I prefer to use CRC over MD5. If not, is there a way to speed up md5.hexdigest()/md5.digest() - or, in this case, any hashlib hexdigest/digest? Maybe splitting it into a multi-threaded process? How do I do that? PS: I'm working on something similar to an "Asset Management" system, kind of like SVN, but the assets consist of large compressed image files. The files have tiny incremental changes. The hashing/checksum is needed for detecting changes and for error detection.
First, there is nothing inherent in any of the CRC algorithms that would prevent them from working on an arbitrary length of data (though a particular implementation might well impose a limit). However, in a file-syncing application that probably doesn't matter, as you may not want to hash the entire file when it gets large - just chunks anyway. If you hash the entire file and the hashes at each end differ, you have to copy the entire file. If you hash fixed-size chunks, then you only have to copy the chunks whose hash has changed. If most of the changes to the files are localized (e.g. a database), then this will likely require much less copying (and it's easier to spread the per-chunk calculations across multiple cores). As for the hash algorithm itself, the basic tradeoff is speed vs. lack of collisions (two different data chunks yielding the same hash). CRC-32 is fast, but with only 2^32 unique values, collisions may be seen. MD5 is much slower, but has 2^128 unique values, so collisions will almost never be seen (but are still theoretically possible). The larger hashes (SHA1, SHA256, ...) have even more unique values but are slower still; I doubt you need them: you're worried about accidental collisions, unlike digital-signature applications, where you're worried about deliberately (maliciously) engineered collisions. It sounds like you're trying to do something very similar to what the rsync utility does. Can you just use rsync?
0.201295
false
5
293
2009-10-07 20:40:37.623
Python: how to show results on a web page?
Most likely it's a dumb question for those who know the answer, but I'm a beginner, and here it goes: I have a Python script which I run in a command line with some parameter, and it prints me some results. Let's say the results are some HTML code. I've never done any Python programming for the web, and couldn't figure it out... I need to have a page (OK, I know how to upload files to a server, Apache is running, Python is installed on the server...) with an edit field, which will accept that parameter, and a Submit button, and I need it to "print" the results on a web page after the user has submitted a proper parameter, or show any output that in a command-line situation would be printed. I've read Dive Into Python's chapters about "HTML Processing" and "HTTP Web Services", but they do not describe what I'm looking for. If the answer isn't short, I would very much appreciate links to the more relevant stuff to read, or maybe the key words to google for it.
Whose web server? If it is a web server provided by a web hosting company or someone else and you don't have control over it, you need to ascertain in what way they support the use of Python for writing web applications; it isn't enough to know just that they have Python available. As pointed out by others, it is likely that at least CGI scripts which use Python will be supported. CGI, however, is not really practical for running large Python web frameworks such as Django. It is possible, though, that the web server might also support FASTCGI. If that is the case, it becomes possible to use such larger Python web frameworks, as FASTCGI uses persistent processes whereas CGI creates a process for each request; the overhead of large Python web frameworks generally makes the use of CGI impractical as a result. If the Apache server is controlled by others, using a standalone Python web server such as wsgiref and proxying to it from Apache isn't going to be doable either, as you can't set up Apache to do it. So, find out how the use of Python for web applications is supported, and work out whether you are just a user of the Apache instance or whether you have some measure of control over changing its global configuration files and restarting it. This will dictate what you can do and use.
0.101688
false
1
294
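The CGI route, at its smallest, looks roughly like this (the script goes wherever the server is configured to execute CGI; the parameter name is arbitrary):

```python
#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()
param = form.getfirst("q", "")

print("Content-Type: text/html")
print("")
print("<html><body>")
print("<p>Result for: %s</p>" % cgi.escape(param))  # html.escape in 3.2+
print("</body></html>")
```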
2009-10-08 15:36:25.743
How to sort all possible words out of a string?
I'm wondering how to proceed with this task. Take this string for example: "thingsandstuff". How could I generate all possible strings out of this string, so as to look them up individually against an English dictionary? The goal is to find valid English words in a string that does not contain spaces. Thanks.
Well, here is my idea: find all possible strings containing 1 character from the original; find all possible strings containing 2 characters from the original; ... and the same thing up to the length of the original string. Then add them all up and match them against your dictionary.
0
false
3
295
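If by "all possible strings" contiguous substrings are meant, generating them is simple - though, as the next answer points out, there are O(N**2) of them:

```python
def substrings(s):
    # Every contiguous substring, shortest first.
    for length in range(1, len(s) + 1):
        for start in range(len(s) - length + 1):
            yield s[start:start + length]

words = set(["thing", "things", "and", "sand", "stuff"])  # toy dictionary
print([w for w in substrings("thingsandstuff") if w in words])
```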
2009-10-08 15:36:25.743
How to sort all possible words out of a string?
I'm wondering how to proceed with this task. Take this string for example: "thingsandstuff". How could I generate all possible strings out of this string, so as to look them up individually against an English dictionary? The goal is to find valid English words in a string that does not contain spaces. Thanks.
The brute-force approach, i.e. checking every substring, is computationally unfeasible even for strings of middling length (a string of length N has O(N**2) substrings). Unless there is a pretty tight bound on the length of the strings you care about, that doesn't scale well. To make things more feasible, more knowledge is required: are you interested in overlapping words (e.g. 'things' and 'sand' in your example) and/or words which would leave characters unaccounted for (e.g. 'thing' and 'and' in your example, leaving the intermediate 's' stranded), or do you want a strict partition of the string into juxtaposed (not overlapping) words with no residue? The latter would be the simplest problem, because the degrees of freedom drop sharply - essentially to trying to determine a sequence of "breaking points", each between two adjacent characters, that would split the string into words. If that's the case, do you need every possible valid split (i.e. do you need both "thing sand" and "things and"), will any single valid split do, or are there criteria your split must optimize? If you clarify all of these issues, it may be possible to give you more help!
0.11086
false
3
295
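For the strict-partition variant, a small recursive sketch that enumerates every valid split against a word set:

```python
def splits(s, words, prefix=()):
    # Yield every partition of s into juxtaposed dictionary words.
    if not s:
        yield prefix
        return
    for i in range(1, len(s) + 1):
        if s[:i] in words:
            for result in splits(s[i:], words, prefix + (s[:i],)):
                yield result

words = set(["thing", "things", "and", "sand", "stuff"])  # toy dictionary
for parts in splits("thingsandstuff", words):
    print(" ".join(parts))  # e.g. "things and stuff", "thing sand stuff"
```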
2009-10-08 15:36:25.743
How to sort all possible words out of a string?
I'm wondering how to proceed with this task. Take this string for example: "thingsandstuff". How could I generate all possible strings out of this string, so as to look them up individually against an English dictionary? The goal is to find valid English words in a string that does not contain spaces. Thanks.
What if you break it up into syllables and then use those to construct words to compare against your dictionary? It's still a brute-force method, but it would surely speed things up a bit.
0
false
3
295
2009-10-09 04:24:03.590
Upgrade Python to 2.6 on Mac
I'd like to upgrade the default python installation (2.5.1) supplied with OS X Leopard to the latest version. Please let me know how I can achieve this. Thanks
May I suggest you leave the default be and install Python under /usr/local: download Python, unpack it, then run ./configure, make, and sudo make install. Done. Since /usr/local/bin comes before /usr/bin in $PATH, you will invoke 2.6 when you type python, but the OS will remain stable...
0.336246
false
2
296
2009-10-09 04:24:03.590
Upgrade Python to 2.6 on Mac
I'd like to upgrade the default python installation (2.5.1) supplied with OS X Leopard to the latest version. Please let me know how I can achieve this. Thanks
When an OS is distributed with some specific Python release and uses it for some OS functionality (as is the case with Mac OS X, as well as many Linux distros, etc.), you should not tamper in any way with the system-supplied Python (as in "upgrading" it and the like): while Python strives for backwards compatibility within any major release (such as 2.* or 3.*), this can never be 100% guaranteed; your OS supplier tested all functionality thoroughly with the specific Python version they distribute; if you manage to alter that version, "on your head be it" - neither your OS supplier nor the PSF accepts any responsibility for whatever damage that might perhaps do to your system. Rather, as other answers have already suggested, install any other release you wish "besides" the system one - why tamper with that crucial one, and risk breaking things, when installing others is so easy anyway?! On typical Mac OS X 10.5 machines (I haven't upgraded any of my several Macs to 10.6 yet), I have the Apple-supplied 2.5, a 2.4 on the side to support some old projects not worth the bother of upgrading, the latest 2.6 for new stuff, and 3.1 as well to get the very newest - they all live together in peace and quiet, and I just type the release number explicitly, i.e. python2.6 at the prompt, when I want a specific release. Which release gets used when you just say python at the shell prompt is up to you (I personally prefer it to mean "the system-supplied Python", but it's a matter of taste: by setting paths, shell aliases, etc., you can make it mean whatever you wish).
0.993425
false
2
296
2009-10-10 17:56:09.687
Python for C++ or Java Programmer
I have a background in C++, Java and Objective-C programming, but I am finding it hard to learn Python - basically, where is its "main function", i.e. where does the program start executing? So is there any tutorial/book which can teach Python to people with a background in C++ or Java - basically something which can show how you would do something in C++ and how the same thing is done in Python? OK, I think I did not put the question heading or question right: basically, I was confused about the "main" function; other things are quite obvious from the official Python documentation except this concept. Thanks to all.
The pithiest comment I guess is that the entry point is the 1st line of your script that is not a function or a class. You don't necessarily need to use the if hack unless you want to and your script is meant to be imported.
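For readers unfamiliar with "the if hack" mentioned above, this is the usual idiom; a minimal sketch:

    def main():
        print("program logic starts here")

    # Execution begins at the first top-level statement. This guard runs
    # main() only when the file is executed directly, not when imported.
    if __name__ == "__main__":
        main()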
0
false
2
297
2009-10-10 17:56:09.687
Python for C++ or Java Programmer
I have a background in C++, Java, and Objective-C programming, but I am finding it hard to learn Python; basically, I can't tell where its "main function" is, or where the program starts executing. So is there any tutorial/book which can teach Python to people who have a background in C++ or Java? Basically something which can show how you were doing this in C++ and how this is done in Python. OK, I think I did not put the question heading or question right; basically I was confused about the "main" function, otherwise other things are quite obvious from the official Python documentation except this concept. Thanks to all
I started Python over a year ago too, also from a C++ background. I've learned that everything is simpler in Python; you don't need to worry so much about whether you're doing it right, because you probably are. Most of the things came naturally. I can't say I've read a book or anything; I usually pestered the guys in #python on freenode a lot and looked at lots of other great code out there. Good luck :)
0
false
2
297
2009-10-12 07:06:48.173
Generate random directories/files given number of files and depth
I'd like to profile some VCS software, and to do so I want to generate a set of random files, in randomly arranged directories. I'm writing the script in Python, but my question is briefly: how do I generate a random directory tree with an average number of sub-directories per directory and some broad distribution of files per directory? Clarification: I'm not comparing different VCS repo formats (eg. SVN vs Git vs Hg), but profiling software that deals with SVN (and eventually other) working copies and repos. The constraints I'd like are to specify the total number of files (call it 'N', probably ~10k-100k) and the maximum depth of the directory structure ('L', probably 2-10). I don't care how many directories are generated at each level, and I don't want to end up with 1 file per dir, or 100k all in one dir. The distribution is something I'm not sure about, since I don't know whether VCS' (SVN in particular) would perform better or worse with a very uniform structure or a very skewed structure. Nonetheless, it would be nice if I could come up with an algorithm that didn't "even out" for large numbers. My first thoughts were: generate the directory tree using some method, and then uniformly populate the tree with files (treating each dir equally, with no regard as to nesting). My back-of-the-envelope calcs tell me that if there are 'L' levels, with 'D' subdirs per dir, and about sqrt(N) files per dir, then there will be about D^L dirs, so N =~ sqrt(N)*(D^L) => D =~ N^(1/2L). So now I have an approximate value for 'D', how can I generate the tree? How do I populate the files? I'd be grateful just for some pointers to good resources on algorithms I could use. My searching only found pretty applets/flash.
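As a rough sketch of the construction outlined above (all names and parameters are illustrative, and the file contents are just filler), one could build the directory skeleton level by level and then scatter files uniformly:

    import os
    import random

    def build_tree(root, n_files, depth):
        # D =~ N^(1/2L), per the back-of-the-envelope estimate above.
        branch = max(2, int(round(n_files ** (1.0 / (2 * depth)))))
        os.makedirs(root)
        dirs = [root]
        frontier = [root]
        for _ in range(depth):
            next_frontier = []
            for d in frontier:
                for i in range(branch):
                    sub = os.path.join(d, "dir%d" % i)
                    os.makedirs(sub)
                    dirs.append(sub)
                    next_frontier.append(sub)
            frontier = next_frontier
        # Scatter the files uniformly over all directories.
        for i in range(n_files):
            target = random.choice(dirs)
            with open(os.path.join(target, "file%d.txt" % i), "w") as f:
                f.write("x" * random.randint(0, 4096))  # filler content

    build_tree("testrepo", n_files=10000, depth=4)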
Why not download some real open source repos and use those? Have you thought about what goes into the files? Is that random data too?
1.2
true
1
298
2009-10-12 13:22:07.443
When and how to use the builtin function property() in python
It appears to me that except for a little syntactic sugar, property() does nothing good. Sure, it's nice to be able to write a.b=2 instead of a.setB(2), but hiding the fact that a.b=2 isn't a simple assignment looks like a recipe for trouble, either because some unexpected result can happen, such as a.b=2 actually causing a.b to be 1. Or an exception is raised. Or a performance problem. Or just being confusing. Can you give me a concrete example of a good usage of it? (Using it to patch problematic code doesn't count ;-)
A basic reason is really simply that it looks better. It is more Pythonic. Especially for libraries: something.getValue() looks less nice than something.value. In Plone (a pretty big CMS), you used to have document.setTitle(), which does a lot of things like storing the value, indexing it again, and so on. Just doing document.title = 'something' is nicer. You know that a lot is happening behind the scenes anyway.
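For a concrete illustration of what property() buys you syntactically (the class and attribute names are made up):

    class Something(object):
        def __init__(self):
            self._value = 0

        def _get_value(self):
            return self._value

        def _set_value(self, new):
            # Anything can happen behind the scenes here: validation,
            # re-indexing, notifications, ...
            self._value = new

        value = property(_get_value, _set_value)

    s = Something()
    s.value = 42   # reads like plain assignment, but runs _set_value
    print(s.value)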
0.249709
false
4
299
2009-10-12 13:22:07.443
When and how to use the builtin function property() in python
It appears to me that except for a little syntactic sugar, property() does nothing good. Sure, it's nice to be able to write a.b=2 instead of a.setB(2), but hiding the fact that a.b=2 isn't a simple assignment looks like a recipe for trouble, either because some unexpected result can happen, such as a.b=2 actually causing a.b to be 1. Or an exception is raised. Or a performance problem. Or just being confusing. Can you give me a concrete example of a good usage of it? (Using it to patch problematic code doesn't count ;-)
but hiding the fact that a.b=2 isn't a simple assignment looks like a recipe for trouble You're not hiding that fact though; that fact was never there to begin with. This is python -- a high-level language; not assembly. Few of the "simple" statements in it boil down to single CPU instructions. To read simplicity into an assignment is to read things that aren't there. When you say x.b = c, probably all you should think is that "whatever just happened, x.b should now be c".
0.644192
false
4
299
2009-10-12 13:22:07.443
When and how to use the builtin function property() in python
It appears to me that except for a little syntactic sugar, property() does nothing good. Sure, it's nice to be able to write a.b=2 instead of a.setB(2), but hiding the fact that a.b=2 isn't a simple assignment looks like a recipe for trouble, either because some unexpected result can happen, such as a.b=2 actually causing a.b to be 1. Or an exception is raised. Or a performance problem. Or just being confusing. Can you give me a concrete example of a good usage of it? (Using it to patch problematic code doesn't count ;-)
You are correct, it is just syntactic sugar. It may be that there are no good uses of it, depending on your definition of problematic code. Consider that you have a class Foo that is widely used in your application. Now this application has got quite large, and further, let's say it's a webapp that has become very popular. You identify that Foo is causing a bottleneck. Perhaps it is possible to add some caching to Foo to speed it up. Using properties will let you do that without changing any code or tests outside of Foo. Yes, of course this is problematic code, but you just saved a lot of $$ fixing it quickly. What if Foo is in a library that you have hundreds or thousands of users for? Well, you saved yourself having to tell them to do an expensive refactor when they upgrade to the newest version of Foo. The release notes have a line item about Foo instead of a paragraph-long porting guide. Experienced Python programmers don't expect much from a.b=2 other than a.b==2, but they know even that may not be true. What happens inside the class is its own business.
0.151877
false
4
299
2009-10-12 13:22:07.443
When and how to use the builtin function property() in python
It appears to me that except for a little syntactic sugar, property() does nothing good. Sure, it's nice to be able to write a.b=2 instead of a.setB(2), but hiding the fact that a.b=2 isn't a simple assignment looks like a recipe for trouble, either because some unexpected result can happen, such as a.b=2 actually causing a.b to be 1. Or an exception is raised. Or a performance problem. Or just being confusing. Can you give me a concrete example of a good usage of it? (Using it to patch problematic code doesn't count ;-)
Getters and setters are needed for many purposes, and are very useful because they are transparent to the code. Say object Something has the property height; you assign a value as Something.height = 10, but if height has a getter and setter, then at the time you assign that value you can do many things in the procedures, like validating a min or max value, triggering an event because the height changed, or automatically setting other values as a function of the new height value. All of that may occur the moment the Something.height value is assigned. Remember, you don't need to call them in your code; they are executed automatically the moment you read or write the property value. In some ways they are like event procedures, for when property X changes value and when property X's value is read.
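A minimal sketch of the validation idea described above (the class name and bounds are illustrative):

    class Shape(object):
        def __init__(self):
            self._height = 0

        def _get_height(self):
            return self._height

        def _set_height(self, value):
            # Runs automatically on every assignment to .height
            if not 0 <= value <= 1000:
                raise ValueError("height must be between 0 and 1000")
            self._height = value
            # ...a "height changed" event could be triggered here too.

        height = property(_get_height, _set_height)

    s = Shape()
    s.height = 10    # fine
    s.height = -5    # raises ValueError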
0.050976
false
4
299
2009-10-12 20:22:08.483
Recommendations for perl-to-python interoperation?
We have a sizable code base in Perl. For the foreseeable future, our codebase will remain in Perl. However, we're looking into adding a GUI-based dashboard utility. We are considering writing the dashboard in Python (using tkinter or wx). The problem, however, is that we would like to leverage our existing Perl codebase in the Python GUI. So... any suggestions on how to achieve this? We are considering a few options: Write executables (in Perl) that mimic function calls; invoke those Perl executables in Python as system calls. Write Perl executables on-the-fly inside the Python dashboard, and invoke the (temporary) Perl executable. Find some kind of Perl-to-Python converter or binding. Any other ideas? I'd love to hear if other people have confronted this problem. Unfortunately, it's not an option to convert the codebase itself to Python at this time.
Interesting project: I would opt for loose-coupling and consider an XML-RPC or JSON based approach.
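To sketch what the loose coupling could look like on the Python side (the URL and method name are hypothetical; the Perl side would expose them via an XML-RPC server module such as RPC::XML::Server):

    # Python 2-era sketch: call Perl functions over XML-RPC as if they
    # were local. Endpoint and method names are made up for illustration.
    import xmlrpclib

    perl_backend = xmlrpclib.ServerProxy("http://localhost:8080/RPC2")
    report = perl_backend.generate_report("2009-10")
    print(report)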
0.050976
false
3
300
2009-10-12 20:22:08.483
Recommendations for perl-to-python interoperation?
We have a sizable code base in Perl. For the foreseeable future, our codebase will remain in Perl. However, we're looking into adding a GUI-based dashboard utility. We are considering writing the dashboard in Python (using tkinter or wx). The problem, however, is that we would like to leverage our existing Perl codebase in the Python GUI. So... any suggestions on how to achieve this? We are considering a few options: Write executables (in Perl) that mimic function calls; invoke those Perl executables in Python as system calls. Write Perl executables on-the-fly inside the Python dashboard, and invoke the (temporary) Perl executable. Find some kind of Perl-to-Python converter or binding. Any other ideas? I'd love to hear if other people have confronted this problem. Unfortunately, it's not an option to convert the codebase itself to Python at this time.
I hate to be another one in the chorus, but... Avoid the use of an alternate language. Use Wx so its native look and feel makes the application look "real" to non-technical audiences. Download the Padre source code and see how it does Wx Perl code, then steal rampantly from its best tricks, or maybe just gut it and use the application skeleton (using the Artistic half of the Perl dual license to make it legal). Build your own Strawberry Perl subclass to package the application as an MSI installer and push it out across the corporate Active Directory domain. Of course, I only say all this because you said "Dashboard", which I read as "Corporate", which then makes me assume a Microsoft AD network...
0.342695
false
3
300
2009-10-12 20:22:08.483
Recommendations for perl-to-python interoperation?
We have a sizable code base in Perl. For the foreseeable future, our codebase will remain in Perl. However, we're looking into adding a GUI-based dashboard utility. We are considering writing the dashboard in Python (using tkinter or wx). The problem, however, is that we would like to leverage our existing Perl codebase in the Python GUI. So... any suggestions on how to achieve this? We are considering a few options: Write executables (in Perl) that mimic function calls; invoke those Perl executables in Python as system calls. Write Perl executables on-the-fly inside the Python dashboard, and invoke the (temporary) Perl executable. Find some kind of Perl-to-Python converter or binding. Any other ideas? I'd love to hear if other people have confronted this problem. Unfortunately, it's not an option to convert the codebase itself to Python at this time.
Well, if you really want to write the GUI in another language (which, seriously, is just a bad idea, since it will cost you more than it could ever benefit you), the thing you should do is the following: Document your Perl app in terms of the services it provides. You should do it with XML Schema Definition - XSD - for the data types and Web Services Description Language - WSDL - for the actual service. Implement the services in Perl, possibly using Catalyst::Controller::SOAP, or just XML::Compile::SOAP. Consume the services from your whatever-language GUI interface. Profit. But honestly, I really suggest you take a look at the Perl GTK2 binding; it is awesome, including features such as implementing a Gtk class entirely in Perl and using it as an argument to a function written in C - for instance, you can write a model class for a gtk tree entirely in Perl.
0.101688
false
3
300
2009-10-15 05:13:02.970
Create a user-group in linux using python
I want to create a user group using Python on a CentOS system. When I say 'using Python' I mean I don't want to do something like os.system and give the unix command to create a new group. I would like to know if there is any Python module that deals with this. Searching on the net did not reveal much about what I want, except for Python user groups, so I had to ask this. I learned about the grp module by searching here on SO, but couldn't find anything about creating a group. EDIT: I don't know if I have to start a new question for this, but I would also like to know how to add (existing) users to the newly created group. Any help appreciated. Thank you.
There are no library calls for creating a group. This is because there's really no such thing as creating a group. A GID is simply a number assigned to a process or a file. All these numbers exist already - there is nothing you need to do to start using a GID. With the appropriate privileges, you can call chown(2) to set the GID of a file to any number, or setgid(2) to set the GID of the current process (there's a little more to it than that, with effective IDs, supplementary IDs, etc). Giving a name to a GID is done by an entry in /etc/group on basic Unix/Linux/POSIX systems, but that's really just a convention adhered to by the Unix/Linux/POSIX userland tools. Other network-based directories also exist, as mentioned by Jack Lloyd. The man page group(5) describes the format of the /etc/group file, but it is not recommended that you write to it directly. Your distribution will have policies on how unnamed GIDs are allocated, such as reserving certain spaces for different purposes (fixed system groups, dynamic system groups, user groups, etc). The range of these number spaces differs on different distributions. These policies are usually encoded in the command-line tools that a sysadmin uses to assign unnamed GIDs. This means the best way to add a group locally is to use the command-line tools.
0.101688
false
3
301
2009-10-15 05:13:02.970
Create a user-group in linux using python
I want to create a user group using Python on a CentOS system. When I say 'using Python' I mean I don't want to do something like os.system and give the unix command to create a new group. I would like to know if there is any Python module that deals with this. Searching on the net did not reveal much about what I want, except for Python user groups, so I had to ask this. I learned about the grp module by searching here on SO, but couldn't find anything about creating a group. EDIT: I don't know if I have to start a new question for this, but I would also like to know how to add (existing) users to the newly created group. Any help appreciated. Thank you.
I don't know of a python module to do it, but the /etc/group and /etc/gshadow format is pretty standard, so if you wanted you could just open the files, parse their current contents and then add the new group if necessary. Before you go doing this, consider: What happens if you try to add a group that already exists on the system What happens when multiple instances of your program try to add a group at the same time What happens to your code when an incompatible change is made to the group format a couple releases down the line NIS, LDAP, Kerberos, ... If you're not willing to deal with these kinds of problems, just use the subprocess module and run groupadd. It will be way less likely to break your customers machines. Another thing you could do that would be less fragile than writing your own would be to wrap the code in groupadd.c (in the shadow package) in Python and do it that way. I don't see this buying you much versus just exec'ing it, though, and it would add more complexity and fragility to your build.
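A minimal sketch of the subprocess route recommended above (the group and user names are illustrative, and this needs root privileges):

    import subprocess

    def create_group(name):
        # Let the system tool handle distribution policy, locking, and
        # keeping /etc/group and /etc/gshadow consistent.
        subprocess.check_call(["groupadd", name])

    def add_user_to_group(user, group):
        # -a appends the group to the user's supplementary groups.
        subprocess.check_call(["usermod", "-a", "-G", group, user])

    create_group("mygroup")
    add_user_to_group("alice", "mygroup")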
1.2
true
3
301
2009-10-15 05:13:02.970
Create a user-group in linux using python
I want to create a user group using Python on a CentOS system. When I say 'using Python' I mean I don't want to do something like os.system and give the unix command to create a new group. I would like to know if there is any Python module that deals with this. Searching on the net did not reveal much about what I want, except for Python user groups, so I had to ask this. I learned about the grp module by searching here on SO, but couldn't find anything about creating a group. EDIT: I don't know if I have to start a new question for this, but I would also like to know how to add (existing) users to the newly created group. Any help appreciated. Thank you.
I think you should use the command-line programs from your program; a lot of care has gone into making sure that they don't break the groups file if something goes wrong. However, the file format is straightforward enough that you could write something yourself if you choose to go that way.
0.470104
false
3
301
2009-10-17 10:21:43.707
How check if a task is already in python Queue?
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue; when a certain thread has finished processing a page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there is more than one thread and they get the same links on different pages, they put duplicate links into the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
SQLite is so simple to use and would fit perfectly... just a suggestion.
0.031387
false
5
302
2009-10-17 10:21:43.707
How check if a task is already in python Queue?
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue; when a certain thread has finished processing a page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there is more than one thread and they get the same links on different pages, they put duplicate links into the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
The way I solved this (actually I did this in Scala, not Python) was to use both a Set and a Queue, only adding links to the queue (and set) if they did not already exist in the set. Both the set and queue were encapsulated in a single thread, exposing only a queue-like interface to the consumer threads. Edit: someone else suggested SQLite and that is also something I am considering, if the set of visited URLs needs to grow large. (Currently each crawl is only a few hundred pages so it easily fits in memory.) But the database is something that can also be encapsulated within the set itself, so the consumer threads need not be aware of it.
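A minimal Python sketch of that set-plus-queue encapsulation (names are illustrative):

    import threading
    from Queue import Queue  # "queue" in Python 3

    class UniqueQueue(object):
        """Queue wrapper that silently drops URLs it has seen before."""
        def __init__(self):
            self._queue = Queue()
            self._seen = set()
            self._lock = threading.Lock()

        def put(self, url):
            with self._lock:
                if url not in self._seen:
                    self._seen.add(url)
                    self._queue.put(url)

        def get(self):
            return self._queue.get()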
0.031387
false
5
302
2009-10-17 10:21:43.707
How check if a task is already in python Queue?
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue; when a certain thread has finished processing a page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there is more than one thread and they get the same links on different pages, they put duplicate links into the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
Why only use the array (ideally, a dictionary would be even better) to filter things you've already visited? Add things to your array/dictionary as soon as you queue them up, and only add them to the queue if they're not already in the array/dict. Then you have 3 simple separate things: Links not yet seen (neither in queue nor array/dict) Links scheduled to be visited (in both queue and array/dict) Links already visited (in array/dict, not in queue)
0.031387
false
5
302
2009-10-17 10:21:43.707
How check if a task is already in python Queue?
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue; when a certain thread has finished processing a page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there is more than one thread and they get the same links on different pages, they put duplicate links into the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
Also, instead of a set you might try using a dictionary. Operations on sets tend to get rather slow when they're big, whereas a dictionary lookup is nice and quick. My 2c.
-0.093914
false
5
302
2009-10-17 10:21:43.707
How check if a task is already in python Queue?
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue; when a certain thread has finished processing a page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there is more than one thread and they get the same links on different pages, they put duplicate links into the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
Instead of an "array of pages already visited", make an "array of pages already added to the queue".
0
false
5
302
2009-10-18 20:56:01.813
PHP, Python, Ruby application with multiple RDBMS
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to. I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global; it doesn't apply to only a few languages. Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples, I could mention putting lots of stored procedures/packages in the database where it makes sense. And those are different for every RDBMS. By using only a limited set of features, commonly available to many RDBMSes, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer. So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines? I am especially interested in hearing how you separate/use the queries that are written especially for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMSes: Oracle, DB2 and SQL Server, and for each of these you write a separate SQL statement in order to make use of all the features this RDBMS has to offer. How do you do it? Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not?
You cannot have your cake and eat it too; choose one of the following options. Use your database abstraction layer whenever you can, and in the rare cases when you have a need for a hand-made query (e.g. for performance reasons), stick to the lowest common denominator and don't use stored procedures or any proprietary extensions that your database has to offer. In this case, deploying the application on a different RDBMS should be trivial. Use the full power of your expensive RDBMS, but take into account that your application won't be easily portable. When the need arises you will have to spend considerable effort on porting and maintenance. Of course, a decent layered design encapsulating all the differences in a single module or class will help in this endeavor. In other words, you should consider how probable it is that your application will be deployed to multiple RDBMSes and make an informed choice.
1.2
true
3
303
2009-10-18 20:56:01.813
PHP, Python, Ruby application with multiple RDBMS
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to. I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global; it doesn't apply to only a few languages. Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples, I could mention putting lots of stored procedures/packages in the database where it makes sense. And those are different for every RDBMS. By using only a limited set of features, commonly available to many RDBMSes, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer. So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines? I am especially interested in hearing how you separate/use the queries that are written especially for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMSes: Oracle, DB2 and SQL Server, and for each of these you write a separate SQL statement in order to make use of all the features this RDBMS has to offer. How do you do it? Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not?
If you want to leverage the bells and whistles of various RDBMSes, you can certainly do it. Just apply standard OO principles. Figure out what kind of API your persistence layer will need to provide. You'll end up writing a set of isomorphic persistence adapter classes. From the perspective of your model code (which will be calling adapter methods to load and store data), these classes are identical. Writing good test coverage should be easy, and good tests will make life a lot easier. Deciding how much abstraction is provided by the persistence adapters is the trickiest part, and is largely application-specific. As for whether this is worth the trouble: it depends. It's a good exercise if you've never done it before. It may be premature if you don't actually know for sure what your target databases are. A good strategy might be to implement two persistence adapters to start. Let's say you expect the most common back end will be MySQL. Implement one adapter tuned for MySQL. Implement a second that uses your database abstraction library of choice, and uses only standard and widely available SQL features. Now you've got support for a ton of back ends (everything supported by your abstraction library of choice), plus tuned support for MySQL. If you decide you then want to provide an optimized adapter for Oracle, you can implement it at your leisure, and you'll know that your application can support swappable database back-ends.
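A skeleton of the isomorphic adapters described above (all names are illustrative, and the method bodies are stubs):

    class PersistenceAdapter(object):
        """The API the model code programs against."""
        def load(self, key):
            raise NotImplementedError
        def store(self, key, obj):
            raise NotImplementedError

    class GenericSQLAdapter(PersistenceAdapter):
        """Portable: lowest-common-denominator SQL via the abstraction library."""
        def load(self, key):
            pass  # standard SELECT
        def store(self, key, obj):
            pass  # standard INSERT/UPDATE

    class MySQLAdapter(PersistenceAdapter):
        """Tuned: free to use MySQL-only features."""
        def store(self, key, obj):
            pass  # e.g. INSERT ... ON DUPLICATE KEY UPDATE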
0.201295
false
3
303
2009-10-18 20:56:01.813
PHP, Python, Ruby application with multiple RDBMS
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to. I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global; it doesn't apply to only a few languages. Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples, I could mention putting lots of stored procedures/packages in the database where it makes sense. And those are different for every RDBMS. By using only a limited set of features, commonly available to many RDBMSes, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer. So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines? I am especially interested in hearing how you separate/use the queries that are written especially for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMSes: Oracle, DB2 and SQL Server, and for each of these you write a separate SQL statement in order to make use of all the features this RDBMS has to offer. How do you do it? Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not?
It would be great if code written for one platform would work on every other without any modification whatsoever, but this is usually not the case and probably never will be. What the current frameworks do is about all anyone can do.
0
false
3
303
2009-10-19 10:42:57.790
Caching system for dynamically created files?
I have a web server that is dynamically creating various reports in several formats (pdf and doc files). The files require a fair amount of CPU to generate, and it is fairly common to have situations where two people are creating the same report with the same input. Inputs: raw data input as a string (equations, numbers, and lists of words), arbitrary length, almost 99% will be less than about 200 words; the version of the report creation tool. When a user attempts to generate a report, I would like to check to see if a file already exists with the given input, and if so return a link to the file. If the file doesn't already exist, then I would like to generate it as needed. What solutions are already out there? I've cached simple HTTP requests before, but the keys were extremely simple (usually database ids). If I have to do this myself, what is the best way? The input can be several hundred words, and I was wondering how I should go about transforming the strings into keys sent to the cache. //entire input, uses too much memory, one to one mapping cache['one two three four five six seven eight nine ten eleven...'] //short keys cache['one two'] => 5 results, then I must narrow these down even more Is this something that should be done in a database, or is it better done within the web app code (Python in my case)? Thank you, everyone.
This is what Apache is for. Create a directory that will have the reports. Configure Apache to serve files from that directory. If the report exists, redirect to a URL that Apache will serve. Otherwise, the report doesn't exist, so create it. Then redirect to a URL that Apache will serve. There's no "hashing". You have a key ("a string (equations, numbers, and lists of words), arbitrary length, almost 99% will be less than about 200 words") and a value, which is a file. Don't waste time on a hash. You just have a long key. You can compress this key somewhat by making a "slug" out of it: remove punctuation, replace spaces with _, that kind of thing. You should create an internal surrogate key which is a simple integer. You're simply translating a long key to a "report" which either exists as a file or will be created as a file.
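A small sketch of the slug-plus-lookup idea (the directory, naming scheme, and generate_report helper are all hypothetical):

    import os
    import re

    REPORT_DIR = "/var/www/reports"  # the directory Apache serves

    def slugify(text):
        # Compress the long key: lowercase, non-alphanumerics become _
        return re.sub(r"[^a-z0-9]+", "_", text.lower()).strip("_")

    def report_path(raw_input, tool_version):
        name = "%s-v%s.pdf" % (slugify(raw_input), tool_version)
        path = os.path.join(REPORT_DIR, name)
        if not os.path.exists(path):
            generate_report(raw_input, path)  # expensive step, hypothetical
        return path  # redirect the client to the URL for this file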
1.2
true
1
304
2009-10-22 00:25:16.950
Object oriented design?
I'm trying to learn object oriented programming, but am having a hard time overcoming my structured programming background (mainly C, but many others over time). I thought I'd write a simple check register program as an exercise. I put something together pretty quickly (Python is a great language), with my data in some global variables and with a bunch of functions. I can't figure out if this design can be improved by creating a number of classes to encapsulate some of the data and functions and, if so, how to change the design. My data is basically a list of accounts ['checking', 'saving', 'Amex'], a list of categories ['food', 'shelter', 'transportation'] and lists of dicts that represent transactions [{'date':xyz, 'cat':xyz, 'amount':xyz, 'description':xyz}]. Each account has an associated list of dicts. I then have functions at the account level (create-acct(), display-all-accts(), etc.) and the transaction level (display-entries-in-account(), enter-a-transaction(), edit-a-transaction(), display-entries-between-dates(), etc.) The user sees a list of accounts, then can choose an account and see the underlying transactions, with the ability to add, delete, edit, etc. the accounts and transactions. I currently implement everything in one large class, so that I can use self.variable throughout, rather than explicit globals. In short, I'm trying to figure out if re-organizing this into some classes would be useful and, if so, how to design those classes. I've read some OOP books (most recently Object-Oriented Thought Process). I like to think my existing design is readable and does not repeat itself. Any suggestions would be appreciated.
"My data is basically a list of accounts" Account is a class. "dicts that represent transactions" Transaction appears to be a class. You happen to have elected to represent this as a dict. That's your first pass at OO design. Focus on the Responsibilities and Collaborators. You have at least two classes of objects.
0.240117
false
3
305
2009-10-22 00:25:16.950
Object oriented design?
I'm trying to learn object oriented programming, but am having a hard time overcoming my structured programming background (mainly C, but many others over time). I thought I'd write a simple check register program as an exercise. I put something together pretty quickly (Python is a great language), with my data in some global variables and with a bunch of functions. I can't figure out if this design can be improved by creating a number of classes to encapsulate some of the data and functions and, if so, how to change the design. My data is basically a list of accounts ['checking', 'saving', 'Amex'], a list of categories ['food', 'shelter', 'transportation'] and lists of dicts that represent transactions [{'date':xyz, 'cat':xyz, 'amount':xyz, 'description':xyz}]. Each account has an associated list of dicts. I then have functions at the account level (create-acct(), display-all-accts(), etc.) and the transaction level (display-entries-in-account(), enter-a-transaction(), edit-a-transaction(), display-entries-between-dates(), etc.) The user sees a list of accounts, then can choose an account and see the underlying transactions, with the ability to add, delete, edit, etc. the accounts and transactions. I currently implement everything in one large class, so that I can use self.variable throughout, rather than explicit globals. In short, I'm trying to figure out if re-organizing this into some classes would be useful and, if so, how to design those classes. I've read some OOP books (most recently Object-Oriented Thought Process). I like to think my existing design is readable and does not repeat itself. Any suggestions would be appreciated.
You don't have to throw out structured programming to do object-oriented programming. The code is still structured, it just belongs to the objects rather than being separate from them. In classical programming, code is the driving force that operates on data, leading to a dichotomy (and the possibility that code can operate on the wrong data). In OO, data and code are inextricably entwined - an object contains both data and the code to operate on that data (although technically the code (and sometimes some data) belongs to the class rather than an individual object). Any client code that wants to use those objects should do so only by using the code within that object. This prevents the code/data mismatch problem. For a bookkeeping system, I'd approach it as follows: Low-level objects are accounts and categories (actually, in accounting, there's no difference between these; this is a false separation only exacerbated by Quicken et al to separate balance sheet items from P&L - I'll refer to them as accounts only). An account object consists of (for example) an account code, name and starting balance, although in the accounting systems I've worked on, the starting balance is always zero - I've always used a "startup" transaction to set the balances initially. Transactions are a balanced object which consists of a group of accounts/categories with associated movements (changes in dollar value). By balanced, I mean they must sum to zero (this is the crux of double entry accounting). This means it's a date, description and an array or vector of elements, each containing an account code and value. The overall accounting "object" (the ledger) is then simply the list of all accounts and transactions. Keep in mind that this is the "back-end" of the system (the data model). You will hopefully have separate classes for viewing the data (the view) which will allow you to easily change it, depending on user preferences. For example, you may want the whole ledger, just the balance sheet or just the P&L. Or you may want different date ranges. One thing I'd stress to make a good accounting system: you do need to think like a bookkeeper. By that I mean lose the artificial difference between "accounts" and "categories", since it will make your system a lot cleaner (you need to be able to have transactions between two asset-class accounts (such as a bank transfer) and this won't work if every transaction needs a "category"). The data model should reflect the data, not the view. The only difficulty there is remembering that asset-class accounts have the opposite sign from what you expect (negative values for your cash-at-bank mean you have money in the bank, and that very high positive value loan for the company sports car is a debt, for example). This will make the double-entry aspect work perfectly, but you have to remember to reverse the signs of asset-class accounts (assets, liabilities and equity) when showing or printing the balance sheet.
0.516408
false
3
305
2009-10-22 00:25:16.950
Object oriented design?
I'm trying to learn object oriented programming, but am having a hard time overcoming my structured programming background (mainly C, but many others over time). I thought I'd write a simple check register program as an exercise. I put something together pretty quickly (Python is a great language), with my data in some global variables and with a bunch of functions. I can't figure out if this design can be improved by creating a number of classes to encapsulate some of the data and functions and, if so, how to change the design. My data is basically a list of accounts ['checking', 'saving', 'Amex'], a list of categories ['food', 'shelter', 'transportation'] and lists of dicts that represent transactions [{'date':xyz, 'cat':xyz, 'amount':xyz, 'description':xyz}]. Each account has an associated list of dicts. I then have functions at the account level (create-acct(), display-all-accts(), etc.) and the transaction level (display-entries-in-account(), enter-a-transaction(), edit-a-transaction(), display-entries-between-dates(), etc.) The user sees a list of accounts, then can choose an account and see the underlying transactions, with the ability to add, delete, edit, etc. the accounts and transactions. I currently implement everything in one large class, so that I can use self.variable throughout, rather than explicit globals. In short, I'm trying to figure out if re-organizing this into some classes would be useful and, if so, how to design those classes. I've read some OOP books (most recently Object-Oriented Thought Process). I like to think my existing design is readable and does not repeat itself. Any suggestions would be appreciated.
There are many 'mindsets' that you could adopt to help in the design process (some of which point towards OO and some that don't). I think it is often better to start with questions rather than answers (i.e. rather than saying, 'how can I apply inheritance to this', you should ask how this system might be expected to change over time). Here are a few questions to answer that might point you towards design principles: Are others going to use this API? Are they likely to break it? (info hiding) Do I need to deploy this across many machines? (state management, lifecycle management) Do I need to interoperate with other systems, runtimes, languages? (abstraction and standards) What are my performance constraints? (state management, lifecycle management) What kind of security environment does this component live in? (abstraction, info hiding, interoperability) How would I construct my objects, assuming I used some? (configuration, inversion of control, object decoupling, hiding implementation details) These aren't direct answers to your question, but they might put you in the right frame of mind to answer it yourself. :)
0.081452
false
3
305
2009-10-22 03:06:24.590
How to get output?
I am using the Python/C API with my app and am wondering how you can get console output with a GUI app. When there is a script error, it is displayed via printf, but this obviously has no effect with a GUI app. I want to be able to obtain the output without creating a console. Can this be done? Edit - I'm using Windows, btw. Edit - The Python/C library internally calls printf and does so before any script can be loaded and run. If there is an error I want to be able to get it.
If by printf you mean exactly that call from C code, you need to redirect (and un-buffer) your standard output (file descriptor 1) to somewhere you can pick up the data from -- far from trivial, esp. in Windows, although maybe doable. But why not just change that call in your C code to something more sensible? (Worst case, a geprintf function of your own devising that mimics printf to build a string, then directs that string appropriately.) If you actually mean print statements in Python code, it's much easier -- just set sys.stdout to an object with a write method accepting a string, and you can have that method do whatever you want, including logging, writing on a GUI window, whatever you wish. Ah, were it that simple at the C level!-)
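A minimal sketch of the sys.stdout replacement described above (the callback here just appends to a list; in a real app it would feed a GUI widget):

    import sys

    class GuiWriter(object):
        """File-like object: anything printed is handed to a callback."""
        def __init__(self, callback):
            self.callback = callback  # e.g. a GUI text widget's append method

        def write(self, text):
            self.callback(text)

    captured = []
    sys.stdout = GuiWriter(captured.append)
    sys.stderr = GuiWriter(captured.append)
    print("this now ends up wherever the callback puts it")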
0.201295
false
1
306
2009-10-23 14:41:46.160
Python Debugging: code editing on the fly
I am new to python and haven't been able to find out whether this is possible or not. I am using the PyDev plugin under Eclipse, and basically all I want to find out is, is it possible to edit code whilst you're sitting at a breakpoint? I.e. Edit code whilst you're debugging. It allows me to do this at present, but it seems to still be executing the line of code that previously existed before I made changes. Also, are you able to drag program execution back like you can in VBA and C# for example? If either of these are possible, how can I enable them?
You can run arbitrary commands in the console during the breakpoint. For my needs, this typically achieves the same purpose as coding live, although I do wish it were as elegant as simply using the editor.
0.265586
false
2
307
2009-10-23 14:41:46.160
Python Debugging: code editing on the fly
I am new to python and haven't been able to find out whether this is possible or not. I am using the PyDev plugin under Eclipse, and basically all I want to find out is, is it possible to edit code whilst you're sitting at a breakpoint? I.e. Edit code whilst you're debugging. It allows me to do this at present, but it seems to still be executing the line of code that previously existed before I made changes. Also, are you able to drag program execution back like you can in VBA and C# for example? If either of these are possible, how can I enable them?
When you start a Python program, it will be compiled into bytecode (and possibly saved as .pyc file). That means you can change the source but since you don't "open" the source again, the change won't be picked up. There are systems like TurboGears (a web framework) which detect these changes and restart themselves but that's probably going to confuse the debugger. Going back in time also isn't possible currently since the bytecode interpreter would need support for this.
0.265586
false
2
307
2009-10-23 15:24:32.580
How do I add text describing the code into a Python source file?
When writing code in Python, how can you write something next to it that explains what the code is doing, but which doesn't affect the code?
Write a comment? Python comments start with #.
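For example:

    count = 0
    count = count + 1  # everything after the # is ignored by Python

    # A comment can also take up a whole line.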
0.265586
false
1
308
2009-10-26 02:52:45.483
wx python card game
I know Python and I'm a newbie with wxPython, but I would like to make a card game. However, I have no idea how to make an image follow the mouse and put it in the middle of the screen when the program is running. It would be nice if you guys could help me out.
Going through the wxPython demo and looking at all the examples would be a good start. You'll likely find the page Using Images | DragImage to be useful, since you'll probably want cards that you can drag. Generally, the demo can help you do most things in wxPython, and also show you what wxPython can do, and it's worth the time to see every demo. This approach works for everything except the very first step of getting an app running and putting a frame in it (since the demo itself is an app, but not a simple one). Any of the basic tutorials can help you get started with an app and frame in just a very few lines of code.
0.386912
false
1
309
2009-10-28 00:47:25.467
How do I make powerpoint play presentations/load up ppts automatically?
I was wondering how I can make a script load a PowerPoint file, advance slides automatically and put it on full screen. Is there a way to make Windows do that? Can I just load powerpoint.exe and maybe use some sort of API/pipe to give commands from another script? To make a case: I'm making a script that automatically scans a folder in Windows (using Python) and loads up the PowerPoint presentations and keeps playing them in order.
Save the file with the extension ".pps". That will make PowerPoint open the file in presentation mode. The presentation needs to be designed to advance slides automatically, else you will have to script that part.
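A rough sketch of the scanning script (Windows-only; the folder path and delay are illustrative, and the slide advancing must still come from the presentations themselves):

    import glob
    import os
    import time

    # .pps files open straight into presentation mode, so launching them
    # one after another approximates a playlist.
    for pps in sorted(glob.glob(r"C:\presentations\*.pps")):
        os.startfile(pps)   # opens with PowerPoint, full screen
        time.sleep(300)     # crude: wait before starting the next one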
0.101688
false
1
310
2009-10-29 12:26:51.543
Killing Python webservers
I am looking for a simple Python webserver that is easy to kill from within code. Right now, I'm playing with Bottle, but I can't find any way at all to kill it in code. If you know how to kill Bottle (in code, no Ctrl+C) that would be super, but I'll take anything that's Python, simple, and killable.
Raise an exception and handle it in main, or use sys.exit.
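A minimal sketch using the standard library (Python 2.6+, where SocketServer-based servers gained a shutdown() method):

    import threading
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return ["hello"]

    httpd = make_server("localhost", 8080, app)
    t = threading.Thread(target=httpd.serve_forever)
    t.start()

    # ...later, from anywhere in your code:
    httpd.shutdown()   # serve_forever() returns and the server dies cleanly
    t.join()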
0.101688
false
1
311
2009-10-29 18:07:48.057
How this keyword is provided for object instances in C#?
When you have an object instance in C#, you can use the this keyword inside the instance scope. How does the compiler handle it? Is there any assistance for this at runtime? I am mainly wondering how C# does it versus Python, where you have to provide self for every method manually.
This is supported at the CLR level. The argument variable at slot 0 represents the "this" pointer. C# essentially generates calls to this as ldarg.0
0.545705
false
1
312
2009-10-30 16:21:54.357
MS-Access Database getting very large during inserts
I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single month's imports takes about 280mb, but during the import the file size swells to over a gb. Given the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting in between each, are there any techniques for avoiding the increase in file size? Note that no temporary tables are being created/deleted during the process: just inserts into existing tables. And to forestall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007. If it could help, I could preprocess in sqlite. Edit: Just to add some further information (some already listed in my comments): The data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc All processing is happening in Python: all the mdb file is doing is storing the data All of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.) Given the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and/or removing then reinstating indexes. Thanks for the suggestions. Any further suggestions still welcome.
File --> Options --> Current Database --> check the following options: "Use the Cache format that is compatible with Microsoft Access 2010 and later" and "Clear Cache on Close". Then your file will be compacted back to its original size when saved.
-0.067922
false
4
313
2009-10-30 16:21:54.357
MS-Access Database getting very large during inserts
I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single month's imports takes about 280mb, but during the import the file size swells to over a gb. Given the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting in between each, are there any techniques for avoiding the increase in file size? Note that no temporary tables are being created/deleted during the process: just inserts into existing tables. And to forestall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007. If it could help, I could preprocess in sqlite. Edit: Just to add some further information (some already listed in my comments): The data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc All processing is happening in Python: all the mdb file is doing is storing the data All of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.) Given the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and/or removing then reinstating indexes. Thanks for the suggestions. Any further suggestions still welcome.
A common trick, if feasible with regard to the schema and semantics of the application, is to have several MDB files with linked tables. Also, the way the insertions take place matters with regard to the way the file size balloons... For example: batched vs. one/few records at a time, sorted (relative to particular index(es)), number of indexes (as you mentioned, readily dropping some during the insert phase)... Tentatively, a pre-processing approach with, say, storing of new rows to a separate linked table, heap fashion (no indexes), then sorting/indexing this data in a minimal fashion, and "bulk loading" it to its real destination. Similar pre-processing in SQLite (as hinted in the question) would serve the same purpose. Keeping it "all MDB" is maybe easier (fewer languages/processes to learn, fewer inter-op issues [hopefully ;-)])... EDIT: on why inserting records in a sorted/bulk fashion may slow down the MDB file's growth (question from Tony Toews) One of the reasons for MDB files' propensity to grow more quickly than the rate at which text/data is added to them (and their counterpart ability to be easily compacted back down) is that as information is added, some of the nodes that constitute the indexes have to be re-arranged (for overflowing / rebalancing etc.). Such management of the nodes seems to be implemented in a fashion which favors speed over disk space and harmony, and this approach typically serves simple applications / small data rather well. I do not know the specific logic in use for such management, but I suspect that in several cases, node operations cause a particular node (or much of it) to be copied anew, with the old location simply being marked as free/unused but not deleted/compacted/reused. I do have "clinical" (if only a bit outdated) evidence that by performing inserts in bulk we essentially limit the number of opportunities for such duplication to occur and hence we slow the growth. EDIT again: After reading and discussing things from Tony Toews and Albert Kallal, it appears that a possibly more significant source of bloat, in particular in Jet Engine 4.0, is the way locking is implemented. It is therefore important to set the database in single user mode to avoid this. (Read Tony's and Albert's responses for more details.)
0.201295
false
4
313
2009-10-30 16:21:54.357
MS-Access Database getting very large during inserts
I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single month's imports takes about 280mb, but during the import the file size swells to over a gb. Given the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting in between each, are there any techniques for avoiding the increase in file size? Note that no temporary tables are being created/deleted during the process: just inserts into existing tables. And to forestall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007. If it could help, I could preprocess in sqlite. Edit: Just to add some further information (some already listed in my comments): The data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc All processing is happening in Python: all the mdb file is doing is storing the data All of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.) Given the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and/or removing then reinstating indexes. Thanks for the suggestions. Any further suggestions still welcome.
One thing to watch out for is records which are present in the append queries but aren't inserted into the data due to duplicate key values, null required fields, etc. Access will allocate the space taken by the records which aren't inserted. About the only significant thing I'm aware of is to ensure you have exclusive access to the database file. Which might be impossible if doing this during the day. I noticed a change in behavior from Jet 3.51 (used in Access 97) to Jet 4.0 (used in Access 2000) when the Access MDBs started getting a lot larger when doing record appends. I think that if the MDB is being used by multiple folks then records are inserted once per 4k page rather than as many as can be stuffed into a page. Likely because this made index insert/update operations faster. Now compacting does indeed put as many records in the same 4k page as possible but that isn't of help to you.
0.201295
false
4
313
2009-10-30 16:21:54.357
MS-Access Database getting very large during inserts
I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single month's imports takes about 280mb, but during the import the file size swells to over a gb. Given the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting in between each, are there any techniques for avoiding the increase in file size? Note that no temporary tables are being created/deleted during the process: just inserts into existing tables. And to forestall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007. If it could help, I could preprocess in sqlite. Edit: Just to add some further information (some already listed in my comments): The data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc All processing is happening in Python: all the mdb file is doing is storing the data All of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.) Given the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and/or removing then reinstating indexes. Thanks for the suggestions. Any further suggestions still welcome.
Is your script executing a single INSERT statement per row of data? If so, pre-processing the data into a text file of many rows that could then be inserted with a single INSERT statement might improve the efficiency and cut down on the accumulating temporary crud that's causing the bloat. You might also make sure the INSERTs are being executed without transactions; whether or not that happens implicitly depends on the Jet version and the data interface library you're using. By explicitly making sure transactions are off, you could improve the situation. Another possibility is to drop the indexes before the insert, compact, run the insert, compact, reinstate the indexes, and run a final compact.
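A minimal sketch of a batched, autocommitting insert through pyodbc; the DSN, table, and column names here are hypothetical, not anything confirmed by the question:

import pyodbc

# autocommit=True avoids an implicit transaction wrapping every statement
conn = pyodbc.connect('DSN=mydb', autocommit=True)
cursor = conn.cursor()
rows = [(1, 'alpha'), (2, 'beta')]   # a pre-built batch of records
cursor.executemany('INSERT INTO mytable (id, val) VALUES (?, ?)', rows)
conn.close()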
0.067922
false
4
313
2009-11-01 10:48:38.313
How can a recursive regexp be implemented in python?
I'm interested in how recursive regexp matching can be implemented in Python (I haven't found any examples :( ). For example, how would one write an expression that matches a "bracket-balanced" string like "foo(bar(bar(foo)))(foo1)bar1"?
You can't do it with a regexp: Python's re module doesn't support recursive regexps.
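As a non-regex workaround, a minimal sketch that checks bracket balance with a simple depth counter:

def is_balanced(s):
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:   # a closing bracket with no matching opener
                return False
    return depth == 0

print(is_balanced('foo(bar(bar(foo)))(foo1)bar1'))   # True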
0.3154
false
1
314
2009-11-03 10:27:04.573
What if setuptools isn't installed?
I'm just learning the art of writing a setup.py file for my project. I see there's lots of talk about setuptools, which is supposed to be superior to distutils. There's one thing though that I fail to understand, and I didn't see it addressed in any tutorial I've read about this: What if setuptools isn't installed? I understand it's not part of the standard library, so how can you assume the person who wants to install your program will have it installed?
Most libraries I've ever installed for Python print a warning along the lines of "You have to install setuptools". You could do the same, and perhaps add a link so the user doesn't have to search the internet for it.
0.135221
false
5
315
2009-11-03 10:27:04.573
What if setuptools isn't installed?
I'm just learning the art of writing a setup.py file for my project. I see there's lots of talk about setuptools, which is supposed to be superior to distutils. There's one thing though that I fail to understand, and I didn't see it addressed in any tutorial I've read about this: What if setuptools isn't installed? I understand it's not part of the standard library, so how can you assume the person who wants to install your program will have it installed?
The standard way to distribute packages with setuptools includes an ez_setup.py script which will automatically download and install setuptools itself - on Windows I believe it will actually install an executable for easy_install. You can get this from the standard setuptools/easy_install distribution.
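A minimal sketch of the usual bootstrap at the top of setup.py, assuming ez_setup.py has been shipped alongside it (the project metadata below is hypothetical):

# bootstrap setuptools if it isn't installed yet
from ez_setup import use_setuptools
use_setuptools()

from setuptools import setup

setup(
    name='myproject',   # hypothetical metadata
    version='0.1',
)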
0.265586
false
5
315
2009-11-03 10:27:04.573
What if setuptools isn't installed?
I'm just learning the art of writing a setup.py file for my project. I see there's lots of talk about setuptools, which is supposed to be superior to distutils. There's one thing though that I fail to understand, and I didn't see it addressed in any tutorial I've read about this: What if setuptools isn't installed? I understand it's not part of the standard library, so how can you assume the person who wants to install your program will have it installed?
You can't assume it's installed. There are ways around that: you can fall back to distutils (but then why use setuptools in the first place?), or you can install setuptools from within setup.py (but I think that's evil). Use setuptools only if you need it. When it comes to setuptools vs. Distribute, they are compatible, and choosing one over the other is mainly up to the user. The setup.py is identical.
1.2
true
5
315
2009-11-03 10:27:04.573
What if setuptools isn't installed?
I'm just learning the art of writing a setup.py file for my project. I see there's lots of talk about setuptools, which is supposed to be superior to distutils. There's one thing though that I fail to understand, and I didn't see it addressed in any tutorial I've read about this: What if setuptools isn't installed? I understand it's not part of the standard library, so how can you assume the person who wants to install your program will have it installed?
I would say it depends on what kind of user you are addressing. If they are simply users and not Python programmers, or if they are basic programmers, using setuptools might be a little too much at first. For those, distutils is perfect. For clients, I would definitely stick to distutils. For more enthusiastic programmers, setuptools would be fine. It also depends on how you want to distribute updates, and how often. For example, do the users have access to the Internet without a nasty proxy set up by their company that would block setuptools? We have one, and it's an extra step to configure and make it work on every workstation.
0
false
5
315
2009-11-03 10:27:04.573
What if setuptools isn't installed?
I'm just learning the art of writing a setup.py file for my project. I see there's lots of talk about setuptools, which is supposed to be superior to distutils. There's one thing though that I fail to understand, and I didn't see it addressed in any tutorial I've read about this: What if setuptools isn't installed? I understand it's not part of the standard library, so how can you assume the person who wants to install your program will have it installed?
I have used setuptools to compile many Python scripts that I have written into Windows EXEs. However, it has always been my understanding (from experience) that the computer running the compiled EXE does not need to have setuptools installed. Hope that helps.
0.135221
false
5
315
2009-11-03 22:31:35.183
Changing web service url in SUDS library
Using the SUDS SOAP client, how do I specify the web service URL? I can see that the WSDL path is specified in the Client constructor, but what if I want to change the web service URL?
I think you have to create a new Client object for each different URL.
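A minimal sketch of that approach; the WSDL URLs below are placeholders, not real endpoints:

from suds.client import Client

# one Client per service endpoint
client_a = Client('http://example.com/serviceA?wsdl')
client_b = Client('http://example.com/serviceB?wsdl')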
0.135221
false
1
316
2009-11-04 13:30:16.327
How to match alphabetical chars without numeric chars with Python regexp?
Using the Python re module, how do I get the equivalent of "\w" (which matches alphanumeric characters) WITHOUT matching the numeric characters (those matched by "[0-9]")? Note that the basic need is to match any character (including all Unicode variations) except numerical characters. As a final note, I really need a regexp, as it is part of a larger regexp. Underscores should not be matched. EDIT: I hadn't thought about the underscore case, so thanks for the warnings about it being matched by "\w", and for the accepted solution that addresses this issue.
You want [^\W\d]: the set of characters that are not (either a digit or not an alphanumeric character). Add an underscore to that negated set if you don't want underscores matched either. A bit twisted, if you ask me, but it works, and it should be faster than the lookahead alternative.
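A quick demonstration, with the underscore added to the negated set per the question's requirement:

import re

pattern = re.compile(r'[^\W\d_]+', re.UNICODE)
print(pattern.findall(u'foo_bar42 baz'))   # ['foo', 'bar', 'baz']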
1.2
true
1
317
2009-11-04 15:51:16.753
how to process long-running requests in python workers?
I have a Python (well, it's PHP now, but we're rewriting) function that takes some parameters (A and B) and computes some results (it finds the best path from A to B in a graph; the graph is read-only). In a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid: it's a simple PHP script + Apache + mod_php + APC; every request needs to load all the data (over 12MB in PHP arrays), create all structures, compute a path, and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers); each worker is a Python app running in a loop (getting a request -> processing -> sending the reply -> getting the next request...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with a configurable timeout), and feed my workers one request at a time. How should I approach this? Can you propose a setup? nginx + FastCGI or WSGI, or something else? HAProxy? As you can see, I'm a newbie with Python, reverse proxies, etc. I just need a starting point for the architecture (and data flow). BTW, the workers use read-only data, so there is no need to maintain locking and communication between them.
You could use an nginx load balancer to proxy to Python Paste's paster server (which serves WSGI applications, for example Pylons), which launches each request in a separate thread anyway.
0
false
4
318
2009-11-04 15:51:16.753
how to process long-running requests in python workers?
I have a Python (well, it's PHP now, but we're rewriting) function that takes some parameters (A and B) and computes some results (it finds the best path from A to B in a graph; the graph is read-only). In a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid: it's a simple PHP script + Apache + mod_php + APC; every request needs to load all the data (over 12MB in PHP arrays), create all structures, compute a path, and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers); each worker is a Python app running in a loop (getting a request -> processing -> sending the reply -> getting the next request...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with a configurable timeout), and feed my workers one request at a time. How should I approach this? Can you propose a setup? nginx + FastCGI or WSGI, or something else? HAProxy? As you can see, I'm a newbie with Python, reverse proxies, etc. I just need a starting point for the architecture (and data flow). BTW, the workers use read-only data, so there is no need to maintain locking and communication between them.
I think you can configure mod_wsgi/Apache so it will have several "hot" Python interpreters in separate processes ready to go at all times, and also reuse them for new requests (and spawn a new one if they are all busy). In this case you could load all the preprocessed data as module globals; they would only get loaded once per process and be reused for each new request. In fact, I'm not sure this isn't the default configuration for mod_wsgi/Apache. The main problem here is that you might end up consuming a lot of "core" memory (but that may not be a problem either). I think you can also configure mod_wsgi for a single process with multiple threads, but in that case you may only be using one CPU because of the Python Global Interpreter Lock (the infamous GIL). Don't be afraid to ask on the mod_wsgi mailing list; they are very responsive and friendly.
0
false
4
318
2009-11-04 15:51:16.753
how to process long-running requests in python workers?
I have a Python (well, it's PHP now, but we're rewriting) function that takes some parameters (A and B) and computes some results (it finds the best path from A to B in a graph; the graph is read-only). In a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid: it's a simple PHP script + Apache + mod_php + APC; every request needs to load all the data (over 12MB in PHP arrays), create all structures, compute a path, and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers); each worker is a Python app running in a loop (getting a request -> processing -> sending the reply -> getting the next request...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with a configurable timeout), and feed my workers one request at a time. How should I approach this? Can you propose a setup? nginx + FastCGI or WSGI, or something else? HAProxy? As you can see, I'm a newbie with Python, reverse proxies, etc. I just need a starting point for the architecture (and data flow). BTW, the workers use read-only data, so there is no need to maintain locking and communication between them.
Another option is a queue table in the database. The worker processes run in a loop or off cron and poll the queue table for new jobs.
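A minimal polling sketch using sqlite3; the jobs table schema here is hypothetical, and a real worker would need more careful claiming of rows:

import time
import sqlite3

conn = sqlite3.connect('queue.db')
while True:   # runs forever; stop with Ctrl-C
    row = conn.execute(
        "SELECT id, src, dst FROM jobs WHERE status = 'new' LIMIT 1").fetchone()
    if row is None:
        time.sleep(1)   # nothing queued; back off briefly
        continue
    job_id, src, dst = row
    # ... compute the best path from src to dst here ...
    conn.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))
    conn.commit()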
0
false
4
318
2009-11-04 15:51:16.753
how to process long-running requests in python workers?
I have a Python (well, it's PHP now, but we're rewriting) function that takes some parameters (A and B) and computes some results (it finds the best path from A to B in a graph; the graph is read-only). In a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid: it's a simple PHP script + Apache + mod_php + APC; every request needs to load all the data (over 12MB in PHP arrays), create all structures, compute a path, and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers); each worker is a Python app running in a loop (getting a request -> processing -> sending the reply -> getting the next request...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with a configurable timeout), and feed my workers one request at a time. How should I approach this? Can you propose a setup? nginx + FastCGI or WSGI, or something else? HAProxy? As you can see, I'm a newbie with Python, reverse proxies, etc. I just need a starting point for the architecture (and data flow). BTW, the workers use read-only data, so there is no need to maintain locking and communication between them.
The simplest solution in this case is to use the web server to do all the heavy lifting. Why should you handle threads and/or processes when the web server will do all that for you? The standard arrangement in Python deployments is: the web server starts a number of processes, each running a complete Python interpreter and loading all your data into memory; an HTTP request comes in and gets dispatched to some process; the process does your calculation and returns the result directly to the web server and the user; when you need to change your code or the graph data, you restart the web server and go back to the first step. This is the architecture used by Django and other popular web frameworks.
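A minimal sketch of that arrangement as a WSGI app; load_graph and find_best_path are hypothetical helpers standing in for your own code:

try:
    from urllib.parse import parse_qs   # Python 3
except ImportError:
    from urlparse import parse_qs       # Python 2

# loaded once per worker process at import time, then reused for every request
GRAPH = load_graph()   # load_graph and find_best_path are hypothetical

def application(environ, start_response):
    params = parse_qs(environ.get('QUERY_STRING', ''))
    path = find_best_path(GRAPH, params['from'][0], params['to'][0])
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [str(path).encode('utf-8')]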
0.058243
false
4
318
2009-11-04 21:42:33.000
How to get a reference to a module inside the module itself?
How can I get a reference to a module from within that module? Also, how can I get a reference to the package containing that module?
If all you need is access to a module variable, then use globals()['bzz'] (or vars()['bzz'] if it's at module level).
0
false
2
319
2009-11-04 21:42:33.000
How to get a reference to a module inside the module itself?
How can I get a reference to a module from within that module? Also, how can I get a reference to the package containing that module?
If you have a class in that module, then the __module__ property of the class is the name of the module that defines it. Thus you can access the module via sys.modules[klass.__module__]. This also works for functions.
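A quick demonstration:

import sys

class Foo(object):
    pass

mod = sys.modules[Foo.__module__]   # the module object in which Foo was defined
print(mod)                          # e.g. <module '__main__' ...>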
0.731964
false
2
319
2009-11-05 15:08:46.430
how to get tz_info object corresponding to current timezone?
Is there a cross-platform function in Python (or pytz) that returns a tzinfo object corresponding to the timezone currently set on the computer? Environment variables cannot be counted on, as they are not cross-platform.
Maybe try:

import time
print time.tzname           # or time.tzname[time.daylight]
0
false
1
320
2009-11-06 05:15:29.627
how to write regex for below format using python
I want to validate the data below using a regex and Python. Below is a dump of the data, which can be stored in a string variable: Start 0 .......... group=..... name=...... number=.... end=(digits) Start 1 .......... group=..... name=...... number=.... end=(digits) Start 2 .......... group=..... name=...... number=.... end=(digits) Start 3 .......... group=..... name=...... number=.... end=(digits) where ...... is some random data that does not need to be validated ... .. Start 100 .......... group=..... name=...... number=.... end=(digits) Thanks in advance.
You could use r'(Start \d+.*?group=.*?name=.*?number=.*?end=\d+)*'.
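A minimal sketch of applying it; note the re.DOTALL flag so "." can span newlines between fields, and the sample data below is a made-up stand-in for the real dump:

import re

pattern = re.compile(r'Start \d+.*?group=.*?name=.*?number=.*?end=\d+', re.DOTALL)
data = "Start 0 xx group=g1 name=n1 number=7 end=123\nStart 1 yy group=g2 name=n2 number=8 end=456"
records = pattern.findall(data)
print(len(records))   # 2 well-formed records found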
1.2
true
1
321
2009-11-07 00:21:37.017
How to implement Symfony Partials or Components in Django?
I've been developing in the Symfony framework for quite some time, but now I have to work with Django, and I'm having problems doing something like a "component" or "partial" in Symfony. That said, here is my goal: I have a webpage with lots of small widgets, all of which need their own logic (located in a "views.py", I guess). But how do I tell Django to call all this logic and render it all as one webpage?
Assuming you are going to be using the components in different places on different pages, I would suggest trying {% include "foo.html" %}. One of the (several) downsides of the Django templating language is that there is no concept of macros, so you need to be very consistent in the names of the values in the context you pass to your main template, so that the included template finds what it's looking for. Alternatively, in the view you can invoke the template engine for each component and save the result in a value passed in the context; then in the main template, simply use that value. I'm not fond of either of these approaches. The more complex your template needs become, the more you may want to look at Jinja2. (And, no, I don't buy the Django Party Line about 'template designers' -- never saw one in my life.)
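A minimal sketch of the second approach; the template names and get_reviews are hypothetical:

from django.shortcuts import render_to_response
from django.template.loader import render_to_string

def page(request):
    # render each widget separately, then hand the fragment to the main template
    widget_html = render_to_string('widgets/reviews.html',
                                   {'reviews': get_reviews()})  # get_reviews is hypothetical
    return render_to_response('page.html', {'reviews_widget': widget_html})

In page.html you would then output {{ reviews_widget }}; depending on your Django version you may need the |safe filter so the pre-rendered markup isn't escaped.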
0.201295
false
1
322
2009-11-07 17:39:19.647
Move or copy an entity to another kind
Is there a way to move an entity to another kind in App Engine? Say you have a kind defined, and you want to keep a record of deleted entities of that kind, but you want to separate the storage of live objects and archived objects. Kinds are basically just serialized dicts in the bigtable anyway, and maybe you don't need to index the archive in the same way as the live data. So how would you move or copy an entity of one kind to another kind?
No - once created, the kind is a part of the entity's immutable key. You need to create a new entity and copy everything across. One way to do this would be to use the low-level google.appengine.api.datastore interface, which treats entities as dicts.
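A minimal sketch using that low-level interface, as it existed in the old Python runtime; the kind name is hypothetical, and key is assumed to be the live entity's key:

from google.appengine.api import datastore

old_entity = datastore.Get(key)                 # key of the live entity to archive
archived = datastore.Entity('ArchivedReview')   # a fresh entity of the new kind
archived.update(old_entity)                     # dict-style copy of all properties
datastore.Put(archived)
datastore.Delete(key)                           # optionally drop the original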
1.2
true
2
323
2009-11-07 17:39:19.647
Move or copy an entity to another kind
Is there a way to move an entity to another kind in appengine. Say you have a kind defines, and you want to keep a record of deleted entities of that kind. But you want to separate the storage of live object and archived objects. Kinds are basically just serialized dicts in the bigtable anyway. And maybe you don't need to index the archive in the same way as the live data. So how would you make a move or copy of a entity of one kind to another kind.
Unless someone's written utilities for this kind of thing, the way to go is to read from one and write to the other kind!
0.201295
false
2
323
2009-11-08 10:29:19.143
Does WordNet have "levels"? (NLP)
For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"... the hierarchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL that is consistent. For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing "burrito" as a "thing" is too broad, yet "Mexican wrapped food" is too specific. I want to go up or down the hierarchy... until the right LEVEL.
Sorry, may I ask which tool could judge the "difficulty level" of sentences? I wish to find sentences of a similar difficulty level for users to read.
0
false
4
324