Send Info from Script to Module Python Question: Hi, I wonder how you can send info over to a module. An example: main.py looks like this:

    from module import *
    print helloworld()

module.py looks like this:

    def helloworld():
        print "Hello world!"

Anyway, I want to send info over from main.py to module.py. Is it possible? Answer: It is not clear what you mean by "send info", but the typical way of passing a value would be with a function parameter. main.py:

    helloworld("Hello world!")

module.py:

    def helloworld(message):
        print message

Is that what you're looking for? Also, the two uses of `print` in your example are redundant. Addendum: It might be useful for you to read the [Python documentation regarding function definitions](http://docs.python.org/tutorial/controlflow.html#defining-functions), or, alternatively, most introductory Python tutorials cover the same ground in fewer words. Anything you read there applies equally regardless of whether the function is in the same module or another module.
overload print python Question: Am I able to overload the print function, and call the normal function from within it? What I want to do is, after a specific line, have print call my own print, which calls the normal print and writes a copy to a file. I also don't know how to overload print or how to handle variable-length arguments. I'll look it up soon, but <http://stackoverflow.com/questions/550470/overload-print-python/550477#550477> just told me I can't overload print in 2.x, which is what I am using. Answer: For those reviewing the previously dated answers: as of the "Python 2.6" release there is a new answer to the original poster's question. In Python 2.6 and up, you can disable the print statement in favor of the print function, and then override the print function with your own print function:

    from __future__ import print_function
    # This must be the first statement before other statements.
    # You may only put a quoted or triple quoted string,
    # Python comments, other future statements, or blank lines
    # before the __future__ line.

    try:
        import __builtin__
    except ImportError:
        # Python 3
        import builtins as __builtin__

    def print(*args, **kwargs):
        """My custom print() function."""
        # Adding new arguments to the print function signature
        # is probably a bad idea.
        # Instead consider testing if custom argument keywords
        # are present in kwargs
        __builtin__.print('My overridden print() function!')
        return __builtin__.print(*args, **kwargs)

Of course you'll need to consider that this print function is only module wide at this point. You could choose to override `__builtin__.print`, but you'll need to save the original `__builtin__.print`; likely mucking with the `__builtin__` namespace.
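Building on the override above, here is a minimal sketch of the tee-to-file behaviour the asker wanted (the log file path is a hypothetical placeholder, not part of the original answer):

    from __future__ import print_function
    try:
        import __builtin__
    except ImportError:  # Python 3
        import builtins as __builtin__

    LOG_PATH = "print_log.txt"  # hypothetical path; adjust as needed

    def print(*args, **kwargs):
        """Print normally, and also append a copy of the message to a file."""
        with open(LOG_PATH, "a") as f:
            file_kwargs = dict(kwargs)
            file_kwargs["file"] = f  # the built-in print function accepts file=
            __builtin__.print(*args, **file_kwargs)
        return __builtin__.print(*args, **kwargs)

After this definition, every `print(...)` call in the module both writes to the console and appends the same line to the log file.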
Help me implement Blackjack in Python (updated) Question: I am in the process of writing a blackjack program in Python, and I was hoping someone would be able to tell me how to make it: 1. Recognize what someone has typed, i.e. "Hit" or "Stand", and react accordingly. 2. Calculate the player's score, and whether the hand is an ace and a jack together, which automatically wins. OK, this is what I have gotten so far:

    "This imports the random object into Python, it allows it to generate random numbers."
    import random

    print("Hello and welcome to Sam's Black Jack!")
    input("Press <ENTER> to begin.")
    card1name = 1
    card2name = 1
    card3name = 1
    card4name = 1
    card5name = 1

    "This defines the values of the character cards."
    Ace = 1
    Jack = 10
    Queen = 10
    King = 10
    decision = 0

    "This generates the cards that are in your hand and the dealer's hand to begin with."
    card1 = int(random.randrange(12) + 1)
    card2 = int(random.randrange(12) + 1)
    card3 = int(random.randrange(12) + 1)
    card4 = int(random.randrange(12) + 1)
    card5 = int(random.randrange(12) + 1)

    total1 = card1 + card2

    "This makes the value of the Ace equal 11 if the total of your cards is under 21"
    if total1 <= 21:
        Ace = 11

    "This defines what the cards are"
    if card1 == 11:
        card1 = 10
        card1name = "Jack"
    if card1 == 12:
        card1 = 10
        card1name = "Queen"
    if card1 == 13:
        card1 = 10
        card1name = "King"
    if card1 == 1:
        card1 = Ace
        card1name = "Ace"
    elif card1:
        card1name = card1
    if card2 == 11:
        card2 = 10
        card2name = "Jack"
    if card2 == 12:
        card2 = 10
        card2name = "Queen"
    if card2 == 13:
        card2 = 10
        card2name = "King"
    if card2 == 1:
        card2 = Ace
        card2name = "Ace"
    elif card2:
        card2name = card2
    if card3 == 11:
        card3 = 10
        card3name = "Jack"
    if card3 == 12:
        card3 = 10
        card3name = "Queen"
    if card3 == 13:
        card3 = 10
        card3name = "King"
    if card3 == 1:
        card3 = Ace
        card3name = "Ace"
    elif card3:
        card3name = card3
    if card4 == 11:
        card4 = 10
        card4name = "Jack"
    if card4 == 12:
        card4 = 10
        card4name = "Queen"
    if card4 == 13:
        card4 = 10
        card4name = "King"
    if card4 == 1:
        card4 = Ace
        card4name = "Ace"
    elif card4:
        card4name = card4
    if card5 == 11:
        card5 = 10
        card5name = "Jack"
    if card5 == 12:
        card5 = 10
        card5name = "Queen"
    if card5 == 13:
        card5 = 10
        card5name = "King"
    if card5 == 1:
        card5 = Ace
        card5name = "Ace"
    elif card5:
        card5name = card5

    "This creates the totals of your hand"
    total2 = card1 + card2
    total3 = card1 + card2 + card3

    print("You hand is ", card1name," and", card2name)
    print("The total of your hand is", total2)
    decision = input("Do you want to HIT or STAND?").lower()

    "This is the decision for Hit or Stand"
    if 'hit' or 'HIT' or 'Hit' in decision:
        decision = 1
        print("You have selected HIT")
        print("Your hand is ", card1name,",",card2name," and", card3name)
        print("The total of your hand is", total3)
    if 'STAND' or 'stand' or 'Stand' in decision:
        print("You have selected STAND")

    "Dealer's Hand"
    dealer = card4 + card5
    print()
    print("The dealer's hand is", card4name," and", card5name)

    if decision == 1 and dealer < total3:
        print("Congratulations, you beat the dealer!")
    if decision == 1 and dealer > total3:
        print("Too bad, the dealer beat you!")

OK, never mind, I fixed it :D I just changed the Hit and Stand to Yes or No:

    if total2 < 21:
        decision = input("Do you want to hit? (Yes or No)")

    "This is the decision for Hit or Stand"
    if decision == 'Yes':
        print("You have selected HIT")
        print("Your hand is ", card1name,",",card2name," and", card3name)
        print("The total of your hand is", total3)
    if decision == 'No':
        print("You have selected STAND")

Answer: This can get you started: <http://docs.python.org/library/random.html> <http://docs.python.org/library/strings.html> <http://docs.python.org/library/stdtypes.html> <http://docs.python.org/reference/index.html> I see you have added some code; that's good. Think about the parts of your program that will need to exist. You will need some representation of "cards" -- cards have important features such as their value, their suit, etc. Given a card, you should be able to tell what its value is, whether it's a Jack or an Ace or a 2 of hearts. Read up on "classes" in Python to get started with this. You will also have a hand of cards -- the cards your dealer is currently holding, and the cards your player is currently holding. A "hand" is a collection of cards, which you (the programmer) can add new cards to (when a card is dealt). You might want to do that using "lists" or "arrays" or "classes" that contain those arrays. A hand also has a value, which is usually the sum of card values, but as you know, Aces are special (they can be 1 or 11), so you'll need to treat that case correctly with some "if statements". You will also have a deck; a deck is a special collection -- it has exactly 52 cards when it starts, and none of the cards are repeated (you could, of course, be using several decks to play, but that's a complication you can solve later). How do you populate a deck like that? Your program will want to "deal" from the deck -- so you'll need a way to keep track of which cards have been dealt to players. That's a lot of stuff. Try writing down all the logic of what your program needs to do in simple sentences, without worrying about Python. This is called "pseudo-code". It's not a real program, it's just a plan for what exactly you are going to do -- it's useful the way a map is useful. If you are going to a place you've been to 100 times, you don't need a map, but if you are driving to some town you've never been to, you want to plan out your route first, before getting behind the wheel. Update your question with your pseudocode, and any attempts you have made (or will have made) to translate the pseudocode to Python.
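To make the answer's plan concrete, here is a minimal sketch of the card, deck, and hand pieces it describes (names such as `Card`, `Deck`, and `hand_value` are illustrative, not from the original post), including the ace-counts-as-1-or-11 rule:

    import random

    class Card(object):
        """A playing card with a rank name and a blackjack value."""
        FACES = {"Jack": 10, "Queen": 10, "King": 10, "Ace": 1}

        def __init__(self, rank):
            self.rank = rank  # e.g. "7" or "Queen"
            self.value = self.FACES.get(rank)
            if self.value is None:
                self.value = int(rank)

    class Deck(object):
        """52 unique cards; deal() removes cards so none repeat."""
        RANKS = [str(n) for n in range(2, 11)] + ["Jack", "Queen", "King", "Ace"]

        def __init__(self):
            self.cards = [Card(rank) for rank in self.RANKS for _suit in range(4)]
            random.shuffle(self.cards)

        def deal(self):
            return self.cards.pop()

    def hand_value(cards):
        """Sum card values, counting one Ace as 11 when that doesn't bust."""
        total = sum(card.value for card in cards)
        if any(card.rank == "Ace" for card in cards) and total + 10 <= 21:
            total += 10
        return total

Dealing by popping from a shuffled list guarantees no card repeats, which answers the "how do you populate a deck" question.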
Python list serialization - fastest method Question: I need to load (de-serialize) a pre-computed list of integers from a file in a Python script (into a Python list). The list is large (up to millions of items), and I can choose the format I store it in, as long as loading is fastest. Which is the fastest method, and why? 1. Using `import` on a .py file that just contains the list assigned to a variable 2. Using `cPickle`'s `load` 3. Some other method (perhaps `numpy`?) Also, how can one benchmark such things reliably? **Addendum:** measuring this reliably is difficult, because `import` is cached so it can't be executed multiple times in a test. The loading with pickle also gets faster after the first time, probably because of page caching by the OS. Loading 1 million numbers with `cPickle` takes 1.1 sec on the first run, and 0.2 sec on subsequent executions of the script. Intuitively I feel `cPickle` should be faster, but I'd appreciate numbers (this is quite a challenge to measure, I think). And yes, it's important for me that this performs quickly. Thanks Answer: I would guess [cPickle](http://docs.python.org/library/pickle.html#module-cPickle) will be fastest if you really need the thing in a list. If you can use an [array](http://docs.python.org/library/array.html), which is a built-in sequence type, I timed this at a quarter of a second for 1 million integers:

    from array import array
    from datetime import datetime

    def WriteInts(theArray, filename):
        f = file(filename, "wb")
        theArray.tofile(f)
        f.close()

    def ReadInts(filename):
        d = datetime.utcnow()
        theArray = array('i')
        f = file(filename, "rb")
        try:
            theArray.fromfile(f, 1000000000)
        except EOFError:
            pass
        print "Read %d ints in %s" % (len(theArray), datetime.utcnow() - d)
        return theArray

    if __name__ == "__main__":
        a = array('i')
        a.extend(range(0, 1000000))
        filename = "a_million_ints.dat"
        WriteInts(a, filename)
        r = ReadInts(filename)
        print "The 5th element is %d" % (r[4])
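On the unanswered benchmarking question, one approach (a sketch, not from the original answer; the pickle file name is a placeholder) is to take the best of several single-shot runs with `timeit`, so the slow cold-cache outliers are discarded:

    import timeit

    # Each repeat performs one full load; min() reports the steady-state
    # cost once the OS page cache is warm.
    t = timeit.Timer("cPickle.load(open('ints.pickle', 'rb'))",
                     "import cPickle")
    print min(t.repeat(repeat=5, number=1))

For the cold-cache number itself, the only reliable option is to run the script in a fresh process after flushing (or rebooting) the cache.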
Django - having middleware communicate with views/templates Question: Alright, this is probably a really silly question but I am new to Python/Django so I can't really wrap my head around its scoping concepts just yet. Right now I am writing a [middleware](http://docs.djangoproject.com/en/dev/topics/http/middleware/#topics-http-middleware) class to handle some stuff, and I want to set 'global' variables that my views and templates can access. What is the "right" way of doing this? I considered doing something like this:

    ## middleware.py
    from django.conf import settings

    class BeforeFilter(object):
        def process_request(self, request):
            settings.my_var = 'Hello World'
            return None

    ## views.py
    from django.conf import settings
    from django.http import HttpResponse

    def myview(request):
        return HttpResponse(settings.my_var)

Although this works, I am not sure if it is the "Django way" or the "Python way" of doing this. So, my questions are: 1. Is this the right way? 2. If it is the right way, what is the right way of adding variables that can be used in the actual template from the middleware? Say I want to evaluate something and I want to set a variable `headername` as 'My Site Name' in the middleware, and I want to be able to do `{{ headername }}` in all templates. Doing it the way I have it now, I'd have to add `headername` to the context inside every view. Is there any way to bypass this? I am thinking something along the lines of CakePHP's `$this->set('headername','My Site Name');` 3. I am using the middleware class as an equivalent of CakePHP's `beforeFilter` that runs before every view (or controller in CakePHP) is called. Is this the right way of doing this? 4. Completely unrelated, but it is a small question: what is a nice way of printing out the contents of a variable to the browser, ala `print_r`? Say I want to see all the stuff inside the `request` that is passed into the view? Is `pprint` the answer? Answer: 1. It's not the best way. You could set my_var on the request rather than on the settings. Settings are global and apply to the whole site. You don't want to modify them for every request. There could be concurrency issues with multiple requests updating/reading the variable at the same time. 2. To access request.my_var in your templates you could do **{{ request.my_var }}**. To get access to the request variable in your template you will have to add **django.core.context_processors.request** to your **TEMPLATE_CONTEXT_PROCESSORS** setting. 3. Yes. Other terminology to describe request middleware would be request pre-processor/filter/interceptor. Also, if you want to use a common Site name for the header in your templates, you might want to check out the Django Sites application, which provides a site name variable for your use.
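A minimal sketch of the answer's suggestion, keeping the middleware class name from the question but attaching the data to the request instead of the global settings:

    ## middleware.py
    class BeforeFilter(object):
        def process_request(self, request):
            # Per-request state lives on the request object, so concurrent
            # requests cannot clobber each other's values.
            request.headername = 'My Site Name'
            return None

With `django.core.context_processors.request` enabled, every template can then render the value as `{{ request.headername }}` without any per-view context changes.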
Using jep.invoke() method Question: I need to call a function from a Python script and pass parameters into it. I have a test Python script which I can call and run from Java using Jepp; this then adds the person. E.g. Test.py:

    import Finding
    from Finding import *

    f = Finding()
    f.addFinding("John", "Doe", 27)

Within my Finding class I have `addFinding(firstname, lastName, age)`. However, I wish to be able to do this from within Java. Should I be using the `jep.invoke()` method? Does anyone have a hello world example of such a thing being done, or can you forward me to some good examples? Does anyone have any suggestions please? Thanks in advance Answer: An easier way to run Python code in Java is to use [jython](http://www.jython.org/). **EDIT:** Found an [article with examples](http://wiki.python.org/jython/JythonMonthly/Articles/September2006/1) on the jython website.
How to keep track of thread progress in Python without freezing the PyQt GUI? Question: ## **Questions:** 1. What is the best practice for keeping track of a thread's progress without locking the GUI ("Not Responding")? 2. Generally, what are the best practices for threading as it applies to GUI development? ## **Question Background:**

* I have a PyQt GUI for Windows.
* It is used to process sets of HTML documents.
* It takes anywhere from three seconds to three hours to process a set of documents.
* I want to be able to process multiple sets at the same time.
* I don't want the GUI to lock.
* I'm looking at the threading module to achieve this.
* I am relatively new to threading.
* The GUI has one progress bar.
* I want it to display the progress of the selected thread.
* Display results of the selected thread if it's finished.
* I'm using Python 2.5.

**My Idea:** Have the threads emit a QtSignal when the progress is updated that triggers some function that updates the progress bar. Also signal when finished processing so results can be displayed.

    #NOTE: this is example code for my idea, you do not have
    # to read this to answer the question(s).
    import threading
    from PyQt4 import QtCore, QtGui
    import re
    import copy

    class ProcessingThread(threading.Thread, QtCore.QObject):
        __pyqtSignals__ = (
            "progressUpdated(str)",
            "resultsReady(str)")

        def __init__(self, docs):
            self.docs = docs
            self.progress = 0   # int between 0 and 100
            self.results = []
            threading.Thread.__init__(self)

        def getResults(self):
            return copy.deepcopy(self.results)

        def run(self):
            num_docs = len(self.docs) - 1
            for i, doc in enumerate(self.docs):
                processed_doc = self.processDoc(doc)
                self.results.append(processed_doc)
                new_progress = int((float(i)/num_docs)*100)
                # emit signal only if progress has changed
                if self.progress != new_progress:
                    self.emit(QtCore.SIGNAL("progressUpdated(str)"), self.getName())
                self.progress = new_progress
                if self.progress == 100:
                    self.emit(QtCore.SIGNAL("resultsReady(str)"), self.getName())

        def processDoc(self, doc):
            ''' this is trivial for shortness' sake '''
            return re.findall('<a [^>]*>.*?</a>', doc)

    class GuiApp(QtGui.QMainWindow):
        def __init__(self):
            self.processing_threads = {}  # {'thread_name': Thread(processing_thread)}
            self.progress_object = {}     # {'thread_name': int(thread_progress)}
            self.results_object = {}      # {'thread_name': []}
            self.selected_thread = ''     # 'thread_name'

        def processDocs(self, docs):
            # create new thread
            p_thread = ProcessingThread(docs)
            thread_name = "example_thread_name"
            p_thread.setName(thread_name)
            p_thread.start()

            # add thread to dict of threads
            self.processing_threads[thread_name] = p_thread

            # init progress_object for this thread
            self.progress_object[thread_name] = p_thread.progress

            # connect thread signals to GuiApp functions
            QtCore.QObject.connect(p_thread, QtCore.SIGNAL('progressUpdated(str)'),
                                   self.updateProgressObject(thread_name))
            QtCore.QObject.connect(p_thread, QtCore.SIGNAL('resultsReady(str)'),
                                   self.updateResultsObject(thread_name))

        def updateProgressObject(self, thread_name):
            # update progress_object for all threads
            self.progress_object[thread_name] = self.processing_threads[thread_name].progress
            # update progress bar for selected thread
            if self.selected_thread == thread_name:
                self.setProgressBar(self.progress_object[self.selected_thread])

        def updateResultsObject(self, thread_name):
            # update results_object for thread with results
            self.results_object[thread_name] = self.processing_threads[thread_name].getResults()
            # update results widget for selected thread
            try:
                self.setResultsWidget(self.results_object[thread_name])
            except KeyError:
                self.setResultsWidget(None)

Any commentary on this approach (e.g. drawbacks, pitfalls, praises, etc.) will be appreciated. ## **Resolution:** I ended up using the QThread class and associated signals and slots to communicate between threads. This is primarily because my program already uses Qt/PyQt4 for the GUI objects/widgets. This solution also required fewer changes to my existing code to implement. Here is a link to an applicable Qt article that explains how Qt handles threads and signals, <http://www.linuxjournal.com/article/9602>. Excerpt below:

> Fortunately, Qt permits signals and slots to be connected across threads—as long as the threads are running their own event loops. This is a much cleaner method of communication compared to sending and receiving events, because it avoids all the bookkeeping and intermediate QEvent-derived classes that become necessary in any nontrivial application. Communicating between threads now becomes a matter of connecting signals from one thread to the slots in another, and the mutexing and thread-safety issues of exchanging data between threads are handled by Qt.

> Why is it necessary to run an event loop within each thread to which you want to connect signals? The reason has to do with the inter-thread communication mechanism used by Qt when connecting signals from one thread to the slot of another thread. When such a connection is made, it is referred to as a queued connection. When signals are emitted through a queued connection, the slot is invoked the next time the destination object's event loop is executed. If the slot had instead been invoked directly by a signal from another thread, that slot would execute in the same context as the calling thread. Normally, this is not what you want (and especially not what you want if you are using a database connection, as the database connection can be used only by the thread that created it). The queued connection properly dispatches the signal to the thread object and invokes its slot in its own context by piggy-backing on the event system. This is precisely what we want for inter-thread communication in which some of the threads are handling database connections. The Qt signal/slot mechanism is at root an implementation of the inter-thread event-passing scheme outlined above, but with a much cleaner and easier-to-use interface.

**NOTE:** _eliben_ also has a good answer, and if I weren't using PyQt4, which handles thread-safety and mutexing, his solution would have been my choice. Answer: If you want to use signals to indicate progress to the main thread then you should really be using PyQt's QThread class instead of the Thread class from Python's threading module. A simple example which uses QThread, signals and slots can be found on the PyQt Wiki: <https://wiki.python.org/moin/PyQt/Threading,_Signals_and_Slots>
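For reference, a minimal sketch of the QThread approach the resolution describes (signal and widget names here are illustrative, not taken from the wiki page):

    from PyQt4 import QtCore

    class Worker(QtCore.QThread):
        """Runs the slow processing off the GUI thread."""
        def __init__(self, docs, parent=None):
            QtCore.QThread.__init__(self, parent)
            self.docs = docs

        def run(self):
            for i, doc in enumerate(self.docs):
                # ... process doc here ...
                percent = int(100.0 * (i + 1) / len(self.docs))
                self.emit(QtCore.SIGNAL("progressUpdated(int)"), percent)
            self.emit(QtCore.SIGNAL("resultsReady()"))

    # In the GUI class (main thread). The connection is automatically a
    # queued connection because sender and receiver live in different
    # threads, so the slot runs safely in the GUI thread:
    #
    #   self.worker = Worker(docs)
    #   self.connect(self.worker, QtCore.SIGNAL("progressUpdated(int)"),
    #                self.progress_bar.setValue)
    #   self.worker.start()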
How to obtain the keycodes in Python Question: I have to know what key is pressed, but I don't need the code of the character. I want to know when someone presses the 'A' key even if the key obtained is 'a' or 'A', and likewise for all other keys. I can't use PyGame or any other library (including Tkinter), only the Python standard library, and this has to be done in a terminal, not a graphical interface. I DON'T NEED THE CHARACTER CODE, I NEED TO KNOW THE KEY CODE. Ex:

    ord('a') != ord('A')  # 97 != 65
    someFunction('a') == someFunction('A')  # a_code == A_code

Answer: See the [tty](http://docs.python.org/library/tty.html) standard module. It allows switching from the default line-oriented (cooked) mode into char-oriented (cbreak) mode with [tty.setcbreak(sys.stdin)](http://docs.python.org/library/tty.html#tty.setcbreak). Reading a single char from sys.stdin will then yield the next pressed keyboard key (if it generates a code):

    import sys
    import tty

    tty.setcbreak(sys.stdin)
    while True:
        print ord(sys.stdin.read(1))

_Note: this solution is Unix (including Linux) only._ Edit: On Windows, try [msvcrt.getche()](http://docs.python.org/library/msvcrt.html#msvcrt.getche)/[getwche()](http://docs.python.org/library/msvcrt.html#msvcrt.getwche). /me has nowhere to try... Edit 2: Utilize the win32 low-level console API via [ctypes.windll](http://docs.python.org/library/ctypes.html) (see [example at SO](http://stackoverflow.com/questions/239020/how-can-i-call-a-dll-from-a-scripting-language)) with the `ReadConsoleInput` function. You should filter out keypresses (`e.EventType == KEY_EVENT`) and look at the `e.Event.KeyEvent.wVirtualKeyCode` value. An example application (not in Python, just to get an idea) can be found at [http://www.benryves.com/tutorials/?t=winconsole&c=4](http://www.benryves.com/tutorials/?t=winconsole&c=4).
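Since the Windows side is left untested above, here is a minimal sketch of the msvcrt equivalent (assuming Windows; note it still returns character codes, so 'a' and 'A' differ, and you would need case-folding or the low-level console API for true key codes):

    import msvcrt

    while True:
        if msvcrt.kbhit():          # a keypress is waiting in the buffer
            ch = msvcrt.getch()     # read one keypress, no Enter needed
            print ord(ch)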
Easiest way to create a scrollable area using wxPython? Question: Okay, so I want to display a series of windows within windows and have the whole lot scrollable. I've been hunting through [the wxWidgets documentation](http://docs.wxwidgets.org/stable/wx_wxscrolledwindow.html#wxscrolledwindow) and a load of examples from various sources on t'internet. Most of those seem to imply that a wx.ScrolledWindow should work if I just pass it a nested group of sizers(?): > The most automatic and newest way is to simply let sizers determine the scrolling area. This is now the default when you set an interior sizer into a wxScrolledWindow with wxWindow::SetSizer. The scrolling area will be set to the size requested by the sizer and the scrollbars will be assigned for each orientation according to the need for them and the scrolling increment set by wxScrolledWindow::SetScrollRate. ...but all the examples I've seen seem to use the older methods listed as ways to achieve scrolling. I've got something basic working, but as soon as you start scrolling you lose the child windows:

    import wx

    class MyCustomWindow(wx.Window):
        def __init__(self, parent):
            wx.Window.__init__(self, parent)
            self.Bind(wx.EVT_PAINT, self.OnPaint)
            self.SetSize((50,50))

        def OnPaint(self, event):
            dc = wx.BufferedPaintDC(self)
            dc.SetPen(wx.Pen('blue', 2))
            dc.SetBrush(wx.Brush('blue'))
            (width, height) = self.GetSizeTuple()
            dc.DrawRoundedRectangle(0, 0, width, height, 8)

    class TestFrame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None, -1)
            self.Bind(wx.EVT_SIZE, self.OnSize)

            self.scrolling_window = wx.ScrolledWindow(self)
            self.scrolling_window.SetScrollRate(1, 1)
            self.scrolling_window.EnableScrolling(True, True)
            self.sizer_container = wx.BoxSizer(wx.VERTICAL)
            self.sizer = wx.BoxSizer(wx.HORIZONTAL)
            self.sizer_container.Add(self.sizer, 1, wx.CENTER, wx.EXPAND)
            self.child_windows = []
            for i in range(0, 50):
                wind = MyCustomWindow(self.scrolling_window)
                self.sizer.Add(wind, 0, wx.CENTER|wx.ALL, 5)
                self.child_windows.append(wind)

            self.scrolling_window.SetSizer(self.sizer_container)

        def OnSize(self, event):
            self.scrolling_window.SetSize(self.GetClientSize())

    if __name__ == '__main__':
        app = wx.PySimpleApp()
        f = TestFrame()
        f.Show()
        app.MainLoop()

Answer: Oops... turns out I was creating my child windows badly:

    wind = MyCustomWindow(self)

should be:

    wind = MyCustomWindow(self.scrolling_window)

...which meant the child windows were waiting for the top-level window (the frame) to be re-drawn instead of listening to the scroll window. Changing that makes it all work wonderfully :)
AuiNotebook, where did the event happen Question: How can I find out from which AuiNotebook page an event occurred? EDIT: Sorry about that. Here is a code example. How do I find the notebook page from which the mouse was clicked?

    #!/usr/bin/python

    #12_aui_notebook1.py

    import wx
    import wx.lib.inspection

    class MyFrame(wx.Frame):
        def __init__(self, *args, **kwds):
            wx.Frame.__init__(self, *args, **kwds)

            self.nb = wx.aui.AuiNotebook(self)

            self.new_panel('Pane 1')
            self.new_panel('Pane 2')
            self.new_panel('Pane 3')

        def new_panel(self, nm):
            pnl = wx.Panel(self)
            self.nb.AddPage(pnl, nm)
            self.sizer = wx.BoxSizer()
            self.sizer.Add(self.nb, 1, wx.EXPAND)
            self.SetSizer(self.sizer)

            pnl.Bind(wx.EVT_LEFT_DOWN, self.click)

        def click(self, event):
            print 'Mouse click'
            # How can I find out from which page this click came?

    class MyApp(wx.App):
        def OnInit(self):
            frame = MyFrame(None, -1, '12_aui_notebook1.py')
            frame.Show()
            self.SetTopWindow(frame)
            return 1

    if __name__ == "__main__":
        app = MyApp(0)
        # wx.lib.inspection.InspectionTool().Show()
        app.MainLoop()

Oerjan Pettersen Answer: For a mouse click you can assume the currently selected page is the one that got the click. I added a few lines to your code. See the comments:

    def new_panel(self, nm):
        pnl = wx.Panel(self)
        # just to debug, I added a string attribute to the panel
        # don't you love dynamic languages? :)
        pnl.identifierTag = nm
        self.nb.AddPage(pnl, nm)
        self.sizer = wx.BoxSizer()
        self.sizer.Add(self.nb, 1, wx.EXPAND)
        self.SetSizer(self.sizer)

        pnl.Bind(wx.EVT_LEFT_DOWN, self.click)

    def click(self, event):
        print 'Mouse click'
        # get the current selected page
        page = self.nb.GetPage(self.nb.GetSelection())
        # notice that it is the panel that you created in new_panel
        print page.identifierTag
Need to monitor directory change, and perform action Question: First of all: I'm neither a programmer nor a Linux guru; I just have to work with Linux, Oracle, and shell scripts. My current task is to monitor a table in Oracle (tool: sqlplus), and if it contains a certain row, then watch a Linux directory for a growing tmp file, and log its attributes (e.g. `ls -l`) every 5 seconds. The most important part is: this tmp file will be deleted if the above record is deleted from the Oracle table, and I need the last contents of this tmp file. I can't control the Oracle data; I only have query rights. The available tools are: bash, awk, sed, some old version of perl, ruby (not 1.9*), and python (2.5). I don't have install rights, so most outside libraries are not accessible. I know I can run some libraries from my $HOME, but I don't have an internet connection on that machine, so I can't download any library. Inotify is not available (older kernel). Any idea where to start/how to do it? Thanks in advance. Answer: How about creating a hard link in another directory? Then, when the file "disappears" in the original location, the hard link will still have access to the content.
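For the every-5-seconds logging part, a rough standard-library polling sketch (the paths are placeholders; reading the contents races with deletion, so pairing this with the hard-link trick above guarantees the final contents survive):

    import os
    import time

    WATCH_PATH = "/tmp/target.tmp"        # placeholder: the growing tmp file
    LOG_PATH = os.path.expanduser("~/tmpfile_watch.log")

    last_contents = None
    while True:
        try:
            st = os.stat(WATCH_PATH)
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            log_line = "%s size=%d mtime=%s\n" % (stamp, st.st_size,
                                                  time.ctime(st.st_mtime))
            open(LOG_PATH, "a").write(log_line)
            last_contents = open(WATCH_PATH).read()  # keep last-seen contents
        except OSError:
            # File is gone: the Oracle row was deleted. last_contents still
            # holds whatever was read on the previous pass.
            break
        time.sleep(5)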
How to get the Freebase Python libraries to work in IronPython Question: How do I get the [Python libraries for using the Freebase API](http://code.google.com/p/freebase-python/) to work under [IronPython 2.0](http://www.codeplex.com/IronPython)? When I "import freebase.api", I get "ImportError: No module named django.utils". What gives? Answer: You're missing the simplejson module. Since easy_install doesn't yet work with IronPython, your best bet is to grab the latest code using SVN from their [Google Code project](http://code.google.com/p/simplejson/source/checkout), or [download a zip file](http://apiguy.com/files/simplejson-2.0.9.zip). Put all of the files that you obtained into the lib/site-packages/simplejson folder under your IronPython installation directory. Now you'll be able to successfully "import freebase.api".
How to change a Python module name? Question: Is it only possible if I rename the file? Or is there a `__module__` variable in the file to define what its name is? Answer: If you really want to import the file 'oldname.py' with the statement 'import newname', there is a trick that makes it possible: import the module _somewhere_ with the old name, then inject it into `sys.modules` with the new name. Subsequent import statements will also find it under the new name. Code sample:

    # this is in file 'oldname.py'
    ...module code...

Usage:

    # inject the 'oldname' module with a new name
    import oldname
    import sys
    sys.modules['newname'] = oldname

Now you can import your module everywhere with `import newname`.
How can I modify password expiration in Windows using Python? Question: How can I modify the password expiration to "never" on Windows XP for a local user with Python? I have the PyWin and WMI modules on board but have no solution. I managed to query the current settings via WMI (based on the Win32_UserAccount class), but how can I modify it? Answer: If you are running your Python script with ActivePython against Active Directory, then you can use something like this:

    import win32com.client

    ads = win32com.client.Dispatch('ADsNameSpaces')
    user = ads.getObject("", "WinNT://DOMAIN/username,user")
    user.Getinfo()
    user.Put('userAccountControl', 65536 | user.Get('userAccountControl'))
    user.Setinfo()

But if your Python is running under Unix, you need two things to talk to Active Directory: Kerberos and LDAP. Once you have a SASL(GSSAPI(KRB5)) authenticated LDAP connection to your Active Directory server, you can access the target user's "userAccountControl" attribute. userAccountControl is an integer attribute, treated as a bit field, on which you must set the DONT EXPIRE PASSWORD bit. See [this KB article](http://support.microsoft.com/kb/305144) for bit values.
How can you programmatically tell the CPython interpreter to enter interactive mode when done? Question: If you invoke the CPython interpreter with the -i option, it will enter interactive mode upon completing any commands or scripts it has been given to run. Is there a way, within a program, to get the interpreter to do this even when it has not been given -i? The obvious use case is debugging by interactively inspecting the state when an exceptional condition has occurred. Answer: You want the [code module](http://docs.python.org/library/code.html).

    #!/usr/bin/env python

    import code

    code.interact("Enter Here")
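For the debugging use case the question mentions, a sketch that drops into an interactive console with the failing frame's locals (the `main` function here is just a stand-in):

    import code
    import sys

    def main():
        x = 42
        raise ValueError("something unexpected")

    try:
        main()
    except Exception:
        # Walk to the innermost traceback frame and expose its locals.
        tb = sys.exc_info()[2]
        while tb.tb_next is not None:
            tb = tb.tb_next
        code.interact("Entering debug console", local=tb.tb_frame.f_locals)

`pdb.post_mortem()` offers the same inspect-on-failure workflow with debugger commands instead of a plain interpreter.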
Python parsing Question: I'm trying to parse the title tag in an RSS 2.0 feed into three different variables for each entry in that feed. Using ElementTree I've already parsed the RSS so that I can print each title [minus the trailing `)`] with the code below:

    feed = getfeed("http://www.tourfilter.com/dallas/rss/by_concert_date")

    for item in feed:
        print repr(item.title[0:-1])

I include that because, as you can see, the item.title is a repr() data type, which I don't know much about. A particular `repr(item.title[0:-1])` `print`ed in the interactive window looks like this:

    'randy travis (Billy Bobs 3/21'
    'Michael Schenker Group (House of Blues Dallas 3/26'

The user selects a band and I hope to, after parsing each `item.title` into 3 variables (one each for band, venue, and date... or possibly an array or I don't know...), select only those related to the band selected. Then they are sent to Google for geocoding, but that's another story. I've seen some examples of `regex` and I'm reading about them, but it seems very complicated. Is it? I thought maybe someone here would have some insight as to exactly how to do this in an intelligent way. Should I use the `re` module? Does it matter that the output currently is `repr()`s? Is there a better way? I was thinking I'd use a loop like this (and this is my pseudo-Python, just kind of notes I'm writing):

    list = bandRaw,venue,date,latLong
    for item in feed:
        parse item.title for bandRaw, venue, date
        if bandRaw == str(band)
            send venue name + ", Dallas, TX" to google for geocoding
            return lat,long
            list = list + return character + bandRaw + "," + venue + "," + date + "," + lat + "," + long
        else

In the end, I need to have the chosen entries in a .csv (comma-delimited) file looking like this:

    band,venue,date,lat,long
    randy travis,Billy Bobs,3/21,1234.5678,1234.5678
    Michael Schenker Group,House of Blues Dallas,3/26,4321.8765,4321.8765

I hope this isn't too much to ask. I'll be looking into it on my own, just thought I should post here to make sure it gets answered. So, the question is: how do I best parse each `repr(item.title[0:-1])` in the `feed` into the 3 separate values that I can then concatenate into a .csv file? Answer: Don't let regex scare you off... it's well worth learning. Given the examples above, you might try putting the trailing parenthesis back in, and then using this pattern:

    import re
    pat = re.compile('([\w\s]+)\(([\w\s]+)(\d+/\d+)\)')
    info = pat.match(s)
    print info.groups()
    ('Michael Schenker Group ', 'House of Blues Dallas ', '3/26')

To get at each group individually, just call them on the `info` object:

    print info.group(1)   # or info.groups()[0]
    print '"%s","%s","%s"' % (info.group(1), info.group(2), info.group(3))
    "Michael Schenker Group","House of Blues Dallas","3/26"

The hard thing about regex in this case is making sure you know all the possible characters in the title. If there are non-alpha chars in the 'Michael Schenker Group' part, you'll have to adjust the regex for that part to allow them. The pattern above breaks down as follows, parsed left to right: `([\w\s]+)`: Match any word or space characters (the plus symbol indicates that there should be one or more such characters). The parentheses mean that the match will be captured as a group. This is the "Michael Schenker Group " part. If there can be numbers and dashes here, you'll want to modify the pieces between the square brackets, which are the possible characters for the set. `\(`: A literal parenthesis. The backslash escapes the parenthesis, since otherwise it counts as a regex command. This is the "(" part of the string. `([\w\s]+)`: Same as the one above, but this time it matches the "House of Blues Dallas " part. In parentheses so it will be captured as the second group. `(\d+/\d+)`: Matches the digits 3 and 26 with a slash in the middle. In parentheses so they will be captured as the third group. `\)`: Closing parenthesis for the above. The Python intro to regex is quite good, and you might want to spend an evening going over it: <http://docs.python.org/library/re.html#module-re>. Also, check Dive Into Python, which has a friendly introduction: <http://diveintopython3.ep.io/regular-expressions.html>. EDIT: See zacherates below, who has some nice edits. Two heads are better than one!
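Putting the pattern to work end to end, here is a self-contained sketch of the parse-filter-write loop (the sample titles and the `chosen_band` value stand in for the real feed and user selection; the geocoding step is omitted):

    import re
    import csv

    # No trailing \) needed: the question already strips the closing parenthesis.
    pat = re.compile(r'([\w\s]+)\(([\w\s]+)(\d+/\d+)')

    feed_titles = [
        'randy travis (Billy Bobs 3/21',
        'Michael Schenker Group (House of Blues Dallas 3/26',
    ]
    chosen_band = 'randy travis'

    writer = csv.writer(open('shows.csv', 'wb'))
    writer.writerow(['band', 'venue', 'date'])
    for title in feed_titles:
        m = pat.match(title)
        if m is None:
            continue
        band, venue, date = [s.strip() for s in m.groups()]
        if band == chosen_band:
            # lat/long would come from the geocoding call, omitted here
            writer.writerow([band, venue, date])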
Python in tcsh Question: I don't have much experience with tcsh, but I'm interested in learning. I've been having issues getting Python to see PYTHONPATH. I can echo $PYTHONPATH, and it is correct, but when I start up Python, my paths do not show up in sys.path. Any ideas? EDIT: [dmcdonal@tg-steele ~]$ echo $PYTHONPATH /home/ba01/u116/dmcdonal/PyCogent-v1.1 >>> from sys import path >>> from os import environ >>> path ['', '/apps/steele/Python-2.5.2/lib/python2.5/site-packages/setuptools-0.6c8-py2.5.egg', '/apps/steele/Python-2.5.2/lib/python2.5/site-packages/FiPy-2.0-py2.5.egg', '/apps/steele/Python-2.5.2', '/apps/steele/Python-2.5.2/lib/python25.zip', '/apps/steele/Python-2.5.2/lib/python2.5', '/apps/steele/Python-2.5.2/lib/python2.5/plat-linux2', '/apps/steele/Python-2.5.2/lib/python2.5/lib-tk', '/apps/steele/Python-2.5.2/lib/python2.5/lib-dynload', '/apps/steele/Python-2.5.2/lib/python2.5/site-packages', '/apps/steele/Python-2.5.2/lib/python2.5/site-packages/Numeric'] >>> environ['PYTHONPATH'] '/apps/steele/Python-2.5.2' Answer: How are you setting PYTHONPATH? You might be confusing tcsh's set vs. setenv. Use "set" to set what tcsh calls _shell variables_ and use "setenv" to set _environment variables_. So, you need to use setenv in order for Python to see it. For example: $ set FOO='bar' $ echo $FOO bar $ python -c 'import os; print os.getenv("FOO")' None $ setenv BAR 'wiz' $ echo $BAR wiz $ python -c 'import os; print os.getenv("BAR")' wiz There is some more information available in [the variables section of the tcsh documentation](http://wings.buffalo.edu/computing/Documentation/unix/tcsh.html).
What's the best way to find the closest matching type to an existing type? Question: I've got a registry of classes and types in Python 2.5, like so:

    class ClassA(object):
        pass

    class ClassB(ClassA):
        pass

    MY_TYPES = {
        basestring : 'A string',
        int : 'An integer',
        ClassA : 'This is ClassA or a subclass',
    }

I'd like to be able to pass types to a function, and have it look up the closest matching type in the hierarchy. So, looking up `str` would return `"A string"` and looking up `ClassB` would return `"This is ClassA or a subclass"`. The problem is, I don't know how to find the superclass (or, rather, trace the MRO chain) of a type object. What's the best way of handling this? Answer:

    from inspect import getmro
    [st for cls, st in MY_TYPES.items() if cls in getmro(ClassB)]
    ['This is ClassA or a subclass']

or, if you're only interested in the first match(es), a generator version:

    (st for cls, st in MY_TYPES.iteritems() if cls in getmro(ClassB))
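The comprehension above returns matches in dictionary order rather than by closeness; here is a sketch (assuming the question's `MY_TYPES` registry and classes) that walks the MRO in order, so the nearest registered ancestor wins:

    from inspect import getmro

    def closest_match(cls, registry, default=None):
        """Return the registry entry for the nearest ancestor of cls."""
        for ancestor in getmro(cls):   # the MRO is ordered nearest-first
            if ancestor in registry:
                return registry[ancestor]
        return default

    print closest_match(ClassB, MY_TYPES)  # 'This is ClassA or a subclass'
    print closest_match(str, MY_TYPES)     # 'A string' (str subclasses basestring)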
Why can't I directly add attributes to any python object? Question: I have this code:

    >>> class G:
    ...     def __init__(self):
    ...         self.x = 20
    ...
    >>> gg = G()
    >>> gg.x
    20
    >>> gg.y = 2000

And this code:

    >>> from datetime import datetime
    >>> my_obj = datetime.now()
    >>> my_obj.interesting = 1
    *** AttributeError: 'datetime.datetime' object has no attribute 'interesting'

From my Python knowledge, I would say that `datetime` overrides `setattr`/`getattr`, but I am not sure. Could you shed some light here? EDIT: I'm not specifically interested in `datetime`. I was wondering about objects in general. Answer: My guess is that the implementation of datetime uses [__slots__](http://docs.python.org/reference/datamodel.html#id3) for better performance. When using `__slots__`, the interpreter reserves storage for just the attributes listed, nothing else. This gives better performance and uses less storage, but it also means you can't add new attributes at will. Read more here: <http://docs.python.org/reference/datamodel.html>
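A tiny demonstration of the same restriction with a user-defined class, so you can reproduce the behaviour without datetime:

    class Slotted(object):
        __slots__ = ('x',)   # storage is reserved for 'x' only; no __dict__

    s = Slotted()
    s.x = 20       # fine
    s.y = 2000     # raises AttributeError, like the datetime example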
How should I log while using multiprocessing in Python? Question: Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 [`multiprocessing` module](http://docs.python.org/library/multiprocessing.html#module-multiprocessing). Because it uses `multiprocessing`, there is a module-level multiprocessing-aware log, `LOG = multiprocessing.get_logger()`. Per [the docs](http://docs.python.org/library/multiprocessing.html#logging), this logger has process-shared locks so that you don't garble things up in `sys.stderr` (or whatever filehandle) by having multiple processes writing to it simultaneously. The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying _within_ the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of? Answer: I just now wrote a log handler of my own that just feeds everything to the parent process via a pipe. I've only been testing it for ten minutes but it seems to work pretty well. (**Note:** This is hardcoded to `RotatingFileHandler`, which is my own use case.) * * * ## Update: Implementation! This now uses a queue for correct handling of concurrency, and also recovers from errors correctly. I've now been using this in production for several months, and the current version below works without issue.

    from logging.handlers import RotatingFileHandler
    import multiprocessing, threading, logging, sys, traceback

    class MultiProcessingLog(logging.Handler):
        def __init__(self, name, mode, maxsize, rotate):
            logging.Handler.__init__(self)

            self._handler = RotatingFileHandler(name, mode, maxsize, rotate)
            self.queue = multiprocessing.Queue(-1)

            t = threading.Thread(target=self.receive)
            t.daemon = True
            t.start()

        def setFormatter(self, fmt):
            logging.Handler.setFormatter(self, fmt)
            self._handler.setFormatter(fmt)

        def receive(self):
            while True:
                try:
                    record = self.queue.get()
                    self._handler.emit(record)
                except (KeyboardInterrupt, SystemExit):
                    raise
                except EOFError:
                    break
                except:
                    traceback.print_exc(file=sys.stderr)

        def send(self, s):
            self.queue.put_nowait(s)

        def _format_record(self, record):
            # ensure that exc_info and args
            # have been stringified. Removes any chance of
            # unpickleable things inside and possibly reduces
            # message size sent over the pipe
            if record.args:
                record.msg = record.msg % record.args
                record.args = None
            if record.exc_info:
                dummy = self.format(record)
                record.exc_info = None

            return record

        def emit(self, record):
            try:
                s = self._format_record(record)
                self.send(s)
            except (KeyboardInterrupt, SystemExit):
                raise
            except:
                self.handleError(record)

        def close(self):
            self._handler.close()
            logging.Handler.close(self)
Django MOD_PYTHON ERROR Question: I have had django running with mod_python for a while now, but today I get the following error:

    MOD_PYTHON ERROR

    ProcessId:      4551
    Interpreter:    'thatch'

    ServerName:     'thatch'
    DocumentRoot:   '/var/www/thatch'

    URI:            '/'
    Location:       '/'
    Directory:      None
    Filename:       '/var/www/thatch/'
    PathInfo:       ''

    Phase:          'PythonHandler'
    Handler:        'django.core.handlers.modpython'

    Traceback (most recent call last):
      File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch
        default=default_handler, arg=req, silent=hlist.silent)
      File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1202, in _process_target
        module = import_module(module_name, path=path)
      File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 304, in import_module
        return __import__(module_name, {}, {}, ['*'])
    ImportError: No module named django.core.handlers.modpython

This is my virtual host configuration:

    <VirtualHost *:80>
        ServerName thatch
        DocumentRoot /var/www/thatch
        <Location "/">
            SetHandler python-program
            PythonPath "['/var/www/thatch'] + sys.path"
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE thatch.settings
            PythonOption django.root /var/www/thatch
            PythonDebug On
        </Location>
        <Location "/media">
            SetHandler None
        </Location>
        <LocationMatch "\.(jpg|gif|png|css|js)$">
            SetHandler None
        </LocationMatch>
    </VirtualHost>

Any ideas why this is happening? Answer: I can only guess what your problem might be, but here are two standard checks: 1. Import the module in a Python shell, from a couple of different working directories. 2. Change users (if you have any besides root set up) and repeat step 1. If #2 looks like more trouble than it's worth, then try _very, very carefully_ examining permissions for each folder along the path to the module to make sure that "other" has read access at every stage along the way. If you've moved django into your home directory (like I did once... not sure why, but I got the same error), that'll be the problem. You either need to move it back into a directory where apache can read, or you need to add read permission for other.
What is the AppDelegate for and how do I know when to use it? Question: I'm just beginning to work on iPhone apps. How do I know when I should be putting stuff in AppDelegate versus a custom class? Is there a rule, or any analogy with another programming language like Python or PHP that uses an AppDelegate-like pattern? Answer: I normally avoid the design approach implied by Andrew's use of the term "heart of your application". What I mean by this is that I think you should avoid lumping too many things in a central location: good program design normally involves separating functionality by "area of concern". A delegate object is an object that gets notified when the object to which it is connected reaches certain events or states. In this case, the Application Delegate is an object which receives notifications when the UIApplication object reaches certain states. In many respects, it is a specialized one-to-one Observer pattern. This means that the "area of concern" for the AppDelegate is handling special UIApplication states. The most important of these are:

* applicationDidFinishLaunching: - good for handling on-startup configuration and construction
* applicationWillTerminate: - good for cleaning up at the end

You should avoid putting other functionality in the AppDelegate since it doesn't really belong there. Such other functionality includes:

* Document data -- you should have a document manager singleton (for multiple document applications) or a document singleton (for single document applications)
* Button/table/view controllers, view delegate methods or other view handling (except for construction of the top-level view in applicationDidFinishLaunching:) -- this work should be in respective view controller classes.

Many people lump these things into their AppDelegate because they are lazy or they think the AppDelegate controls the whole program. You should avoid centralizing in your AppDelegate since it muddies the areas of concern in the app and doesn't scale.
Is there any preferable way to get user/group information from an Active Directory domain in Python? Question: For a Django application that I'm working on, I wanted to allow group membership to be determined by Active Directory group. After a while of digging through the pywin32 documentation, I came up with this: >>> import win32net >>> win32net.NetUserGetGroups('domain_name.com', 'username') [(u'Domain Users', 7), ...] I spent a while googling before I figured this out though, and the examples I found almost exclusively used LDAP for this kind of thing. Is there any reason why that's to be preferred over this method? Bear a couple things in mind: 1. I'm not using Active Directory to actually perform authentication, only permissions. Authentication is performed by another server. 2. While it would be nice to have some cross-platform capabilities, this will probably run almost exclusively on Windows. Answer: AD's LDAP interface has quite a few 'quirks' that make it more difficult to use than it might appear on the surface, and it tends to lag significantly behind on features. When I worked with it, I mostly dealt with authentication, but it's probably the same no matter what you're doing. There's a lot of weirdness in terms of having to be bound as a certain user just to do simple searches that a normal LDAP server would let you do as anonymous. Also, at least as of a year ago, when I worked on this, python-ldap was the only Python LDAP implementation to support anywhere close to the full feature set, since it's built on top of OpenLDAP, However, OpenLDAP is rather difficult to build on Windows (and in general), so most builds will be missing one or more features. Although you're not doing authentication, a lack of SASL/Kerberos support (which was missing at the time I used it) might make things complicated for you. If you have something that works, and only need to run it on Windows, I would really recommend sticking to it; using AD via LDAP can turn into a big project.
How do I list all tga files in a directory (non recursive) in Python? Question: How do I list all tga files in a directory (non recursive) in Python? Answer: If you are doing it based on file extension, you can do something like this:

    import os

    directory = "C:/"
    extension = ".tga"
    list_of_files = [file for file in os.listdir(directory)
                     if file.lower().endswith(extension)]

Obviously you can omit the lower() if you can guarantee the case of the files. Also there is the excellent path.py module (<http://pypi.python.org/pypi/path.py>). If you do not know the file extension, you can use something like PIL (<http://www.pythonware.com/products/pil/>) to detect the file type by decoding the file.
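An alternative sketch using the standard-library `glob` module, which handles the join-and-match in one call:

    import glob
    import os.path

    directory = "C:/"
    # On a case-insensitive filesystem (typical on Windows) one pattern is
    # enough; on case-sensitive systems add a second pattern for "*.TGA".
    list_of_files = glob.glob(os.path.join(directory, "*.tga"))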
How to automate a process with the sqlite3.exe command line tool? Question: I'm trying to bulk load a lot of data ( 5.5 million rows ) into an SQLite database file. Loading via INSERTs seems to be far too slow, so I'm trying to use the sqlite3 command line tool and the .import command. It works perfectly if I enter the commands by hand, but I can't for the life of me work out how to automate it from a script ( .bat file or python script; I'm working on a Windows machine ). The commands I issue at the command line are these: > sqlite3 database.db sqlite> CREATE TABLE log_entry ( <snip> ); sqlite> .separator "\t" sqlite> .import logfile.log log_entry But nothing I try will get this to work from a bat file or python script. I've been trying things like: sqlite3 "database.db" .separator "\t" .import logfile.log log_entry echo '.separator "\t" .import logfile.log log_entry' | sqlite3 database.db Surely I can do this somehow? Answer: Create a text file with the lines you want to enter into the sqlite command line program, like this: CREATE TABLE log_entry ( ); .separator "\t" .import logfile.log log_entry and then just call `sqlite3 database.db < commands.txt`
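Since the question mentions driving this from a Python script, the same dot-commands can also be piped to the sqlite3 binary with the standard-library subprocess module (a sketch; it assumes sqlite3 is on the PATH and the table already exists):

    import subprocess

    # The doubled backslash passes a literal \t to sqlite3's .separator.
    commands = '.separator "\\t"\n.import logfile.log log_entry\n'

    p = subprocess.Popen(['sqlite3', 'database.db'], stdin=subprocess.PIPE)
    p.communicate(commands)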
Python: How do you login to a page and view the resulting page in a browser? Question: I've been googling around for quite some time now and can't seem to get this to work. A lot of my searches have pointed me to similar problems, but they all seem to be related to cookie grabbing/storing. I think I've set that up properly, but when I try to open the 'hidden' page, it keeps bringing me back to the login page saying my session has expired.

    import urllib, urllib2, cookielib, webbrowser

    username = 'userhere'
    password = 'passwordhere'
    url = 'http://example.com'
    webbrowser.open(url, new=1, autoraise=1)
    cj = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    login_data = urllib.urlencode({'username' : username, 'j_password' : password})
    opener.open('http://example.com', login_data)
    resp = opener.open('http://example.com/afterlogin')
    print resp
    webbrowser.open(url, new=1, autoraise=1)

Answer: First off, when doing cookie-based authentication, you need to have a [`CookieJar`](http://docs.python.org/library/cookielib.html#cookielib.CookieJar) to store your cookies in, much in the same way that your browser stores its cookies in a place where it can find them again. After opening a login page through python and saving the cookie from a successful login, you should use the [`MozillaCookieJar`](http://docs.python.org/library/cookielib.html#cookielib.MozillaCookieJar) to pass the python-created cookies to a format a Firefox browser can parse. Firefox 3.x no longer uses the cookie format that MozillaCookieJar produces, and I have not been able to find viable alternatives. If all you need to do is retrieve specific data whose format you know in advance, then I suggest you keep all your HTTP interactions within python. It is much easier, and you don't have to rely on specific browsers being available. If it is absolutely necessary to show stuff in a browser, you could render the so-called 'hidden' page through urllib2 (which incidentally integrates very nicely with cookielib), save the html to a temporary file and pass this to [`webbrowser.open`](http://docs.python.org/library/webbrowser.html?highlight=webbrowser#webbrowser.open), which will then render that specific page. Further redirects are not possible.
Tab-completion in Python interpreter in OS X Terminal Question: Several months ago, I wrote a [blog post](http://igotgenes.blogspot.com/2009/01/tab-completion-and-history-in-python.html) detailing how to achieve tab-completion in the standard Python interactive interpreter, a feature I once thought only available in IPython. I've found it tremendously handy given that I sometimes have to switch to the standard interpreter due to IPython unicode issues. Recently I've done some work in OS X. To my discontent, the script doesn't seem to work for OS X's Terminal application. I'm hoping some of you with experience in OS X might be able to help me trouble-shoot it so it can work in Terminal, as well. I am reproducing the code below:

    import atexit
    import os.path

    try:
        import readline
    except ImportError:
        pass
    else:
        import rlcompleter

        class IrlCompleter(rlcompleter.Completer):
            """
            This class enables a "tab" insertion if there's no text
            for completion.

            The default "tab" is four spaces. You can initialize with '\t'
            as the tab if you wish to use a genuine tab.
            """

            def __init__(self, tab='    '):
                self.tab = tab
                rlcompleter.Completer.__init__(self)

            def complete(self, text, state):
                if text == '':
                    readline.insert_text(self.tab)
                    return None
                else:
                    return rlcompleter.Completer.complete(self, text, state)

        # you could change this line to bind another key instead of tab.
        readline.parse_and_bind('tab: complete')
        readline.set_completer(IrlCompleter('\t').complete)

    # Restore our command-line history, and save it when Python exits.
    history_path = os.path.expanduser('~/.pyhistory')
    if os.path.isfile(history_path):
        readline.read_history_file(history_path)
    atexit.register(lambda x=history_path: readline.write_history_file(x))

Note that I have slightly edited it from the version on my blog post so that the `IrlCompleter` is initialized with a true tab, which seems to be what is output by the Tab key in Terminal. Answer: This should work under Leopard's python:

    import rlcompleter
    import readline
    readline.parse_and_bind("bind ^I rl_complete")

Whereas this one does not:

    import readline, rlcompleter
    readline.parse_and_bind("tab: complete")

Save it in ~/.pythonrc.py and execute in .bash_profile:

    export PYTHONSTARTUP=$HOME/.pythonrc.py
Best way to profile/optimize a website on google's appengine Question: I'm currently trying to optimize my website, which runs on google's appengine. It's not an easy task, because I'm not using any powerful tool. Does anyone have experience optimizing python code for this purpose? Have you found a good python profiler? Answer: I have found [Gprof2Dot](http://code.google.com/p/jrfonseca/wiki/Gprof2Dot) extremely useful. The output of the profiling modules I've tried is pretty unintuitive to interpret. Gprof2Dot turns the cProfile output into a pretty looking graph, with the slowest chain(?) highlighted, and a bit of information on each function (function name, percentage of time spent on this function, and number of calls). [An example graph (1429x1896px)](http://jrfonseca.googlecode.com/svn/wiki/gprof2dot.png) I've not done much with the App Engine, but when profiling non-webapp scripts, I tend to profile the script that runs all the unittests, which may not be very accurate to real-world situations. One (better?) method would be to have a script that does a fake WSGI request, then profile that. WSGI is a really simple protocol: it's basically a function that takes two arguments, one with request info and the second with a callback function (which is used for setting headers, among other things). Perhaps something like the following (which is possibly-working pseudo code)...

    class IndexHandler(webapp.RequestHandler):
        """Your site"""
        def get(self):
            self.response.out.write("hi")

    if __name__ == '__main__':
        application = webapp.WSGIApplication([
            ('.*', IndexHandler),
        ], debug=True)

        # Start fake-request/profiling bit
        urls = [
            "/",
            "/blog/view/hello",
            "/admin/post/edit/hello",
            "/makeanerror404",
            "/makeanerror500"
        ]

        def fake_wsgi_callback(response, headers):
            """Prints headers to stdout"""
            print("\n".join(["%s: %s" % (n, v) for n, v in headers]))
            print("\n")

        for request_url in urls:
            html = application({
                'REQUEST_METHOD': 'GET',
                'PATH_INFO': request_url},
                fake_wsgi_callback
            )
            print html

Actually, the App Engine documentation explains a better way of profiling your application. From <http://code.google.com/appengine/kb/commontasks.html#profiling>:

> To profile your application's performance, first rename your application's `main()` function to `real_main()`. Then, add a new main function to your application, named `profile_main()` such as the one below:

    def profile_main():
        # This is the main function for profiling
        # We've renamed our original main() above to real_main()
        import cProfile, pstats
        prof = cProfile.Profile()
        prof = prof.runctx("real_main()", globals(), locals())
        print "<pre>"
        stats = pstats.Stats(prof)
        stats.sort_stats("time")  # Or cumulative
        stats.print_stats(80)  # 80 = how many to print
        # The rest is optional.
        # stats.print_callees()
        # stats.print_callers()
        print "</pre>"

> [...]

> To enable the profiling with your application, set `main = profile_main`. To run your application as normal, simply set `main = real_main`.
How can you find unused functions in Python code? Question: So you've got some legacy code lying around in a fairly hefty project. How can you find and delete dead functions? I've seen these two references: [Find unused code](http://stackoverflow.com/questions/245963) and [Tool to find unused functions in php project](http://stackoverflow.com/questions/11532), but they seem specific to C# and PHP, respectively. Is there a Python tool that'll help you find functions that aren't referenced anywhere else in the source code (notwithstanding reflection/etc.)? Answer: In python you can find unused code by using dynamic or static code analyzers. Two examples for dynamic analyzers are **coverage** and **figleaf**. They have the drawback that you have to run all possible branches of your code in order to find unused parts, but they also have the advantage that you get very reliable results. Alternatively, you can use static code analyzers, that just look at your code, but don't actually run it. This has the advantage that they run much faster, but due to python's dynamic nature the results are not 100% percent accurate and you might want to double-check them. Two tools that come to mind here are **pyflakes** and **vulture**. They are complementary: Pyflakes finds unused imports and unused local variables while vulture finds unused functions, methods, classes, variables and attributes. The tools are all available at the Python Package Index <http://pypi.python.org/pypi>.
why doesn't subprocess.Popen(...) always return? Question: I hope this is a simple python question. When I try the following in the python interpreter:

>>> import subprocess
>>> def test(cmd):
...     p = subprocess.Popen(cmd)
...
>>> test(['ls', '-l'])

It will run the `ls -l`, but I need to hit "return" to get a new >>> prompt. However, when I try the following:

>>> import subprocess
>>> def test(cmd):
...     p = subprocess.Popen(cmd)
...     p.wait()
...
>>> test(['ls', '-l'])

Then the `ls -l` will be run with a >>> prompt immediately present. One other variation:

>>> import subprocess
>>> def test(cmd):
...     p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
...
>>> test(['ls', '-l'])

This will give me an immediate new prompt. The last example is closest to what I want. My goal is to launch a child process, wait for it to finish, and then use its stdout in my parent process by referring to `p.stdout`, while letting stderr just print to wherever it would otherwise. Right now in my actual application, the last version just hangs at the: `p = subprocess.Popen(cmd, stdout=subprocess.PIPE)` with or without a `p.wait()`. Thanks, Charlie Answer: In the first variation, `test()` returns immediately after starting the process, but before its output is sent to the console. If you look at the output you _do_ get the prompt, immediately before the output of `ls`.

>>> test(['ls', '-l'])
>>> total 0    <----- test() returns, new prompt
--rw-r--r-- 1 foo bar 7 Mar 24 17:38 etc etc

In the second variation, `test()` is waiting for the process to terminate before it returns. In the third version, you're right that you may have to read from the child process's `stdout` for it to continue.
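A minimal sketch of the usual fix (assuming a `cmd` list like the ones above): when you pass `stdout=subprocess.PIPE`, the OS pipe buffer can fill up if nobody reads it, and the child then blocks forever -- which would explain the hang in the real application. `communicate()` both drains the pipe and waits for the process to exit:

import subprocess

p = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
out, _ = p.communicate()  # reads all of stdout, then waits for exit
print out

stderr is left alone here, so it still goes wherever the parent's stderr points.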
Help with Python loop weirdness? Question: I'm learning Python as my second programming language (my first real one if you don't count HTML/CSS/Javascript). I'm trying to build something useful as my first real application - an IRC bot that alerts people via SMS when certain things happen in the channel. Per a request by someone, I'm (trying) to build in scheduling preferences where people can choose not to get alerts from between hours X and Y of the day. Anyways, here's the code I'm having trouble with: db = open("db.csv") for line in db: row = line.split(",") # storing stuff in a CSV, reading out of it recipient = row[0] # who the SMS is going to s = row[1] # gets the first hour of the "no alert" time range f = row[2] # gets last hour of above nrt = [] # empty array that will store hours curtime = time.strftime("%H") # current hour if s == "no": print "They always want alerts, sending email" # start time will = "no" if they always want alerts # send mail code goes here else: for hour in range(int(s), int(f)): #takes start, end hours, loops through to get hours in between, stores them in the above list nrt.append(hour) if curtime in nrt: # best way I could find of doing this, probably a better way, like I said I'm new print "They don't want an alert during the current hour, not sending" # <== what it says else: # they do want an alert during the current hour, send an email # send mail code here The only problem I'm having is somehow the script only ends up looping through one of the lines (or something like that) because I only get one result every time, even if I have more than one entry in the CSV file. Answer: If this is a regular CSV file you should not try to parse it yourself. Use the standard library [csv module](http://docs.python.org/library/csv.html). Here is a short example from the docs: import csv reader = csv.reader(open("some.csv", "rb")) for row in reader: print row
How do I get the filepath for a class in Python? Question: Given a class C in Python, how can I determine which file the class was defined in? I need something that can work from either the class C, or from an instance of C. The reason I am doing this is because I am generally a fan of putting files that belong together in the same folder. I want to create a class that uses a Django template to render itself as HTML. The base implementation should infer the filename for the template based on the filename that the class is defined in. Say I put a class LocationArtifact in the file "base/artifacts.py", then I want the default behaviour to be that the template name is "base/LocationArtifact.html". Answer: You can use the [inspect](http://docs.python.org/library/inspect.html#inspect.getfile) module, like this:

import inspect
inspect.getfile(C)            # from the class itself
inspect.getfile(c.__class__)  # from an instance c

(Note that `inspect.getfile(C.__class__)` would ask for the file of C's metaclass, `type`, which is not what you want.)
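Tying this back to the template-name use case, a possible sketch (the helper name `template_name` and the `.html` convention are illustrative, not from the original):

import inspect
import os

def template_name(obj):
    cls = obj if inspect.isclass(obj) else obj.__class__
    directory = os.path.dirname(inspect.getfile(cls))
    return os.path.join(directory, cls.__name__ + '.html')

For a `LocationArtifact` defined in `base/artifacts.py`, this yields a path ending in `base/LocationArtifact.html`.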
What's the best way to tell if a Python program has anything to read from stdin? Question: I want a program to do one thing if executed like this: cat something | my_program.py and do another thing if run like this my_program.py But if I read from stdin, then it will wait for user input, so I want to see if there is anything to read before trying to read from stdin. Answer: If you want to detect if someone is piping data into your program, or running it interactively you can use isatty to see if stdin is a terminal: $ python -c 'import sys; print sys.stdin.isatty()' True $ echo | python -c 'import sys; print sys.stdin.isatty()' False
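Branching on it in a script might look like this (the two handler functions are hypothetical placeholders):

import sys

if sys.stdin.isatty():
    run_interactively()        # no pipe: prompt the user
else:
    process(sys.stdin.read())  # piped input: consume it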
navigating through different drive letters in python os.system Question: I am having a problem with a bit of code on one windows machine but not all windows machines. I have the following code:

path = "F:/dir/"
os.system(path[0:2] + " && cd " + path + " && git init")

On all but one of my windows systems it runs fine, but on a windows 2003 server it gives a "directory not found" error, even though the same command works when run directly from the command prompt. I'm sorry if my question comes off as vague, but I'm totally stumped. Answer: [os.path](http://docs.python.org/library/os.path.html) contains many useful path manipulation functions. Probably just handling the path cleanly will resolve your problem.

>>> import os
>>>
>>>
>>> path = "F:/dir/"
>>>
>>> clean_path = os.path.normpath(path)
>>> clean_path
'F:\\dir'
>>> drive, directory = os.path.splitdrive(clean_path)
>>> drive
'F:'
>>> directory
'\\dir'

Also, you might want to look into using the [subprocess](http://docs.python.org/library/subprocess.html) module, it gives you more control over processes. [Replacing Older Functions with the subprocess Module](http://docs.python.org/library/subprocess.html#subprocess-replacements)
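As a sketch of the subprocess route (assuming `git` is on the PATH), the `cwd` argument removes the need for the `cd`-and-`&&` chaining entirely:

import os
import subprocess

clean_path = os.path.normpath("F:/dir/")
subprocess.call(['git', 'init'], cwd=clean_path)  # runs git init inside F:\dir

This also sidesteps any shell quoting and drive-switching quirks, since no shell is involved.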
Python: Testing for unicode, and converting to time() Question: Sometimes self.start is unicode: eg. >>>self.start u'07:30:00' Which makes datetime.combine complain start = datetime.combine(self.job_record.date, self.start) How does one: 1. Test for unicode? 2. Convert from u'07:30:00' to datetime.time? Answer: Checking for unicode: >>> import types >>> type(u'07:30:00') is types.UnicodeType True >>> type('regular string') is types.UnicodeType False Converting strings to time: >>> import time >>> time.strptime(u'07:30:00', '%H:%M:%S') (1900, 1, 1, 7, 30, 0, 0, 1, -1)
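If the goal is a `datetime.time` you can hand straight to `datetime.combine`, note that `time.strptime` returns a `struct_time`, not a `time` object. One way to bridge the gap (Python 2.5+, where `datetime.datetime.strptime` is available; `self.job_record.date` is from the question's context):

import datetime

t = datetime.datetime.strptime(u'07:30:00', '%H:%M:%S').time()
start = datetime.datetime.combine(self.job_record.date, t)

The same call accepts unicode and byte strings alike, so for the conversion itself no type check is needed.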
Is there a better trivial Python WebDAV server code snippet than this? Question: Does anyone have a better code snippet for a trivial Python [WebDAV](http://en.wikipedia.org/wiki/WebDAV) server? The code below (which is cobbled together from some Google search results) appears to work under Python 2.6, but I wonder if someone has something they have used before, a _little_ more tested and complete. I'd prefer a stdlib-only snippet over a third-party package. It is for some test code to hit so does not have to be production-worthy.

import os
import httplib
import BaseHTTPServer

class WebDAV(BaseHTTPServer.BaseHTTPRequestHandler):
    """ Ultra-simplistic WebDAV server. """

    def do_PUT(self):
        path = os.path.normpath(self.path)
        if os.path.isabs(path):
            path = path[1:] # safe assumption due to normpath above
        directory = os.path.dirname(path)
        if not os.path.isdir(directory):
            os.makedirs(directory)
        content_length = int(self.headers['Content-Length'])
        with open(path, "w") as f:
            f.write(self.rfile.read(content_length))
        self.send_response(httplib.OK)

def server_main(server_class=BaseHTTPServer.HTTPServer, handler_class=WebDAV):
    server_class(('', 9231), handler_class).serve_forever()

Answer: Or try PyFileServer, which I picked up for further development by the name WsgiDAV (<http://code.google.com/p/wsgidav/>) **Edit:** the project has moved to GitHub (<https://github.com/mar10/wsgidav>)
Interpreting Number Ranges in Python Question: In a Pylons webapp, I need to take a string such as "<3, 45, 46, 48-51, 77" and create a list of ints (which are actually IDs of objects) to search on. Any suggestions on ways to do this? I'm new to Python, and I haven't found anything out there that helps with this kind of thing. The list would be: [1, 2, 3, 45, 46, 48, 49, 50, 51, 77] Answer: Use parseIntSet from [here](http://thoughtsbyclayg.blogspot.com/2008/10/parsing-list-of-numbers-in-python.html). I also like the pyparsing implementation in the comments at the end. The parseIntSet has been modified here to handle "<3"-type entries and to only spit out the invalid strings if there are any.

#! /usr/local/bin/python

import sys
import os

# return a set of selected values when a string in the form:
#  1-4,6
# would return:
#  1,2,3,4,6
# as expected...

def parseIntSet(nputstr=""):
    selection = set()
    invalid = set()
    # tokens are comma separated values
    tokens = [x.strip() for x in nputstr.split(',')]
    for i in tokens:
        if len(i) > 0:
            if i[:1] == "<":
                i = "1-%s" % (i[1:])
            try:
                # typically tokens are plain old integers
                selection.add(int(i))
            except ValueError:
                # if not, then it might be a range
                try:
                    token = [int(k.strip()) for k in i.split('-')]
                    if len(token) > 1:
                        token.sort()
                        # we have items separated by a dash
                        # try to build a valid range
                        first = token[0]
                        last = token[len(token)-1]
                        for x in range(first, last+1):
                            selection.add(x)
                except ValueError:
                    # not an int and not a range...
                    invalid.add(i)
    # Report invalid tokens before returning valid selection
    if len(invalid) > 0:
        print "Invalid set: " + str(invalid)
    return selection
# end parseIntSet

print 'Generate a list of selected items!'
nputstr = raw_input('Enter a list of items: ')

selection = parseIntSet(nputstr)
print 'Your selection is: '
print str(selection)

And here's the output from the sample run:

$ python qq.py
Generate a list of selected items!
Enter a list of items: <3, 45, 46, 48-51, 77
Your selection is:
set([1, 2, 3, 45, 46, 77, 48, 49, 50, 51])
Python's subprocess.Popen returns the same stdout even though it shouldn't Question: I'm having a very strange issue with Python's subprocess.Popen. I'm using it to call several times an external exe and keep the output in a list. Every time you call this external exe, it will return **a different string**. However, if I call it several times using Popen, it will always **return the SAME string**. =:-O It looks like Popen is returning always the same value from stdout, without recalling the exe. Maybe doing some sort of caching without actually calling again the exe. This is my code: def get_key(): from subprocess import Popen, PIPE args = [C_KEY_MAKER, '/26', USER_NAME, ENCRYPTION_TEMPLATE, '0', ] process = Popen(args, stdout=PIPE) output = process.communicate()[0].strip() return output if __name__ == '__main__': print get_key() # Returns a certain string print get_key() # Should return another string, but returns the same! What on Earth am I doing wrong?! Answer: It is possible (if `C_KEY_MAKER`'s random behaviour is based on the current time in seconds, or similar) that when you run it twice on the command line, the time has changed in between runs and so you get a different output, but when python runs it, it runs it twice in such quick succession that the time hasn't changed and so it returns the same value twice in a row.
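A quick way to test the time-seed hypothesis (purely a diagnostic sketch) is to force a delay between the two calls:

import time

print get_key()
time.sleep(1.1)  # if C_KEY_MAKER seeds its RNG with whole seconds, the output should now change
print get_key()

If the outputs now differ, the executable is seeding from the clock, and Python was never caching anything.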
Does the Python library httplib2 cache URIs with GET strings? Question: In the following example what is cached correctly? Is there a Vary-Header I have to set server-side for the GET string? import httplib2 h = httplib2.Http(".cache") resp, content = h.request("http://test.com/list/") resp, content = h.request("http://test.com/list?limit=10") resp, content = h.request("http://test.com/list?limit=50") Answer: httplib2 uses the full URI for the cache key, so in this case each of the URLs you have in your example will be cached separately by the client. For the chapter and verse from the `__init__.py` file for httplib2, if you would like proof, have a look at call to the cache on around line 1000: cachekey = defrag_uri cached_value = self.cache.get(cachekey) The defrag_uri is defined by the function `urlnorm` (line 170ish) and includes the scheme, authority, path, and query. Of course, as you know, the server may interpret the definition of "resource" quite differently and, so, may still return cached content. Since it sounds like you're controlling the server in this case, you have full control there, so no issues. Either way, on the client side, there would be no client-cached values used for the first call to each of the 3 URLs in your examples.
Python class inclusion wrong behaviour Question: In my main.py I have:

from modules import controller
ctrl = controller
help(ctrl)
print(ctrl.div(5,2))

and controller.py is:

class controller:
    def div(self, x, y):
        return x // y

When I run main.py I get the error:

Traceback (most recent call last):
  File "...\main.py", line 8, in ?
    print(ctrl.div(5,2))
AttributeError: 'module' object has no attribute 'div'

What is wrong? Answer: This is very confusing as shown. When you say

from modules import controller

You're making the claim that you have a module with a filename of `modules.py`. OR You're making the claim that you have a package named `modules`. This directory has an `__init__.py` file and a module with a filename of `controller.py`. You should clarify this to be precise. It looks like you have mis-named your files and modules in the example code posted here. When you say

from modules import controller

That creates a _module_ (not a class) named `controller`. When you say

ctrl = controller

That creates another name for the `controller` _module_, `ctrl`. At no time do you reference the class (`controller.controller`). At no time did you create an instance of the class (`controller.controller()`).
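Putting that together, a corrected `main.py` might look like this (note the class is imported from the module, then instantiated):

from modules.controller import controller

ctrl = controller()    # an instance of the class, not the module
print(ctrl.div(5, 2))  # prints 2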
How do you automate the launching/debugging of large scale projects? Question: **Scenario:** There is a complex piece of software that is annoying to launch by hand. What I've done is to create a python script to launch the executable and attach _gdb_ for debugging. The process launching script:

  * ensures an environment variable is set.
  * ensures a local build directory gets added to the environment's `LD_LIBRARY_PATH` variable.
  * changes the current working directory to where the executable expects to be (not my design)
  * launches the executable with a config file the only command line option
  * pipes the output from the executable to a second logging process
  * remembers PID of executable, then launches & attaches gdb to running executable.

The script works, with one caveat. **ctrl-c doesn't interrupt the debuggee and return control to gdb.** So if I "continue" with no active breakpoints I can never stop the process again, it has to be killed/interrupted from another shell. BTW, running "kill -s SIGINT <pid>" where <pid> is the debuggee's pid does get me back to gdb's prompt... but it is really annoying to have to do things this way. At first I thought Python was grabbing the SIGINT signal, but this doesn't seem to be the case, as I set up signal handlers to forward the signal to the debuggee and that doesn't fix the problem. I've tried various configurations to the python script (calling os.spawn* instead of subprocess, etc.) It seems that any way I go about it, if python launched the child process, SIGINT (ctrl-c) signals DO NOT get routed to gdb or the child process.

**Current line of thinking**

  * This might be related to needing a separate process group id for the debuggee & gdb... any credence to this?
  * Possible bug with SELinux?

**Info:**

  * gdb 6.8
  * Python 2.5.2 (problem present with Python 2.6.1 as well)
  * SELinux Environment (bug delivering signals to processes?)

**Alternatives I've considered:**

  * Setting up a .gdbinit file to do as much of what the script does; environment variables and the current working directory are a problem with this approach.
  * Launching the executable and attaching gdb manually (yuck)

**Question:** How do you automate the launching/debugging of large scale projects?

**Update:** I've tried Nicholas Riley's examples below; on my Macintosh at home they all allow ctrl-c to work to varying degrees, but on the production boxen (which I now believe may be running SELinux) they don't... Answer: Instead of forwarding the signal to the debuggee from Python, you could try just ignoring it. The following worked for me:

import signal
signal.signal(signal.SIGINT, signal.SIG_IGN)

import subprocess
cat = subprocess.Popen(['cat'])
subprocess.call(['gdb', '--pid=%d' % cat.pid])

With this I was able to ^C repeatedly inside GDB and interrupt the debuggee without a problem; however, I did see some weird behavior. Incidentally, I also had no problem when forwarding the signal to the target process.

import subprocess
cat = subprocess.Popen(['cat'])

import signal, os
signal.signal(signal.SIGINT,
              lambda signum, frame: os.kill(cat.pid, signum))
subprocess.call(['gdb', '--pid=%d' % cat.pid])

So, maybe something else is going on in your case? It might help if you posted some code that breaks.
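On the process-group line of thinking, one sketch worth trying (POSIX-only; `debuggee_cmd` is a stand-in for the real launch command) is to start the debuggee in its own process group, so the terminal's Ctrl-C is delivered only to gdb:

import os
import subprocess

# put the debuggee in a new process group; the terminal's SIGINT then
# goes to gdb's group only, and gdb stops the target itself via ptrace
debuggee = subprocess.Popen(debuggee_cmd, preexec_fn=os.setpgrp)
gdb = subprocess.Popen(['gdb', '--pid=%d' % debuggee.pid])
gdb.wait()

With the target in a separate group, gdb's normal SIGINT handling (stop the inferior, return to the prompt) has a chance to work.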
python write string directly to tarfile Question: Is there a way to write a string directly to a tarfile? From <http://docs.python.org/library/tarfile.html> it looks like only files already written to the file system can be added. Answer: I would say it's possible, by playing with TarInfo and TarFile.addfile, passing a StringIO as the file object. Very rough, but it works:

import tarfile
import StringIO

tar = tarfile.TarFile("test.tar", "w")

string = StringIO.StringIO()
string.write("hello")
string.seek(0)

info = tarfile.TarInfo(name="foo")
info.size = len(string.getvalue())
tar.addfile(tarinfo=info, fileobj=string)

tar.close()
semantic markup for Python's difflib.HtmlDiff Question: It appears Python's `difflib.HtmlDiff`, rather than using `INS` and `DEL`, uses `SPAN` elements with custom classes: python -c 'import difflib; txt1 = "lorem ipsum\ndolor sit amet".splitlines(); txt2 = "lorem foo isum\ndolor amet".splitlines(); d = difflib.HtmlDiff(); print d.make_table(txt1, txt2)' Before I go about fixing this myself, has anyone looked into this already? Is there perhaps a valid reason for not using POSH? (Google wasn't a big help here... ) Answer: [This script](http://www.aaronsw.com/2002/diff/) by Aaron Swartz uses difflib to output `ins`/`del`.
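If patching `HtmlDiff` turns out to be more trouble than it's worth, here is a minimal sketch that emits `<ins>`/`<del>` directly from `SequenceMatcher` (the helper is my own, not part of difflib; HTML escaping is omitted for brevity):

import difflib

def semantic_diff(a, b):
    """Mark up the differences between strings a and b with ins/del."""
    pieces = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
        if op == 'equal':
            pieces.append(a[i1:i2])
        if op in ('delete', 'replace'):
            pieces.append('<del>%s</del>' % a[i1:i2])
        if op in ('insert', 'replace'):
            pieces.append('<ins>%s</ins>' % b[j1:j2])
    return ''.join(pieces)

print semantic_diff('lorem ipsum', 'lorem foo isum')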
How to generate examples of a gettext plural forms expression? In Python? Question: Given a gettext Plural-Forms line, generate a few example values for each `n`. I'd like this feature for the web interface for my site's translators, so that they know which plural form to put where. For example, given: `"Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%"` `"10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;\n"` ... I want the first text field to be labeled "1, 21..", then "2, 3, 4...", then "5, 6..." (not sure if this is exactly right, but you get the idea.) Right now the best thing I can come up with is to parse the expression somehow, then iterate x from 0 to 100 and see what n it produces. This isn't guaranteed to work (what if the lowest x is over 100 for some language?) but it's probably good enough. Any better ideas or existing Python code? Answer: Given that it's late, I'll bite. The following solution is hacky, and relies on converting your plural form to python code that can be evaluated: the `x ? y : z` ternaries become Python conditional expressions, and `&&`/`||` become `and`/`or`. (The tempting `x and y or z` idiom is avoided deliberately: it picks the wrong branch whenever `y` evaluates to 0, and plural indices are often 0.) I'm not sure if your plural form rule is a contrived example, and I don't understand what you mean with your first text field, but I'm sure you'll get where I'm going with my example solution:

# -*- Mode: Python -*-
# vi:si:et:sw=4:sts=4:ts=4

p = "Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;\n"

# extract rule
import re
match = re.search(r'plural=(.*);', p)
rule = match.group(1)

# convert && and || to python syntax
rule = rule.replace('&&', ' and ').replace('||', ' or ')

def ternary_to_python(expr):
    # C ternaries are right-associative, and in gettext plural rules the
    # "true" branch is always a plain index, so partitioning on the first
    # '?' and the first ':' is safe here
    if '?' not in expr:
        return expr
    cond, _, rest = expr.partition('?')
    value, _, rest = rest.partition(':')
    return '(%s) if (%s) else (%s)' % (value, cond, ternary_to_python(rest))

rule = ternary_to_python(rule)

for n in range(40):
    print n, eval(rule)
I'm looking for an application/text editor that Question: can best help me systematically modify the "replace" field of a regex search as it encounters each match. For example, I have an xml file that needs the phrase "id = $number" inserted at regular points in the text, and basically, $number++ each time the regex matches (id = 1, id = 2, etc) until the end of the file. I know I could just write a bash/perl/python script or some such, but I'd like it to be at least moderately user-friendly so I could teach my intelligent (but less technically-inclined) workers how to use it and make their own modifications. Regexing is not a problem for them. The closest I've come so far is Notepad++'s Column Editor and 'increase [number] by' function, but with this I have to write a separate regex to align everything, add the increments, and then write another to put it back. Unfortunately, I need to use this function on too many different types of files and 'replace's to make macros feasible. Ideally, the program would also be available for both Windows & Linux (WINE is acceptable but native is much preferred), and have a 'VI/VIM input' option (if it's a text editor), but these are of secondary importance. Of course, it'd be nice if there is an OSS solution, and I'd be glad to donate $20-$50 to the developer(s) if it provides the solution I'm looking for. Apologies for the length, and thanks so much for your help! Answer: emacs (version 22 and later) can do what you're looking for. See [Steve Yegge's blog](http://steve-yegge.blogspot.com/2006/06/shiny-and-new- emacs-22.html) for a really interesting read about it. I think this should work: M-x replace-regexp Replace regexp: _insert pattern regexp here_ Replace regexp with: id = \# `\#` is a special metacharacter that gets replaced by the total number of replacements that have occurred so far, starting from 0. If you want the list to start from 1 instead of 0, use the following replacement string: id = \,(1+ \#)
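For the record, the programmatic counterpart in Python stays a one-liner, since `re.sub` accepts a callable replacement (the pattern and the `text` variable here are illustrative placeholders):

import re
from itertools import count

counter = count(1)
new_text = re.sub(r'<insertion-point-pattern>',
                  lambda m: 'id = %d' % counter.next(),
                  text)

Each match invokes the lambda, so the counter advances exactly once per replacement.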
Programmatically generate video or animated GIF in Python? Question: I have a series of images that I want to create a video from. Ideally I could specify a frame duration for each frame but a fixed frame rate would be fine too. I'm doing this in wxPython, so I can render to a wxDC or I can save the images to files, like PNG. Is there a Python library that will allow me to create either a video (AVI, MPG, etc) or an animated GIF from these frames? Edit: I've already tried PIL and it doesn't seem to work. Can someone correct me with this conclusion or suggest another toolkit? This link seems to backup my conclusion regarding PIL: <http://www.somethinkodd.com/oddthinking/2005/12/06/python-imaging-library- pil-and-animated-gifs/> Answer: As of June 2009 the originally cited blog post has a method to create animated GIFs [in the comments](http://www.somethinkodd.com/oddthinking/2005/12/06/python-imaging- library-pil-and-animated-gifs/#comment-197921). Download the script [images2gif.py](https://pypi.python.org/pypi/images2gif) (formerly [images2gif.py](http://sites.google.com/site/almarklein/files-1/images2gif.py), update courtesy of @geographika). Then, to reverse the frames in a gif, for instance: #!/usr/bin/env python from PIL import Image, ImageSequence import sys, os filename = sys.argv[1] im = Image.open(filename) original_duration = im.info['duration'] frames = [frame.copy() for frame in ImageSequence.Iterator(im)] frames.reverse() from images2gif import writeGif writeGif("reverse_" + os.path.basename(filename), frames, duration=original_duration/1000.0, dither=0)
Python: Read a file (from an external server) Question: Can you tell me how to code a Python script which reads a file from an external server? I'm looking for something similar to PHP's file_get_contents() or file() function. It would be great if someone could post the entire code for such a script. Thanks in advance! Answer: The entire script is:

import urllib
content = urllib.urlopen('http://www.google.com/').read()
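If you also want error handling, `urllib2` is the slightly richer hedge (same idea, a few more lines):

import urllib2

try:
    content = urllib2.urlopen('http://www.google.com/').read()
except urllib2.URLError, e:
    print 'fetch failed:', e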
What is the purpose of the sub-interpreter API in CPython? Question: I'm unclear on why the sub-interpreter API exists and why it's used in modules such as the mod_wsgi apache module. Is it mainly used for creating a security sandbox for different applications running within the same process, or is it a way to allow concurrency with multiple threads? Maybe both? Are there other purposes? Answer: I imagine the purpose is to create separate python execution environments. For instance, [mod_wsgi](https://code.google.com/p/modwsgi/) (Apache Python module) hosts a single python interpreter and then hosts multiple applications within sub-interpreters (in the default configuration). Some key points from the [documentation](http://docs.python.org/c-api/init.html#Py_NewInterpreter): * This is an (almost) totally separate environment for the execution of Python code. In particular, the new interpreter has separate, independent versions of all imported modules, including the fundamental modules `__builtin__`, `__main__` and `sys`. * The table of loaded modules (sys.modules) and the module search path (sys.path) are also separate. * Because sub-interpreters (and the main interpreter) are part of the same process, the insulation between them isn’t perfect — for example, using low-level file operations like os.close() they can (accidentally or maliciously) affect each other’s open files. * Because of the way extensions are shared between (sub-)interpreters, some extensions may not work properly; this is especially likely when the extension makes use of (static) global variables, or when the extension manipulates its module’s dictionary after its initialization.
Matplotlib suddenly crashes after reinstalling Xcode? Question: I was happy in my world of python and matplotlib with a good level of familiarity. I noticed Xcode on my Mac wasn't working, so I installed the latest version from Apple and it somehow broke my install of matplotlib (or numpy?)! I'm now getting

...
/sw/lib/python2.5/site-packages/matplotlib-0.91.1-py2.5-macosx-10.5-i386.egg/matplotlib/numerix/ma/__init__.py in <module>()
     14     print "using maskedarray"
     15 else:
---> 16     from numpy.core.ma import *
     17     #print "using ma"
     18 def getmaskorNone(obj):

ImportError: No module named ma

I've tried reinstalling numpy (Ver 1.3.0) and matplotlib (Ver 0.91.1) to no avail. Any ideas? Answer: You're using Python from Fink, so Xcode doesn't have anything to do with it. `from numpy.core.ma import *` works fine with Apple's bundled Python and NumPy too.
Why csv.reader is not pythonic? Question: I started to use the [csv.reader](https://docs.python.org/2/library/csv.html#csv.reader) in Python 2.6 but you can't use `len` on it, or slice it, etc. What's the reason behind this? It certainly feels very limiting. Or is this just an abandoned module in later versions? Answer: I'm pretty sure you can't use len or slice because it is an iterator. Try this instead. import csv r = csv.reader(...) lines = [line for line in r] print len(lines) #number of lines for odd in lines[1::2]: print odd # print odd lines
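If the appeal of slicing was skipping or sampling rows, `itertools.islice` gives you lazy slicing over any iterator without materializing the whole file first (a sketch, reusing the same `some.csv`):

import csv
import itertools

reader = csv.reader(open("some.csv", "rb"))
for row in itertools.islice(reader, 1, None, 2):  # every other row, starting from the second
    print row

There is still no free `len()`, though -- counting an iterator inherently means consuming it.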
SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk? Question: ## Why is :memory: in sqlite so slow? I've been trying to see if there are any performance improvements gained by using in-memory sqlite vs. disk based sqlite. Basically I'd like to trade startup time and memory to get extremely rapid queries which do _not_ hit disk during the course of the application. However, the following benchmark gives me only a factor of 1.5X in improved speed. Here, I'm generating 1M rows of random data and loading it into both a disk and memory based version of the same table. I then run random queries on both dbs, returning sets of size approx 300k. I expected the memory based version to be considerably faster, but as mentioned I'm only getting 1.5X speedups. I experimented with several other sizes of dbs and query sets; the advantage of :memory: _does_ seem to go up as the number of rows in the db increases. I'm not sure why the advantage is so small, though I had a few hypotheses: * the table used isn't big enough (in rows) to make :memory: a huge winner * more joins/tables would make the :memory: advantage more apparent * there is some kind of caching going on at the connection or OS level such that the previous results are accessible somehow, corrupting the benchmark * there is some kind of hidden disk access going on that I'm not seeing (I haven't tried lsof yet, but I did turn off the PRAGMAs for journaling) Am I doing something wrong here? Any thoughts on why :memory: isn't producing nearly instant lookups? Here's the benchmark: ==> sqlite_memory_vs_disk_benchmark.py <== #!/usr/bin/env python """Attempt to see whether :memory: offers significant performance benefits. """ import os import time import sqlite3 import numpy as np def load_mat(conn,mat): c = conn.cursor() #Try to avoid hitting disk, trading safety for speed. 
#http://stackoverflow.com/questions/304393 c.execute('PRAGMA temp_store=MEMORY;') c.execute('PRAGMA journal_mode=MEMORY;') # Make a demo table c.execute('create table if not exists demo (id1 int, id2 int, val real);') c.execute('create index id1_index on demo (id1);') c.execute('create index id2_index on demo (id2);') for row in mat: c.execute('insert into demo values(?,?,?);', (row[0],row[1],row[2])) conn.commit() def querytime(conn,query): start = time.time() foo = conn.execute(query).fetchall() diff = time.time() - start return diff #1) Build some fake data with 3 columns: int, int, float nn = 1000000 #numrows cmax = 700 #num uniques in 1st col gmax = 5000 #num uniques in 2nd col mat = np.zeros((nn,3),dtype='object') mat[:,0] = np.random.randint(0,cmax,nn) mat[:,1] = np.random.randint(0,gmax,nn) mat[:,2] = np.random.uniform(0,1,nn) #2) Load it into both dbs & build indices try: os.unlink('foo.sqlite') except OSError: pass conn_mem = sqlite3.connect(":memory:") conn_disk = sqlite3.connect('foo.sqlite') load_mat(conn_mem,mat) load_mat(conn_disk,mat) del mat #3) Execute a series of random queries and see how long it takes each of these numqs = 10 numqrows = 300000 #max number of ids of each kind results = np.zeros((numqs,3)) for qq in range(numqs): qsize = np.random.randint(1,numqrows,1) id1a = np.sort(np.random.permutation(np.arange(cmax))[0:qsize]) #ensure uniqueness of ids queried id2a = np.sort(np.random.permutation(np.arange(gmax))[0:qsize]) id1s = ','.join([str(xx) for xx in id1a]) id2s = ','.join([str(xx) for xx in id2a]) query = 'select * from demo where id1 in (%s) AND id2 in (%s);' % (id1s,id2s) results[qq,0] = round(querytime(conn_disk,query),4) results[qq,1] = round(querytime(conn_mem,query),4) results[qq,2] = int(qsize) #4) Now look at the results print " disk | memory | qsize" print "-----------------------" for row in results: print "%.4f | %.4f | %d" % (row[0],row[1],row[2]) Here's the results. Note that disk takes about 1.5X as long as memory for a fairly wide range of query sizes. [ramanujan:~]$python -OO sqlite_memory_vs_disk_clean.py disk | memory | qsize ----------------------- 9.0332 | 6.8100 | 12630 9.0905 | 6.6953 | 5894 9.0078 | 6.8384 | 17798 9.1179 | 6.7673 | 60850 9.0629 | 6.8355 | 94854 8.9688 | 6.8093 | 17940 9.0785 | 6.6993 | 58003 9.0309 | 6.8257 | 85663 9.1423 | 6.7411 | 66047 9.1814 | 6.9794 | 11345 Shouldn't RAM be almost instant relative to disk? What's going wrong here? ## Edit Some good suggestions here. I guess the main takehome point for me is that **there's probably no way to make :memory: _absolutely faster_ , but there is a way to make disk access _relatively slower._ ** In other words, the benchmark is adequately measuring the realistic performance of memory, but not the realistic performance of disk (e.g. because the cache_size pragma is too big or because I'm not doing writes). I'll mess around with those parameters and post my findings when I get a chance. That said, if there is anyone who thinks I can squeeze some more speed out of the in-memory db (other than by jacking up the cache_size and default_cache_size, which I will do), I'm all ears... Answer: It has to do with the fact that SQLite has a page cache. According to the [Documentation](http://www.sqlite.org/compile.html), the default page cache is 2000 1K pages or about 2Mb. Since this is about 75% to 90% of your data, it isn't surprising that the two number are very similar. My guess is that in addition to the SQLite page cache, the rest of the data is still in the OS disk cache. 
If you got SQLite to flush the page cache (and the disk cache) you would see some really significant differences.
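One way to test the page-cache explanation (an experiment sketch, not from the original answer) is to shrink the disk connection's cache so its queries actually touch the file:

# default cache is ~2000 pages; force the disk-backed db to do real I/O
conn_disk.execute('PRAGMA cache_size=50;')

With the cache that small, the disk timings should diverge much more sharply from :memory:.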
How do I structure Python code into modules/packages? Question: Assume I have this barebones structure:

project/
    main.py
    providers/
        __init__.py
        acme1.py
        acme2.py
        acme3.py
        acme4.py
        acme5.py
        acme6.py

Assume that `main.py` contains (partial):

if complexcondition():
    print providers.acme5.get()

Where `__init__.py` is empty and `acme*.py` contain (partial):

def get():
    value=complexcalculation()
    return value

How do I change these files to work? Note: If the answer is "import acme1", "import acme2", and so on in `__init__.py`, is there a way to accomplish that without listing them all by hand? Answer: Hey! Two years later, but maybe this will be helpful to someone. Make your **providers/__init__.py** like this:

import os
import glob

module_path = os.path.dirname(__file__)
files = glob.glob(os.path.join(module_path, 'acme*.py'))
__all__ = [os.path.basename(f)[:-3] for f in files]

You don't have to change it later if you add or remove any **providers/acme*.py**. Then use `from providers import *` in **main.py**.
access eggs in python? Question: Is there any way to call an installed python egg from python code? I need to call the sphinx documentation generator from within my python code, and currently I'm doing it like this: `os.system( "sphinx-build.exe -b html c:\\src c:\\dst" )` This works, but requires some additional configuration: the 'scripts' folder inside the python installation folder needs to be added to the system PATH (I'm on Windows). Is there any better, more native way to call an installed python egg? Answer: So basically, you want to use Sphinx as a library? Here is what `sphinx-build` does:

from pkg_resources import load_entry_point
load_entry_point('Sphinx==0.5.1', 'console_scripts', 'sphinx-build')()

Looking at `entry-points.txt` in the EGG-INFO directory, notice that the sphinx-build entry point is the `sphinx.main` function (located in `__init__.py`). Have a look at that and duplicate what it does, and you can use sphinx as a library. I have not looked at the code in detail, but it seems that the bulk of the `sphinx-build` command is done by the `build` method on a `Sphinx` object. In your code, you would have to do something like:

from sphinx.application import Sphinx
s = Sphinx(...)
s.build(...)

You need to have a look at the Sphinx source code to figure out the parameters to `Sphinx.__init__()` and `Sphinx.build()`
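Fleshing that out, a hedged sketch of the equivalent of the original `os.system` call (the five positional arguments are the stable part of the signature; the exact `build()` arguments vary between Sphinx versions, so check `sphinx/application.py` for yours):

from sphinx.application import Sphinx

src = 'c:\\src'                  # must contain conf.py
out = 'c:\\dst'
doctrees = 'c:\\dst\\.doctrees'  # pickled intermediate files
app = Sphinx(src, src, out, doctrees, 'html')
app.build()  # older versions may require explicit build() arguments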
Multiple simultaneous network connections - Telnet server, Python Question: I'm currently writing a telnet server in Python. It's a content server. People would connect to the server via telnet, and be presented with text-only content. My problem is that the server would obviously need to support more than one simultaneous connection. The current implementation I have now supports only one. This is the basic, proof-of-concept server I began with (while the program has changed greatly over time, the basic telnet framework hasn't):

import socket, os

class Server:
    def __init__(self):
        self.host, self.port = 'localhost', 50000
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.bind((self.host, self.port))

    def send(self, msg):
        end = '\r\n' # telnet line ending
        if isinstance(msg, str):
            self.conn.send(msg + end)
        elif isinstance(msg, (list, tuple)):
            self.conn.send('\n'.join(msg) + end)

    def recv(self):
        return self.conn.recv(4096).strip()

    def exit(self):
        self.send('Disconnecting you...'); self.conn.close(); self.run()
        # closing a connection, opening a new one

    # main runtime
    def run(self):
        self.socket.listen(1)
        self.conn, self.addr = self.socket.accept()
        # there would be more activity here
        # i.e.: sending things to the connection we just made

S = Server()
S.run()

Thanks for your help. Answer: Implemented in [twisted](http://twistedmatrix.com/):

from twisted.internet.protocol import Factory, Protocol
from twisted.internet import reactor

class SendContent(Protocol):
    def connectionMade(self):
        self.transport.write(self.factory.text)
        self.transport.loseConnection()

class SendContentFactory(Factory):
    protocol = SendContent
    def __init__(self, text=None):
        if text is None:
            text = """Hello, how are you my friend? Feeling fine? Good!"""
        self.text = text

reactor.listenTCP(50000, SendContentFactory())
reactor.run()

Testing:

$ telnet localhost 50000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Hello, how are you my friend? Feeling fine? Good!
Connection closed by foreign host.

Seriously, when it comes to asynchronous networking, twisted is the way to go. It handles multiple connections in a single-thread single-process approach.
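If twisted feels like a heavy dependency for a proof of concept, the standard library's `SocketServer` gives you a thread-per-connection server in a few lines (a sketch; `SocketServer` ships with Python 2.x):

import SocketServer

class ContentHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # self.rfile / self.wfile are per-connection file objects
        self.wfile.write('Hello, how are you my friend?\r\n')

server = SocketServer.ThreadingTCPServer(('localhost', 50000), ContentHandler)
server.serve_forever()

Each accepted connection gets its own thread, so a slow or idle client no longer blocks everyone else.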
How to center a GNOME pop-up notification? Question: To display a GNOME pop-up notification at (200,400) on the screen (using Python):

import pynotify
n = pynotify.Notification("This is my title", "This is my description")
n.set_hint('x', 200)
n.set_hint('y', 400)
n.show()

I'm a gtk noob. How can I make this Notification show up centered on the screen, or at the bottom-center of the screen? Perhaps my question should be "what Python snippet gets me the Linux screen dimensions?", and I'll plug those into set_hint() as appropriate. Answer: Since you're using GNOME, here's the GTK way of getting the screen resolution:

import gtk.gdk
import pynotify

n = pynotify.Notification("This is my title", "This is my description")
n.set_hint('x', gtk.gdk.screen_width()/2)
n.set_hint('y', gtk.gdk.screen_height()/2)
n.show()

(Integer division is used here, since the x and y hints are expected to be ints, as in your original snippet.)
What is the correct way to clean up when using PyOpenAL? Question: I'm looking at PyOpenAL for some sound needs with Python (obviously). Documentation is sparse (consisting of a demo script, which doesn't work unmodified) but as far as I can tell, there are two layers. Direct wrapping of OpenAL calls and a lightweight 'pythonic' wrapper - it is the latter I'm concerned with. Specifically, how do you clean up correctly? If we take a small example:

import time
import pyopenal

pyopenal.init(None)
l = pyopenal.Listener(22050)
b = pyopenal.WaveBuffer("somefile.wav")
s = pyopenal.Source()
s.buffer = b
s.looping = False
s.play()
while s.get_state() == pyopenal.AL_PLAYING:
    time.sleep(1)
pyopenal.quit()

As it is, a message is printed to the terminal along the lines of "one source not deleted, one buffer not deleted". But I am assuming that we can't use the native OpenAL calls with these objects, so how do I clean up correctly? EDIT: I eventually just ditched pyopenal and wrote a small ctypes wrapper over OpenAL and alure (pyopenal exposes the straight OpenAL functions, but I kept getting SIGFPE). Still curious as to what I was supposed to do here. Answer:

# release the references to l, b and s
del l
del b
del s

# now the WaveBuffer and Source should be destroyed, so we can:
pyopenal.quit()

Probably the destructor of pyopenal calls `quit()` before exit, so you don't need to call it yourself.
Is there a shorter way to require a file in the same directory in ruby? Question: Is there a shorter way to require a file located in the same directory (as the script being executed)? require File.expand_path(File.dirname(__FILE__) + '/some_other_script') I read that `require "my_script"` and `require "./my_script"` will actually load the script twice (ruby will not recognize that it is actually the same script), and this is the reason why `File.expand_path` is recommended: if it is used every time the script is required, then it will only be loaded once. It seems weird to me that a concise language like Ruby does not seem to have a shorter solution. For example, python simply has this: import .some_other_module_in_the_same_directory I guess I could monkey-patch `require`... but that's just evil! ;-) Answer: Since ruby 1.9 you can use `require_relative`.
How can I import the sqlite3 module into Python 2.4? Question: The sqlite3 module is included in Python version 2.5+. However, I am stuck with version 2.4. I uploaded the sqlite3 module files, added the directory to sys.path, but I get the following error when I try to import it:

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "sqlite3/__init__.py", line 23, in ?
    from dbapi2 import *
  File "sqlite3/dbapi2.py", line 26, in ?
    from _sqlite3 import *
ImportError: No module named _sqlite3

The file '_sqlite3' is in lib-dynload, but if I include this in the sqlite3 directory, I get additional errors. Any suggestions? I am working in a limited environment; I don't have access to GCC, among other things. Answer: I had the same problem with CentOS and python 2.4. My solution:

yum install python-sqlite2

and then try the following python code:

try:
    import sqlite3
except ImportError:
    from pysqlite2 import dbapi2 as sqlite3
Database Reporting Services in Django or Python Question: I am wondering if there are any django based, or even Python based, Reporting Services ala JasperReports or SQL Server Reporting Services? Basically, I would love to be able to create reports and send them out as emails as CSV or HTML or PDF without having to code the reports. Even if I have to code the report I wouldn't mind, but the whole framework with schedules and so on would be nice! PS. I know I could use Django Apps to do it, but I was hoping there were integrated solutions, or even projects such as Pinax or Satchmo, which bring together the apps needed. PPS: It would have to work off Postgres Thanks and Regards Mark Answer: "I would love to be able to create reports ... without having to code the reports" So would I. Sadly, however, each report seems to be unique and requires custom code. From Django model to CSV is easy. Start there with a few of your reports.

import csv
import sys
from myApp.models import This, That, TheOther

def parseCommandLine():
    # setup optparse to get report query parameters
    # placeholder -- return the two filter values
    return None, None

def main():
    wtr = csv.DictWriter( sys.stdout, ["Col1", "Col2", "Col3"] )
    this, that = parseCommandLine()
    thisList = This.objects.filter( name=this, that__name=that )
    for obj in thisList:
        wtr.writerow( {"Col1": obj.col1, "Col2": obj.that.col2, "Col3": obj.theOther.col3} )

if __name__ == "__main__":
    main()

HTML is pretty easy -- Django has an HTML template language. Rather than render_to_response, you simply render your template and write it to stdout. And the core of the algorithm, interestingly, is very similar to writing a CSV. Similar enough that -- without much cleverness -- you should have a design pattern that does both. Once you have the CSV working, add the HTML using Django's templates. PDFs are harder, because you have to actually work out the formatting in some detail. There are a lot of Python libraries for this. Interestingly, however, the overall pattern for PDF writing is very similar to CSV and HTML writing. Emailing means using Python's [smtplib](http://docs.python.org/library/smtplib.html) directly or Django's [email](http://docs.djangoproject.com/en/dev/topics/email/) package. This isn't too hard. All the pieces are there; you just need to email the output files produced above to some distribution list. Scheduling takes a little thinking to make best use of `crontab`. This -- perhaps -- is the hardest part of the job.
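For the emailing step, a hedged sketch using Django's email API (the addresses and attachment name are placeholders, and the CSV text is assumed to be in `csv_data` already):

from django.core.mail import EmailMessage

msg = EmailMessage(
    subject='Weekly report',
    body='Report attached.',
    from_email='reports@example.com',
    to=['boss@example.com'],
)
msg.attach('report.csv', csv_data, 'text/csv')
msg.send()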
Does order of declaration matter in models.py (Django / Python)? Question: I have something like this in models.py class ZipCode(models.Model): zip = models.CharField(max_length=20) cities = City.objects.filter(zip=self).distinct() class City(models.Model): name = models.CharField(max_length=50) slug = models.CharField(max_length=50) state = models.ForeignKey(State) zip = models.ManyToManyField(ZipCode) When I do this I get: NameError: name 'City' is not defined Is this because the order of declaration matters? And if so, how can I do this, because either way I arrange this, it looks like I'm going to get a NameError. Thanks. Answer: Apart from order issues, this is wrong: cities = City.objects.filter(zip=self).distinct() It is not inside a method, so "self" will also be undefined. It is executed only once, at class-creation time (i.e. when the module is first imported), so the attribute created would be a class attribute and have the same value for all instances. What you might be looking for is this: @property def cities(self): return City.objects.filter(zip=self).distinct() Because this is inside a method, which is not executed until it's accessed, ordering issues will no longer be a problem. As ozan points out, this is a duplication of what Django reverse relations already give you for free: a_zip_code.city_set.all() And you can use related_name to call it what you like: zip = models.ManyToManyField(ZipCode, related_name='cities') ... a_zip_code.cities.all() So I don't think the ordering issue you originally asked about is even relevant to your situation. When it is, others have already pointed out using quoted strings in ForeignKey and ManyToManyField declarations to get around it.
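For completeness, here is the string-reference trick the last paragraph mentions -- Django resolves the name lazily, so the referenced model may be defined later in the file (a sketch based on the models above):

class City(models.Model):
    name = models.CharField(max_length=50)
    slug = models.CharField(max_length=50)
    state = models.ForeignKey(State)
    zip = models.ManyToManyField('ZipCode', related_name='cities')

After which `a_zip_code.cities.all()` works exactly as described.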
How to use subversion Ctypes Python Bindings? Question: Subversion 1.6 introduced something that is called 'Ctypes Python Bindings', but it is not documented. Is there any information available on what these bindings are and how to use them? For example, I have a fresh Windows XP machine and want to control an SVN repository using subversion 1.6 and these mysterious python bindings.

import svn from almighty_ctype_subversion_bindings
svn.get( "\\rep\\project" )

And how is this connected to the pysvn project? Is this the same technology, or different technologies? Answer: You need the Subversion source distribution, Python (>= 2.5), and [ctypesgen](http://code.google.com/p/ctypesgen/). Instructions for building the ctypes bindings are [here](http://svn.apache.org/repos/asf/subversion/trunk/subversion/bindings/ctypes-python/README). You will end up with a package called `csvn`, examples of its use are [here](http://svn.apache.org/repos/asf/subversion/trunk/subversion/bindings/ctypes-python/examples/).
I need a regex for the href attribute for an mp3 file url in python Question: Based on a previous stack overflow question and contribution by cgoldberg, I came up with this regex using the python re module:

import re
urls = re.finditer('http://(.*?).mp3', htmlcode)

The variable urls is an iterable object and I can use a loop to access each mp3 file url individually if there is more than one:

for url in urls:
    mp3fileurl = url.group(0)

This technique, however, only works sometimes. I realize regular expressions will not be as reliable as a fully fledged parser module. But, sometimes, this is not reliable for the same page. I sometimes receive everything before http for some url entries. I am relatively new to regular expressions. So, I am just wondering if there is a more reliable way to go about it. Thanks in advance. New to stackoverflow and looking forward to contributing some answers as well. Answer: As pointed out by the other answers, using regular expressions to parse HTML = bad, bad idea. With that in mind, I will add in code of my favorite parser: [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/):

from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(htmlcode)
links = soup.findAll('a', href=True)
mp3s = [l for l in links if l['href'].endswith('.mp3')]
for song in mp3s:
    print song['href']
Python Iterator Help + lxml Question: I have this script- import lxml from lxml.cssselect import CSSSelector from lxml.etree import fromstring from lxml.html import parse website = parse('http://example.com').getroot() selector = website.cssselect('.name') for i in range(0,18): print selector[i].text_content() As you can see the for loop stops after a number of times that I set beforehand. I want the for loop to stop only after it has printed everything. Answer: The CSSSelector.cssselect() method returns an iterable, so you can just do: for element in selector: print element.text_content()
Piping Batch File output to a Python script Question: I'm trying to write a python script (in windows) that runs a batch file and will take the command line output of that batch file as input. The batch file runs processes that I don't have access to and gives output based on whether those processes are successful. I'd like to take those messages from the batch file and use them in the python script. Anyone have any ideas on how to do this ? Answer: import subprocess output= subprocess.Popen( ("c:\\bin\\batch.bat", "an_argument", "another_argument"), stdout=subprocess.PIPE).stdout for line in output: # do your work here output.close() Note that it's preferable to start your batch file with "`@echo off`".
Emulating pass-by-value behaviour in python Question: I would like to emulate pass-by-value behaviour in python. In other words, I would like to make absolutely sure that the functions I write do not modify user-supplied data. One possible way is to use deep copy:

from copy import deepcopy
def f(data):
    data = deepcopy(data)
    #do stuff

Is there a more efficient or more _pythonic_ way to achieve this goal, making as few assumptions as possible about the object being passed (such as a .clone() method)? **Edit** I'm aware that technically everything in python is passed by value. I was interested in _emulating_ the behaviour, i.e. making sure I don't mess with the data that was passed to the function. I guess the most general way is to clone the object in question either with its own clone mechanism or with deepcopy. Answer: You can make a decorator and put the cloning behaviour in that.

>>> from copy import deepcopy
>>> def passbyval(func):
        def new(*args):
            cargs = [deepcopy(arg) for arg in args]
            return func(*cargs)
        return new

>>> @passbyval
    def myfunc(a):
        print a

>>> myfunc(20)
20

This is not the most robust way, and doesn't handle keyword arguments or class methods (lack of self argument), but you get the picture. Note that the following statements are equal:

@somedecorator
def func1():
    pass

# ... same as ...

def func2():
    pass
func2 = somedecorator(func2)

You could even have the decorator take some kind of function that does the cloning, thus allowing the user of the decorator to decide the cloning strategy. In that case the decorator is probably best implemented as a class with `__call__` overridden.
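Extending the accepted idea to cover keyword arguments too (a sketch; `functools.wraps` just preserves the wrapped function's name and docstring):

from copy import deepcopy
from functools import wraps

def passbyval(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        cargs = [deepcopy(arg) for arg in args]
        ckwargs = dict((k, deepcopy(v)) for k, v in kwargs.items())
        return func(*cargs, **ckwargs)
    return wrapper

Class methods incidentally do work through this -- `self` is just `args[0]` -- though deep-copying `self` is rarely what you want.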
Read Unicode characters from command-line arguments in Python 2.x on Windows Question: I want my Python script to be able to read Unicode command line arguments in Windows. But it appears that sys.argv is a string encoded in some local encoding, rather than Unicode. How can I read the command line in full Unicode? Example code: `argv.py` import sys first_arg = sys.argv[1] print first_arg print type(first_arg) print first_arg.encode("hex") print open(first_arg) On my PC set up for Japanese code page, I get: C:\temp>argv.py "PC・ソフト申請書08.09.24.doc" PC・ソフト申請書08.09.24.doc <type 'str'> 50438145835c83748367905c90bf8f9130382e30392e32342e646f63 <open file 'PC・ソフト申請書08.09.24.doc', mode 'r' at 0x00917D90> That's Shift-JIS encoded I believe, and it "works" for that filename. But it breaks for filenames with characters that aren't in the Shift-JIS character set—the final "open" call fails: C:\temp>argv.py Jörgen.txt Jorgen.txt <type 'str'> 4a6f7267656e2e747874 Traceback (most recent call last): File "C:\temp\argv.py", line 7, in <module> print open(first_arg) IOError: [Errno 2] No such file or directory: 'Jorgen.txt' Note—I'm talking about Python 2.x, not Python 3.0. I've found that Python 3.0 gives `sys.argv` as proper Unicode. But it's a bit early yet to transition to Python 3.0 (due to lack of 3rd party library support). **Update:** A few answers have said I should decode according to whatever the `sys.argv` is encoded in. The problem with that is that it's not full Unicode, so some characters are not representable. Here's the use case that gives me grief: I have [enabled drag-and-drop of files onto .py files in Windows Explorer](http://stackoverflow.com/q/142844/60075). I have file names with all sorts of characters, including some not in the system default code page. My Python script doesn't get the right Unicode filenames passed to it via sys.argv in all cases, when the characters aren't representable in the current code page encoding. There is certainly some Windows API to read the command line with full Unicode (and Python 3.0 does it). I assume the Python 2.x interpreter is not using it. Answer: Here is a solution that is just what I'm looking for, making a call to the Windows `GetCommandLineArgvW ` function: [Get sys.argv with Unicode characters under Windows](http://code.activestate.com/recipes/572200/) (from ActiveState) But I've made several changes, to simplify its usage and better handle certain uses. Here is what I use: **`win32_unicode_argv.py`** """ win32_unicode_argv.py Importing this will replace sys.argv with a full Unicode form. Windows only. From this site, with adaptations: http://code.activestate.com/recipes/572200/ Usage: simply import this module into a script. sys.argv is changed to be a list of Unicode strings. """ import sys def win32_unicode_argv(): """Uses shell32.GetCommandLineArgvW to get sys.argv as a list of Unicode strings. Versions 2.x of Python don't support Unicode in sys.argv on Windows, with the underlying Windows API instead replacing multi-byte characters with '?'. 
""" from ctypes import POINTER, byref, cdll, c_int, windll from ctypes.wintypes import LPCWSTR, LPWSTR GetCommandLineW = cdll.kernel32.GetCommandLineW GetCommandLineW.argtypes = [] GetCommandLineW.restype = LPCWSTR CommandLineToArgvW = windll.shell32.CommandLineToArgvW CommandLineToArgvW.argtypes = [LPCWSTR, POINTER(c_int)] CommandLineToArgvW.restype = POINTER(LPWSTR) cmd = GetCommandLineW() argc = c_int(0) argv = CommandLineToArgvW(cmd, byref(argc)) if argc.value > 0: # Remove Python executable and commands if present start = argc.value - len(sys.argv) return [argv[i] for i in xrange(start, argc.value)] sys.argv = win32_unicode_argv() Now, the way I use it is simply to do: import sys import win32_unicode_argv and from then on, `sys.argv` is a list of Unicode strings. The Python `optparse` module seems happy to parse it, which is great.
Cross-platform way of getting temp directory in Python Question: Is there a cross-platform way of getting the path to the _`temp`_ directory in Python 2.6? For example, under Linux that would be `/tmp`, while under XP `C:\Documents and settings\[user]\Application settings\Temp`. Thanks! Answer: That would be the [tempfile](http://docs.python.org/library/tempfile.html) module. It has functions to get the temporary directory, and also has some shortcuts to create temporary files and directories in it, either named or unnamed. Example: import tempfile print tempfile.gettempdir() # prints the current temporary directory f = tempfile.TemporaryFile() f.write('something on temporaryfile') f.seek(0) # return to beginning of file print f.read() # reads data back from the file f.close() # temporary file is automatically deleted here For completeness, here's how it searches for the temporary directory, according to the documentation: 1. The directory named by the `TMPDIR` environment variable. 2. The directory named by the `TEMP` environment variable. 3. The directory named by the `TMP` environment variable. 4. A platform-specific location: * On _RiscOS_ , the directory named by the `Wimp$ScrapDir` environment variable. * On _Windows_ , the directories `C:\TEMP`, `C:\TMP`, `\TEMP`, and `\TMP`, in that order. * On all other platforms, the directories `/tmp`, `/var/tmp`, and `/usr/tmp`, in that order. 5. As a last resort, the current working directory.
How do I check if a disk is in a drive using python? Question: Say I want to manipulate some files on a floppy drive or a USB card reader. How do I check to see if the drive in question is ready? (That is, has a disk physically inserted.) The drive letter _exists_ , so os.path.exists() will always return True in this case. Also, at this point in the process I don't yet know any file names, so checking to see if a given file exists also won't work. Some clarification: the issue here is exception handling. Most of the win32 API calls in question just throw an exception when you try to access a drive that isn't ready. Normally, this would work fine - look for something like the free space, and then catch the raised exception and assume that means there isn't a disk present. However, even when I catch any and all exceptions, I still get an angry exception dialog box from Windows telling me the floppy / card reader isn't ready. So, I guess the real question is - how do I suppress the windows error box? Answer: And the answer, as with so many things, turns out to be in [an article about C++/Win32 programming from a decade ago](http://bcbjournal.org/articles/vol2/9806/Detecting%5Fdisk%5Ferrors.htm). The issue, in a nutshell, is that Windows handles floppy disk errors slightly differently than other kinds of drive errors. By default, no matter what your program does, or **thinks** it's doing, Windows will intercept any errors thrown by the device and present a dialog box to the user rather than letting the program handle it - the exact issue I was having. But, as it turns out, there's a Win32 API call to solve this issue, namely `SetErrorMode()`. In a nutshell (and I'm handwaving away a lot of the details here), we can use `SetErrorMode()` to get Windows to stop being quite so paranoid, do our thing and let the program handle the situation, and then reset the Windows error mode back to what it was before as if we had never been there. (There's probably a Keyser Soze joke here, but I've had the wrong amount of caffeine today to be able to find it.) Adapting the C++ sample code from the linked article, that looks about like this:

    int OldMode; //a place to store the old error mode
    //save the old error mode and set the new mode to let us do the work:
    OldMode = SetErrorMode(SEM_FAILCRITICALERRORS);
    // Do whatever we need to do that might cause an error
    SetErrorMode(OldMode); //put things back the way they were

Under C++, detecting errors the right way needs the `GetLastError()` function, which we fortunately don't need to worry about here, since this is a Python question. In our case, Python's exception handling works fine. This, then, is the function I knocked together to check a drive letter for "readiness", all ready for copy-pasting if anyone else needs it:

    import win32api
    import win32file

    def testDrive( currentLetter ):
        """
        Tests a given drive letter to see if the drive in question is ready for
        access. This is to handle things like floppy drives and USB card
        readers which have to have physical media inserted in order to be
        accessed. Returns true if the drive is ready, false if not.
        """
        returnValue = False
        #This prevents Windows from showing an error to the user, and allows python
        #to handle the exception on its own.
        oldError = win32api.SetErrorMode( 1 ) #note that SEM_FAILCRITICALERRORS = 1
        try:
            freeSpace = win32file.GetDiskFreeSpaceEx( currentLetter )
        except:
            returnValue = False
        else:
            returnValue = True
        #restore the Windows error handling state to whatever it was before we
        #started messing with it:
        win32api.SetErrorMode( oldError )
        return returnValue

I've been using this quite a bit the last few days, and it's been working beautifully for both floppies and USB card readers. A few notes: pretty much any function needing disk access will work in the try block - all we're looking for is an exception due to the media not being present. Also, while the python `win32api` package exposes all the functions we need, it doesn't seem to have any of the flag constants. After a trip to the ancient bowels of MSDN, it turns out that SEM_FAILCRITICALERRORS is equal to 1, which makes our life awfully easy. I hope this helps someone else with a similar problem!
How does Python import modules from .egg files? Question: How can I open `__init__.pyc` here? >>> import stompservice <module 'stompservice' from 'C:\Python25\lib\site-packages\stompservice-0.1.0-py2.5.egg\stompservice\__init__.pyc'> All I see in `C:\Python25\lib\site-packages\` is the .egg file, but where are the internal files of the package? Answer: <http://peak.telecommunity.com/DevCenter/PythonEggs> .egg files are simply renamed zip files. Open the egg with your zip program, or just rename the extension to .zip, and extract.
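Addendum: since eggs are just zip archives, the standard library's `zipfile` module can also inspect them programmatically. A minimal sketch using the egg path from the question (adjust for your machine; if the egg ships only .pyc files, read the .pyc member instead):

    import zipfile

    egg_path = r'C:\Python25\lib\site-packages\stompservice-0.1.0-py2.5.egg'
    egg = zipfile.ZipFile(egg_path)
    for name in egg.namelist():
        print name  # e.g. stompservice/__init__.pyc
    data = egg.read('stompservice/__init__.pyc')  # read one member without extracting
    egg.close()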
What do I need to read Microsoft Access databases using Python? Question: How can I access Microsoft Access databases in Python? With SQL? I'd prefer a solution that works with Linux, but I could also settle for Windows. I only require read access. Answer: On Linux, MDBTools is your only chance as of now. [[disputed]](http://stackoverflow.com/questions/853370/#comment53931912_15400363) On Windows, you can deal with mdb files with pypyodbc. To create an Access mdb file:

    import pypyodbc
    pypyodbc.win_create_mdb( "D:\\Your_MDB_file_path.mdb" )

[Here is a Hello World script](https://code.google.com/p/pypyodbc/wiki/pypyodbc_for_access_mdb_file) that fully demonstrates pypyodbc's Access support functions. Disclaimer: I'm the developer of pypyodbc.
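Addendum: for the read-only case the asker mentions, something along these lines should work on Windows (a hedged sketch: the path and table name below are hypothetical, and the Access ODBC driver must be installed):

    import pypyodbc

    conn = pypyodbc.connect(
        'Driver={Microsoft Access Driver (*.mdb)};DBQ=D:\\Your_MDB_file_path.mdb')
    cur = conn.cursor()
    cur.execute('SELECT * FROM your_table')  # hypothetical table name
    for row in cur.fetchall():
        print row
    cur.close()
    conn.close()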
IronPython and instantiating COM objects Question: I'm using IronPython 2.0 in a SharpDevelop 3.1 console window. I'm trying to reference and use the Redemption CDO replacement library. The standard usage for the library is to instantiate an RDOSession object, then use the methods on that object to navigate through the RDO object model. I've registered the Redemption COM dll and referenced it from a project, which created an interop assembly for it. In the IronPython console window, I then do the following: import clr clr.AddReferenceToFileAndPath(r'd:\SharpDevelop Projects\TestPython\TestPython\obj\Debug\Interop.Redemption.dll') import Redemption This works so far. The next step would be to get an RDOSession object and instantiate it. If it worked, this would be how I would think to do that: session = Redemption.RDOSession() However, this gives an error: TypeError: Cannot create instances of RDOSession So my question is (or was): how do I instantiate an object like RDOSession? However, I just answered my own question in my attempt to explain it. I'll continue with the answer since despite the fact that it almost seems silly to do so, perhaps it will be useful to someone else. When I went to type in the line "session = Redemption.RDOSession()", the calltip came up with "RDOSession" as well as "RDOSessionClass" right below it. I don't know how the latter was generated, but it sounded like an instantiable (if that's a word) object. >>> session = Redemption.RDOSessionClass() >>> Lo and behold. Calling the Logon method worked like a champ on the new session object. Thanks stackoverflow! Answer: You need to use: session = Redemption.RDOSessionClass()
Getting pdb-style caller information in python Question: Let's say I have the following method (in a class or a module, I don't think it matters): def someMethod(): pass I'd like to access the caller's state at the time this method is called. `traceback.extract_stack` just gives me some strings about the call stack. I'd like something like `pdb` in which I can set a breakpoint in `someMethod()` and then type 'u' to go up the call stack and then examine the state of the system then. Answer: I figured it out: import inspect def callMe(): tag = '' frame = inspect.currentframe() try: tag = frame.f_back.f_locals['self']._tag finally: del frame return tag
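Addendum: for a version that doesn't assume the caller has a `self._tag`, here is a hedged sketch using `inspect.stack()` to peek at the caller's locals (the explicit frame cleanup follows the `inspect` docs' advice about reference cycles):

    import inspect

    def someMethod():
        caller = inspect.stack()[1]  # frame record of whoever called us
        frame, filename, lineno = caller[0], caller[1], caller[2]
        try:
            print 'called from %s:%d' % (filename, lineno)
            print 'caller locals:', frame.f_locals
        finally:
            del frame  # avoid keeping the caller's frame alive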
whoami in python Question: What is the best way to find out the user that a python process is running under? I could do this: name = os.popen('whoami').read() But that has to start a whole new process. os.environ["USER"] works sometimes, but sometimes that environment variable isn't set. Answer: import getpass print getpass.getuser() See the documentation of the [getpass](http://docs.python.org/library/getpass.html) module. > getpass.getuser() > > Return the “login name” of the user. Availability: Unix, Windows. > > This function checks the environment variables LOGNAME, USER, LNAME and > USERNAME, in order, and returns the value of the first one which is set to a > non-empty string. If none are set, the login name from the password database > is returned on systems which support the pwd module, otherwise, an exception > is raised.
How do capture groups work? (wrt python regular expressions) Question: While using regex to help solve a problem in the [Python Challenge](http://www.pythonchallenge.com/), I came across some behaviour that confused me. from [here](http://docs.python.org/library/re.html#regular-expression-syntax): (...) Matches whatever regular expression is inside the parentheses. and '+' Causes the resulting RE to match 1 or more repetitions of the preceding RE. So this makes sense:

    >>>import re
    >>>re.findall(r"(\d+)", "1111112")
    ['1111112']

But this doesn't:

    >>> re.findall(r"(\d)+", "1111112")
    ['2']

I realise that findall returns only groups when groups are present in the regex, but why is only the '2' returned? What happens to all the 1's in the match? Answer: Because you only have one capturing group, but it's "run" repeatedly, the new matches are repeatedly entered into the "storage space" for that group. In other words, the `1`s were lost when they were "overwritten" by subsequent `1`s and eventually the `2`.
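Addendum: you can watch the overwriting happen with `match`, and collect every digit by repeating the whole pattern instead of the group:

    >>> import re
    >>> m = re.match(r"(\d)+", "1111112")
    >>> m.group()   # the overall match still spans every digit
    '1111112'
    >>> m.group(1)  # the group only keeps its final repetition
    '2'
    >>> re.findall(r"\d", "1111112")  # repeat the pattern, not the group
    ['1', '1', '1', '1', '1', '1', '2']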
Ordering a list of dictionaries in python Question: I've got a python list of dictionaries:

    mylist = [
        {'id':0, 'weight':10, 'factor':1, 'meta':'ABC'},
        {'id':1, 'weight':5, 'factor':1, 'meta':'ABC'},
        {'id':2, 'weight':5, 'factor':2, 'meta':'ABC'},
        {'id':3, 'weight':1, 'factor':1, 'meta':'ABC'}
    ]

What's the most efficient/cleanest way to order that list by weight then factor (numerically)? The resulting list should look like:

    mylist = [
        {'id':3, 'weight':1, 'factor':1, 'meta':'ABC'},
        {'id':1, 'weight':5, 'factor':1, 'meta':'ABC'},
        {'id':2, 'weight':5, 'factor':2, 'meta':'ABC'},
        {'id':0, 'weight':10, 'factor':1, 'meta':'ABC'},
    ]

Answer:

    mylist.sort(key=lambda d: (d['weight'], d['factor']))

or

    import operator
    mylist.sort(key=operator.itemgetter('weight', 'factor'))
Problem using py2app with the lxml package Question: I am trying to use 'py2app' to generate a standalone application from some Python scripts. The Python uses the 'lxml' package, and I've found that I have to specify this explicitly in the setup.py file that 'py2app' uses. However, the resulting application program still won't run on machines that haven't had 'lxml' installed. My setup.py looks like this:

    from setuptools import setup
    OPTIONS = {'argv_emulation': True, 'packages' : ['lxml']}
    setup(app=['MyApp.py'], data_files=[],
          options={'py2app' : OPTIONS}, setup_requires=['py2app'])

Running the application produces the following output:

    MyApp Error
    An unexpected error has occurred during execution of the main script
    ImportError: dlopen(/Users/ake/XXXX/XXXX/MyApp.app/Contents/Resources/lib/python2.5/lxml/etree.so, 2): Symbol not found: _xmlSchematronParse
    Referenced from: /Users/ake/XXXX/XXXX/MyApp.app/Contents/Resources/lib/python2.5/lxml/etree.so
    Expected in: dynamic lookup

The symbol '_xmlSchematronParse' is from a library called 'libxml2' that 'lxml' depends on. The version that comes preinstalled with Mac OS X isn't up to date enough for 'lxml', so I had to install version 2.7.2 (in /usr/local). py2app, for some reason, is linking in the version in /Developer/SDKs/MacOSX10.3.9.sdk/usr/lib. When I run my application as a Python script though, the correct version is found. (I checked this just now by hiding the 2.7.2 version.) So my question now is, how can I tell py2app where to look for libraries? Answer: Found it. py2app has a 'frameworks' option to let you specify frameworks, and also dylibs. My setup.py file now looks like this:

    from setuptools import setup
    DATA_FILES = []
    OPTIONS = {'argv_emulation': True,
               'packages' : ['lxml'],
               'frameworks' : ['/usr/local/libxml2-2.7.2/lib/libxml2.2.7.2.dylib']
               }
    setup(app=['MyApp.py'], data_files=DATA_FILES,
          options={'py2app' : OPTIONS}, setup_requires=['py2app'])

and that's fixed it. Thanks for the suggestions that led me here.
Good examples of python-memcache (memcached) being used in Python? Question: I'm writing a web app using Python and the web.py framework, and I need to use memcached throughout. I've been searching the internet trying to find some good documentation on the [python-memcached](http://www.tummy.com/Community/software/python-memcached/) module, but all I could find was [this example on the MySQL website](http://dev.mysql.com/doc/refman/5.1/en/ha-memcached-interfaces- python.html), and the documentation on its methods isn't great. Answer: It's fairly simple. You write values using keys and expiry times. You get values using keys. You can expire keys from the system. Most clients follow the same rules. You can read the generic instructions and best practices on [the memcached homepage](http://www.danga.com/memcached/). If you really want to dig into it, I'd look at the source. Here's the header comment: """ client module for memcached (memory cache daemon) Overview ======== See U{the MemCached homepage<http://www.danga.com/memcached>} for more about memcached. Usage summary ============= This should give you a feel for how this module operates:: import memcache mc = memcache.Client(['127.0.0.1:11211'], debug=0) mc.set("some_key", "Some value") value = mc.get("some_key") mc.set("another_key", 3) mc.delete("another_key") mc.set("key", "1") # note that the key used for incr/decr must be a string. mc.incr("key") mc.decr("key") The standard way to use memcache with a database is like this:: key = derive_key(obj) obj = mc.get(key) if not obj: obj = backend_api.get(...) mc.set(key, obj) # we now have obj, and future passes through this code # will use the object from the cache. Detailed Documentation ====================== More detailed documentation is available in the L{Client} class. """
In Python, How Do You Filter a String Such That Only Characters in Your List Are Returned? Question: Imagine a string, like 'Agh#$%#%2341- -!zdrkfd' and I only wish to perform some operation on it such that only the lowercase letters are returned (as an example), which in this case would bring 'ghzdrkfd'. How do you do this in Python? The obvious way would be to create a list of characters, 'a' through 'z', then iterate over the characters in my string and build a new string, character by character, of those in my list only. This seems primitive. I was wondering if regular expressions are appropriate. Replacing unwanted characters seems problematic and I tend to prefer whitelisting over blacklisting. The `.match` function does not seem appropriate. I have looked over the appropriate page on the Python site, but have not found a method which seems to fit. If regular expressions are not appropriate and the correct approach is looping, is there a simple function which "explodes" a string into a list? Or am I just hitting another for loop there? Answer: If you are looking for efficiency, using the [translate](http://docs.python.org/library/stdtypes.html#str.translate) function is the fastest approach. It can be used to quickly replace characters and/or delete them.

    import string
    delete_table = string.maketrans(
        string.ascii_lowercase, ' ' * len(string.ascii_lowercase)
    )
    table = string.maketrans('', '')
    # delete_table holds every byte value except a-z (their slots contain
    # spaces), so passing it as the deletechars argument strips everything
    # that is not a lowercase letter.
    "Agh#$%#%2341- -!zdrkfd".translate(table, delete_table)

**In python 2.6:** you don't need the second table anymore

    import string
    delete_table = string.maketrans(
        string.ascii_lowercase, ' ' * len(string.ascii_lowercase)
    )
    "Agh#$%#%2341- -!zdrkfd".translate(None, delete_table)

This method is way faster than any other. Of course you need to store the delete_table somewhere and use it. But even if you don't store it and build it every time, it is still going to be faster than the other suggested methods so far. To confirm my claims, here are the results:

    for i in xrange(10000):
        ''.join(c for c in s if c.islower())

    real 0m0.189s
    user 0m0.176s
    sys 0m0.012s

While running the regular expression solution:

    for i in xrange(10000):
        re.sub(r'[^a-z]', '', s)

    real 0m0.172s
    user 0m0.164s
    sys 0m0.004s

**[Upon request]** If you pre-compile the regular expression:

    r = re.compile(r'[^a-z]')
    for i in xrange(10000):
        r.sub('', s)

    real 0m0.166s
    user 0m0.144s
    sys 0m0.008s

Running the translate method the same number of times took:

    real 0m0.075s
    user 0m0.064s
    sys 0m0.012s
JavaScript build options/tools Question: These days I find myself shifting out more and more work to the client side and hence my JS files tend to get bigger and bigger. I have come to the point where most HTML pages have half a dozen or more JS imports in the header and I realised that this is hurting loading times. I have recently discovered this [script](http://www.hunlock.com/blogs/Supercharged%5FJavascript) which lets you download several JS files with one HTTP request. It is written in PHP and being a Django fan I'm planning to rewrite it in Python. I'm planning to use an HTTP redirect to the pre-minified and concatenated file and was wondering what the cost of a 301 would be. Please let me know if that is a stupid idea. On the other hand, I am a little worried about introducing scripting logic into the serving of static files and I was wondering if there is a viable build alternative like, say, an ant task that concatenates and minifies JS files and replaces multiple JS downloads in an HTML header with one big one, like the script does. Answer: For PHP I certainly favour doing it dynamically just because if you introduce a build step you're losing one of the main benefits of using PHP. In fact, at the risk of self-promotion I've written [Supercharging Javascript in PHP](http://www.cforcoding.com/2009/05/supercharging-javascript.html) about this very issue. Of course other technologies may vary. Again it is PHP but it's not just a lump of code for you to use (although you can jump straight to [Part 6](http://www.cforcoding.com/2009/05/supercharging- javascript-part-6.html) if you just want some fully working code) and may have value to you in terms of identifying the issues and doing things the right way and why you do them that way. I favour having bundles of Javascript files (maybe only one for the entire application) and then each page simply activates the behaviour it needs through standard means but all the code bodies are in the larger cached and minified JS file. It works out fastest this way and is a good way to go. If you do want it as part of a build process, which is a reasonable solution if you have a build process anyway, then I suggest you minify your code. There are lots of tools to do this. Have a look at [YUI Compressor](http://developer.yahoo.com/yui/compressor/). If you do a static combine of JS files, the other stuff mentioned above such as gzipping and associated issues are still relevant.
GitPython and sending commands to the Git object Question: [GitPython](http://gitorious.org/git-python) is a way of interacting with git from python. I'm trying to access the basic git commands (e.g. `git commit -m "message"`) from this module, which according to [this](http://pysync.googlecode.com/files/GitPython.pdf) should be accessed through the Git module. Here's what I've tried so far to get these commands working:

    >>> import git
    >>> foo = git.Git("~/git/GitPython")
    >>> bar = "git commit -m 'message'"
    >>> beef = git.Git.execute(foo,bar)

This shows up an error saying that there is no such file or directory. I've also tried the following as paths to my git directory: `~/git/GitPython/.git` `/Users/bacon/git/gitclient/` The only other option is that the command is wrong, so I tried: `commit -m "message"` as well, and still get "no such file or directory". What do I need to do to get these git commands working properly? Answer: I haven't tried it to verify yet, but it seems git.Git.execute expects a list of command-line arguments (if you give it a string it'll look for an executable exactly matching the string, spaces and everything - which naturally wouldn't be found), so something like this I think would work:

    import git
    import os, os.path

    g = git.Git(os.path.expanduser("~/git/GitPython"))
    # no shell is involved, so the message needs no extra quoting
    result = g.execute(["git", "commit", "-m", "message"])

other changes:

  * I expect using a path with ~ in it wouldn't work, so I used os.path.expanduser to expand ~ to your home directory
  * using instance.method(*args) instead of Class.method(instance, *args) is generally preferred, so I changed that, though it'd still work the other way

There might be saner ways than manually running the commit command though (I just didn't notice any quickly looking through the source), so I suggest making sure there isn't a higher-level way before doing it that way.
Daemon python wrapper "subprocess I/O timed out", need some directions Question: I am not very familiar with the way of creating a daemon in Python, so when trying to install and run a third-party open source TeX Python wrapper I got bitten by an error I do not really understand. I added some print statements to help debugging. The faulty module is called texdp.py. When I run mathtrand, which calls texdp's server start, I get the following error:

    output_fds {8: 'dvi', 5: 'log', 6: 'logfile', 7: 'err'}
    input_fds 3
    readable, writable [] [3] outflds, inputflds: [8, 5, 6, 7] [3]
    pointer len_str: 0 63
    folder fd: 7
    readable, writable [5] [] outflds, inputflds: [8, 5, 6, 7] []
    pointer len_str: 63 63
    folder fd: 7
    readable, writable [5] [] outflds, inputflds: [8, 5, 6, 7] []
    pointer len_str: 63 63
    folder fd: 5
    readable, writable [] [] outflds, inputflds: [8, 5, 6, 7] []
    pointer len_str: 63 63
    folder fd: 5
    SUB IO ERROR: readable [] pointer == len_str: 63 , 63
    Traceback (most recent call last):
      File "/usr/local/bin/mathtrand", line 18, in <module>
        server.start()
      File "/usr/local/lib/python2.6/dist-packages/mathtran/server.py", line 71, in start
        self.secplain.start()
      File "/usr/local/lib/python2.6/dist-packages/tex/texdp.py", line 159, in start
        self.process(self._params.start)
      File "/usr/local/lib/python2.6/dist-packages/tex/texdp.py", line 175, in process
        value = self._process(str + self._params.done, self._params.done_str)
      File "/usr/local/lib/python2.6/dist-packages/tex/texdp.py", line 210, in _process
        raise SubprocessError, 'subprocess I/O timed out'

The part of the code responsible is attached and located around line 200 in the method **def _process**. I have no idea where to start looking, or what this error really means. Any help is more than welcome. https://texd.svn.sourceforge.net/svnroot/texd/trunk/py/tex/texdp.py

    # Copyright: (c) 2007 The Open University, Milton Keynes, UK
    # License: GPL version 2 or (at your option) any later version.
    # Author: Jonathan Fine <jfine@pytex.org>, <J.Fine@open.ac.uk>

    """Wrapper around TeX process, to handle input and output.

    Further comments to go here.
    """

    __version__ = '$Revision: 116 $'[11:-2] # $Source$

    # TODO: Move interface instances to elsewhere.
    # TODO: error recovery, e.g. undefined control sequence.
    # TODO: Abnormal exit is leaving orphaned processes.
    # TODO: Refactor _process into tex.util, share with metapostdp.

    import os # Create directories and fifos
    from select import select # Helps handle i/o to TeX process
    from tex.util import make_nonblocking # For non-blocking file descriptor
    from tex.util import DaemonSubprocess
    from tex.dviopcode import FNT_DEF1, FNT_DEF4
    import signal

    class SubprocessError(EnvironmentError):
        pass

    # TODO: This belongs elsewhere.
    class Interface(object):
        """Stores useful, but format specific, constants."""
        def __init__(self, **kwargs):
            # TODO: Be more specific about the parameters.
            self.__dict__ = kwargs

    # TeX knows about these fonts, but Python does not yet know.
    # This list created by command: $tex --ini '&plain' \\dump
    preloaded_fonts = (
        'cmr10', 'cmr9', 'cmr8', 'cmr7', 'cmr6', 'cmr5',
        'cmmi10', 'cmmi9', 'cmmi8', 'cmmi7', 'cmmi6', 'cmmi5',
        'cmsy10', 'cmsy9', 'cmsy8', 'cmsy7', 'cmsy6', 'cmsy5',
        'cmex10', 'cmss10', 'cmssq8', 'cmssi10', 'cmssqi8',
        'cmbx10', 'cmbx9', 'cmbx8', 'cmbx7', 'cmbx6', 'cmbx5',
        'cmtt10', 'cmtt9', 'cmtt8', 'cmsltt10', 'cmsl10', 'cmsl9', 'cmsl8',
        'cmti10', 'cmti9', 'cmti8', 'cmti7', 'cmu10',
        'cmmib10', 'cmbsy10', 'cmcsc10', 'cmssbx10', 'cmdunh10',
        'cmr7 scaled 2074', 'cmtt10 scaled 1440', 'cmssbx10 scaled 1440',
        'manfnt',
    )

    # Ship out a page that starts with a font def.
    load_font_template = \
    r'''%%
    \begingroup
      \hoffset 0sp
      \voffset 0sp
      \setbox0\hbox{\font\tmp %s\relax\tmp M}%%
      \ht0 0sp
      \shipout\box 0
    \endgroup
    '''

    secplain_load_font_template = \
    r'''%%
    \_begingroup
      \_hoffset 0sp
      \_voffset 0sp
      \_setbox0\_hbox{\_font\_tmp %s\_relax\_tmp M}%%
      \_ht0 0sp
      \_shipout\_box 0
    \_endgroup
    '''

    plain = Interface(format='plain',
                      start = r'\shipout\hbox{}' '\n',
                      done = '\n' r'\immediate\write16{DONE}\read-1to\temp ' '\n',
                      done_str = 'DONE\n',
                      stop = '\end' '\n',
                      preloaded_fonts = preloaded_fonts,
                      load_font_template = load_font_template,
                      )

    secplain = Interface(format='secplain',
                         start = r'\_shipout\_hbox{}' '\n',
                         done = '\n' r'\_immediate\_write16{DONE}\_read-1to\_temp ' '\n',
                         done_str = 'DONE\n',
                         stop = '\_end' '\n',
                         preloaded_fonts = preloaded_fonts,
                         load_font_template = secplain_load_font_template,
                         )

    class Texdp(DaemonSubprocess):
        """Wrapper around TeX process that handles input and output.

        More comments go here.
        """

        _fifos = ('texput.tex', 'texput.log', 'texput.dvi')

        def _make_args(self):
            # Don Knuth created plain.fmt, renamed by some to tex.fmt.
            fmt = self._params.format
            if fmt == 'plain' or fmt == 'tex':
                fmt = ''
            else:
                fmt = '--fmt=' + fmt
            # Build up the arguments list.
            args = ('tex', '--ipc',)
            args += ('--output-comment=""',) # Don't record time of run.
            if fmt:
                args += (fmt,)
            args += ('texput.tex',)
            return args

        def start(self):
            super(Texdp, self).start() # Start the TeX process.
            # We will now initialise TeX, and connect to file descriptors.
            # We need to do some low-level input/output, in order to
            # manage long input strings. Therefore, we use file
            # descriptors rather than file objects.
            # We map output fds to what will be a dictionary key.
            ofd = self._output_fd_dict = {}
            cwd = self._cwd # Shorthand.
            child = self._child
            # For us, stdin and stdout are special.
            self._stdin = child.stdin.fileno()
            self._stdout = child.stdout.fileno()
            # Read stdout and stderr to 'log' and 'err' respectively.
            ofd[self._stdout] = 'log'
            ofd[child.stderr.fileno()] = 'err'
            # Open 'texput.tex', and block until it is available, which is
            # when TeX has started. Then make 'texput.tex' non-blocking,
            # in case of a long write.
            self._texin = os.open(os.path.join(cwd, 'texput.tex'), os.O_WRONLY)
            make_nonblocking(self._texin)
            # Read 'texput.log' and 'texput.dvi' to 'logfile' and 'dvi'.
            for src, tgt in (('texput.log', 'logfile'), ('texput.dvi', 'dvi')):
                fd = os.open(os.path.join(cwd, src), os.O_RDONLY|os.O_NONBLOCK)
                ofd[fd] = tgt
            # Ship out blank page, and initialise preloaded fonts.
            self.process(self._params.start)
            self._fontdefs = []
            for font_spec in self._params.preloaded_fonts:
                self.load_new_font(font_spec)

        def process(self, str):
            "Return dictionary with log, dvi, logfile and err entries."
            # TeX will read the data, followed by the 'done' command.
            # The 'done' command will cause TeX to write the 'done_str',
            # which signals the end of the process.
It will also pause
            # TeX for input.
            # TODO: I do not know why the pause is required, but it is.
            # Remove it here and in the _params, and the program hangs.
            value = self._process(str + self._params.done, self._params.done_str)
            self._child.stdin.write('\n') # TeX is paused for input.
            return value

        def _process(self, str, done_str):
            # Write str, and read output, until we are done. Then gather
            # up the accumulated output, and return as a dictionary. The
            # input string might be long. Later, we might allow writing to
            # stdin, in response to errors.
            # Initialisation.
            print "output_fds ", self._output_fd_dict
            output_fds = self._output_fd_dict.keys()
            print "input_fds ", self._texin
            input_fds = [self._texin]
            accumulator = {}
            for fd in output_fds:
                accumulator[fd] = []
            pointer, len_str = 0, len(str)
            # The main input/output loop.
            # TODO: magic number, timeout.
            done = False
            while not done:
                readable, writable = select(output_fds, input_fds, [], 0.1)[0:2]
                print "readable, writable", readable, writable, " outflds, inputflds: ", output_fds, input_fds
                print "pointer len_str: ", pointer, len_str
                print "folder fd: ", fd
                if not readable and pointer == len_str:
                    print "SUB IO ERROR: readable", readable, " pointer == len_str:", pointer, ",", len_str
                    os.kill(self._child.pid, signal.SIGKILL)
                    self._child.wait()
                    raise SubprocessError, 'subprocess I/O timed out'
                if pointer != len_str and writable:
                    written = os.write(self._texin, str[pointer:pointer+4096])
                    pointer += written
                    if pointer == len_str:
                        input_fds = []
                for fd in readable:
                    if self._child.poll() is not None:
                        raise SubprocessError, 'read from terminated subprocess'
                    tmp = os.read(fd, 4096)
                    if fd == self._stdout:
                        if tmp.endswith(done_str):
                            tmp = tmp[:-len(done_str)]
                            done = True
                    accumulator[fd].append(tmp)
            if pointer != len_str:
                raise SystemError, "TeX said 'done' before end of input."
            # Join accumulated output, create and return output dictionary.
            value = {}
            for fd, name in self._output_fd_dict.items():
                value[name] = ''.join(accumulator[fd])
            return value

        def load_new_font(self, font_spec):
            """Tell both TeX and Python about a new font.

            Raises an exception if the font is not new.
            """
            # Ask TeX to load font, and ship out page that uses it.
            command = self._params.load_font_template % font_spec
            dvi = self.process(command)['dvi']
            bytes = dvi[45:-1] # Page body.
            opcode = ord(bytes[0]) # First opcode.
            # The first opcode should be a fontdef, which we extract.
            if FNT_DEF1 <= opcode <= FNT_DEF4:
                body_len = (2 + (opcode - FNT_DEF1)
                            + 12 # Checksum, scale, design size.
                            + 2) # Length of 'area' and font name.
                name_len = ord(bytes[body_len - 2]) \
                           + ord(bytes[body_len - 1])
                fontdef = bytes[:body_len + name_len]
                self._fontdefs.append(fontdef)
                return
            else:
                raise ValueError, "font '%s' not new or not found" % font_spec

Answer: The timeout is based on the `select` call

    readable, writable = select(output_fds, input_fds, [], 0.1)[0:2]

The timeout is 0.1 seconds. Is this appropriate? The variable names are murky ("pointer" makes little sense in Python). However, it appears that if nothing happens in 0.1 seconds, a "timeout" is raised.

* * *

Weirdly, this program opens files to communicate with a subprocess. It's very odd to be "sharing" a file with a subprocess. Usually we do one of two things -- use pipes to communicate actively with a subprocess, or use files to let the subprocess run on its own. Here's a simpler design.

  1. Put the input into the input file.
  2. Run the TeX daemon subprocess until it finishes or you're tired of waiting for it.
  3. 
If you're tired of waiting for it, kill it. Else

    * Look at the status from the wait function
    * Read the output file.

That's pretty much all you need. And there will be no mysterious "pause", no low-level I/O, no non-blocking I/O. If, for some reason, you need to communicate with the subprocess, then you should look at replacing the files with pipes (which aren't shared and are probably a better fit for whatever you're doing.)
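A minimal sketch of that file-based design (hedged: the `-interaction=batchmode` flag and the one-shot layout are assumptions about a typical web2c TeX install, not a drop-in replacement for Texdp):

    import subprocess

    def run_tex(tex_source):
        f = open('texput.tex', 'w')
        f.write(tex_source)
        f.close()
        # batchmode stops TeX from pausing for terminal input on errors
        status = subprocess.call(['tex', '-interaction=batchmode', 'texput.tex'])
        log = open('texput.log').read()
        dvi = open('texput.dvi', 'rb').read() if status == 0 else None
        return status, log, dvi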
Can modules have properties the same way that objects can? Question: With python properties, I can make it such that obj.y calls a function rather than just returning a value. Is there a way to do this with modules? I have a case where I want module.y to call a function, rather than just returning the value stored there. Answer: Only instances of new-style classes can have properties. You can make Python believe such an instance is a module by stashing it in `sys.modules[thename] = theinstance`. So, for example, your m.py module file could be: import sys class _M(object): def __init__(self): self.c = 0 def afunction(self): self.c += 1 return self.c y = property(afunction) sys.modules[__name__] = _M() **Edited** : removed an implicit dependency on globals (had nothing to do with the point of the example but did confuse things by making the original code fail!).
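Usage then looks like plain attribute access (assuming the file above is saved as m.py):

    >>> import m
    >>> m.y  # each access runs afunction() behind the scenes
    1
    >>> m.y
    2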
Python Multiprocessing atexit Error "Error in atexit._run_exitfuncs" Question: I am trying to run a simple multiple processes application in Python. The main thread spawns 1 to N processes and waits until they are all done processing. The processes each run an infinite loop, so they can potentially run forever without some user interruption, so I put in some code to handle a KeyboardInterrupt:

    #!/usr/bin/env python
    import sys
    import time
    from multiprocessing import Process

    def main():
        # Set up inputs..
        # Spawn processes
        Proc( 1).start()
        Proc( 2).start()

    class Proc ( Process ):
        def __init__ ( self, procNum):
            self.id = procNum
            Process.__init__(self)

        def run ( self ):
            doneWork = False
            while True:
                try:
                    # Do work...
                    time.sleep(1)
                    sys.stdout.write('.')
                    if doneWork:
                        print "PROC#" + str(self.id) + " Done."
                        break
                except KeyboardInterrupt:
                    print "User aborted."
                    sys.exit()

    # Main Entry
    if __name__=="__main__":
        main()

The problem is that when using CTRL-C to exit, I get an additional error even though the processes seem to exit immediately:

    ......User aborted.
    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "C:\Python26\lib\atexit.py", line 24, in _run_exitfuncs
        func(*targs, **kargs)
      File "C:\Python26\lib\multiprocessing\util.py", line 281, in _exit_function
        p.join()
      File "C:\Python26\lib\multiprocessing\process.py", line 119, in join
        res = self._popen.wait(timeout)
      File "C:\Python26\lib\multiprocessing\forking.py", line 259, in wait
        res = _subprocess.WaitForSingleObject(int(self._handle), msecs)
    KeyboardInterrupt
    Error in sys.exitfunc:
    Traceback (most recent call last):
      File "C:\Python26\lib\atexit.py", line 24, in _run_exitfuncs
        func(*targs, **kargs)
      File "C:\Python26\lib\multiprocessing\util.py", line 281, in _exit_function
        p.join()
      File "C:\Python26\lib\multiprocessing\process.py", line 119, in join
        res = self._popen.wait(timeout)
      File "C:\Python26\lib\multiprocessing\forking.py", line 259, in wait
        res = _subprocess.WaitForSingleObject(int(self._handle), msecs)
    KeyboardInterrupt

I am running Python 2.6 on Windows. If there is a better way to handle multiprocessing in Python, please let me know. Answer: Rather than just forcing `sys.exit()`, you want to send a signal to your child processes to tell them to stop. Look into using [signal handlers](http://docs.python.org/library/signal.html) with your processes in Python. You could potentially do this by changing your `while True:` loop to be `while keep_processing:` where `keep_processing` is some sort of global variable that gets set on the KeyboardInterrupt exception. I don't think this is a good practice though.
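As a sketch of the `keep_processing` idea using `multiprocessing.Event` instead of a plain global (hedged: the names are illustrative, not from the original code):

    import time
    from multiprocessing import Process, Event

    def worker(stop_event):
        while not stop_event.is_set():
            time.sleep(1)  # do the real work here

    if __name__ == '__main__':
        stop_event = Event()
        procs = [Process(target=worker, args=(stop_event,)) for _ in range(2)]
        for p in procs:
            p.start()
        try:
            while any(p.is_alive() for p in procs):
                time.sleep(0.5)
        except KeyboardInterrupt:
            stop_event.set()  # ask the children to finish their loop
            for p in procs:
                p.join()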
How to spawn parallel child processes on a multi-processor system? Question: I have a Python script that I want to use as a controller to another Python script. I have a server with 64 processors, so want to spawn up to 64 child processes of this second Python script. The child script is called: $ python create_graphs.py --name=NAME where NAME is something like XYZ, ABC, NYU etc. In my parent controller script I retrieve the name variable from a list: my_list = [ 'XYZ', 'ABC', 'NYU' ] So my question is, what is the best way to spawn off these processes as children? I want to limit the number of children to 64 at a time, so need to track the status (if the child process has finished or not) so I can efficiently keep the whole generation running. I looked into using the subprocess package, but rejected it because it only spawns one child at a time. I finally found the multiprocessor package, but I admit to being overwhelmed by the whole threads vs. subprocesses documentation. Right now, my script uses `subprocess.call` to only spawn one child at a time and looks like this: #!/path/to/python import subprocess, multiprocessing, Queue from multiprocessing import Process my_list = [ 'XYZ', 'ABC', 'NYU' ] if __name__ == '__main__': processors = multiprocessing.cpu_count() for i in range(len(my_list)): if( i < processors ): cmd = ["python", "/path/to/create_graphs.py", "--name="+ my_list[i]] child = subprocess.call( cmd, shell=False ) I really want it to spawn up 64 children at a time. In other stackoverflow questions I saw people using Queue, but it seems like that creates a performance hit? Answer: What you are looking for is the [process pool](http://docs.python.org/library/multiprocessing.html#module- multiprocessing.pool) class in multiprocessing. import multiprocessing import subprocess def work(cmd): return subprocess.call(cmd, shell=False) if __name__ == '__main__': count = multiprocessing.cpu_count() pool = multiprocessing.Pool(processes=count) print pool.map(work, ['ls'] * count) And here is a calculation example to make it easier to understand. The following will divide 10000 tasks on N processes where N is the cpu count. Note that I'm passing None as the number of processes. This will cause the Pool class to use cpu_count for the number of processes ([reference](http://docs.python.org/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool)) import multiprocessing import subprocess def calculate(value): return value * 10 if __name__ == '__main__': pool = multiprocessing.Pool(None) tasks = range(10000) results = [] r = pool.map_async(calculate, tasks, callback=results.append) r.wait() # Wait on the results print results
Accurate timing of functions in python Question: I'm programming in python on windows and would like to accurately measure the time it takes for a function to run. I have written a function "time_it" that takes another function, runs it, and returns the time it took to run.

    def time_it(f, *args):
        start = time.clock()
        f(*args)
        return (time.clock() - start)*1000

I call this 1000 times and average the result. (The 1000 constant at the end is to give the answer in milliseconds.) This function seems to work but I have this nagging feeling that I'm doing something wrong, and that by doing it this way I'm using more time than the function actually uses when it's running. Is there a more standard or accepted way to do this? When I changed my test function to call a print so that it takes longer, my time_it function returns an average of 2.5 ms while cProfile.run('f()') returns an average of 7.0 ms. I figured my function would overestimate the time if anything, so what is going on here? One additional note, it is the relative time of functions compared to each other that I care about, not the absolute time, as this will obviously vary depending on hardware and other factors. Answer: Use the [`timeit` module](http://docs.python.org/library/timeit.html) from the Python standard library. Basic usage:

    from timeit import Timer

    # first argument is the code to be run, the second "setup" argument is only run once,
    # and is not included in the execution time.
    t = Timer("""x.index(123)""", setup="""x = range(1000)""")

    print t.timeit() # prints float, for example 5.8254
    # ..or..
    print t.timeit(1000) # repeat 1000 times instead of the default 1million
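To time one of your own functions rather than a code snippet, import it in the setup string so that only the call itself is measured (a minimal sketch):

    from timeit import Timer

    def my_function():
        return sum(range(100))

    t = Timer("my_function()", setup="from __main__ import my_function")
    print t.timeit(10000) / 10000 * 1000  # average milliseconds per call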
Python: Stopping miniDOM from expanding escape sequences Question: When xml.dom.minidom parses a piece of xml, it automagically converts escape characters for greater than and less than into their visual representation. For example: >>> import xml.dom.minidom >>> s = "<example>4 &lt; 5</example>" >>> x = xml.dom.minidom.parseString(s) >>> x.firstChild.firstChild.data u'4 < 5' Does anyone know how to stop minidom from doing this? Answer: >>> import xml.dom.minidom >>> s = "<example>4 &lt; 5</example>" >>> x = xml.dom.minidom.parseString(s) >>> x.firstChild.firstChild.toxml() u'4 &lt; 5'
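In other words, the DOM always stores the decoded text in `.data`; serializing a node re-escapes it:

    >>> x.firstChild.firstChild.data  # decoded text, as the DOM stores it
    u'4 < 5'
    >>> x.firstChild.toxml()          # serializing escapes it again
    u'<example>4 &lt; 5</example>'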
Problem with my hangman game Question: I'm trying to learn python and I'm attempting a hangman game. But when I try and compare the user's guess to the word, it doesn't work. What am I missing? import sys import codecs import random if __name__ == '__main__': try: wordlist = codecs.open("words.txt", "r") except Exception as ex: print (ex) print ("\n**Could not open file!**\n") sys.exit(0) rand = int(random.random()*5 + 1) i = 0 for word in wordlist: i+=1 if i == rand: print (word, end = '') break wordlist.close() guess = input("Guess a letter: ") print (guess) #for testing purposes for letters in word: if guess == letters: print ("Yessssh") #guessing part and user interface here Answer: In your "`for word in wordlist`" loop, each word will end in a newline. Try adding `word = word.strip()` as the next line. By the way your last loop could be replaced with: if guess in word: print ("Yessssh") Bonus tip: when adding "debug prints", it's often a good idea to use repr (especially when dealing with strings). For example, your line: print (guess) #for testing purposes Might be more useful if you wrote: print (repr(guess)) #for testing purposes That way if there are weird characters in `guess`, you'll see them more easily in your debug output.
Data storage to ease data interpolation in Python Question: I have 20+ tables similar to table 1. Where all letters represent actual values. Table 1:

    $ / cars |<1  | 2 | 3 | 4+
    <10,000  | a  | b | c | d
    20,000   | e  | f | g | h
    30,000   | i  | j | k | l
    40,000+  | m  | n | o | p

A user input could be for example, (2.4, 24594) which is a value between f, g, j, and k. My Python function definition and pseudo-code to calculate this bilinear interpolation is as follows.

    def bilinear_interpolation( x_in, y_in, x_high, x_low, y_low, y_high ):
        # interpolate with respect to x
        # interpolate with respect to y
        # return result

How should I store the data from table 1 (a file, a dict, tuple of tuples, or dict of lists), so I can perform the bilinear interpolation most efficiently and correctly? Answer: If you want the most computationally efficient solution I can think of and are not restricted to the standard library, then I would recommend scipy/numpy. First, store the a..p array as a 2D numpy array and then both the $10k-40k and 1-4 arrays as 1D numpy arrays. Use scipy's interpolate.interp1d if both 1D arrays are monotonically increasing, or interpolate.bisplrep (bivariate spline representation) if not and your arrays are as small as your example. Or simply write your own and not bother with scipy. Here are some examples:

    # this follows your pseudocode most closely, but it is *not*
    # the most efficient since it creates the interpolation
    # functions on each call to bilinterp
    from scipy import interpolate
    import numpy

    data = numpy.arange(0., 16.).reshape((4,4)) #2D array
    prices = numpy.arange(10000., 50000., 10000.)
    cars = numpy.arange(1., 5.)

    def bilinterp(price,car):
        return interpolate.interp1d(cars, interpolate.interp1d(prices, data)(price))(car)

    print bilinterp(22000,2)

The last time I checked (a version of scipy from 2007-ish) it only worked for monotonically increasing arrays of x and y. For small arrays like this 4x4 array, I think you want to use this: <http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.bisplrep.html#scipy.interpolate.bisplrep> which will handle more interestingly shaped surfaces and the function only needs to be created once. For larger arrays, I think you want this (not sure if this has the same restrictions as interp1d): <http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html#scipy.interpolate.interp2d> but they both require a different and more verbose data structure than the three arrays in the example above.
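If you would rather stay in the standard library, here is a minimal sketch of the bilinear formula itself, following the pseudo-code signature from the question (the four corner values are passed in explicitly; the names are illustrative):

    def bilinear_interpolation(x_in, y_in, x_low, x_high, y_low, y_high,
                               f_ll, f_lh, f_hl, f_hh):
        # f_ll is the value at (x_low, y_low), f_hl at (x_high, y_low), etc.
        tx = (x_in - x_low) / float(x_high - x_low)
        ty = (y_in - y_low) / float(y_high - y_low)
        low = f_ll + (f_hl - f_ll) * tx   # interpolate along x at y_low
        high = f_lh + (f_hh - f_lh) * tx  # interpolate along x at y_high
        return low + (high - low) * ty    # then interpolate along y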
Alternative to innerhtml that includes header? Question: I'm trying to extract data from the following page: [http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?param1=&param2=&param3=&param4=&param5=2009-04-22&param6=37#](http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?param1=&param2=&param3=&param4=&param5=2009-04-22&param6=37#) Which, conveniently and inefficiently enough, includes all the data embedded as a csv file in the header, set as a variable called gs_csv. How do I extract this? `Document.body.innerhtml` skips the header where the data is, what is the alternative that includes the header (or better yet, the value associated with `gs_csv`)? (Sorry, new to all this, I've been searching through loads of documentation, and trying a lot of them, but nothing so far has worked).

* * *

Thanks to Sinan (this is mostly his solution transcribed into Python).

    import win32com.client
    import time

    ie = win32com.client.Dispatch("InternetExplorer.Application")
    ie.Visible = False
    ie.Navigate("http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?param1=&param2=&param3=&param4=&param5=2009-04-22&param6=37#")
    time.sleep(20)  # crude wait for the page to finish loading

    webpage = ie.document.body.innerHTML
    s1 = ie.document.scripts(1).text
    s1 = s1[s1.find("gs_csv")+8:-11]

    # raw string: a plain '\b' in the path would be a backspace character
    scriptfilepath = r"c:\FO Share\bmreports\script.txt"
    scriptfile = open(scriptfilepath, 'wb')
    scriptfile.write(s1.replace('\\n', '\n'))  # turn literal \n sequences into real newlines
    scriptfile.close()

    ie.Quit()

Answer: **Untested:** Did you try looking at what [Document.scripts](http://msdn.microsoft.com/en-us/library/aa752604%28VS.85%29.aspx) contains? **UPDATE:** For some reason, I am having immense difficulty getting this to work using the Windows Scripting Host (but then, I don't use it very often, apologies). Anyway, here is the Perl source that works:

    use strict;
    use warnings;

    use Win32::OLE;
    $Win32::OLE::Warn = 3;

    my $ie = get_ie();
    $ie->{Visible} = 1;
    $ie->Navigate(
        'http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?'
        .'param1=&param2=&param3=&param4=&param5=2009-04-22&param6=37#'
    );
    sleep 1 until is_ready( $ie );

    my $scripts = $ie->Document->{scripts};
    for my $script (in $scripts ) {
        print $script->text;
    }

    sub is_ready { $_[0]->{ReadyState} == 4 }

    sub get_ie {
        Win32::OLE->new('InternetExplorer.Application',
            sub { $_[0] and $_[0]->Quit },
        );
    }
    __END__

    C:\Temp> ie > output

`output` now contains everything within the script tags.
Gather all Python modules used into one folder? Question: I don't think this has been asked before. I have a folder that has lots of different .py files. The script I've made only uses some, but some call others and I don't know all the ones being used. Is there a program that will get everything needed to make that script run into one folder? Cheers! Answer:

    # zipmod.py - make a zip archive consisting of Python modules and their dependencies as reported by modulefinder
    # To use: cd to the directory containing your Python module tree and type
    # $ python zipmod.py archive.zip mod1.py mod2.py ...
    # Only modules in the current working directory and its subdirectories will be included.
    # Written and tested on Mac OS X, but it should work on other platforms with minimal modifications.

    import modulefinder
    import os
    import sys
    import zipfile

    def main(output, *mnames):
        mf = modulefinder.ModuleFinder()
        for mname in mnames:
            mf.run_script(mname)
        cwd = os.getcwd()
        zf = zipfile.ZipFile(output, 'w')
        for mod in mf.modules.itervalues():
            if not mod.__file__:
                continue
            modfile = os.path.abspath(mod.__file__)
            if os.path.commonprefix([cwd, modfile]) == cwd:
                zf.write(modfile, os.path.relpath(modfile))
        zf.close()

    if __name__ == '__main__':
        main(*sys.argv[1:])
Using subprocess to run Python script on Windows Question: Is there a simple way to run a Python script on Windows/Linux/OS X? On the latter two, `subprocess.Popen("/the/script.py")` works, but on Windows I get the following error:

    Traceback (most recent call last):
      File "test_functional.py", line 91, in test_functional
        log = tvnamerifiy(tmp)
      File "test_functional.py", line 49, in tvnamerifiy
        stdout = PIPE
      File "C:\Python26\lib\subprocess.py", line 595, in __init__
        errread, errwrite)
      File "C:\Python26\lib\subprocess.py", line 804, in _execute_child
        startupinfo)
    WindowsError: [Error 193] %1 is not a valid Win32 application

* * *

> _[monkut's](http://stackoverflow.com/users/24718/monkut) comment_: The use
> case isn't clear. Why use subprocess to run a python script? Is there
> something preventing you from importing the script and calling the necessary
> function?

I was writing a quick script to test the overall functionality of a Python-command-line tool (to test it on various platforms). Basically it had to create a bunch of files in a temp folder, run the script on this and check the files were renamed correctly. I could have imported the script and called the function, but since it relies on `sys.argv` and uses `sys.exit()`, I would have needed to do something like:

    import sys
    import tvnamer
    sys.argv += ["-b", "/the/folder"]

    try:
        tvnamer.main()
    except BaseException, errormsg:
        print type(errormsg)

Also, I wanted to capture the stdout and stderr for debugging in case something went wrong. Of course a better way would be to write the script in a more unit-testable way, but the script is basically "done" and I'm doing a final batch of testing before doing a "1.0" release (after which I'm going to do a rewrite/restructure, which will be far tidier and more testable). Basically, it was much easier to simply run the script as a process, after finding the `sys.executable` variable. I would have written it as a shell-script, but that wouldn't have been cross-platform. The final script can be found [here](http://github.com/dbr/tvdb%5Fapi/blob/c8d7b356cd1a7bb2ab22b510ea74e03a7d27fad6/tests/test%5Ffunctional.py) Answer: Just found `sys.executable` \- the full path to the current Python executable, which can be used to run the script (instead of relying on the shebang, which obviously doesn't work on Windows):

    import sys
    import subprocess

    theproc = subprocess.Popen([sys.executable, "myscript.py"])
    theproc.communicate()
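And since the whole point was capturing stdout and stderr for debugging, the same pattern with pipes (a short sketch):

    import sys
    import subprocess

    proc = subprocess.Popen([sys.executable, "myscript.py"],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    print "exit code:", proc.returncode
    print "stdout:", out
    print "stderr:", err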
django error 'too many values to unpack' Question: I'm learning Django by building a simple recipes app. I have a 1 table model using the 'choices' field option for recipe categories rather than using a 2nd 'categories' table and a foreign key relationship. So I created the db table via syncdb and then loaded the table with test data. When I go to admin and click on the 'Recipes' link in an attempt to view recipes I get the following error:

    Template error
    In template /var/lib/python-support/python2.6/django/contrib/admin/templates/admin/change_list.html, error at line 34
    Caught an exception while rendering: too many values to unpack

If anyone can shed light on this cryptic error that would be great. Db is Sqlite. Django version is 1.0. The model is listed below:

    from django.db import models

    class Recipe(models.Model):
        CATEGORY_CHOICES = (
            (1, u'Appetizer'),
            (2, u'Bread'),
            (3, u'Dessert'),
            (4, u'Drinks'),
            (5, u'Main Course'),
            (6, u'Salad'),
            (7, u'Side Dish'),
            (8, u'Soup'),
            (9, u'Sauce/Marinade'),
            (10, u'Other'),
        )
        name = models.CharField(max_length=255)
        submitter = models.CharField(max_length=40)
        date = models.DateTimeField()
        category = models.SmallIntegerField(choices=CATEGORY_CHOICES)
        ingredients = models.TextField()
        directions = models.TextField()
        comments = models.TextField(null=True, blank=True)

Answer: **Edit: Updated in light of kibibu's correction.** I have encountered what I believe is this same error, producing the message:

    Caught ValueError while rendering: too many values to unpack

My form class was as follows:

    class CalcForm(forms.Form):
        item = forms.ChoiceField(choices=(('17815', '17816')))

Note that my `choices` value here is not actually a tuple of 2-tuples: the inner parentheses are redundant, so Django just sees the flat 2-tuple `('17815', '17816')` and tries to unpack each *string* as a (value, label) pair. Django official documentation reads as follows for the `choices` arg:

> An iterable (e.g., a list or tuple) of 2-tuples to use as choices for this
> field. This argument accepts the same formats as the choices argument to a
> model field.

src: <https://docs.djangoproject.com/en/1.3/ref/forms/fields/#django.forms.ChoiceField.choices> This problem was solved by my observing the documentation and using a **list of tuples:**

    class CalcForm(forms.Form):
        item = forms.ChoiceField(choices=[('17815', '17816')])

Do note that while the docs state any iterable of the correct form can be used, a tuple of 2-tuples did not work for me:

    item = forms.ChoiceField(choices=(('17815', '17816'), ('123', '456')))

This produced the same error as before. Lesson: bugs happen.
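For completeness, a genuine one-element tuple of 2-tuples needs a trailing comma, which should work just as well as the list (untested sketch):

    item = forms.ChoiceField(choices=(('17815', '17816'),))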
capture stderr from python subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE) Question: I have seen this posted so many times here, yet I have failed to capture intentional errors from the command. The best partial work I have found so far:

    from Tkinter import *
    import os
    import Image, ImageTk
    import subprocess as sub
    p = sub.Popen('datdsade', stdout=sub.PIPE, stderr=sub.PIPE)
    output, errors = p.communicate()

    root = Tk()
    text = Text(root)
    text.pack()
    text.insert(END, output + "Error: " + errors)
    root.mainloop()

Answer: This works perfectly for me:

    import subprocess
    try:
        #prints results
        result = subprocess.check_output("echo %USERNAME%", stderr=subprocess.STDOUT, shell=True)
        print result
        #causes error
        result = subprocess.check_output("copy testfds", stderr=subprocess.STDOUT, shell=True)
    except subprocess.CalledProcessError, ex: # raised for any non-zero exit code
        print "--------error------"
        print ex.cmd
        print ex.message
        print ex.returncode
        print ex.output # stdout and stderr combined, thanks to stderr=STDOUT
Different behavior of python logging module when using mod_python Question: We have a nasty problem where we see that the python logging module is behaving differently when running with mod_python on our servers. When executing the same code in the shell, or in django with the runserver command or with mod_wsgi, the behavior is correct:

    import logging
    logger = logging.getLogger('site-errors')
    logging.debug('logger=%s' % (logger.__dict__))
    logging.debug('logger.parent=%s' % (logger.parent.__dict__))
    logger.error('some message that is not logged.')

We then get the following logging:

> 2009-05-28 10:36:43,740,DEBUG,error_middleware.py:31,[logger={'name': 'site-
> errors', 'parent': <logging.RootLogger instance at 0x85f8aac>, 'handlers':
> [], 'level': 0, 'disabled': 0, 'manager': <logging.Manager instance at
> 0x85f8aec>, 'propagate': 1, 'filters': []}]
>
> 2009-05-28 10:36:43,740,DEBUG,error_middleware.py:32,[logger.parent={'name':
> 'root', 'parent': None, 'handlers': [<logging.StreamHandler instance at
> 0x8ec612c>, <logging.handlers.RotatingFileHandler instance at 0x8ec616c>],
> 'level': 10, 'disabled': 0, 'propagate': 1, 'filters': []}]

As one can see, no handlers or level are set for the child logger 'site-errors'. The logging configuration is done in the settings.py:

    MONITOR_LOGGING_CONFIG = ROOT + 'error_monitor_logging.conf'

    import logging
    import logging.config

    logging.config.fileConfig(MONITOR_LOGGING_CONFIG)

    if CONFIG == CONFIG_DEV:
        DB_LOGLEVEL = logging.INFO
    else:
        DB_LOGLEVEL = logging.WARNING

The second problem is that we also add a custom handler in the __init__.py that resides in the same folder as error_middleware.py:

    import logging
    from django.conf import settings
    from db_log_handler import DBLogHandler

    handler = DBLogHandler()
    handler.setLevel(settings.DB_LOGLEVEL)
    logging.root.addHandler(handler)

The custom handler cannot be seen in the logging! If someone has an idea where the problem lies, please let us know! Don't hesitate to ask for additional information. That will certainly help to solve the problem. Answer: It may be better if you do not configure logging in `settings.py`. We configure our logging in our root `urls.py`. This seems to work out better. I haven't read enough Django source to know why, precisely, it's better, but it's working out well for us. I would add custom handlers here, also. Also, look closely at [mod_wsgi](http://code.google.com/p/modwsgi/). It seems to behave much better than mod_python.
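A sketch of what that looks like (hedged: this just moves the question's existing configuration into the root urls.py, where it runs once per process):

    # urls.py
    import logging
    import logging.config
    from django.conf import settings
    from db_log_handler import DBLogHandler  # the same custom handler as before

    logging.config.fileConfig(settings.MONITOR_LOGGING_CONFIG)
    handler = DBLogHandler()
    handler.setLevel(settings.DB_LOGLEVEL)
    logging.root.addHandler(handler)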
Sending Multipart html emails which contain embedded images Question: I've been playing around with the email module in python but I want to be able to know how to embed images which are included in the html. So for example if the body is something like <img src="../path/image.png"></img> I would like to embed _image.png_ into the email, and the `src` attribute should be replaced with `content-id`. Does anybody know how to do this? Answer: Here is an example I found. > [**Recipe 473810: Send an HTML email with embedded image and plain text > alternate**](http://code.activestate.com/recipes/473810/): > > HTML is the method of choice for those wishing to send emails with rich > text, layout and graphics. Often it is desirable to embed the graphics > within the message so recipients can display the message directly, without > further downloads. > > Some mail agents don't support HTML or their users prefer to receive plain > text messages. Senders of HTML messages should include a plain text message > as an alternate for these users. > > This recipe sends a short HTML message with a single embedded image and an > alternate plain text message. # Send an HTML email with an embedded image and a plain text message for # email clients that don't want to display the HTML. from email.MIMEMultipart import MIMEMultipart from email.MIMEText import MIMEText from email.MIMEImage import MIMEImage # Define these once; use them twice! strFrom = 'from@example.com' strTo = 'to@example.com' # Create the root message and fill in the from, to, and subject headers msgRoot = MIMEMultipart('related') msgRoot['Subject'] = 'test message' msgRoot['From'] = strFrom msgRoot['To'] = strTo msgRoot.preamble = 'This is a multi-part message in MIME format.' # Encapsulate the plain and HTML versions of the message body in an # 'alternative' part, so message agents can decide which they want to display. msgAlternative = MIMEMultipart('alternative') msgRoot.attach(msgAlternative) msgText = MIMEText('This is the alternative plain text message.') msgAlternative.attach(msgText) # We reference the image in the IMG SRC attribute by the ID we give it below msgText = MIMEText('<b>Some <i>HTML</i> text</b> and an image.<br><img src="cid:image1"><br>Nifty!', 'html') msgAlternative.attach(msgText) # This example assumes the image is in the current directory fp = open('test.jpg', 'rb') msgImage = MIMEImage(fp.read()) fp.close() # Define the image's ID as referenced above msgImage.add_header('Content-ID', '<image1>') msgRoot.attach(msgImage) # Send the email (this example assumes SMTP authentication is required) import smtplib smtp = smtplib.SMTP() smtp.connect('smtp.example.com') smtp.login('exampleuser', 'examplepass') smtp.sendmail(strFrom, strTo, msgRoot.as_string()) smtp.quit()
How to mark a global as deprecated in Python? Question: [I've seen decorators](http://wiki.python.org/moin/PythonDecoratorLibrary#Smartdeprecationwarnings.28withvalidfilenames.2Clinenumbers.2Cetc..29) that let you mark a function as deprecated so that a warning is given whenever that function is used. I'd like to do the same thing for a global variable, but I can't think of a way to detect global variable accesses. I know about the globals() function, and I could check its contents, but that would just tell me if the global is defined (which it still will be if the global is deprecated and not outright removed), not whether it's actually being used. The best alternative I can think of is something like this: # myglobal = 3 myglobal = DEPRECATED(3) But besides the problem of how to get DEPRECATED to act exactly like a '3', I'm not sure what DEPRECATED could do that would let you detect every time it's accessed. I think the best it could do is iterate through all of the global's methods (since everything in Python is an object, so even '3' has methods, for converting to string and the like) and 'decorate' them all to be deprecated. But that's not ideal. Any ideas? Has anyone else tackled this problem? Answer: You can't do this directly, since there's no way of intercepting the module access. However, you can replace that module with an object of your choosing that acts as a proxy, looking for accesses to certain properties: import sys, warnings def WrapMod(mod, deprecated): """Return a wrapped object that warns about deprecated accesses""" deprecated = set(deprecated) class Wrapper(object): def __getattr__(self, attr): if attr in deprecated: warnings.warn("Property %s is deprecated" % attr) return getattr(mod, attr) def __setattr__(self, attr, value): if attr in deprecated: warnings.warn("Property %s is deprecated" % attr) return setattr(mod, attr, value) return Wrapper() oldVal = 6*9 newVal = 42 sys.modules[__name__] = WrapMod(sys.modules[__name__], deprecated = ['oldVal']) Now, you can use it as: >>> import mod1 >>> mod1.newVal 42 >>> mod1.oldVal mod1.py:11: UserWarning: Property oldVal is deprecated warnings.warn("Property %s is deprecated" % attr) 54 The downside is that you are now performing two lookups when you access the module, so there is a slight performance hit.
Capturing Implicit Signals of Interest in Django Question: To set the background, I'm interested in: * Capturing implicit signals of interest in books as users browse around a site. The site is written in django (python) using mysql, memcached, nginx, and apache Let's say, for instance, my site sells books. As a user browses around my site I'd like to keep track of which books they've viewed, and how many times they've viewed them. Not that I'd store the data this way, but ideally I could have on-the-fly access to a structure like: {user_id : {book_id: number_of_views, book_id_2: number_of_views}} I realize there are a few approaches here: * Some flat-file log * Writing an object to a database every time * Writing to an object in memcached I don't really know the performance implications, but I'd rather not be writing to a database on every single page view; the lag of writing to a log and computing the structure later seems too slow to give good on-the-fly recommendations as you use the site; and the memcached approach seems fine, but there's a cost to keeping this object in memory: you might lose it, and it never gets written somewhere 'permanent'. What approach would you suggest? (it doesn't have to be one of the above) Thanks! Answer: If this data is anything more than a throwaway statistic that might or might not be available, I'd suggest taking the simple approach and using a model. It will surely hit the database every time. Unless you are absolutely, positively sure these queries **are** actually degrading overall experience, there is no need to worry about it. Even if you optimize this one, there's a good chance other _unexpected_ queries are wasting more CPU time. I assume you wouldn't be asking this question if you were testing all other queries. So why risk premature optimization on this one? An advantage of the model approach would be _having an API in place_. When you have tested and decided to optimize, you can keep this API and replace the underlying model with something else (which will most probably be more complex than a model). I'd definitely go with a model first and see how it performs (and also how other parts of the project perform).
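As a rough sketch of what that model could look like (the model and field names here are invented for the example, not a drop-in solution): from django.db import models from django.contrib.auth.models import User class BookView(models.Model): """One row per (user, book) pair, with a running view count.""" user = models.ForeignKey(User) book = models.ForeignKey('books.Book') # assumed book model view_count = models.PositiveIntegerField(default=0) class Meta: unique_together = ('user', 'book') def record_view(user, book): # Called from the book detail view on each page hit view, created = BookView.objects.get_or_create(user=user, book=book) view.view_count += 1 view.save() The get_or_create/save pair has a small race window under concurrency, but for a statistic like this it is usually acceptable, and it can be tightened later behind the same record_view API.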
Best way to retrieve variable values from a text file - Python - Json Question: Referring to [this question](http://stackoverflow.com/questions/868112/loading-files-into- variables-in-python), I have a similar -but not the same- problem. I'll have a text file, structured like: var_a: 'home' var_b: 'car' var_c: 15.5 And I need Python to read the file and then create a variable named var_a with the value 'home', and so on. Example: #python stuff over here getVarFromFile(filename) #this is the function that I'm looking for print var_b #output: car, as string print var_c #output 15.5, as number. Is this possible? I mean, can it even keep the variable's type? Note that I have full freedom over the text file's structure; I can use whatever format I like if the one I proposed isn't the best. **EDIT**: ConfigParser could be a solution, but I don't like it so much, because then in my script I'll have to refer to the variables in the file with config.get("set", "var_name") But what I'd love is to refer to the variables directly, as if I had declared them in the python script... Is there a way to import the file as a python dictionary? Oh, one last thing: keep in mind that I don't know exactly how many variables I'll have in the text file. **Edit 2**: I'm very interested in stephan's JSON solution, because that way the text file could be read easily from other languages too (PHP, then via AJAX JavaScript, for example), but I'm failing somewhere while applying that solution: #for the example, I don't load the file but create a variable with the supposed file content file_content = "'var_a': 4, 'var_b': 'a string'" mydict = dict(file_content) #Error: ValueError: dictionary update sequence element #0 has length 1; 2 is required file_content_2 = "{'var_a': 4, 'var_b': 'a string'}" mydict_2 = dict(json.dump(file_content_2, True)) #Error: #Traceback (most recent call last): #File "<pyshell#5>", line 1, in <module> #mydict_2 = dict(json.dump(file_content_2, True)) #File "C:\Python26\lib\json\__init__.py", line 181, in dump #fp.write(chunk) #AttributeError: 'bool' object has no attribute 'write' What kinds of issues can I run into with the JSON format? And how can I read a JSON array from a text file and turn it into a python dict? P.S: I don't like the solution using .py files; I'd prefer .txt, .inc, or whatever is not restricted to one language. Answer: > But what I'd love is to refer to the variables directly, as if I had > declared them in the python script... Assuming you're happy to change your syntax slightly, just use python and import the "config" module: # myconfig.py: var_a = 'home' var_b = 'car' var_c = 15.5 Then do from myconfig import * And you can reference them by name in your current context.
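As a side note on the Edit 2 errors (a sketch, assuming Python 2.6+ where the json module ships with the standard library): json.dump serializes an object *to a file object*, which is why it choked on True, and JSON itself requires double-quoted strings. json.loads is the call that parses a string: import json # JSON requires double quotes around keys and strings file_content = '{"var_a": 4, "var_b": "a string"}' mydict = json.loads(file_content) # use json.load(fp) to read from a file print mydict['var_a'] # 4, kept as a number print mydict['var_b'] # a string # If you really want bare variable names, you can inject them, # though this is generally considered bad practice: globals().update(mydict) print var_b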
python queue & multiprocessing queue: how they behave? Question: This sample code works (I can write something in the file): from multiprocessing import Process, Queue queue = Queue() def _printer(queue): queue.put("hello world!!") def _cmdDisp(queue): f = file("Cmd.log", "w") print >> f, queue.get() f.close() This second sample does not (error message: 'module' object is not callable): import Queue queue = Queue() def _printer(queue): queue.put("hello world!!") def _cmdDisp(queue): f = file("Cmd.log", "w") print >> f, queue.get() f.close() And this third sample does not work either (I cannot write anything to the file): import Queue queue = Queue.Queue() def _printer(queue): queue.put("hello world!!") def _cmdDisp(queue): f = file("Cmd.log", "w") print >> f, queue.get() f.close() Can someone explain the differences, and the right thing to do? Answer: For your second example, you already gave the explanation yourself: `Queue` is a module, which cannot be called. For the third example: I assume that you use `Queue.Queue` together with `multiprocessing`. A `Queue.Queue` will not be shared between processes. If the `Queue.Queue` is declared before the processes, then each process will receive a copy of it which is then independent of every other process. Items placed in the `Queue.Queue` by the parent before starting the children will be available to each child. Items placed in the `Queue.Queue` by the parent after starting the children will only be available to the parent. `Queue.Queue` is made for data interchange between different **threads** inside the same process (using the [threading](http://docs.python.org/library/threading.html#module- threading) module). The multiprocessing queues are for data interchange between different Python **processes**. While the API looks similar (it's designed to be that way), the underlying mechanisms are fundamentally different. * `multiprocessing` queues exchange data by pickling (serializing) objects and sending them through pipes. * `Queue.Queue` uses a data structure that is shared between threads and locks/mutexes for correct behaviour.
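For completeness, a minimal end-to-end sketch of the working multiprocessing pattern (an illustration, not the original code): from multiprocessing import Process, Queue def printer(queue): queue.put("hello world!!") def cmd_disp(queue): f = open("Cmd.log", "w") print >> f, queue.get() # blocks until printer() has put an item f.close() if __name__ == '__main__': queue = Queue() # multiprocessing.Queue is shared across processes p1 = Process(target=printer, args=(queue,)) p2 = Process(target=cmd_disp, args=(queue,)) p1.start() p2.start() p1.join() p2.join() Because the queue is passed to each Process explicitly, both children talk to the same underlying pipe, which is exactly what the Queue.Queue versions cannot do.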
Nokia N95 and PyS60 with the sensor and xprofile modules Question: I've made a python script which should modify the profile of the phone based on the phone's position. Run under ScriptShell it works great. The problem is that it hangs, both with the "sis" script run at "boot up" and without it. So my question is: what is wrong with the code, and do I also need to pass special parameters to ensymble? import appuifw, e32, sensor, xprofile from appuifw import * old_profil = xprofile.get_ap() def get_sensor_data(status): # decide profile pass def exit_key_handler(): # Disconnect from the sensor and exit acc_sensor.disconnect() app_lock.signal() app_lock = e32.Ao_lock() appuifw.app.exit_key_handler = exit_key_handler appuifw.app.title = u"Acc Silent" appuifw.app.menu = [(u'Close', app_lock.signal)] appuifw.app.body = Canvas() # Retrieve the acceleration sensor sensor_type = sensor.sensors()['AccSensor'] # Create an acceleration sensor object acc_sensor = sensor.Sensor(sensor_type['id'], sensor_type['category']) # Connect to the sensor acc_sensor.connect(get_sensor_data) # Wait for sensor data and the exit event app_lock.wait() The script starts at boot, using ensymble and my developer certificate. Thanks in advance Answer: I often use something like this at the top of my scripts: import os.path, sys PY_PATH = None for p in ['c:\\Data\\Python', 'e:\\Data\\Python', 'c:\\Python', 'e:\\Python']: if os.path.exists(p): PY_PATH = p break if PY_PATH and PY_PATH not in sys.path: sys.path.append(PY_PATH) It looks for the usual PyS60 script directories on both the phone memory (c:) and the memory card (e:) and appends the first one found to sys.path, so that imports of extra modules such as xprofile can still be resolved when the script is started at boot rather than from ScriptShell.