PyPN is an extension that allows you to use Python to script almost anything in PN. This can be simple text formatting, or complex macros for automation. See Install PyPN for installation instructions.

After PyPN is installed, go to View > Window > Scripts (or press Alt+F10). You should see the Scripts window, with the sample scripts listed. Double-click a line in the Scripts window to run that script. To edit the scripts, go to "\Program Files\Programmer's Notepad\Scripts" and look at text.py. There you'll find the source code for the sample scripts, and you can add your own. You may have to restart PN for changes to show up.

The PyPN extension makes it easy to add your own features to Programmer's Notepad. PyPN exposes the underlying Scintilla interface to Python, and this results in very powerful scripting ability. The PyPN programmer has the PN objects, the Scintilla editor, and the entire installed Python library ready for import.

To begin, create a new document and select the "Python" scheme. Once the scheme is set to Python, type pnscriptfile into the document, then press Ctrl+Alt+Space. This should auto-fill the document with this basic code template:

```python
###############################################################################
## Script description
## By: your name here

import pn
import scintilla
from pypn.decorators import script

@script()
def ():
    """ What this does """
    pass
```

The top part of the file is a header area where you can fill in some information about your script, which may help should you decide to share your script or refer back to it at some point in the future. The next few lines are the standard imports needed for most PyPN scripts. These give you access to the pn library and the scintilla library, and expose the @script() decorator that you can see on the next line down.
The script decorator allows PN to parse out some metadata from the script file, which lets your script appear in the list of scripts on the Scripts window (Alt+F10). Generally this has the syntax:

```python
@script("Script Name", "Group")
```

However, if this is left blank it defaults to the name of the method for "Script Name" and puts the script in the "Python" group.

The next line is where you define your method. In this case, let's create the method HelloPN:

```python
###############################################################################
## Hello Programmer's Notepad
## By: PyPN Programmer

import pn
import scintilla
from pypn.decorators import script

@script()
def HelloPN():
    """ Say Hello To Programmer's Notepad """
    newDoc = pn.NewDocument(None)
    editor = scintilla.Scintilla(newDoc)
    message = "Hello Programmer's Notepad!"
    editor.BeginUndoAction()
    editor.AppendText(len(message), message)
    editor.EndUndoAction()
    pass
```

Save this file as HelloPN.py inside the <Programmers Notepad Home Dir>\scripts directory. Once saved, exit Programmer's Notepad and restart. When you bring up the Scripts window you should see a new script under the Python group. Double-clicking this script will run it. If you do not see the script, look in the output window for a Python error, which may help you determine what went wrong.

Taking a look at the method just defined:

```python
""" Say Hello To Programmer's Notepad """
```

The three quotes on an empty line begin a documentation comment, so this line helps document what the script does.

```python
newDoc = pn.NewDocument(None)
```

This line instructs PN to create a new document with no scheme. Had we used pn.NewDocument("Python") we would have created a new Python document. Had we used pn.CurrentDoc() we would not have created a new document but found a reference to the current document -- allowing us to append our message to the end of whatever document was currently selected.
```python
editor = scintilla.Scintilla(newDoc)
```

To edit documents we need access to the Scintilla interface. This line creates a Scintilla editor object for the document we just created. We need an editor object to access the text of a document.

```python
message = "Hello Programmer's Notepad!"
```

This creates a string with our message text in it. We will use this string to insert our message into the document.

```python
editor.BeginUndoAction()
```

It is never any fun when an editor performs an action that cannot be undone. This line establishes an "undo point" so that the user can undo the operation our script performs on the document. The user will be able to undo any operations performed between this point and the call to EndUndoAction(). In this case, pressing Ctrl-Z on the new document should remove our message.

```python
editor.AppendText(len(message), message)
```

This line appends our message to the end of the document. The AppendText(len, text) function takes two arguments: the first is the number of characters to take from the text string provided in the second.

```python
editor.EndUndoAction()
```

This line marks the end of undo operations. It is advisable to always make changes to documents inside an undo block so that the user can revert any changes should they be disappointed in the results.

```python
pass
```

This line has no effect. The pass statement in Python is a no-op, used merely as a placeholder or marker. In this case it marks the end of the current function, and it can be removed with no effect on the functionality or logic of the code.

For a list of user-generated scripts, please see the List of User-Submitted PyPN Scripts. See also the PyPN API pages.
How do I copy a file in Python? I couldn't find anything under os.

Use the shutil module:

```python
import shutil

shutil.copy2('/dir/file.ext', '/new/dir/newname.ext')
```

or

```python
shutil.copy2('/dir/file.ext', '/new/dir')
```

In case you are stuck with Python 2.3 (as I am), you can define a replacement:

```python
def copyfile(source, dest, buffer_size=1024*1024):
    """
    Copy a file from source to dest. source and dest
    can either be strings or any object with a read or
    write method, like StringIO for example.
    """
    if not hasattr(source, 'read'):
        source = open(source, 'rb')
    if not hasattr(dest, 'write'):
        dest = open(dest, 'wb')

    while 1:
        copy_buffer = source.read(buffer_size)
        if copy_buffer:
            dest.write(copy_buffer)
        else:
            break

    source.close()
    dest.close()
```

Use the shutil module.

copyfile(src, dst)

Copy the contents of the file named src to a file named dst. The destination location must be writable; otherwise, an IOError exception will be raised. If dst already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. src and dst are path names given as strings.

Take a look at filesys for all the file and directory handling functions available in standard Python modules.

Directory and file copy example, from Tim Golden's Python Stuff:

```python
import os
import shutil
import tempfile

filename1 = tempfile.mktemp(".txt")
open(filename1, "w").close()
filename2 = filename1 + ".copy"
print filename1, "=>", filename2

shutil.copy(filename1, filename2)
if os.path.isfile(filename2):
    print "Success"

dirname1 = tempfile.mktemp(".dir")
os.mkdir(dirname1)
dirname2 = dirname1 + ".copy"
print dirname1, "=>", dirname2

shutil.copytree(dirname1, dirname2)
if os.path.isdir(dirname2):
    print "Success"
```
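On Python 3 the same shutil calls still apply; here is a quick sketch of copy versus copy2 (the file names are just illustrative, and everything is written to a temporary directory):

```python
import os
import shutil
import tempfile

# Create a small source file in a temporary directory.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "file.txt")
with open(src, "w") as f:
    f.write("hello")

# shutil.copy copies contents and permission bits;
# shutil.copy2 additionally tries to preserve timestamps.
dst1 = shutil.copy(src, os.path.join(tmp, "copy.txt"))
dst2 = shutil.copy2(src, os.path.join(tmp, "copy2.txt"))

with open(dst2) as f:
    print(f.read())  # → hello

# copy2 preserves the modification time (to filesystem resolution).
print(abs(os.path.getmtime(src) - os.path.getmtime(dst2)) < 1)  # → True
```

Both functions return the path of the new file, which is convenient for chaining further operations.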
Edited to add benchmark below.

You can wrap a generator with a lock. For example:

```python
import threading

class LockedIterator(object):
    def __init__(self, it):
        self.lock = threading.Lock()
        self.it = it.__iter__()

    def __iter__(self):
        return self

    def next(self):
        self.lock.acquire()
        try:
            return self.it.next()
        finally:
            self.lock.release()

gen = [x*2 for x in [1,2,3,4]]
g2 = LockedIterator(gen)
print list(g2)
```

Locking takes 50 ms on my system; Queue takes 350 ms. Queue is useful when you really do have a queue; for example, if you have incoming HTTP requests and you want to queue them for processing by worker threads. (That doesn't fit the Python iterator model -- once an iterator runs out of items, it's done.) If you really do have an iterator, then LockedIterator is a faster and simpler way to make it thread safe.

```python
from datetime import datetime
import threading

num_worker_threads = 4

class LockedIterator(object):
    def __init__(self, it):
        self.lock = threading.Lock()
        self.it = it.__iter__()

    def __iter__(self):
        return self

    def next(self):
        self.lock.acquire()
        try:
            return self.it.next()
        finally:
            self.lock.release()

def test_locked(it):
    it = LockedIterator(it)
    def worker():
        try:
            for i in it:
                pass
        except Exception, e:
            print e
            raise

    threads = []
    for i in range(num_worker_threads):
        t = threading.Thread(target=worker)
        threads.append(t)
        t.start()

    for t in threads:
        t.join()

def test_queue(it):
    from Queue import Queue

    def worker():
        try:
            while True:
                item = q.get()
                q.task_done()
        except Exception, e:
            print e
            raise

    q = Queue()
    for i in range(num_worker_threads):
        t = threading.Thread(target=worker)
        t.setDaemon(True)
        t.start()

    t1 = datetime.now()
    for item in it:
        q.put(item)
    q.join()

start_time = datetime.now()
it = [x*2 for x in range(1,10000)]
test_locked(it)
#test_queue(it)
end_time = datetime.now()
took = end_time - start_time
print "took %.01f" % ((took.seconds + took.microseconds/1000000.0)*1000)
```
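The snippets above are Python 2; in Python 3 the same idea uses `__next__` and a `with` block on the lock. A minimal sketch:

```python
import threading

class LockedIterator:
    """Wrap any iterator so that advancing it is serialized by a lock."""
    def __init__(self, it):
        self._lock = threading.Lock()
        self._it = iter(it)

    def __iter__(self):
        return self

    def __next__(self):
        # Only one thread at a time may advance the underlying iterator.
        with self._lock:
            return next(self._it)

gen = (x * 2 for x in [1, 2, 3, 4])
safe = LockedIterator(gen)
print(list(safe))  # → [2, 4, 6, 8]
```

The `with self._lock` block releases the lock even if the underlying iterator raises (including the final StopIteration), which is exactly what the Python 2 try/finally was doing by hand.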
The goal is to get the array of family members made and print out the results in the order created. Is there any way to tidy this up :)?

```javascript
// Our Person constructor
function Person(name,age) {
    this.name=name;
    this.age=age;
}

// Now we can make an array of people
var family=new Array();
family[0]=new Person("alice", 40);
family[1]=new Person("bob", 42);
family[2]=new Person("michelle", 8);
family[3]=new Person("timmy", 6);

// loop through our new array
for(i=0;i<family.length;i++) {
    console.log(family[i].name);
}
```

There are some ways to tidy things up.

```javascript
// Our Person constructor
```

This comment doesn't explain anything that isn't already known. Comments should not say what is happening; they should explain the purpose instead, where the code cannot be made clear enough to provide such an explanation for you. Instead of restating what the code is already fully capable of showing you, it's better to use comments to explain why things occur, or to provide a broad overview.

```javascript
function Person(name,age)
```

Spacing: a space after the comma is best practice, to aid readability.

```javascript
{
```

Coding techniques from other programming languages, such as putting the curly brace at the start of the next line, are not just a bad idea here - due to syntax limitations in JavaScript and how automatic semicolon insertion works, placing the curly brace at the start of a new line results in more fragile code, so it is a fundamental principle in JavaScript to avoid it.

```javascript
this.name=name;
this.age=age;
```

Spacing before and after the equals sign is best practice to improve the readability of the code.

```javascript
}
```

That line is good; it has no problems.

```javascript
// Now we can make an array of people
```

Explaining what is happening once again, which is not a good use for comments.
```javascript
var family=new Array();
family[0]=new Person("alice", 40);
family[1]=new Person("bob", 42);
family[2]=new Person("michelle", 8);
family[3]=new Person("timmy", 6);
```

It is preferred, for the sake of consistency, to use [] instead of new Array. You can also create those new Person objects at the same time as defining the array. It is also a common convention in JavaScript to use single quotes to delimit strings instead of double; both work, but single quotes provide more convenience. For example:

```javascript
var family = [
    new Person('alice', 40),
    new Person('bob', 42),
    new Person('michelle', 8),
    new Person('timmy', 6)
];
```

Due to the duplication that we see here, though, it may be worth considering storing the name/age pairs in an array and using a loop to create the new Person objects instead -- especially if more people are planned to be added in further development.

```javascript
// loop through our new array
```

Another useless comment.

```javascript
for(i=0;i<family.length;i++)
```

The i variable has not been declared, so it becomes a global one instead. That's a bad mistake. The best remedy is to declare the i variable right at the start of the code, with the other var declaration. Spacing should be added after the for keyword, which helps to reinforce that it is not a function being called. Also add spacing before and after the equals sign, after the semicolons, and before and after the less-than sign, for improved readability.

```javascript
{
```

This should come after the closing parenthesis of the for statement, separated by a space.
```javascript
console.log(family[i].name);
```

Be aware that Internet Explorer doesn't support console.log out of the box; there are shims that help to provide support, such as https://github.com/kayahr/console-shim

```javascript
}
```

No troubles with this line.

So what we end up with is something like the following:

```javascript
// Create and show family info
function Person(name, age) {
    this.name = name;
    this.age = age;
}

var family = [
        new Person('alice', 40),
        new Person('bob', 42),
        new Person('michelle', 8),
        new Person('timmy', 6)
    ],
    i;

for (i = 0; i < family.length; i += 1) {
    console.log(family[i].name);
}
```

I have also replaced i++ with i += 1 because I agree with Douglas Crockford that incrementors and decrementors are not as good to use as other techniques that make it clearer what is going on. For some details, see http://stackoverflow.com/questions/971312/why-avoid-increment-and-decrement-operators-in-javascript

Comments: I didn't create the comments; I just kept them there out of laziness, so I'm ignoring any suggestions for that :). No offense to you. As for readability, I prefer no spaces in stuff such as (name,age); it helps my readability. I was mainly just asking for advice on how to shorten code, which you were able to answer with the array advice. console.log is just the part of the program that I'm supposed to write. I mean, I'm using codecademy.com and they force me to write console.log. Though again, comments based on that don't help me shorten the code. Thank you Paul for the array advice :).

Another potential way to do that is to use the forEach method instead, which allows you to simplify things, and you won't need that variable either:

```javascript
family.forEach(function (person) {
    console.log(person.name);
});
```

The forEach method is available on all modern web browsers. If you want it to succeed on older Internet Explorer, the forEach documentation page has compatibility code that adds the capability for web browsers that lack it.

Wow, that's some efficient looking code. Thanks for the advice Paul.
How to display player's score on Pygame?

I'm a new programmer working on a memory game for my computer science summative. The game goes like this: the computer displays random boxes at random positions and then the user has to guess where the boxes are and click on them. I'm basically done, except right now I'm trying to create like 5 different levels that range in level of difficulty, e.g. level 1 will display like 2 boxes and level 2 will display like 5, etc. And then if the user gets through all levels they can play again. I know it's a lot, but I really want to get an A on this. But right now I'm stuck because it doesn't really work until I try to close the window, and even then it only goes halfway. Any help would be appreciated.

```python
import pygame, sys
import random
import time

size = [500, 500]
pygame.init()
screen = pygame.display.set_mode(size)

# Colours
LIME = (0, 255, 0)
RED = (255, 0, 0)
BLACK = (0, 0, 0)
PINK = (255, 102, 178)
SALMON = (255, 192, 203)
WHITE = (255, 255, 255)
LIGHT_PINK = (255, 181, 197)
SKY_BLUE = (176, 226, 255)

screen.fill(BLACK)

# Width and height of game box
width = 50
height = 50

# Margin between each cell
margin = 5
rows = 20
columns = 20

# Set title of screen
pygame.display.set_caption("Spatial Recall")

# Used to manage how fast the screen updates
clock = pygame.time.Clock()

coord = []

# Create a 2 dimensional array. A two dimensional
# array is simply a list of lists.
def resetGrid():
    grid = []
    for row in range(rows):
        # Add an empty array that will hold each cell in this row
        grid.append([])
        for column in range(columns):
            grid[row].append(0)  # Append a cell
    return grid

def displayAllPink(pygame):
    for row in range(rows):
        for column in range(columns):
            color = LIGHT_PINK
            pygame.draw.rect(screen, color,
                             [(margin + width) * column + margin,
                              (margin + height) * row + margin,
                              width, height])
    pygame.display.flip()

def displayOtherColor(pygame, grid):
    coord = []
    for i in range(random.randint(2, 5)):
        x = random.randint(2, rows - 1)
        y = random.randint(2, columns - 1)
        color = LIME
        pygame.draw.rect(screen, color,
                         [(margin + width) * y + margin,
                          (margin + height) * x + margin,
                          width, height])
        coord.append((x, y))
        grid[x][y] = 1
    pygame.display.flip()
    time.sleep(1)
    return coord

def runGame(gameCount, coord, pygame, grid):
    pygame.event.clear()
    pygame.display.set_caption("Spatial Recall: Level " + str(gameCount))
    pygame.time.set_timer(pygame.USEREVENT, 1000)
    time = 0
    #clock.tick(
    # -------- Main Program Loop -----------
    # Loop until the user clicks the close button.
    done = False
    while done == False:
        event = pygame.event.wait()  # User did something
        if event.type == pygame.QUIT:  # If user clicked close
            done = True  # Flag that we are done so we exit this loop
            pygame.event.clear()
            print "Game ", gameCount, "ends"
        elif event.type == pygame.USEREVENT:
            time = time + 1
            pygame.display.set_caption("Spatial Recall: Level " + str(gameCount) + " Time: " + str(time))
            if time == 100:
                done = True
                pygame.display.set_caption("Time out, moving to next level")
                pygame.event.clear()
                return False
        elif event.type == pygame.MOUSEBUTTONDOWN:
            # User clicked the mouse. Get the position
            pos = pygame.mouse.get_pos()
            # Change the x/y screen coordinates to grid coordinates
            column = pos[0] // (width + margin)
            row = pos[1] // (height + margin)
            if (row, column) in coord:
                print coord
                coord.remove((row, column))
                print coord
                color = LIME
                pygame.draw.rect(screen, color,
                                 [(margin + width) * column + margin,
                                  (margin + height) * row + margin,
                                  width, height])
                if coord == []:
                    done = True
                    pygame.display.set_caption("Time out, moving to next level")
                    pygame.event.clear()
                    return True
            else:
                color = RED
                pygame.draw.rect(screen, color,
                                 [(margin + width) * column + margin,
                                  (margin + height) * row + margin,
                                  width, height])
            pygame.display.flip()

def startTheGame(gameCount):
    grid = resetGrid()
    displayAllPink(pygame)
    coord = displayOtherColor(pygame, grid)
    displayAllPink(pygame)
    runGame(gameCount, coord, pygame, grid)

for i in range(2):
    startTheGame(i + 1)

pygame.quit()
```

No answers yet. Can you answer this question? How to display player's score on Pygame?
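The screen-to-grid conversion inside the MOUSEBUTTONDOWN handler can be checked without a display. A small sketch of just that arithmetic, with the same cell sizes as the game (the function name is my own):

```python
WIDTH, HEIGHT, MARGIN = 50, 50, 5

def to_grid(pos):
    """Convert an (x, y) pixel position to (row, column) grid coordinates."""
    x, y = pos
    column = x // (WIDTH + MARGIN)
    row = y // (HEIGHT + MARGIN)
    return row, column

# Each cell occupies 55 pixels (50 + 5 margin), so a click at
# pixel (112, 60) lands in row 1, column 2.
print(to_grid((112, 60)))  # → (1, 2)
```

Testing this piece in isolation makes it easier to tell whether a misbehaving click handler is a coordinate bug or an event-loop bug.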
As you probably noticed in the table of Python software in the previous column, many of the Python tools for XML processing are available in the PyXML package. There are also XML libraries built into Python, but these are pretty well covered in the official documentation. One of the things I'm going to do in these columns is provide brief information on significant new happenings relevant to Python-XML development, including significant software releases.

PyXML 0.8.1 has been released. Major changes include updated DOM support and the disabling of the bundled XSLT library in the default install. More on PyXML below.

Fredrik "effbot" Lundh has kicked off a project to develop a graphical RSS newsreader in Python. Also, Mark Nottingham developed RSS.py, a Python module for processing RSS 0.9.x and 1.0 feeds. Mark is the author of an excellent RSS tutorial.

Eric Freese has released SemanText, a program for developing and managing semantic networks using XML Topic Maps. It can also build such topic maps contextually from general XML formats. It comes with a GUI for knowledge-base management.

There has been some talk on the XML-SIG about developing a general type library architecture in Python, useful for plugging data types into schema, Web services, and other projects.

Finally, in the previous column I mentioned that the cDomlette XML parser supports XInclude and XML Base, but I neglected to mention that libxml's parser does as well. libxml also supports XML Catalogs, as does xmlproc. Recent builds of 4Suite add XML and OASIS catalog support to cDomlette.

PyXML is available as source, Windows installer, and RPM. Python itself is the only requirement; any version later than 2.0 will do, though I recommend 2.2.1 or later. If your operating environment has a C compiler, it's easy to build from source. Download the archive, unpack it to a suitable location, and run

```
python setup.py install
```

which is the usual way to set up Python packages.
It is automatically installed to the package directory of the Python executable used to run the install command. You will see a lot of output as it compiles and copies files. If you'd rather suppress this, use "python setup.py --quiet install".

PyXML overlays the xml module that comes with Python, replacing all the contents of the Python module with its own versions. This is usually OK because the two code bases are the same, except that PyXML has usually incorporated more features and bug fixes. It's also possible that PyXML has introduced new bugs. If you ever need to uninstall PyXML, just remove the "_xmlplus" directory in the "site-packages" directory of your Python installation. The original Python xml module will be left untouched.

The SAX support in PyXML is basically the same code as the built-in SAX library in Python. The main added feature is a really clever module, xml.sax.writers, which provides facilities for re-serializing SAX events to XML and even SGML. This is especially useful as the end point of a SAX filter chain, and I'll cover it in more detail in a future article on Python SAX filters. PyXML also adds the xmlproc validating XML parser, which can be selected using the make_parser function. Base Python does not support validation.

PyXML builds a good deal of machinery over the skeleton of the DOM Node class that comes with Python. First of all, there is 4DOM, a big DOM implementation which tries to be more faithful to the DOM spec than to be naturally Pythonic. It supports DOM Level 2, both the XML (core) and HTML modules, including events, ranges, and traversal. PyXML also provides more frequently updated versions of minidom and pulldom. Because there are many DOM implementations available for Python, PyXML also adds a system in xml.dom.domreg for choosing between DOM implementations according to desired features. Only minidom and 4DOM are currently registered, so it is not yet generally useful, but this should change.
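The SAX API that PyXML extends can be exercised with just the standard library. A minimal sketch using a ContentHandler to count element names (the handler class and sample document are my own):

```python
import xml.sax

class ElementCounter(xml.sax.ContentHandler):
    """Count how many times each element name occurs in a document."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):
        # Called once for every start tag the parser encounters.
        self.counts[name] = self.counts.get(name, 0) + 1

DOC = b"<verse><line>one</line><line>two</line></verse>"

handler = ElementCounter()
xml.sax.parseString(DOC, handler)
print(handler.counts)  # → {'verse': 1, 'line': 2}
```

The same handler would work unchanged with PyXML's parsers, since PyXML keeps the standard SAX interface.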
There is also a lot of material in the xml.dom.ext module, which is set aside, generally, for DOM extensions specific to PyXML. These extensions include mechanisms for reading and writing serialized XML that predate the DOM Level 3 facilities for this.

Creating 4DOM nodes is a matter of using the modules in xml.dom.ext.readers. The general pattern is to create a reader object, which can then be used to parse from multiple sources. For parsing XML, one would usually use xml.dom.ext.readers.Sax2, and for HTML, xml.dom.ext.readers.HtmlLib. Listing 1 demonstrates reading XML.

```python
from xml.dom.ext.reader import Sax2

DOC = """<?xml version="1.0" encoding="UTF-8"?>
<verse>
<attribution>Christopher Okibgo</attribution>
<line>For he was a shrub among the poplars,</line>
<line>Needing more roots</line>
<line>More sap to grow to sunlight,</line>
<line>Thirsting for sunlight</line>
</verse>
"""

#Create an XML reader object
reader = Sax2.Reader()
#Create a 4DOM document node parsed from XML in a string
doc_node = reader.fromString(DOC)
#You can execute regular DOM operations on the document node
verse_element = doc_node.documentElement
#And you can even use "Pythonic" shortcuts for things like
#node lists and named node maps.
#The first child of the verse element is a white space text node;
#the second is the attribution element
attribution_element = verse_element.childNodes[1]
#attribution_string becomes "Christopher Okibgo"
attribution_string = attribution_element.firstChild.data
```

Listing 2 demonstrates reading HTML. You need to be connected to the Internet to run it successfully.
Listing 2: Creating a 4DOM HTML node by reading from a URL

```python
from xml.dom.ext.reader import HtmlLib

#Create an HTML reader object
reader = HtmlLib.Reader()
#Create a 4DOM document node parsed from HTML at a URL
doc_node = reader.fromUri("http://www.python.org")

#Get the title of the HTML document
title_elem = doc_node.documentElement.getElementsByTagName("TITLE")[0]
#title_string becomes "Python Language Website"
title_string = title_elem.firstChild.data
```

The HTML parser is pretty forgiving, but not as much so as most Web browsers; some non-standard HTML will cause errors.

One can output XML using the xml.dom.ext.Print and xml.dom.ext.PrettyPrint functions. One can output proper XHTML from appropriate XML and HTML DOM nodes using xml.dom.ext.XHtmlPrint and xml.dom.ext.XHtmlPrettyPrint. Pure white space nodes between elements can be removed using xml.dom.ext.StripXml and xml.dom.ext.StripHtml. There are other small utility functions in this module.

Listing 3 is a continuation of Listing 1: it takes the document node that was read, strips it of white space, and then re-serializes the result to XML.

Listing 3: Stripping white space and then writing the resulting XML

```python
from xml.dom.ext import StripXml, Print

#Strip the white space nodes in place and return the same node
StripXml(doc_node)
#Print the node as serialized XML to stdout
Print(doc_node)
#Write the node as serialized XML to a file
f = open("tmp.xml", "w")
Print(doc_node, stream=f)
f.close()
```

Listing 4 illustrates creating a tiny XML document from scratch using the standard DOM API, and printing the serialized result.

Listing 4: Creating a 4DOM document bit by bit and writing the result

```python
from xml.dom import implementation
from xml.dom import EMPTY_NAMESPACE, XML_NAMESPACE
from xml.dom.ext import Print

#Create a document type node using the doctype name "message",
#a blank system ID and blank public ID (i.e. no DTD information)
doctype = implementation.createDocumentType("message", None, None)
#Create a document node, which also creates a document element node.
#For the element, use a blank namespace URI and local name "message"
doc = implementation.createDocument(EMPTY_NAMESPACE, "message", doctype)
#Get the document element
msg_elem = doc.documentElement
#Create an xml:lang attribute on the new element
msg_elem.setAttributeNS(XML_NAMESPACE, "xml:lang", "en")
#Create a text node with some data in it
new_text = doc.createTextNode("You need Python")
#Add the new text node to the document element
msg_elem.appendChild(new_text)
#Print out the result
Print(doc)
```

Notice the use of EMPTY_NAMESPACE or None for blank URIs and ID fields. This is a Python-XML convention. Rather than using the empty string, which is, after all, a valid URI reference, None is used (EMPTY_NAMESPACE is merely an alias for None). This has the added value of being a bit more readable.

If you run this, you'll see that no declaration is given for the XML namespace. This is because of the special status of that namespace. If you created elements or attributes in other namespaces, those namespace declarations would be rendered in the output.

4DOM is a useful tool for learning about DOM in general, but it's not a practical DOM for most uses. The problem is that it is very slow and memory intensive. A lot of effort goes into handling DOM arcana; in particular, the event system is a significant drag on performance, though it has some very interesting uses. I am one of the original authors of 4DOM, and yet I rarely use it any more. There are other, much faster DOM implementations which I'll be covering in future articles. Luckily, the lessons you learn experimenting with 4DOM are mostly applicable to other Python DOMs.

Two XML files can be different and yet have the same effective content to an XML processor.
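The same document can be built with the standard library's minidom, whose implementation object follows the same DOM API as Listing 4. A rough stdlib-only equivalent (the namespace constant is spelled out here because minidom does not export XML_NAMESPACE):

```python
from xml.dom.minidom import getDOMImplementation

XML_NAMESPACE = "http://www.w3.org/XML/1998/namespace"

impl = getDOMImplementation()
# Doctype name "message", with no public or system ID (no DTD information)
doctype = impl.createDocumentType("message", None, None)
# Blank namespace URI and local name "message" for the document element
doc = impl.createDocument(None, "message", doctype)

msg_elem = doc.documentElement
# Create an xml:lang attribute on the new element
msg_elem.setAttributeNS(XML_NAMESPACE, "xml:lang", "en")
# Create a text node and add it to the document element
msg_elem.appendChild(doc.createTextNode("You need Python"))

# Serialize just the document element
print(doc.documentElement.toxml())
```

As with 4DOM, None stands in for a blank namespace URI, and the xml: prefix needs no explicit declaration in the output.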
This is because XML defines that some differences are insignificant, such as the order of attributes, the choice of single or double quotes, and some differences in the use of character entities. Canonicalization, popularly abbreviated as c14n, is a mechanism for serializing XML which normalizes all these differences. As such, c14n can be used to make XML files easier to compare.

One important application of this is security. You may want to use technology to ensure that an XML file has not been tampered with, but you might want this check to ignore insignificant lexical differences. The xml.dom.ext.c14n module provides canonicalization support for Python. It provides a Canonicalize function which takes a DOM node (most implementations should work) and writes out canonical XML.

PyXML includes XPath and XSLT implementations written completely in Python but, as of this writing, the XSLT module is still being integrated into PyXML and does not really work. If you need XSLT processing, you will want to get Python-libxslt, 4Suite, Pyana, or one of the other packages I mentioned in the last installment.

The XPath support, however, does work and operates on DOM nodes. In fact, it can be a very handy shortcut compared to using DOM methods for navigation. The following snippet uses an XPath expression to retrieve a list of all elements in a DOM document which are in the XSLT namespace. If you want to run it, first prepend code to define "doc" as a DOM document node object.
```python
from xml.ns import XSLT
#The XSLT namespace http://www.w3.org/1999/XSL/Transform
NS = XSLT.BASE

from xml.xpath import Context, Evaluate
#Create an XPath context with the given DOM node,
#with no other nodes in the context list
#(list size 1, current position 1),
#and the given prefix/namespace mapping
con = Context.Context(doc, 1, 1, processorNss={"xsl": NS})
#Evaluate the XPath expression and return the resulting
#Python list of nodes
result = Evaluate("//xsl:*", context=con)
```

XPath expressions can also return other data types: strings, numbers, and booleans. The first two are returned as regular Python strings and floats. Booleans are returned as instances of a special class, xml.utils.boolean. This will probably change in Python 2.3, which has a built-in boolean type. If you are making extensive use of XPath, you might still want to consider the implementation in 4Suite, which is a faster version of the one in PyXML, with more bug fixes as well.

I mentioned some of the other modules in PyXML in the last installment, including the xmlproc validating parser, in module xml.parsers.xmlproc; Quick Parsing for XML, in module xml.utils.qp_xml; and the WDDX library, in module xml.marshal.wddx. Marshalling is the process of representing arbitrary Python objects as XML, and there is also a general marshalling facility in the xml.marshal.generic module. It has a similar interface to the standard pickle module. Listing 6 marshals a small Python dictionary to XML.

```python
from xml.marshal.generic import Marshaller

marshal = Marshaller()
obj = {1: [2, 3], 'a': 'b'}
#Dump to a string
xml_form = marshal.dumps(obj)
```

The created XML looks as follows (line wrapping and indentation added for clarity):

```xml
<?xml version="1.0"?>
<marshal>
  <dictionary id="i2">
    <int>1</int>
    <list id="i3"><int>2</int><int>3</int></list>
    <string>a</string>
    <string>b</string>
  </dictionary>
</marshal>
```

You can find more code examples in the demos directory of the PyXML package.
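For simple namespace-qualified searches like the one above, the standard library's xml.etree.ElementTree now covers many cases with its limited XPath subset. A sketch against a tiny inline stylesheet (the sample document is my own):

```python
import xml.etree.ElementTree as ET

XSLT_NS = "http://www.w3.org/1999/XSL/Transform"

DOC = """<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <xsl:apply-templates/>
  </xsl:template>
</xsl:stylesheet>"""

root = ET.fromstring(DOC)
# Map the "xsl" prefix to the XSLT namespace, then find all
# descendant xsl:template elements (analogous to //xsl:template).
templates = root.findall(".//xsl:template", {"xsl": XSLT_NS})
print(len(templates))  # → 1
```

ElementTree's subset lacks full XPath features such as the `//xsl:*` wildcard used above, so for anything beyond simple path expressions a full XPath engine is still the better fit.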
The place to discuss PyXML or to report problems is the Python XML SIG mailing list. In the next article, I'll offer a tour of 4Suite, which provides enhanced versions of some of PyXML's facilities, as well as some other unique features. XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
A handy Python script for parsing the O'Reilly book catalogue.

by Jacek Artymiak

I'm working on an on-line bookstore project, which I'd like to automate as much as possible. Things like book insertion and deletion ought to happen automatically, with only occasional need to mess with code or HTML. My choice of language for this project is Python, with a touch of urllib and re. Since one of the sources of book titles and ISBN numbers I use is the O'Reilly book catalogue, I thought I'd share this little script with other ORA fans.

```python
#!/usr/bin/python

import urllib, re, sys

try:
    page = urllib.urlopen('http://www.oreilly.com/catalog/prdindex.html')
except IOError, (errno, strerror):
    sys.exit("I/O error(%s): %s" % (errno, strerror))

title = ""
isbn = ""
price = ""

page = page.read()
page = page.replace("\n", "")
page = page.replace("\r", "")
page = page.replace("> ", ">")
page = page.replace(" ", " ")
page = page[page.find("<b>Examples</b></td>") + len("<b>Examples</b></td>"):]

while(1):
    page = page[page.find("<tr ") + len("<tr "):]
    if (len(page) == 1):
        break
    page = page[page.find("http://www.oreilly.com/catalog/"):]
    if (len(page) == 1):
        break
    page = page[page.find("\">"):]
    page = page[2:]
    title = page[:page.find("</a>")]
    page = page[page.find("\">"):]
    page = page[2:]
    isbn = page[:page.find("</td>")]
    isbn = isbn.replace("-", "")
    page = page[page.find("\">"):]
    page = page[2:]
    price = page[:page.find("</td>")]
    print title + ":" + isbn + ":" + price
```
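The slicing loop above is essentially hand-rolled pattern matching; with re (which the script already imports) the same extraction can be sketched more compactly. This runs against an inline sample row rather than the live page, and the sample's markup is my own guess at the structure implied by the slicing logic, not the real catalogue HTML:

```python
import re

# A simplified, hypothetical catalogue table row.
SAMPLE = ('<tr bgcolor="#ffffff">'
          '<td><a href="http://www.oreilly.com/catalog/example/">'
          'Example Book</a></td>'
          '<td>0-596-00000-0</td>'
          '<td>$39.95</td></tr>')

# Capture the link text, then the next two table cells.
ROW = re.compile(
    r'<a href="http://www\.oreilly\.com/catalog/[^"]*">(?P<title>[^<]+)</a>'
    r'</td><td>(?P<isbn>[^<]+)</td>'
    r'<td>(?P<price>[^<]+)</td>')

for m in ROW.finditer(SAMPLE):
    title = m.group('title')
    isbn = m.group('isbn').replace('-', '')
    price = m.group('price')
    print(title + ':' + isbn + ':' + price)  # → Example Book:0596000000:$39.95
```

Regexes over HTML are still brittle against markup changes; a real HTML parser is the sturdier choice when the page structure is not under your control.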
I'm using this sample code to connect to Facebook Chat over XMPP:

#!/usr/bin/python

import sleekxmpp
import logging

logging.basicConfig(level=logging.DEBUG)

def session_start(event):
    chatbot.send_presence()
    print('Session started')
    chatbot.get_roster()

def message(msg):
    if msg['type'] in ('chat', 'normal'):
        print('msg received')
        print(msg['body'])
        msg.reply('Thanks').send()

jid = 'myusername@chat.facebook.com'
password = 'mypassword'
server = ('chat.facebook.com', 5222)

chatbot = sleekxmpp.ClientXMPP(jid, password)
chatbot.add_event_handler('session_start', session_start)
chatbot.add_event_handler('message', message)
chatbot.auto_reconnect = True
chatbot.connect(server)
chatbot.process(block=True)

Everything seems to be OK, but when I run that code, I can't connect to the Facebook server:

DEBUG:sleekxmpp.basexmpp:setting jid to myusername@chat.facebook.com
DEBUG:sleekxmpp.basexmpp:Loaded Plugin (RFC-6120) STARTTLS Stream Feature
DEBUG:sleekxmpp.basexmpp:Loaded Plugin (RFC-6120) Resource Binding Stream Feature
DEBUG:sleekxmpp.basexmpp:Loaded Plugin (RFC-3920) Start Session Stream Feature
DEBUG:sleekxmpp.basexmpp:Loaded Plugin (RFC-6120) SASL Stream Feature
DEBUG:sleekxmpp.xmlstream.xmlstream:Trying to connect to chat.facebook.com:5222
DEBUG:sleekxmpp.xmlstream.xmlstream:Connecting to chat.facebook.com:5222
ERROR:sleekxmpp.xmlstream.xmlstream:Could not connect to chat.facebook.com:5222. Socket Error #111: Connection refused
DEBUG:sleekxmpp.xmlstream.xmlstream:Trying to connect to chat.facebook.com:5222
DEBUG:sleekxmpp.xmlstream.xmlstream:Waiting 1.97953654103 seconds before connecting.
...

Am I missing something here?
I have a group of variables named k1, k2, k3, ..., k52. The variables are lists/numpy arrays depending on the scenario. Essentially I'd like to perform the same manipulation on them en masse within a loop, but am having trouble iterating over them. Essentially what I'd like is something like this:

for i in arange(0, 52):
    'k' + str(i) = log10(eval('k' + str(i)))

Obviously I know the above won't work, but it gives the idea. My actual attempt is this:

for i in arange(0, 10):
    rate = eval('k' + str(i))
    rate = np.array(rate, dtype=float)
    rate = log10(rate)
    rate.tolist()
    vars()[rate] = 'k' + str(i)

(It's changed to a numpy array so I can log it, and then back to a list so I can change the variable name back to what it was.)

Thanks for any help you can provide. I get the feeling this is something quite simple, but it's escaping me at the moment.

edit: thanks very much for the answers, i should have explained that I can't really store them in a set array, they need to remain as independent variables for reasons i don't really want to go into.
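For what it's worth, the last line of the attempt above has its key and value swapped. A pattern along these lines keeps the independent variable names while looping — this is a sketch of mine using plain lists and math.log10 so it runs without numpy, and the variables k1/k2 are invented for the demo:

```python
import math

# Hypothetical stand-ins for the k1..k52 variables
k1 = [10.0, 100.0]
k2 = [1000.0]

for i in (1, 2):
    name = 'k' + str(i)
    rate = globals()[name]      # look the variable up by its name
    # rebind the result under the same name, keeping it a list
    globals()[name] = [math.log10(v) for v in rate]

print(k1, k2)
```

With numpy the inner list comprehension would become `np.log10(np.array(rate, dtype=float)).tolist()`; the point is that `globals()[name] = ...` (name on the left, value on the right) does the rebinding that `vars()[rate] = 'k' + str(i)` was reaching for.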
So, I am trying to create a unittest.TestCase that creates several different databases with different member types and verifies something about the data (not important for this context). I wanted to use a generator to pass the necessary information from a global dict to the function which will take care of the object creation and verification. Currently, it will only setUp, run, and tearDown the first variant of the test. How can I generate several distinct (i.e. varying parameters from a data structure such as a dict) calls to my method, which has to determine how to populate the database based on the member type? Each call should be run as a separate test and have the setUp and tearDown performed every time. The test case is being run with PyUnit in the Pydev GUI.

import sys, os, socket, shutil, unittest

# Global counter for test generation
testRun = 1

# test dictionary for generating multiple tests
basicMembers = {"Int" : 1}

# test operations to generate federations
def addItemAndGetSizeOf(basicMember, expectedValue, testFileName):
    print "foo %s %s %s" % (basicMember, expectedValue, testFileName)

class BasicMembers(unittest.TestCase):
    testFileName = "bar" + str(testRun)

    def setUp(self):
        global testRun
        testRun += 1
        print testRun

    def tearDown(self):
        pass

    def testBasicMembers(self):
        for basicMember, expectedValue in basicMembers.items():
            yield addItemAndGetSizeOf, basicMember, expectedValue, self.testFileName

if __name__ == "__main__":
    #import sys;sys.argv = ['', 'Test.testName']
    unittest.main()
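Plain unittest does not collect generator test methods (that style is a nose feature), which is why only one variant runs. One common pattern is to attach a distinct test method per parameter set to the class before the tests are loaded, so setUp and tearDown run each time. A sketch in Python 3, with names of my own choosing:

```python
import unittest

# Hypothetical parameter sets (stand-in for the basicMembers dict)
basic_members = {"Int": 1, "Float": 2.5}

class BasicMembers(unittest.TestCase):
    def setUp(self):
        self.log = []        # fresh fixture: setUp runs once per generated test

def make_test(member, expected):
    def test(self):
        # stand-in for the real add-item-and-verify logic
        self.log.append((member, expected))
        self.assertEqual(len(self.log), 1)   # proves the fixture is fresh
    return test

# One distinct test method per dict entry, attached before loading
for member, expected in basic_members.items():
    setattr(BasicMembers, 'testBasicMember%s' % member,
            make_test(member, expected))

suite = unittest.TestLoader().loadTestsFromTestCase(BasicMembers)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each generated method has its own name, the Pydev/PyUnit runner also reports them as separate tests.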
January 5th, 2012 at 9:26 pm by Dr. Drang

Last night I posted a TextExpander snippet that automated the procedure for generating Apple affiliate links for the items in the iTunes and Mac App Stores. This morning I got an email from David Smith, who pointed out that affiliate links can be much shorter and easier to read if you're willing to forgo the click tracking that LinkShare does. He linked to his own post from last week that describes the process very well.

I liked David's way of making affiliate links so much I've dropped last night's snippet and made a new one. The one advantage of the previous snippet is that it allows you to use LinkShare to track clicks as well as purchases. With the new snippet, you can track purchases only. To me, that's a small price to pay to go from frightening-looking URLs like this

http://click.linksynergy.com/fs-bin/stat?id=L4JhWyGwYTM
&offerid=146261&type=3&subid=0&tmpid=1826
&RD_PARM1=http%253A%252F%252Fitunes.apple.com%252Fus
%252Fapp%252Fnotesy-for-dropbox%252Fid386095500
%253Fmt%253D8%2526uo%253D4%2526partnerId%253D30

to nice, easy-to-read ones like this

http://itunes.apple.com/us/app/notesy-for-dropbox/
id386095500?mt=8&partnerId=30&siteID=L4JhWyGwYTM

(In both cases, I've inserted line breaks so you can see the whole thing without horizontal scrolling.)

As before, the links are generated through a Python script:

python:
 1:  #!/usr/bin/python
 2:
 3:  from subprocess import check_output
 4:  from sys import stdout
 5:
 6:  # The procedure followed here is taken from Apple's instructions for linking:
 7:  # http://www.apple.com/itunes/affiliates/resources/documentation/linking-to-the-itunes-music-store.html
 8:
 9:  # My affiliate ID.
10:  myID = 'useyourown'
11:
12:  # Get the URL from the clipboard.
13:  clipURL = check_output('pbpaste')
14:
15:  # Add my ID and the partnerId parameter to the URL. If there are already
16:  # queries, add them as additional ones; if not, add them as the only ones.
17:  if '?' in clipURL:
18:      itemURL = '%s&partnerId=30&siteID=%s' % (clipURL, myID)
19:  else:
20:      itemURL = '%s?partnerId=30&siteID=%s' % (clipURL, myID)
21:
22:  # Write it out
23:  stdout.write(itemURL)

which you can copy and paste into a new TextExpander shell snippet:

The script is so much simpler than the LinkShare-ified version I wrote before, I'm almost embarrassed to post it. The only real logic is in Lines 17-20, which decide whether the partnerId parameter should be preceded by a & or a ?.

The one thing you'll have to do if you want to use this snippet yourself is change Line 10 to put your own affiliate ID between the single quotation marks. You can pull your ID out of any affiliate link you might have generated in the past through Apple's Link Maker tool. Look for a portion of the URL near the beginning that looks like

…/stat?id=L4JhWyGwYTM&offerid…

Your ID is the alphanumeric string between the = and the &.

You can, of course, use whatever abbreviation you like for the snippet; I settled on ;aal for "Apple affiliate link." The procedure for using this snippet is basically the same as the one I gave last night:

Find the item you want to link to in iTunes, the Mac App Store, or in your browser on one of Apple's product web pages.
Copy the link to the item from either the popup menu (in iTunes or the App Store) or the toolbar (in your browser).
Switch to wherever you want to put the link and type ;aal to insert the URL.

As I said in last night's post, I'd like to cut out Step 2 but can't figure out how without resorting to GUI scripting, which I'm always leery of. With this snippet, you can make affiliate links to …

Thanks again to David for telling me about this simplification.

Update 1/7/12
If you're using Snow Leopard or earlier, you probably have a version of Python earlier than Python 2.7, which means your subprocess library won't have the check_output convenience function.
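The decision made in Lines 17-20 is easy to factor into a standalone helper for testing outside of TextExpander. A sketch in Python 3 — the function name is mine, not part of the original snippet:

```python
def add_affiliate(url, my_id):
    """Append partnerId and siteID, using '&' if the URL already has a query,
    '?' otherwise — the same choice Lines 17-20 make."""
    sep = '&' if '?' in url else '?'
    return '%s%spartnerId=30&siteID=%s' % (url, sep, my_id)

print(add_affiliate(
    'http://itunes.apple.com/us/app/notesy-for-dropbox/id386095500?mt=8',
    'L4JhWyGwYTM'))
```

Feeding it a clipboard URL is then just `add_affiliate(check_output('pbpaste').decode(), myID)`.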
You’ll need to use this version of the script instead:

python:
 1:  #!/usr/bin/python
 2:
 3:  from subprocess import *
 4:  from sys import stdout
 5:
 6:  # The procedure followed here is taken from Apple's instructions for linking:
 7:  # http://www.apple.com/itunes/affiliates/resources/documentation/linking-to-the-itunes-music-store.html
 8:
 9:  # My affiliate ID.
10:  myID = 'useyourown'
11:
12:  # Get the URL from the clipboard.
13:  clipURL = Popen('pbpaste', stdout=PIPE).communicate()[0]
14:
15:  # Add my ID and the partnerId parameter to the URL. If there are already
16:  # queries, add them as additional ones; if not, add them as the only ones.
17:  if '?' in clipURL:
18:      itemURL = '%s&partnerId=30&siteID=%s' % (clipURL, myID)
19:  else:
20:      itemURL = '%s?partnerId=30&siteID=%s' % (clipURL, myID)
21:
22:  # Write it out
23:  stdout.write(itemURL)

The differences are in Lines 3 and 13. This script will work under Python 2.7, too, but Line 13 is distinctly more opaque. Frankly, the whole subprocess module—and the popen2 module that preceded it as the Pythonic way of shelling out—stinks on ice. Anyone who wants to run shell commands from within a script already knows the shell syntax; forcing them to use an unnatural syntax, no matter how Pythonic it may be, is wasteful and stupid. Perl and Ruby are much smarter about this. I agree with this guy and will be checking out his Envoy library.
The Python Language

Python

Python is an extremely high-level, general-purpose programming language (and thus usable for the most varied purposes). Its design philosophy emphasizes programmer productivity and code readability. The core of the language has a minimalist syntax with very few commands and simple semantics, but it comes with a large standard library, including APIs for many of the functions of the operating system the interpreter runs on. Python code, while minimal, natively defines objects such as lists (list), tuples (tuple), dictionaries (dict), and arbitrary-length integers (long).

Python supports multiple programming paradigms: object-oriented programming (class), imperative programming (def), and functional programming (lambda). Python has a dynamic type system and automatic memory management based on reference counting (like Perl, Ruby, and Scheme).

The first version of Python was released by Guido van Rossum in 1991. The language has an open development model, based on a community of developers managed by the Python Software Foundation, a non-profit organization. There are many interpreters and compilers that implement the Python language, including one written in Java (Jython). In this brief overview we refer to the C implementation created by Guido. On the official Python web site[python] you can find many tutorials, the official documentation, and the reference manual for the language's standard library. This chapter can be skipped if you are already familiar with the Python language.

Getting started

The binary distributions of web2py for Microsoft Windows and Apple Mac OS X come packaged with the Python interpreter.
On Windows, web2py can be started with the following command (typed at the DOS prompt):

web2py.exe -S welcome

On Apple Mac OS X, type the following command in a Terminal window (from the same folder that contains web2py.app):

./web2py.app/Contents/MacOS/web2py -S welcome

On a Linux machine or other Unix machines, Python is most likely already installed. In that case, type the following command at the shell prompt:

python web2py.py -S welcome

If Python 2.5 is not present, it must be downloaded and installed before running web2py.

The -S welcome command-line option instructs web2py to run the interactive shell as if the commands were entered in a controller of the welcome application, the scaffolding application of web2py. This makes almost all of web2py's classes, functions, and objects available. This is the only difference between the web2py interactive shell and the normal Python command line.

The administrative interface also provides a web-based shell for each application. To access the one for the "welcome" application, use the address:

http://127.0.0.1:8000/admin/shell/index/welcome

All the examples in this chapter can be tried either in the standard Python shell or in the web2py web shell.

help, dir

The Python language provides two commands that are useful for obtaining information about objects — both built-in and user-defined — instantiated in the current scope. Use the help command to request the documentation of an object (for example, "1"):

>>> help(1)
Help on int object:

class int(object)
 |  int(x[, base]) -> integer
 |
 |  Convert a string or number to an integer, if possible.  A floating point
 |  argument will be truncated towards zero (this does not include a string
 |  representation of a floating point number!)  When converting a string, use
 |  the optional base.  It is an error to supply a base when converting a
 |  non-string.
 |  If the argument is outside the integer range a long object
 |  will be returned instead.
 |
 |  Methods defined here:
 |
 |  __abs__(...)
 |      x.__abs__() <==> abs(x)
...

and, since "1" is an integer, the answer is a description of the int class and all of its methods. The output of the command has been truncated here because it is very long and detailed. Similarly, you can obtain a list of the methods of the object "1" with the dir command:

>>> dir(1)
['__abs__', '__add__', '__and__', '__class__', '__cmp__', '__coerce__',
'__delattr__', '__div__', '__divmod__', '__doc__', '__float__',
'__floordiv__', '__getattribute__', '__getnewargs__', '__hash__', '__hex__',
'__index__', '__init__', '__int__', '__invert__', '__long__', '__lshift__',
'__mod__', '__mul__', '__neg__', '__new__', '__nonzero__', '__oct__',
'__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdiv__',
'__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__',
'__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__',
'__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__',
'__str__', '__sub__', '__truediv__', '__xor__']

Types

Python is a dynamically typed language. This means that variables do not have a type and therefore do not have to be declared. Values, on the other hand, do have a type. You can query a variable for the type of the value it contains:

>>> a = 3
>>> print type(a)
<type 'int'>
>>> a = 3.14
>>> print type(a)
<type 'float'>
>>> a = 'hello python'
>>> print type(a)
<type 'str'>

Python also natively includes data structures such as lists and dictionaries.

str

Python supports two different kinds of strings: ASCII strings and Unicode strings. ASCII strings are delimited by single quotes ('...'), double quotes ("...") or triple double quotes ("""..."""). Triple double quotes delimit multiline strings. Unicode strings start with a u character followed by the string containing the Unicode characters.
A Unicode string can be converted into an ASCII string by choosing an encoding. For example:

>>> a = 'this is an ASCII string'
>>> b = u'This is a Unicode string'
>>> a = b.encode('utf8')

After executing these three commands, the variable a is an ASCII string storing UTF8-encoded characters. Internally, web2py always uses UTF8-encoded strings.

It is possible to write the values of variables into strings in several ways:

>>> print 'number is ' + str(3)
number is 3
>>> print 'number is %s' % (3)
number is 3
>>> print 'number is %(number)s' % dict(number=3)
number is 3

The last notation is the most explicit and the least error-prone, and is therefore to be preferred.

Many Python objects, for example numbers, can be converted into strings using str or repr. These two commands are very similar but produce slightly different output. For example:

>>> for i in [3, 'hello']:
...     print str(i), repr(i)
3 3
hello 'hello'

For user-defined classes, str and repr can be defined using the special operators __str__ and __repr__. These are briefly described later on; for more information, refer to the official Python documentation[pydocs]. repr always has a default value.
Another important characteristic of Python strings is that they are iterable objects (objects over which a loop can run, returning each element of the object in sequence):

>>> for i in 'hello':
...     print i
h
e
l
l
o

list

The main methods of a Python list are append, insert, and delete:

>>> a = [1, 2, 3]
>>> print type(a)
<type 'list'>
>>> a.append(8)
>>> a.insert(2, 7)
>>> del a[0]
>>> print a
[2, 7, 3, 8]
>>> print len(a)
4

Lists can be sliced to obtain only some of their elements:

>>> print a[:3]
[2, 7, 3]
>>> print a[1:]
[7, 3, 8]
>>> print a[-2:]
[3, 8]

and concatenated:

>>> a = [2, 3]
>>> b = [5, 6]
>>> print a + b
[2, 3, 5, 6]

A list is an iterable object, so you can loop over it:

>>> a = [1, 2, 3]
>>> for i in a:
...     print i
1
2
3

The elements of a list do not have to be of the same type; they can be any type of Python object.

tuple

A tuple is like a list, but its size and its elements are immutable, while in a list they are mutable. If a tuple element is an object, the object's attributes are mutable. A tuple is delimited by round brackets:

>>> a = (1, 2, 3)

So, while this assignment is valid for a list:

>>> a = [1, 2, 3]
>>> a[1] = 5
>>> print a
[1, 5, 3]

assigning to an element inside a tuple is not a valid operation:

>>> a = (1, 2, 3)
>>> print a[1]
2
>>> a[1] = 5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment

A tuple, like a list, is an iterable object. Note that a tuple consisting of a single element must include a trailing comma:

>>> a = (1)
>>> print type(a)
<type 'int'>
>>> a = (1,)
>>> print type(a)
<type 'tuple'>

Tuples, because of their immutability, are very efficient for packing objects.
The brackets are often optional:

>>> a = 2, 3, 'hello'
>>> x, y, z = a
>>> print x
2
>>> print z
hello

dict

A Python dictionary (dict) is a hash table that maps a key to a value. For example:

>>> a = {'k':'v', 'k2':3}
>>> a['k']
'v'
>>> a['k2']
3
>>> a.has_key('k')
True
>>> a.has_key('v')
False

Keys can be of any type that implements the __hash__ method (int, string, or class). Values can be of any type. Different keys and values in the same dictionary do not have to be of the same type. If the keys are alphanumeric characters, a dictionary can also be defined with the alternative syntax:

>>> a = dict(k='v', h2=3)
>>> a['k']
'v'
>>> print a
{'k':'v', 'h2':3}

Useful methods of a dictionary are has_key, keys, values, and items:

>>> a = dict(k='v', k2=3)
>>> print a.keys()
['k', 'k2']
>>> print a.values()
['v', 3]
>>> print a.items()
[('k', 'v'), ('k2', 3)]

The items method produces a list of tuples, each containing a key and its associated value.

Elements of a dictionary or of a list can be deleted with the del command:

>>> a = [1, 2, 3]
>>> del a[1]
>>> print a
[1, 3]
>>> a = dict(k='v', h2=3)
>>> del a['h2']
>>> print a
{'k':'v'}

Internally, Python uses the hash operator to convert objects into integers, and uses that integer to determine where to store the object.

>>> hash("hello world")
-1500746465

Code indentation

Python uses indentation to delimit blocks of code. A block starts with a line ending in a colon (":") and continues for all lines that have the same or greater level of indentation. For example:

>>> i = 0
>>> while i < 3:
...     print i
...     i = i + 1
...
0
1
2

Four spaces per indentation level are normally used. It is a good rule not to mix tab characters with spaces, because this could cause problems.

for ...
in

In Python it is possible to loop over iterable objects:

>>> a = [0, 1, 'hello', 'python']
>>> for i in a:
...     print i
0
1
hello
python

A frequently used shortcut is the xrange command, which generates an iterable without storing the list of elements:

>>> for i in xrange(0, 4):
...     print i
0
1
2
3

This is equivalent to the C/C++/C#/Java syntax:

for(int i=0; i<4; i=i+1) { print(i); }

Another useful command is enumerate, which keeps a count while looping:

>>> a = [0, 1, 'hello', 'python']
>>> for i, j in enumerate(a):
...     print i, j
0 0
1 1
2 hello
3 python

There is also the command range(a, b, c), which returns a list of integers starting with the value a, incremented by c, and ending with the last value smaller than b. a defaults to 0 and c defaults to 1. xrange is similar but does not actually generate the list, only an iterator over it; for this reason it is preferred in loops.

To break out of a loop prematurely, use the break command:

>>> for i in [1, 2, 3]:
...     print i
...     break
1

To start the next iteration of the loop without executing the rest of the code block, use the continue command:

>>> for i in [1, 2, 3]:
...     print i
...     continue
...     print 'test'
1
2
3

while

The while loop in Python is similar to that of many other programming languages: it iterates an indefinite number of times, testing the condition at the beginning of each iteration. If the condition is False, the loop ends.

>>> i = 0
>>> while i < 10:
...     i = i + 1
>>> print i
10

There is no loop ... until construct in Python.

def ... return

Here is a typical Python function:

>>> def f(a, b=2):
...     return a + b
>>> print f(4)
6

There is no need (or way) to specify the types of the arguments or of the value returned by the function.
Function arguments can have default values, and functions can return multiple objects:

>>> def f(a, b=2):
...     return a + b, a - b
>>> x, y = f(5)
>>> print x
7
>>> print y
3

Function arguments can be passed explicitly by name:

>>> def f(a, b=2):
...     return a + b, a - b
>>> x, y = f(b=5, a=2)
>>> print x
7
>>> print y
-3

Functions can take a variable number of arguments:

>>> def f(*a, **b):
...     return a, b
>>> x, y = f(3, 'hello', c=4, test='world')
>>> print x
(3, 'hello')
>>> print y
{'c':4, 'test':'world'}

In this example, the arguments not passed by name (3, 'hello') are stored in the tuple a, and the arguments passed by name (c and test) are stored in the dictionary b.

In the opposite case, a list or tuple can be passed to a function that requires individual positional arguments by unpacking it:

>>> def f(a, b):
...     return a + b
>>> c = (1, 2)
>>> print f(*c)
3

and a dictionary can be unpacked to deliver keyword arguments:

>>> def f(a, b):
...     return a + b
>>> c = {'a':1, 'b':2}
>>> print f(**c)
3

if ... elif ... else

>>> for i in range(3):
...     if i == 0:
...         print 'zero'
...     elif i == 1:
...         print 'one'
...     else:
...         print 'other'
zero
one
other

"elif" means "else if". Both the elif and else clauses are optional. There can be more than one elif, but only one else. Complex conditions can be created using the not, and, and or operators.

>>> for i in range(3):
...     if i == 0 or (i == 1 and i + 1 == 2):
...         print '0 or 1'

try ... except ... else ... finally

>>> try:
...     a = 1 / 0
... except Exception, e:
...     print 'oops: %s' % e
... else:
...     print 'no problem here'
... finally:
...     print 'done'
oops: integer division or modulo by zero
done

When an exception is raised, it is caught by the except clause, which is executed, while the else clause is not.
If no exception is raised, the except clause is not executed, but the else clause is. The finally clause is always executed. There can be multiple except clauses to handle different types of exception:

>>> try:
...     raise SyntaxError
... except ValueError:
...     print 'value error'
... except SyntaxError:
...     print 'syntax error'
syntax error

The else and finally clauses are optional.

Here is a list of built-in Python exceptions, plus the one exception added by web2py:

BaseException
 +-- HTTP (defined by web2py)
 +-- SystemExit
 +-- KeyboardInterrupt
 +-- Exception
      +-- GeneratorExit
      +-- StopIteration
      +-- StandardError
      |    +-- ArithmeticError
      |    |    +-- FloatingPointError
      |    |    +-- OverflowError
      |    |    +-- ZeroDivisionError
      |    +-- AssertionError
      |    +-- AttributeError
      |    +-- EnvironmentError
      |    |    +-- IOError
      |    |    +-- OSError
      |    |         +-- WindowsError (Windows)
      |    |         +-- VMSError (VMS)
      |    +-- EOFError
      |    +-- ImportError
      |    +-- LookupError
      |    |    +-- IndexError
      |    |    +-- KeyError
      |    +-- MemoryError
      |    +-- NameError
      |    |    +-- UnboundLocalError
      |    +-- ReferenceError
      |    +-- RuntimeError
      |    |    +-- NotImplementedError
      |    +-- SyntaxError
      |    |    +-- IndentationError
      |    |         +-- TabError
      |    +-- SystemError
      |    +-- TypeError
      |    +-- ValueError
      |         +-- UnicodeError
      |              +-- UnicodeDecodeError
      |              +-- UnicodeEncodeError
      |              +-- UnicodeTranslateError
      +-- Warning
           +-- DeprecationWarning
           +-- PendingDeprecationWarning
           +-- RuntimeWarning
           +-- SyntaxWarning
           +-- UserWarning
           +-- FutureWarning
           +-- ImportWarning
           +-- UnicodeWarning

For a detailed description of each exception, refer to the official Python documentation.

web2py exposes only one new exception, called HTTP. When raised, it causes the program to return an HTTP error page (for more on this, refer to Chapter 4). Any object can be raised as an exception, but it is good practice to raise only objects that extend the built-in exceptions.
classes

Because Python is dynamically typed, Python classes and objects may seem odd at first glance. In fact, you do not need to define the member variables (attributes) of a class when declaring it, and different instances of the same class can have different attributes. Attributes are generally associated with the instance, not the class (except when declared as class attributes, which is the equivalent of the static member variables of C++/Java).

Here is an example:

>>> class MyClass(object): pass
>>> myinstance = MyClass()
>>> myinstance.myvariable = 3
>>> print myinstance.myvariable
3

pass is a do-nothing command. Here it is used to define a class MyClass that contains nothing. MyClass() calls the class constructor (in this case the default constructor) and returns an object that is an instance of the class. The (object) in the class definition indicates that the new class extends the built-in object class. This is not required, but it is a good rule to follow.

Here is a more complex class:

>>> class MyClass(object):
...     z = 2
...     def __init__(self, a, b):
...         self.x = a
...         self.y = b
...     def add(self):
...         return self.x + self.y + self.z
>>> myinstance = MyClass(3, 4)
>>> print myinstance.add()
9

Functions declared inside a class are called methods. Some special methods have reserved names. For example, __init__ is the class constructor. All variables are local to the method, except for variables declared outside the methods: z, for example, is a class variable, equivalent to a C++ static member variable, which holds the same value for every instance of the class.

Notice that __init__ takes three arguments and add takes one, and yet they are called with two and zero arguments respectively. The first argument represents, by convention, the local name used inside the method to refer to the current instance of the class.
Here self is used to refer to the current object, but any name could be used. self plays the same role as *this in C++ or this in Java, but self is not a reserved keyword. This avoids ambiguity when declaring nested classes, such as a class defined inside a method of another class.

Special attributes, methods, and operators

Class attributes, methods, and operators starting with a double underscore (__) are usually intended to be private, although this is a convention and is not enforced by the interpreter. Some of them are reserved keywords and have a special meaning. For example:

__len__
__getitem__
__setitem__

They can be used to create a container object that acts like a list:

>>> class MyList(object):
...     def __init__(self, *a): self.a = list(a)
...     def __len__(self): return len(self.a)
...     def __getitem__(self, i): return self.a[i]
...     def __setitem__(self, i, j): self.a[i] = j
>>> b = MyList(3, 4, 5)
>>> print b[1]
4
>>> b[1] = 7
>>> print b.a
[3, 7, 5]

Other special operators are __getattr__ and __setattr__, which define how attributes of the class are set and retrieved, and __add__ and __sub__, which overload the arithmetic operators. The __str__ and __repr__ operators have already been mentioned. We refer you to more in-depth reading for the use of these operators.

File input/output

In Python you can open and write to a file with:

>>> file = open('myfile.txt', 'w')
>>> file.write('hello world')

Similarly, you can read back the content of the file with:

>>> file = open('myfile.txt', 'r')
>>> print file.read()
hello world

Alternatively, you can read files in binary mode with "rb", write in binary mode with "wb", and open files in append mode with "a", following the standard C notation.
The read command takes an optional argument, the number of bytes to read. You can also jump to any location in a file using the seek command, and read the content of the file again with read:

>>> file.seek(6)
>>> print file.read()
world

and you can close the file with:

>>> file.close()

although this is often not necessary, because a file is closed automatically when the variable that refers to it goes out of scope.

When using web2py, you do not know what the current directory is, because it depends on how web2py is configured. For this reason the variable request.folder contains the path of the current application. Paths can be concatenated with the os.path.join command, discussed below.

lambda

There are cases when you may need to generate an anonymous function dynamically. This can be done with the lambda keyword:

>>> a = lambda b: b + 2
>>> print a(3)
5

The expression "lambda [a]: [b]" literally reads as "a function with argument [a] that returns [b]". Even though the function is anonymous, it can be stored in a variable, and thereby acquire a name. This is technically different from using def, because it is the variable referring to the anonymous function that has a name, not the function itself.

What are lambdas good for? They are in fact very useful because they allow you to rewrite one function in terms of another, with some default parameters set, without having to create a new function. For example:

>>> def f(a, b): return a + b
>>> g = lambda a: f(a, 3)
>>> g(2)
5

Here is a more complete and more interesting example: a function that checks whether a number is prime:

def isprime(number):
    for p in range(2, number):
        if number % p == 0:
            return False
    return True

This function, obviously, takes a long time to run.
Suppose, however, that you have a caching function cache.ram that takes three arguments: a key, a function and a number of seconds:

value = cache.ram('key', f, 60)

The first time it is called, cache.ram calls the function f(), stores the result in a dictionary and returns its value:

value = d['key'] = f()

The second time cache.ram is called, if the key is already in the dictionary and is not older than the specified number of seconds (60), it returns the corresponding value without calling f() again:

value = d['key']

How do you cache the output of isprime for any input value? Like this:

>>> number = 7
>>> print cache.ram(str(number), lambda: isprime(number), seconds)
True
>>> print cache.ram(str(number), lambda: isprime(number), seconds)
True

The output is always the same, but the first time cache.ram is called, isprime is executed as well; the second time it is not.

The existence of lambda makes it possible to re-wrap an existing function in terms of a different set of arguments. cache.ram and cache.disk are the two web2py caching functions.

exec, eval

Unlike Java, Python is a fully interpreted language. This means you have the ability to execute Python commands stored in a string. For example:

>>> a = "print 'hello world'"
>>> exec(a)
hello world

What happened here? The exec function tells the interpreter to call itself and execute the contents of the string passed as argument. It is also possible to execute the contents of a string within a specific context defined by the symbols in a dictionary:

>>> a = "print b"
>>> c = dict(b=3)
>>> exec(a, {}, c)
3

In this case, when the interpreter executes the contents of string a, it sees the symbols defined in c (b in the example), but it does not see c or a themselves.
This is different from executing in a restricted environment, since exec does not limit what the inner code can do; it simply defines the set of variables visible to the code being executed.

A function very similar to exec is eval; the only difference is that it expects its argument to evaluate to a value, which it then returns:

>>> a = "3*4"
>>> b = eval(a)
>>> print b
12

import

Python modules provide Application Programming Interfaces (APIs) to many of the system libraries (often in a way independent of the operating system). For example, to use the random number generator:

>>> import random
>>> print random.randint(0, 9)
5

This prints a randomly generated integer between 0 and 9 (inclusive), in this case 5. The randint function is defined in the random module. It is also possible to import an object from a module into the current namespace:

>>> from random import randint
>>> print randint(0, 9)

or to import all objects from a module into the current namespace:

>>> from random import *
>>> print randint(0, 9)

or to import the whole module into a new namespace:

>>> import random as myrand
>>> print myrand.randint(0, 9)

In the rest of this book we will mainly use objects defined in the os, sys, datetime, time and cPickle modules.

All web2py objects are accessible through a module called gluon, which is the subject of later chapters. Internally, web2py uses many Python modules (for example thread), but you will rarely need to access them directly.

The following sections list the most useful modules.

os

This module provides an interface to the operating system API. For example:

>>> import os
>>> os.chdir('..')
>>> os.unlink('filename_to_be_deleted')

Some of the functions in the os module, such as chdir, MUST NOT be used in web2py because they are not thread-safe (i.e., not safe for use in applications with multiple threads running concurrently).
os.path.join is extremely useful; it allows concatenating paths in an OS-independent way:

>>> import os
>>> a = os.path.join('path', 'sub_path')
>>> print a
path/sub_path

Environment variables can be accessed via:

>>> print os.environ

which is a read-only dictionary.

sys

The sys module contains many variables and functions, but the one used most often is sys.path. It contains the list of paths where Python searches for modules. When you try to import a module, Python looks it up in all of the folders listed in sys.path. If you install additional modules in some other location and want Python to be able to import them, you need to append that location's path to sys.path:

>>> import sys
>>> sys.path.append('path/to/my/modules')

When running web2py, Python stays resident in memory, so there is a single sys.path, while there are many threads servicing the HTTP requests. To avoid memory problems, it is best to check whether a path is already present before appending it:

>>> path = 'path/to/my/modules'
>>> if not path in sys.path:
        sys.path.append(path)

datetime

The use of the datetime module is best illustrated by some examples:

>>> import datetime
>>> print datetime.datetime.today()
2008-07-04 14:03:09
>>> print datetime.date.today()
2008-07-04

Occasionally you may need to time-stamp data with the UTC time (as opposed to local time). In that case you can use the following function:

>>> import datetime
>>> print datetime.datetime.utcnow()
2008-07-04 14:03:09

The datetime module contains various classes: date, datetime, time and timedelta. The difference between two dates, two datetimes or two times is a timedelta:

>>> a = datetime.datetime(2008, 1, 1, 20, 30)
>>> b = datetime.datetime(2008, 1, 2, 20, 30)
>>> c = b - a
>>> print c.days
1

In web2py, date and datetime are used to store the corresponding SQL types when they are passed to or returned from the database.
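Two related methods, not covered above but part of the same standard datetime API, are strftime (format a datetime as a string) and strptime (parse a formatted string back into a datetime):

```python
import datetime

d = datetime.datetime(2008, 7, 4, 14, 3)

# Format the datetime as text using format codes.
s = d.strftime('%Y-%m-%d %H:%M')

# Parse the text back into an equal datetime object.
d2 = datetime.datetime.strptime(s, '%Y-%m-%d %H:%M')

print(s)         # 2008-07-04 14:03
print(d == d2)   # True
```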
time

The time module differs from date and datetime in that it represents time as the number of seconds elapsed since the epoch (the beginning of 1970):

>>> import time
>>> t = time.time()
1215138737.571

Refer to the official Python documentation for conversion functions between time in seconds and time as a datetime.

cPickle

This is a very powerful module. It provides functions that can serialize almost any Python object, including self-referential objects. For example, let's build a rather odd object:

>>> class MyClass(object): pass
>>> myinstance = MyClass()
>>> myinstance.x = 'something'
>>> a = [1, 2, {'hello': 'world'}, [3, 4, [myinstance]]]

and now:

>>> import cPickle
>>> b = cPickle.dumps(a)
>>> c = cPickle.loads(b)

In this example, b is the string representation (serialization) of a, and c is a copy of a generated by de-serializing b. cPickle can also serialize to and de-serialize from a file:

>>> cPickle.dump(a, open('myfile.pickle', 'wb'))
>>> c = cPickle.load(open('myfile.pickle', 'rb'))

(Note that dump/load, not dumps/loads, are the variants that work with file objects.)
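A self-contained sketch of the same file round trip; it uses Python 3's pickle module (which replaces cPickle) and a temporary file path chosen only for illustration:

```python
import os
import pickle
import tempfile

data = [1, 2, {'hello': 'world'}, [3, 4]]
filename = os.path.join(tempfile.gettempdir(), 'myfile.pickle')

# Serialize to a file (note dump, not dumps, for file objects)...
with open(filename, 'wb') as f:
    pickle.dump(data, f)

# ...and de-serialize a copy back from it.
with open(filename, 'rb') as f:
    copy = pickle.load(f)

print(copy == data)   # True: equal contents
print(copy is data)   # False: a brand-new object
```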
I have successfully replaced 2 x 320GB disks with 2 x 1TB and re-synced /dev/md0 & /dev/md1.

"sudo mdadm --grow /dev/md0 --size=max" results in the error "mdadm: component size of /dev/md0 unchanged at 304686016K"

How can I grow /dev/md0 to the full 1TB? Output from fdisk -l & cat /proc/mdstat follows.

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bccd9

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   609374207   304686080   fd  Linux RAID autodetect
/dev/sda2       609374208   624998399     7812096   fd  Linux RAID autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000baab1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048   609374207   304686080   fd  Linux RAID autodetect
/dev/sdb2       609374208   624998399     7812096   fd  Linux RAID autodetect

Disk /dev/md1: 7999 MB, 7999520768 bytes
2 heads, 4 sectors/track, 1953008 cylinders, total 15624064 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 312.0 GB, 311998480384 bytes
2 heads, 4 sectors/track, 76171504 cylinders, total 609372032 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

mick@mick-desktop:~/Desktop$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
      304686016 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      7812032 blocks [2/2] [UU]

unused devices: <none>
I'm trying to install pip or setuptools for Python 3.2 on Debian 6.

First case: apt-get install python3-pip ... OK

python3 easy_install.py webob

Searching for webob
Reading http://pypi.python.org/simple/webob/
Reading http://webob.org/
Reading http://pythonpaste.org/webob/
Best match: WebOb 1.2.2
Downloading http://pypi.python.org/packages/source/W/WebOb/WebOb-1.2.2.zip#md5=de0f371b46554709ce5b93c088a11cae
Processing WebOb-1.2.2.zip
Traceback (most recent call last):
  File "easy_install.py", line 5, in <module>
    main()
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1931, in main
    with_ei_usage(lambda:
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1912, in with_ei_usage
    return f()
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1935, in <lambda>
    distclass=DistributionWithoutHelpCommands, **kw
  File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands
    self.run_command(cmd)
  File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 368, in run
    self.easy_install(spec, not self.no_deps)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 608, in easy_install
    return self.install_item(spec, dist.location, tmpdir, deps)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 638, in install_item
    dists = self.install_eggs(spec, download, tmpdir)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 799, in install_eggs
    unpack_archive(dist_filename, tmpdir, self.unpack_progress)
  File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 67, in unpack_archive
    driver(filename, extract_dir, progress_filter)
  File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 154, in unpack_zipfile
    data = z.read(info.filename)
  File "/usr/local/lib/python3.2/zipfile.py", line 891, in read
    with self.open(name, "r", pwd) as fp:
  File "/usr/local/lib/python3.2/zipfile.py", line 980, in open
    close_fileobj=not self._filePassed)
  File "/usr/local/lib/python3.2/zipfile.py", line 489, in __init__
    self._decompressor = zlib.decompressobj(-15)
AttributeError: 'NoneType' object has no attribute 'decompressobj'

Second case: from http://pypi.python.org/pypi/distribute#installation-instructions

python3 distribute_setup.py

Downloading http://pypi.python.org/packages/source/d/distribute/distribute-0.6.28.tar.gz
Extracting in /tmp/tmpv6iei2
Traceback (most recent call last):
  File "distribute_setup.py", line 515, in <module>
    main(sys.argv[1:])
  File "distribute_setup.py", line 511, in main
    _install(tarball, _build_install_args(argv))
  File "distribute_setup.py", line 73, in _install
    tar = tarfile.open(tarball)
  File "/usr/local/lib/python3.2/tarfile.py", line 1746, in open
    raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully

Third case: from http://pypi.python.org/pypi/distribute#installation-instructions

tar -xzvf distribute-0.6.28.tar.gz
cd distribute-0.6.28
python3 setup.py install

Before installing, it bootstraps.
Scanning installed packages
No setuptools distribution found
running install
running bdist_egg
running egg_info
writing distribute.egg-info/PKG-INFO
writing top-level names to distribute.egg-info/top_level.txt
writing dependency_links to distribute.egg-info/dependency_links.txt
writing entry points to distribute.egg-info/entry_points.txt
reading manifest file 'distribute.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'distribute.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
copying distribute.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying distribute.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying distribute.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying distribute.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying distribute.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
creating 'dist/distribute-0.6.28-py3.2.egg' and adding 'build/bdist.linux-x86_64/egg' to it
Traceback (most recent call last):
  File "setup.py", line 220, in <module>
    scripts = scripts,
  File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands
    self.run_command(cmd)
  File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command
    cmd_obj.run()
  File "build/src/setuptools/command/install.py", line 73, in run
    self.do_egg_install()
  File "build/src/setuptools/command/install.py", line 93, in do_egg_install
    self.run_command('bdist_egg')
  File "/usr/local/lib/python3.2/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command
    cmd_obj.run()
  File "build/src/setuptools/command/bdist_egg.py", line 241, in run
    dry_run=self.dry_run, mode=self.gen_header())
  File "build/src/setuptools/command/bdist_egg.py", line 542, in make_zipfile
    z = zipfile.ZipFile(zip_filename, mode, compression=compression)
  File "/usr/local/lib/python3.2/zipfile.py", line 689, in __init__
    "Compression requires the (missing) zlib module")
RuntimeError: Compression requires the (missing) zlib module

zlib1g-dev is installed. Help me please!
My final goal is to import some data from Google Sites pages. I'm trying to use gdata-python-client (v2.0.17) to download a specific content feed:

self.client = gdata.sites.client.SitesClient(source=SOURCE_APP_NAME)
self.client.client_login(USERNAME, PASSWORD, source=SOURCE_APP_NAME, service=self.client.auth_service)
self.client.site = SITE
self.client.domain = DOMAIN

uri = '%s?path=%s' % (self.client.MakeContentFeedUri(), '[PAGE PATH]')
feed = self.client.GetContentFeed(uri=uri)
entry = feed.entry[0]
...

The resulting entry.content holds the page content in XHTML format, but the tree doesn't contain any plain text data from the page, only the HTML page structure and links. For example, my test page has <div>Some text</div>, and the ContentFeed entry has only the div node, with text=None. I have debugged the gdata-python-client request/response and checked the raw data returned from the server: there is no plain text data in the content. So it looks like a Google API bug. Maybe there is some workaround? Maybe I can use some common request parameter? What's going wrong here?
Munkeymon posted:
Sometimes you just have to try even harder to get your Annonces I guess vv

Turns out $photoList is a global variable that gets declared and transformed inside getAnnonces(). In fact, it is returned by the function, so all you'd think this does is assign the variable to itself for some reason. Well, it would, if the original programmer had thought of declaring it as global outside of the function's scope. $photoList exists as a global variable inside the function, but has a local scope outside of it. If this isn't some kind of meta-obfuscation I don't know what it is.

# ? Jan 11, 2010 17:07

Can somebody explain to me why my old code:

code:
enum Foo { FOO_1, FOO_2 };

string GetInfo(Foo foo){
    switch(foo) {
        case FOO_1: return foo1;
        case FOO_2: return foo2;
    }
}

was changed to this:

code:
string GetInfo(Foo foo){
    stringstream stream;
    switch(foo) {
        case FOO_1: stream << this->foo1; break;
        case FOO_2: stream << this->foo2; break;
        default: return "";
    }
    return stream.str();
}

# ? Jan 13, 2010 23:17

How were foo1 and foo2 declared? Dynamically? Maybe someone's trying to avoid a race condition or memory leak?

# ? Jan 13, 2010 23:51

jandrese posted:
How were foo1 and foo2 declared? Dynamically? Maybe someone's trying to avoid a race condition or memory leak?

How the hell do you declare something dynamically

# ? Jan 14, 2010 00:10

Vanadium posted:
How the hell do you declare something dynamically

Well you just hardcode it

# ? Jan 14, 2010 02:16

Vanadium posted:
How the hell do you declare something dynamically

code:
#include </dev/tty>

# ? Jan 14, 2010 02:22

code:
mr_jim@home:~/tmp/test$ cat tty.c
#include </dev/tty>
mr_jim@home:~/tmp/test$ gcc tty.c
#include <stdio.h>
int main(void)
{
    printf("holy poo poo\n");
    return 0;
}
mr_jim@home:~/tmp/test$ ./a.out
holy poo poo
mr_jim@home:~/tmp/test$

# ? Jan 14, 2010 02:33

jandrese posted:
How were foo1 and foo2 declared? Dynamically? Maybe someone's trying to avoid a race condition or memory leak?
No, they're just plain old attributes. There was absolutely nothing wrong with the original code; it was apparently just not complicated enough

# ? Jan 14, 2010 02:36

mr_jim posted:

# ? Jan 14, 2010 03:56

How did I not know that? Are there any other neat tricks in a similar vein to that?

tripwire fucked around with this message at Jan 14, 2010 around 04:00

# ? Jan 14, 2010 03:56

mr_jim posted:

# ? Jan 14, 2010 04:28

tripwire posted:
How did I not know that? Are there any other neat tricks in a similar vein to that?

Cut out the middle man:

code:
echo "#include </dev/tty>" | gcc -x c - && ./a.out && rm ./a.out

# ? Jan 14, 2010 04:56

mr_jim posted:
Cut out the middle man:

Wow, that's insane! It's almost like having an interactive C REPL console for loving around with. I could see that coming in handy for just testing snippets of code without bothering to build and make a big project.

# ? Jan 14, 2010 04:59

tripwire posted:
Wow, that's insane! It's almost like having an interactive C REPL console for loving around with. I could see that coming in handy for just testing snippets of code without bothering to build and make a big project.

Make a big project? Just edit test.c with a text editor (ever tried actually typing code into a terminal? ugh), compile, and run it.

# ? Jan 14, 2010 05:02

code:
#!/bin/sh
echo "This is not a REPL. Press ctrl-d to compile and run, ctrl-c to exit."
echo -n "> "
while true ; do
    echo "#include </dev/tty>" | gcc -x c - && ./a.out && rm ./a.out
    echo -n "> "
done

edit: No, I'm not. If you add "-ldl" to the gcc command, you never need worry about using a dynamic library function:

code:
This is not a REPL. Press ctrl-d to compile and run, ctrl-c to exit.
> #include <stdio.h>
#include <math.h>
#include <dlfcn.h>

typedef double (*math_funcp)(double);

int main(void)
{
    void *libm = dlopen("/usr/lib/libm.so", RTLD_LAZY);
    void *initializer = dlsym(libm, "sin");
    math_funcp sin_func = *((math_funcp*)(&initializer));
    printf("sin(0.6) = %f\n", (*sin_func)(0.6));
    dlclose(libm);
    return 0;
}
sin(0.6) = 0.564642
>

mr_jim fucked around with this message at Jan 14, 2010 around 05:33

# ? Jan 14, 2010 05:12

code:
rows = gp.SearchCursor(A_Layer, The_SQL_Query_For_This_Layer)
row = rows.next()
# CHeck if Row is NULL, seems turning a row into a string, that is Null will print "None"
if str(row) != "None":
    <do something>
else:
    # Free the License!
    del gp
    sys.exit("Error Bad Feature layer")

Now whats the biggest WTF here?

# ? Jan 14, 2010 05:44

Perhaps a small change is called for

code:
#!/bin/sh
echo "This is not a REPL. Press ctrl-d to compile and run, ctrl-c to exit."
echo -n "> "
while true ; do
    echo '#include <stdlib.h>
#include <stdio.h>
#include <math.h>
int main () {
#include </dev/tty>
}' | gcc -lm -x c - && ./a.out && rm ./a.out
    echo -n "> "
done

# ? Jan 14, 2010 09:49

tripwire posted:
Wow, thats insane! It's almost like having an interactive c REPL console for loving around with. I could see that coming in handy for just testing snippets of code without bothering to build and make a big project.

http://neugierig.org/software/c-repl/

*20 minutes of haranguing haskell, and I now get this*

code:
$ c-repl
c-repl: a C read-eval-print loop.
enter '.h' at the prompt for help.
int x
error: In file included from /usr/include/stdio.h:906,
                 from <stdin>:1:
/usr/include/bits/stdio2.h: In function 'int sprintf(char*, const char*, ...)':
/usr/include/bits/stdio2.h:35: error: '__builtin_va_arg_pack' was not declared in this scope
....

tef fucked around with this message at Jan 14, 2010 around 12:16

# ? Jan 14, 2010 11:44

http://root.cern.ch/drupal/content/cint

I have no idea how good or bad it is.

# ?
Jan 14, 2010 13:43

Zombywuf posted:
Perhaps a small change is called for

Well, if you need that much hand-holding:

code:
#!/bin/bash
# snarepl - Still Not A Read-Eval-Print Loop.

if [ -z "$1" ]
then
    srcfile=`mktemp -u`
    trap "rm -f $srcfile; rm -f $outfile; echo; stty echo; exit" INT
    echo "Editing a temporary file, which will be deleted upon exiting."
    read -s -p "Press enter to continue, or ctrl-c to exit. "
    echo
else
    srcfile=$1
    trap "rm -f $outfile; echo; stty echo; exit" INT
fi

if [ ! -e $srcfile ] ; then
    echo '#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    return 0;
}
' > $srcfile
fi

outfile=`mktemp -u`

while (true) ; do
    echo editing $srcfile
    oldtime=`stat -c %Y $srcfile`
    vim -c "set filetype=c" $srcfile +6
    newtime=`stat -c %Y $srcfile`
    if [ $newtime -gt $oldtime ]
    then
        echo compiling $srcfile
        gcc -x c -o $outfile $srcfile
    else
        echo no changes
    fi
    if [ -x $outfile ]
    then
        echo running:
        echo ========
        $outfile
        echo ========
    fi
    read -s -p "Press enter to continue, or ctrl-c to exit. "
done

mr_jim fucked around with this message at Jan 14, 2010 around 17:15

# ? Jan 14, 2010 16:02

quote:
CINT is written in C++ itself, with slightly less than 400,000 lines of code.

# ? Jan 14, 2010 16:22

If you really want something that approaches a REPL for C++, why not go the whole distance and turn it into an IRC bot.

# ? Jan 14, 2010 18:23

Vanadium posted:
If you really want something that approaches a REPL for C++, why not go the whole distance and turn it into an IRC bot.

Well that would just be silly.

VV no, I got it.

mr_jim fucked around with this message at Jan 14, 2010 around 18:42

# ? Jan 14, 2010 18:34

mr_jim posted:
Well that would just be silly.

*whoosh*

# ? Jan 14, 2010 18:42

Dear former intern: Just because .NET allows you to pass any object in the sender field of the event does not mean you should use it to pass any arbitrary data. The first Google result for "C# Custom Event" shows you exactly how to pass in custom event args.
Also, if your dialog only contains an OK and a Cancel button, you don't need to fire an event when OK is pressed. Use the DialogResult instead.

# ? Jan 14, 2010 19:33

Mustach posted:
http://root.cern.ch/drupal/content/cint

I had to use it for a brief period of time. It was interesting...

# ? Jan 15, 2010 01:16

http://netbeans.org/bugzilla/show_bug.cgi?id=167395

Synopsis:
User: Your singleton has a public constructor and multiple instances of it are made.
Dev: Oh, it is not so bad, the extra instances of the singleton are thrown away.
User: Uh...

# ? Jan 23, 2010 12:25

Ryouga Inverse posted:
How are you going to deal with the corpus of already-existing code?

Re-write it when it fails or needs modification. Probably end up with something neat like MUMPS-Linq.

# ? Jan 23, 2010 13:29

code:
int EntrySortFunc(const void *pEl1, const void *pEl2)
{
    Entry *pEntry1 = *(Entry * const *) pEl1,
          *pEntry2 = *(Entry * const *) pEl2;
    [...]
    // sort folders before files
    bool fS1, fS2;
    if (!(fS1=!pEntry1->GetIsFolder()) != !true != !(fS2=!pEntry2->GetIsFolder()))
        return fS1-fS2;

# ? Jan 24, 2010 23:18

Vanadium posted:
!coherent

# ? Jan 24, 2010 23:54

sund posted:
!coherent

I think you mean !(!coherent != !true).

# ? Jan 25, 2010 02:57

Vanadium posted:
maybe he doesn't know that == tests for equality?

# ? Jan 25, 2010 04:34

MasterSlowPoke posted:
maybe he doesn't know that == tests for equality?

Yeah, but != tests for !equality. So..... !?

# ? Jan 25, 2010 05:10

Long ago in college he spent hours debugging a program where he'd mistakenly typed = instead of == in a condition. Never again.

# ? Jan 25, 2010 05:33

YeOldeButchere posted:
Long ago in college he spent hours debugging a program where he'd mistakenly typed = instead of == in a condition.

Gotta love new age compilers eh? poo poo, even codesense/intellisense will pick that up nowadays.

# ? Jan 25, 2010 11:39

Yakattak posted:
Gotta love new age compilers eh? poo poo, even codesense/intellisense will pick that up nowadays.
Clearly the problem here was a bad compiler...

# ? Jan 25, 2010 11:57

Nippashish posted:
I think you mean !(!coherent != !true).

That has to be programming by permutation. I refuse to believe anyone deliberately writes code like that.

# ? Jan 25, 2010 13:59

TRex EaterofCars posted:
That has to be programming by permutation. I refuse to believe anyone deliberately writes code like that.

law goddamnit, and you better follow it!

# ? Jan 25, 2010 18:49

I've got no code to show, but I've just found out about this bug: a coworker of mine was saving hours in a database and chose to use the complete timestamp format (YYYY:MM:DD HH:mm:SS). However, he built the date using mktime() before sending it to the database. When there are parameters missing, mktime() completes them with the local date and time. For some reason, this made it so the main site would show an event's hour as right or wrong depending on the time of day it was saved in the admin panel. I still don't really get how or why this all happens, but it made my day hell.

# ? Jan 25, 2010 20:58

code:
_CREATE_GP_DEFS = {
    '9.0' : (lambda x = (lambda : __import__('win32com.client')) : win32com.client.Dispatch("esriGeoprocessing.GpDispatch.1")),
    '9.1' : (lambda x = (lambda : __import__('win32com.client')) : win32com.client.Dispatch("esriGeoprocessing.GpDispatch.1")),
    '9.2' : (lambda x = (lambda : __import__('arcgisscripting')) : arcgisscripting.create()),
    '9.3' : (lambda x = (lambda : __import__('arcgisscripting')) : arcgisscripting.create(9.3))
}

# ? Jan 26, 2010 04:30
Jx7
Re: [script] Downloading many Canal+ daily shows (continued)

It looks like you have a write-permissions problem.

Offline

Jx7
Re: [script] Downloading many Canal+ daily shows (continued)

Where did you put the script?

Offline

Jx7
Re: [script] Downloading many Canal+ daily shows (continued)

And the permissions on the directory and on the script? Go to your home directory and run the following commands to be sure:

chmod +w ./cplus
chmod +x ./cplus/cplus.py

Offline

Jx7
Re: [script] Downloading many Canal+ daily shows (continued)

Changelog:
# v0.5
# 19/11/2012: Optimizations
# 2 modes: verbose and test
#
# Syntax: cplus.py [--verbose] [--test]
#
# verbose: prints the info on screen in addition to the logs
# test: does not start the downloads, and therefore does not update the history file
#
# The uncommented shows are the ones I have tested and whose downloads work
# The others remain to be tested (including the position of the date)

Last edited by Jx7 (20/11/2012, 00:07)

Offline

Xun
Re: [script] Downloading many Canal+ daily shows (continued)

I just tried it on my Ubuntu laptop, everything works normally, it's great! It's downloading as we speak.

EDIT: When I want to download Le Grand Journal, it downloads a first time and creates a file. But then, when downloading another file of the same show, it downloads it under the same name as the previous file, so I lose the first one. Do you see the problem?

Last edited by Xun (20/11/2012, 08:25)

Offline

Jx7
Re: [script] Downloading many Canal+ daily shows (continued)

Indeed, some shows seem to be split into several parts. I made a v0.7 that should handle this.

v0.7 20/11/2012: Added a parameter to the syntax to choose the video quality without having to edit the script.
Added explanations at the top of the script on how to use it.
Added handling of shows split into several parts (note: all the parts must be downloaded with a single command; the script will not handle the parts correctly if some of them have already been downloaded).

Still at the same address: http://pastebin.com/U0WuqhrK

Offline

L3R4F
Re: [script] Downloading many Canal+ daily shows (continued)

L3R4F wrote:
Does anyone have the xml number to download the Maïténa supplement?

http://service.canal-plus.com/video/res … cplus/1063 for the complete shows.
http://service.canal-plus.com/video/res … cplus/1059 for the segments.

Great, thanks! I had stopped at the first 500 xml files

Offline

Jx7
Re: [script] Downloading many Canal+ daily shows (continued)

S00000 wrote:
L3R4F wrote:
Does anyone have the xml number to download the Maïténa supplement?

http://service.canal-plus.com/video/res … cplus/1063 for the complete shows.
http://service.canal-plus.com/video/res … cplus/1059 for the segments.

Great, thanks! I had stopped at the first 500 xml files

I posted an Excel file with all the shows a bit further up...

Offline

aspu
Re: [script] Downloading many Canal+ daily shows (continued)

Without getting into scripts and command lines, I installed flvstreamer and Rtmpdump, then installed: http://olaf.10.free.fr/urecorder/urecor … -1_all.deb

It works like a charm!

Offline

Jx7
Re: [script] Downloading many Canal+ daily shows (continued)

Without getting into scripts and command lines, I installed flvstreamer and Rtmpdump, then installed: http://olaf.10.free.fr/urecorder/urecor … -1_all.deb

It works like a charm!

Apparently Urecorder is INCOMPATIBLE with Ubuntu 12.04; to be checked with the other versions... Can a download be started with a single command line?
The goal being to automate it

Last edited by Jx7 (26/11/2012, 01:36)

Offline

Xun
Re: [script] Downloading many Canal+ daily shows (continued)

Hi! A little feedback on the update of your script. It took me a while to run new tests, busy period. Apparently shows in several parts are handled correctly (I'm talking about Le Grand Journal), but it got me thinking: it would be nice to generate a playlist (a text file with an m3u or similar extension) per show date. Why not?

Thanks again for maintaining this script.

Offline

jlemaire
Re: [script] Downloading many Canal+ daily shows (continued)

Excuse my ignorance, but I don't understand how to use this script. Is it Python? Is there someone who can point newcomers in the right direction to get it working? Thanks for your help!

Offline

Xun
Re: [script] Downloading many Canal+ daily shows (continued)

Hi,

It is indeed Python. You do, however, need to modify it before you can run it, namely the following lines:

# Directories (to be created beforehand)
HomeDir = os.path.expanduser('/home/hope/Downloads/Canal+Replay/canal')
DownloadsDir = HomeDir + "/downloads"

And you need to create the LogFile, HistoryFile and UnwantedFile files:

# Files
LogFile = HomeDir + "/cplus_log"
HistoryFile = HomeDir + "/cplus_history"
UnwantedFile = HomeDir + "/cplus_unwanted"

Since I have a Raspberry Pi, I wanted to automate the daily execution of this script.
The trouble is, I get these errors:

[hope@Hope Canal+Replay]$ python2 canal.py
Traceback (most recent call last):
  File "canal.py", line 404, in <module>
    r = Execute(params, target_)
  File "canal.py", line 171, in Execute
    p = subprocess.Popen(_Params, stdout=_File)
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

despite plenty of 'chmod 777' and chown attempts... I am in the /home/hope/Downloads/Canal+Replay directory, which contains the canal directory and canal.py...

Jx7
Re: [script] Download many Canal+ daily shows (continued)
Odd... didn't you say it worked on another Linux machine? Couldn't it be the '+' in the directory name? I'll look into the .m3u files ;-) About the script parameters you mention: I should indeed move them out into a config file, that would be cleaner.

jlemaire
Re: [script] Download many Canal+ daily shows (continued)
Thanks, I managed to get the script working. However, it doesn't seem to work for Le Meilleur Du Hier. Any idea? Last edited by jlemaire (16/12/2012 at 16:47)

Jx7
Re: [script] Download many Canal+ daily shows (continued)
jlemaire wrote: Thanks, I managed to get the script working. However, it doesn't seem to work for Le Meilleur Du Hier. Any idea?
Did you fill in that show's details? The script contains the list of Canal+ shows (or at least part of it), and the header states that you must provide: the show ID, the position of the date, the show name, and the expected file name. I didn't fill in every field for every Canal+ show; I only entered the show names and their supposed IDs.
For now, everyone has to fill in those fields for the shows they are interested in, pending an evolution of the script. In your case, the values should be the following:

Show ID: 215

Then, since that show's files look like this on the site: LE_MEILLEUR_DU_HIER_EMISSION_121123_CAN_297682_video_HD.mp4, we should have:

Date position: 5 (0: LE, 1: MEILLEUR, 2: DU, 3: HIER, 4: EMISSION and 5: 121123, hence the date)
Show name: "Le meilleur du hier"
Expected file name: "LE_MEILLEUR_DU_HIER"

So you have to replace the line:

#ShowList.append([215,4,"Le meilleur du hier"]) # 63 107 110 112 ?

with:

ShowList.append([215,5,"Le meilleur du hier","LE_MEILLEUR_DU_HIER"]) # 63 107 110 112 ?

aspu
Re: [script] Download many Canal+ daily shows (continued)
Feedback after a few days of use! Since all I wanted to download was the press review by the two nutcases on Canal+'s Petit Journal, it took me a while to explore the menus further. For the other channels it does not work: I end up with a file that downloads instantly but is empty.

aspu
Re: [script] Download many Canal+ daily shows (continued)
aspu wrote: Without getting into scripts and command lines, I installed flvstreamer and Rtmpdump, then installed: http://olaf.10.free.fr/urecorder/urecor … -1_all.deb It works like a charm!
Apparently Urecorder is INCOMPATIBLE with Ubuntu 12.04; remains to be seen with the other releases... Can a download be started with a single command line? The goal being to automate it. I'm on 11.04...
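The "date position" convention described in the post above is simply an index into the underscore-separated tokens of the file name. A minimal sketch of that logic (the helper name is made up; the file name is the one quoted in the post):

```python
# Sketch: extract the broadcast date from a Canal+ file name by its
# position among the underscore-separated tokens, as described above.
def extract_date(filename, date_pos):
    tokens = filename.split('_')  # ['LE', 'MEILLEUR', 'DU', 'HIER', 'EMISSION', '121123', ...]
    return tokens[date_pos]

name = "LE_MEILLEUR_DU_HIER_EMISSION_121123_CAN_297682_video_HD.mp4"
date = extract_date(name, 5)  # the 6th token, '121123', i.e. YYMMDD
```

This is why getting the position wrong (4 instead of 5 in the commented-out line above) makes the script pick up "EMISSION" instead of the date.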
urustu
Re: [script] Download many Canal+ daily shows (continued)
The graphical version, with CANAL+, ARTE, M6 and W9: http://olaf.10.free.fr/urecorder/urecor … -1_all.deb (version 0.9-1 by olaf.10 - 22/08/2010)
Hello, I attempted the install but here is the error message:

dpkg: error processing /tmp/urecorder_0.9-1_all.deb (--install):
 parsing file '/var/lib/dpkg/tmp.ci/control' near line 11 package 'urecorder':
 blank line in value of field 'Description'

Can/will this program also fetch the D8/D17 shows?
HP Pavilion 17-e027sf | CPU AMD A4-5000 aka Kabini | Radeon HD 8330 graphics | Realtek RTL8188EE Wi-Fi > Ubuntu Precise 12.04.4 LTS | Trusty 14.04 LTS | Fedora 20 | Mint 17 | Samsung 300E7A nVidia Optimus > Ubuntu Precise 12.04 LTS

arthurd
Re: [script] Download many Canal+ daily shows (continued)
Xun: I think you are missing the flvstreamer package; that is what gave me this error on my Arch box. (To track it down: python2 -m pdb cplus.py to debug, type cont at the start to run the script, then when it crashes you drop into post-mortem mode and can walk back up the call stack with "up" until you reach the spot in the script that failed, then print the _Params variable with "p _Params"; it starts with "flvstreamer…" and reveals the problem.)

Xun
Re: [script] Download many Canal+ daily shows (continued)
That may well be it... But flvstreamer doesn't seem to be available for the ARM architecture, from what I've seen... I wanted my Raspberry Pi to download the videos...
[hope@Hope canalReplay]$ python2 -m pdb cplus.py
> /home/hope/Downloads/canalReplay/cplus.py(73)<module>()
-> import os, urllib, subprocess, time, sys
(Pdb) cont
Traceback (most recent call last):
  File "/usr/lib/python2.7/pdb.py", line 1314, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python2.7/pdb.py", line 1233, in _runscript
    self.run(statement)
  File "/usr/lib/python2.7/bdb.py", line 387, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
  File "cplus.py", line 73, in <module>
    import os, urllib, subprocess, time, sys
  File "cplus.py", line 171, in Execute
    p = subprocess.Popen(_Params,stdout=_File)
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /usr/lib/python2.7/subprocess.py(1249)_execute_child()
-> raise child_exception
(Pdb) postmortem
*** NameError: name 'postmortem' is not defined
(Pdb) up
> /usr/lib/python2.7/subprocess.py(679)__init__()
-> errread, errwrite)
(Pdb) post_mortem
*** NameError: name 'post_mortem' is not defined
(Pdb) pm
*** NameError: name 'pm' is not defined

mulder29
Re: [script] Download many Canal+ daily shows (continued)
Oh, I see, so this is where one should come to ask about extracting videos from Canal Replay? Is Python required, or is the terminal/console enough?
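The OSError: [Errno 2] in the traceback above comes from subprocess.Popen itself, not from file permissions: it is raised when the executable being launched (here, most likely flvstreamer, as arthurd suggested) is not installed or not on the PATH, so chmod/chown will not help. A minimal sketch of the failure mode (the command name is deliberately bogus):

```python
# Sketch: Popen raises OSError with errno 2 ("No such file or directory")
# when the *program* cannot be found, regardless of directory permissions.
import errno
import subprocess

try:
    subprocess.Popen(['no-such-program-xyz', '--version'])
    missing = False
except OSError as exc:  # FileNotFoundError in Python 3
    missing = (exc.errno == errno.ENOENT)
```

Checking that the first element of _Params (the program name) resolves to an installed binary is the quickest way to confirm this diagnosis.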
Edit: https://github.com/agiliq/merchant/blob/master/billing/gateways/authorize_net_gateway.py looks pretty nice, haven't tried it yet.

Edit: [For the next project I have that uses authorize.net, I'm going to take a close look at: http://github.com/zen4ever/django-authorizenet It looks pretty nice. I don't think that it has support for recurring payments though.]

In the past I have made little one-off implementations. For a simple POST to the AIM payment gateway, you can use something like this:

import urllib, urllib2

URL = 'https://test.authorize.net/gateway/transact.dll'
API = {'x_login': 'XXX', 'x_tran_key': 'XXX', 'x_method': 'CC',
       'x_type': 'AUTH_ONLY', 'x_delim_data': 'TRUE',
       'x_duplicate_window': '10', 'x_delim_char': '|',
       'x_relay_response': 'FALSE', 'x_version': '3.1'}

def call_auth(amount, card_num, exp_date, card_code, zip_code, request_ip=None):
    '''Call authorize.net and get a pipe-delimited result string back'''
    payment_post = dict(API)  # copy, so we don't mutate the shared template
    payment_post['x_amount'] = amount
    payment_post['x_card_num'] = card_num
    payment_post['x_exp_date'] = exp_date
    payment_post['x_card_code'] = card_code
    payment_post['x_zip'] = zip_code
    payment_request = urllib2.Request(URL, urllib.urlencode(payment_post))
    return urllib2.urlopen(payment_request).read()

def call_capture(trans_id):
    '''Capture a previously authorized transaction.
    trans_id is r.split('|')[6] from the auth response.'''
    capture_post = dict(API)  # again, work on a copy
    capture_post['x_type'] = 'PRIOR_AUTH_CAPTURE'
    capture_post['x_trans_id'] = trans_id
    capture_request = urllib2.Request(URL, urllib.urlencode(capture_post))
    return urllib2.urlopen(capture_request).read()

To authorize, you do something like:

r = authorize.call_auth(
    unicode(decimal_total),
    request.POST.get('card_num'),
    request.POST.get('exp_date'),
    request.POST.get('card_code'),
    request.POST.get('zip_code') if request.POST.get('zip_code') else address.zip_code,
)
if r.split('|')[0] == '1':
    # it's good, we have authorized the card...
else:
    error = "%s Please try again."
% (r.split('|')[3])

Then we can capture:

r = authorize.call_capture(trans_id)  # trans_id is r.split('|')[6] from the first response
if r.split('|')[0] == '1':
    # we captured it.
else:
    error = r.split('|')[3]

There are more options, ways to request, and nuances in the response to parse... I assume, because the A in AIM stands for "advanced", that all of the authorize.net options are available: http://developer.authorize.net/guides/AIM/

I know that your question is which lib is best... well, it might be easiest just to implement your own little bit of ad-hoc request and response handling for your specific requirements, rather than trying to trawl through an API on top of an API.
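The pipe-delimited response handled above can be wrapped in a tiny parser so the magic indexes live in one place. A sketch, assuming the field positions used in the snippets above (index 0: response code, index 3: message text, index 6: transaction id); the sample string is made up:

```python
# Minimal sketch: parse an AIM-style pipe-delimited response string into
# a dict, using the same field positions as the inline r.split('|') calls.
def parse_aim_response(raw, delim='|'):
    fields = raw.split(delim)
    return {
        'approved': fields[0] == '1',   # '1' means approved
        'message': fields[3],           # human-readable response text
        'trans_id': fields[6],          # needed later for PRIOR_AUTH_CAPTURE
    }

# Hypothetical sample response (not real gateway output):
raw = '1|1|1|This transaction has been approved.|XXXXXX|Y|2000000001|...'
result = parse_aim_response(raw)
```

This keeps the view code free of bare `r.split('|')[n]` expressions and makes the capture call read as `call_capture(result['trans_id'])`.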
I am trying to write my very first Python script. It was working, but after some slight refactoring I have apparently broken the indentation, and I cannot work out what the problem is. The interpreter complains about the following method. Can someone point it out?

def dataReceived(self, data):
    a = data.split(':')
    print a
    if len(a) > 1:
        command = a[0]
        content = a[1]
        msg = ""
        if command == "iam":
            self.name = content
            msg = self.name + " has joined"
        elif command == "msg":
            msg = self.name + ": " + content
        print msg

The error reads:

  File "python_server.py", line 17
    a = data.split(':')
    ^
IndentationError: expected an indented block
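An IndentationError on a line that looks correctly indented usually means the file mixes tabs and spaces, which editors display identically. This is not the poster's actual file, just a sketch showing how the compiler rejects an indentation mix (Python 3 raises TabError, a subclass of IndentationError; Python 2 trips over the resulting indent jump):

```python
# A tab-indented line following a space-indented line in the same block.
# The \t below is a literal tab; most editors render both lines the same.
source = "def f():\n    a = 1\n\tb = 2\n"

try:
    compile(source, "<snippet>", "exec")
    mixed_ok = True
except (IndentationError, SyntaxError):
    mixed_ok = False  # the mix is rejected at compile time
```

Re-indenting the whole function with spaces only (or running `python -tt script.py` on Python 2 to flag the mix) is the usual fix.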
In the logging HOWTO documentation there is this example:

import logging

# create logger
logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)

# create console handler and set level to debug
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)

# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# add formatter to ch
ch.setFormatter(formatter)

# add ch to logger
logger.addHandler(ch)

Why should I set the level to logging.DEBUG twice, once for the logger and once for the stream handler? I understand that ch.setLevel(logging.DEBUG) sets the debug level for the stream handler, but what is the effect of setting the level on the logger? Where is that level reflected? I get the same console output if I swap the levels, for example to INFO, between the logger and the stream handler. That is:

...........
logger.setLevel(logging.INFO)
............
ch.setLevel(logging.DEBUG)

gives the same output in the console as:

...........
logger.setLevel(logging.DEBUG)
............
ch.setLevel(logging.INFO)
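The behaviour in the question follows from both levels acting as filters in sequence: a record must first clear the logger's level, then each handler's level. With a single handler, the effective threshold is whichever of the two levels is stricter, which is why swapping INFO and DEBUG between them gives identical console output; the difference only shows up with several handlers at different levels. A small sketch capturing the output in a string instead of the console:

```python
# Sketch: the logger's level filters first, then the handler's level.
# A record must clear BOTH thresholds to be emitted.
import logging
from io import StringIO

stream = StringIO()
logger = logging.getLogger("level_demo")
logger.setLevel(logging.INFO)        # logger threshold: drops DEBUG records

handler = logging.StreamHandler(stream)
handler.setLevel(logging.DEBUG)      # handler threshold: would allow DEBUG
logger.addHandler(handler)

logger.debug("dropped by the logger")   # never reaches the handler
logger.info("passes both levels")       # emitted

output = stream.getvalue()
```

Attach a second handler at WARNING level and the asymmetry becomes visible: the INFO record reaches the DEBUG handler but not the WARNING one.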
Please consider using this website to upload your scripts: It's a bit more organized and allows for easier browsing/uploading/viewing/downloading. If you have any comments/suggestions on that site, please use the feedback link. Eventually the scriptshare website's functionality will be incorporated into pnotepad.org, but for now it exists on a separate domain. If you've already uploaded scripts to this page, consider registering on the scriptshare site and re-uploading your script there. That will allow for things like comments, sorting, etc.

This wiki page is now obsolete

This page served as a temporary home for user-submitted scripts. If you are adding a new script, please consider using the http://scriptshare.rocketmonkeys.com/ site instead. This page is now obsolete.

* Please do not modify someone else's script. If you modify someone else's script, repost it under your name or send your changes to the original author so they can update it. Otherwise it's chaos... chaos!

Please follow this template:

### [author's username] - [script title]
[description]
[script contents]

This is modified from the SortLines() script by Scott (wischeese). It sorts lines and removes any duplicates. Case sensitive.

import pn
import scintilla
import string  # needed for string.join below
from pypn.decorators import script

@script("Sort Lines (No Duplicates)", "Text")
def SortLinesNoDuplicates():
    """ Sort Lines, Remove Duplicates (Modified by jumpfroggy from wischeese's "SortLines" script) """
    editor = scintilla.Scintilla(pn.CurrentDoc())
    editor.BeginUndoAction()
    lsSelection = editor.GetTextRange(editor.SelectionStart, editor.SelectionEnd)
    laLines = lsSelection.splitlines(0)
    laLines.sort()
    # Filter out duplicate lines
    laLines2 = []
    for line in laLines:
        if line not in laLines2:
            laLines2.append(line)
    lsReplace = string.join(laLines2, '\r\n')
    editor.ReplaceSel(lsReplace)
    editor.EndUndoAction()

ClipStack is a Programmer's Notepad clipboard stack. It allows the user to select text and copy it; the copied text gets pushed onto a stack.
Later this text can be pasted, which pops it off the stack. So, for example, if the user copies item A, then item B, then item C, all using ClipStack, they can paste the items back in the order C, B, A. ClipStack does copy the text into the clipboard, so you can paste the current selection into another application, but doing so will not pop an item off the stack. When pasting with ClipStack, the current contents of the clipboard are replaced by the item on top of the stack, so anything currently in the clipboard is lost. What ClipStack does not do is push the current clipboard contents onto the stack. ClipStack could be improved by adding a clipboard-aware Python module that would let the current clipboard act as the top item of the stack; that would allow the snippet to interact better with other applications.

###############################################################################
## ClipStack.py -- A clipboard stack allowing you to copy items onto a stack
## and then paste them back in a FILO (first in, last out) way, so the first
## item copied to the stack is the last item pasted from it.
## Copy item #1, Copy item #2, Copy item #3
## Paste #3, #2, #1
##
## The last item copied is placed onto the regular OS clipboard.
## The last item pasted is ALSO placed onto the regular OS clipboard.
## So take care not to confuse this with the regular OS clipboard and
## the standard Copy/Paste operations.
## By: NickDMax

import pn
import scintilla
from pypn.decorators import script

class PNClipStack:
    """ Maintains a stack of text to paste """
    def __init__(self):
        """ initialize inner data: stack -- the internal list used to store clipboard items """
        self.stack = []

    def push(self, text):
        self.stack.append(text)

    def pop(self):
        retValue = ''
        if len(self.stack) > 0:
            retValue = self.stack.pop()
        return retValue

    def clear(self):
        self.stack = []

    def getSize(self):
        return len(self.stack)

ClipStack = PNClipStack()

@script('Stack Count', 'ClipStack')
def clstkCount():
    """ Prints out the size of the current ClipStack """
    pn.AddOutput('ClipStack Size: ' + str(ClipStack.getSize()) + '\n')

@script('Copy', 'ClipStack')
def clstkCopy():
    """ Adds the current selection to the ClipStack """
    doc = pn.CurrentDoc()
    if doc is not None:  # Let's try not to crash our script
        editor = scintilla.Scintilla(doc)
        start = editor.SelectionStart
        end = editor.SelectionEnd
        if (start == end):  # nothing is selected, try to grab the current word
            start = editor.WordStartPosition(start, True)
            end = editor.WordEndPosition(end, True)
        text = None
        if (start != end):
            text = editor.GetTextRange(start, end)
        if text is not None:
            ClipStack.push(text)
            editor.CopyText(len(text), text)
        #clstkCount()

@script('Cut', 'ClipStack')
def clstkCut():
    """ Adds the current selection to the ClipStack -- cuts it from the document """
    doc = pn.CurrentDoc()
    if doc is not None:  # Let's try not to crash our script
        editor = scintilla.Scintilla(doc)
        start = editor.SelectionStart
        end = editor.SelectionEnd
        if (start == end):  # nothing is selected, try to grab the current word
            start = editor.WordStartPosition(start, True)
            end = editor.WordEndPosition(end, True)
        text = None
        if (start != end):
            text = editor.GetTextRange(start, end)
        if text is not None:
            ClipStack.push(text)
            editor.SetSel(start, end)
            editor.Cut()
        #clstkCount()

@script('Paste', 'ClipStack')
def clstkPaste():
    """ Pastes the top item from the ClipStack into the document """
    doc = pn.CurrentDoc()
    if doc is not None:  # Let's try not to crash our script
        editor = scintilla.Scintilla(doc)
        text = ClipStack.pop()
        editor.CopyText(len(text), text)
        editor.Paste()
        #clstkCount()

@script('Clear', 'ClipStack')
def clstkClear():
    """ Clears the current ClipStack """
    ClipStack.clear()
    #clstkCount()

Use this script to encode/decode selections using Base64. This can be very useful for embedding Base64 images into HTML. Simply open a small gif/png in PN, select all, and then run Base64Encode. This creates a Base64 version of your image in a new document. Prefix the encoding with ''data:image/gif;base64,'' and you have a Base64 version of your image that you can paste directly into an HTML IMG tag's src attribute.

###############################################################################
## base64Utils.py -- PyPN utility script to encode and decode base64.
## Tested on Python 2.6.1. This utility will take the current selection
## (or the document if there is no selection) and create a base64-encoded
## document. It will also take a base64-encoded document and return the
## decoded text.
## -- Note: No verification is done to ensure that the decoded data is
## ASCII or valid unicode textual data.
## By: NickDMax

import pn
import scintilla
from pypn.decorators import script
import base64

@script("Base64Encode", "DocUtils")
def doBase64():
    """ Grab the current selection/document and create a new document
    that is a base64 version of the text """
    doc = pn.CurrentDoc()
    if doc is not None:  # Let's try not to crash pn too often...
        editor = scintilla.Scintilla(doc)
        start = editor.SelectionStart
        end = editor.SelectionEnd
        if (start == end):  # nothing is selected so we will just grab it all...
            start = 0
            end = editor.Length
        text = editor.GetTextRange(start, end)
        newDoc = pn.NewDocument(None)
        newEditor = scintilla.Scintilla(newDoc)
        newEditor.BeginUndoAction()
        encoded = base64.b64encode(text)
        # emit the encoded text in 80-character lines
        l = len(encoded)
        m = 0
        while l > 80:
            str = encoded[m:m+80] + '\n'
            newEditor.AppendText(len(str), str)
            l, m = l - 80, m + 80
        str = encoded[m:m+l]
        newEditor.AppendText(len(str), str)
        newEditor.EndUndoAction()

@script("DecodeBase64", "DocUtils")
def undoBase64():
    """ Grab the current selection/document and create a new document
    that is the base64-decoded version of the text """
    doc = pn.CurrentDoc()
    if doc is not None:  # Let's try not to crash pn too often...
        editor = scintilla.Scintilla(doc)
        start = editor.SelectionStart
        end = editor.SelectionEnd
        if (start == end):  # nothing is selected so we will just grab it all...
            start = 0
            end = editor.Length
        text = editor.GetTextRange(start, end)
        decoded = base64.b64decode(text)
        newDoc = pn.NewDocument(None)
        newEditor = scintilla.Scintilla(newDoc)
        newEditor.BeginUndoAction()
        newEditor.AppendText(len(decoded), decoded)
        newEditor.EndUndoAction()

Functions to extract the current selection as a new document, or paste the contents of the clipboard as a new document.

###############################################################################
## AsNewUtils scripts -- the purpose of this script is to provide functions for
## extracting/pasting text as new documents.
## By: NickDMax

import pn
import scintilla
from pypn.decorators import script

@script("Extract As New", "DocUtils")
def dupDocument():
    """ Extract the current selection to a new document;
    if there is no selection, duplicate the entire document. """
    doc = pn.CurrentDoc()
    if doc is not None:  # Let's try not to crash pn too often...
        editor = scintilla.Scintilla(doc)
        start = editor.SelectionStart
        end = editor.SelectionEnd
        sch = doc.CurrentScheme
        if (start == end):  # nothing is selected so we will just grab it all...
            start = 0
            end = editor.Length
        text = editor.GetTextRange(start, end)
        newDoc = pn.NewDocument(sch)
        newEditor = scintilla.Scintilla(newDoc)
        newEditor.BeginUndoAction()
        newEditor.AppendText(len(text), text)
        newEditor.EndUndoAction()

@script("Paste As New", "DocUtils")
def pasteAsNew():
    """ Paste the current contents of the clipboard into a new document """
    newDoc = pn.NewDocument(None)
    newEditor = scintilla.Scintilla(newDoc)
    newEditor.BeginUndoAction()
    newEditor.Paste()
    newEditor.EndUndoAction()

Extracts the current selection as a hex dump in a new document.

###############################################################################
## Doc2Hex v0.1 -- Will extract the current selection into a new document,
## formatting the text as a hex dump.
## By: NickDMax

import pn
import scintilla
from pypn.decorators import script

def HexEncode(text):
    output = ""
    if text is not None:
        lineLength = 0
        position = 0
        while len(text) > 0:
            output += "%08X |" % position
            if len(text) <= 16:
                lineLength = len(text)
            else:
                lineLength = 16
            snippet = text[0:lineLength]
            for x in snippet:
                output += " %02X" % ord(x)
            output += "\n"
            position += 16
            text = text[lineLength:]
    return output

@script("Doc2Hex", "DocUtils")
def Doc2Hex():
    """ Doc2Hex will convert the current document into a hex dump """
    doc = pn.CurrentDoc()
    if doc is not None:  # Let's try not to crash pn too often...
        editor = scintilla.Scintilla(pn.CurrentDoc())
        start = editor.SelectionStart
        end = editor.SelectionEnd
        if (start == end):  # nothing is selected so we will just grab it all...
            start = 0
            end = editor.Length
        text = editor.GetTextRange(start, end)
        newDoc = pn.NewDocument(None)
        newEditor = scintilla.Scintilla(newDoc)
        newEditor.BeginUndoAction()
        encoded = HexEncode(text)
        newEditor.AppendText(len(encoded), encoded)
        newEditor.EndUndoAction()

This script converts the number you have selected into decimal, hexadecimal, octal and binary and shows the results in the output window.

import pn, scintilla
from pypn.decorators import script  # needed for the @script decorator below

def hex2dec(s):
    """return the integer value of a hexadecimal string s"""
    return int(s, 16)

def Denary2Binary(n):
    """convert denary integer n to binary string bStr"""
    bStr = ''
    if n < 0:
        raise ValueError, "must be a positive integer"
    if n == 0:
        return '0'
    while n > 0:
        bStr = str(n % 2) + bStr
        n = n >> 1
    return bStr

@script("ConvertNumber")
def ConvertNumber():
    s = scintilla.Scintilla(pn.CurrentDoc())
    if s.SelectionEnd - s.SelectionStart < 1:
        return
    sel = s.SelText
    if sel.find('0x') != -1:
        sel = sel.replace("0x", "")
        sel = hex2dec(sel)
    else:
        sel = int(sel)
    pn.AddOutput("Dec: %d\n" % sel)
    pn.AddOutput("Hex: 0x%X\n" % sel)
    pn.AddOutput("Oct: %o\n" % sel)
    pn.AddOutput("Bin: %s" % Denary2Binary(sel))

This script beautifies the XML content in the current active tab.

```python
import pn
import scintilla
import re, string
from pypn.decorators import script

@script("Beautify", "Xml")
def Beautify():
    editor = scintilla.Scintilla(pn.CurrentDoc())
    data = editor.GetText(editor.Length)
    fields = re.split('(<.*?>)', data)
    content = ''
    level = 0
    for f in fields:
        if string.strip(f) == '':
            continue
        if f[0] == '<' and f[1] != '/' and f[-2] != '/':
            # opening tag: print, then indent deeper
            content += ' ' * (level * 4) + f + '\n'
            level = level + 1
        elif f[0] == '<' and f[1] != '/' and f[-2] == '/':
            # self-closing tag: print at the current level
            content += ' ' * (level * 4) + f + '\n'
        elif f[:2] == '</':
            # closing tag: outdent, then print
            level = level - 1
            content += ' ' * (level * 4) + f + '\n'
        else:
            # text content
            content += ' ' * (level * 4) + f + '\n'
    editor.BeginUndoAction()
    editor.ClearAll()
    editor.AddText(len(content), content)
    editor.EndUndoAction()
```
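The two numeric helpers in the ConvertNumber script above can be exercised on their own outside PN; here they are restated in Python 3 syntax with sample values, as a quick sanity check:

```python
# Standalone restatement of ConvertNumber's helpers (no PN required).
def hex2dec(s):
    """Return the integer value of a hexadecimal string s."""
    return int(s, 16)

def Denary2Binary(n):
    """Convert a non-negative integer n to its binary string."""
    if n < 0:
        raise ValueError("must be a positive integer")
    if n == 0:
        return '0'
    bStr = ''
    while n > 0:
        bStr = str(n % 2) + bStr
        n = n >> 1
    return bStr

# A selected value of '0x2A' becomes 42, then '101010' in binary.
value = hex2dec('2A')
bits = Denary2Binary(value)
```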
After configuring and building PETSc, I have successfully been able to run several examples. In particular, I am working with this example. I have been able to run the program using the following commands:

make ex2
mpiexec -n 4 ./ex2 -m 40 -n 40

which produces the following output:

Norm of error 0.000642883 iterations 26
Norm of error 0.000642883 iterations 26
Norm of error 0.000642883 iterations 26
Norm of error 0.000642883 iterations 26

This seems to tell me that the same problem was solved 4 times, rather than once by four processors in parallel. Suspicious, I ran the program again using

mpiexec -n 4 ./ex2 -m 40 -n 40 -log_summary

which produced the following output (note that it says that ./ex2 was run with only 1 processor):

Norm of error 0.000642883 iterations 26
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
./ex2 on a petsc-arch named utepgeon01.utep.edu with 1 processor, by pmdelgado2 Tue Jan 24 22:16:23 2012
Using Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012

                         Max       Max/Min        Avg      Total
Time (sec):           6.203e-02      1.00000   6.203e-02
Objects:              4.500e+01      1.00000   4.500e+01
Flops:                3.230e+06      1.00000   3.230e+06  3.230e+06
Flops/sec:            5.207e+07      1.00000   5.207e+07  5.207e+07
Memory:               7.996e+05      1.00000             7.996e+05
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       4.730e+02      1.00000

I know that the computer I'm using has 4 processors (after typing grep processor /proc/cpuinfo).
After checking my reconfigure-petsc-arch.py file, I see the following:

#!/usr/bin/python
if __name__ == '__main__':
    import sys
    import os
    sys.path.insert(0, os.path.abspath('config'))
    import configure
    configure_options = [
        '--download-f-blas-lapack=1',
        '--download-mpich=1',
        '--with-cc=gcc',
        '--with-fc=gfortran',
        'PETSC_ARCH=petsc-arch',
    ]
    configure.petsc_configure(configure_options)

Is there something else I need to do to distribute the work of solving the linear system in parallel?
raspouillas
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
I wasn't alluding to @souen's problem at all. Last edited by raspouillas (15/06/2012 at 20:37)

ljere
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
So here is the first part of the script, and this is where I need help because I'm not sure of anything:

#!/usr/bin/python
# encoding: utf-8
from BeautifulSoup import BeautifulSoup
import urllib2
import re
import time
import sys
import mechanize  # can I put these 3 lines on a single one?

ignoreList = (
    'compteur des leve tot',
)

# why do we use """ instead of a simple " at the start and end, and can the sentence be simplified?
class Day:
    """a day runs from 5 a.m. to 9 a.m. excluded ([5h:9h[);
    it holds the last entry (points) of that day for each player"""
    def __init__(self):
        self.entries = {}
    def __str__(self):
        for entry in self.entries.items():
            print entry, '+', entries[entry]

# is it normal that this def is not aligned with the others?
def utcFrance():
    return 1 + time.localtime(time.time())[-1]  # 1 + 1 if we are on summer time

# here I think we have to change this to: def __init__(self, tuple=(5,9), utc=utcFrance()):
class Date:
    def __init__(self, tuple=(20,0), utc=utcFrance()):
        self.h = (int(tuple[0]) - utcFrance() + 24 + utc) % 24
        self.m = int(tuple[1])
    def __cmp__(self, other):
        return cmp(self.points(), other.points())
    def points(self):
        pts = {5: 10, 6: 6, 7: 3, 8: 1}
        return pts.get(self.h, 0)

If you have answers or other suggestions, I'm all ears. Thanks.

Pylades
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
"can I put these 3 lines on a single one?" Yes, but it's better on three lines.
"why do we use """ instead of a simple "?" So that the docstring can span several lines if needed. It can be reworded without affecting how the program works.
"is it normal that this def is not aligned with the others?" Yes, don't touch it.
"here I think we have to change this to def __init__(self,tuple=(5,9),utc=utcFrance()):" Possibly, but it's probably more complicated than that… this counter is almost unmaintainable. Anyway, in the more or less near future we are going to rebuild a clean counter for the TdCT, and then it will adapt to this thread in no time.
Last edited by Πυλάδης (15/06/2012 at 18:49)
"Any if-statement is a goto. As are all structured loops.
"And sometimes structure is good. When it's good, you should use it.
"And sometimes structure is _bad_, and gets into the way, and using a goto is just much clearer."
Linus Torvalds – 12 January 2003

ljere
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
Pylades wrote: Anyway, in the more or less near future we are going to rebuild a clean counter for the TdCT, and then it will adapt to this thread in no time.
Funny, nesthib told me the same thing, but nobody knows when.... So in the meantime, if people are willing to keep helping me, I'm learning Python and may end up with a workable counter while we wait for a more maintainable one in step with Python 2.7.3 (tshirtman's script targets Python 2.6.*). /me is therefore determined to persevere.

ljere
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
Can someone tell me what this part does (means)?

if ( (str_date.split(' ')[0] in ['Hier'] and int(str_date.split(' ')[2].split('<')[0].split(':')[0]) in range(5,24))
  or (str_date.split(' ')[0] in ["Aujourd\'hui"] and int(str_date.split(' ')[2].split('<')[0].split(':')[0]) in range(5)) ):

Floyd Pepper
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
/me encourages you warmly, and I hope the Greek fellow gave you real advice off the forum, because seen from here, that remark is as peremptory as it is useless.
Personally, I can only support you and hope you make us "the compteur des Lts" (the early risers' counter), which will then be adapted for the night owls.
Sad old grandpa and (conditioned) heterocentrist, waiting to become completely daft. I tend not to use smileys. The more you struggle, the more you're stuck.

ljere
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
Replied to golgoth42. Floyd Pepper, thanks for the encouragement; do keep posting yours, I need it for my tests.

Floyd Pepper
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
'morning

ljere
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
Good morning everyone.

souen
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
Hello, good morning.
You are programmed, but you are free. Ubuntu Studio. "If there is no solution, it's because there is no problem!" (Shadok wisdom)

PPdM
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
Greetings all, by Toutatis!
A true enemy will never let you down.

raspouillas
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
Good morning ....

raspouillas
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
PS: Haven't I had any reply?

Mindiell
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
... good morning! Hold on, it's Saturday. @ljere: split lets you break a string into pieces separated by the given character.
Taking the displayed date, "Hier à 21:19" or "Aujourd'hui à 08:56", you get an array of the form [0] => "Hier", [1] => "à", [2] => "21:19". Then the triple split re-splits the string and uses the 3rd element to try to extract the hour. The split on '<' is useless in this case; it must have been added long ago (the old forum?). So you get a single element equal to the 3rd element already considered ("21:19"), then you split on ":" to get the hour, and you keep the hours between 5 and 24. The second part deals with the hours between 0 and 5. In the end, you get the information for the last 24 hours, and you can then assume the counter is run every day at 5:00 a.m. PS: I didn't answer by PM: it had already been done, and this may interest others.

ljere
Re: Topic of the early risers… Let's make the night owls eat their underpants! [4]
So, since in our case we care about the 6-to-9 window:

if ( (str_date.split(' ')[0] in ["Aujourd\'hui"] and int(str_date.split(' ')[2].split(':')[6]) in range(8)) ):

Since 9 o'clock is excluded, should I indeed put 8?

Floyd Pepper
Re: Topic of the early risers… Let's make the night owls eat their underpants!
[4]):

1 1787 FloydPepper
2 1543 pierguiard
3 1463 MdMax
4 1247 Azurea
5 1199 souen
6 968 Ras'
7 767 raspouillas
8 552 Arcans
9 428 peterp@n
10 359 golgoth42
11 293 mindiell
12 277 omc
13 219 Πυλάδης
14 176 pololasi
15 117 edge_one
16 101 nathéo
17 99 karameloneboudeplus
18 61 agarwood
19 60 Niltugor
20 52 1101011
20 52 jeyenkil
20 52 ljere
23 43 Crocoii
24 42 nakraïou
24 42 DaveNull
26 40 Biaise
27 39 Clem_ufo
28 38 Atem18
29 22 marinmarais
30 18 Ju
31 13 Le grand rohr sha
32 10 Phoenix
32 10 FLOZz
32 10 sakul
32 10 SopolesRâ
36 6 wiscot
36 6 timsy
36 6 Slystone
36 6 Hibou57
36 6 tshirtman
36 6 marting
36 6 c4nuser
43 4 Morgiver
43 4 :!pakman
45 3 Phoenamandre
45 3 gonzolero
45 3 helly
45 3 Le Rouge
49 1 herewegoagain
49 1 TheUploader
49 1 Kyansaa
49 1 Xiti29

Online
Ras' (Re: Topic des lève-tôt… [4]): I'm just waking up -__________-' Too late for the points? Go get yourself shampooed in GMT-4! http://blag.xserver-x.org/ Awesome guys have nothing to prove. To anyone. Offline
PPdM (Re: Topic des lève-tôt… [4]): "I'm just waking up -__________-' Too late for the points?" Nah, you're early for tomorrow! Offline
Ras' (Re: Topic des lève-tôt… [4]): Go crop your avatar instead of making fun! Offline
Mindiell (Re: Topic des lève-tôt… [4]), quoting ljere: "So since in our case we care about the 6 am to 9 am window:

if ( (str_date.split(' ')[0] in ["Aujourd\'hui"] and int(str_date.split(' ')[2].split(':')[6]) in range(8)) ):

Since 9 am is excluded, should I put 8?"
Actually, I would rather have written this:

if ( (str_date.split(' ')[0] in ["Aujourd\'hui"] and int(str_date.split(' ')[2].split(':')[0]) in range(5,8)) ):

Because:
- I don't know why you put 6 instead of 0 as the index: when you split "05:34" on ":", you get a two-element array: "0" => "05", "1" => "34"
- Only the hours from 5 to 8 need to be handled
There you go.
PS: Ah yes, Python, does it look easy or not? Offline
raspouillas (Re: Topic des lève-tôt… [4]): Good morning ...
ljere (Re: Topic des lève-tôt… [4]): good morning ... Offline
ljere (Re: Topic des lève-tôt… [4]): @Mindiell it isn't easy but it is interesting; thanks again for your advice and explanations. Offline
Floyd Pepper (Re: Topic des lève-tôt… [4]): 'morning. Online
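The whole check discussed in the thread can be sketched as one small function (a hypothetical helper; the counter script itself is not shown here). Splitting "Aujourd'hui à 05:34" on spaces gives ["Aujourd'hui", "à", "05:34"]; element 2 split on ":" gives the hour at index 0 (not 6), and range(5, 8) covers hours 5, 6 and 7:

```python
def in_window(str_date):
    # "Aujourd'hui à 05:34" -> ["Aujourd'hui", "à", "05:34"]
    parts = str_date.split(' ')
    # "05:34".split(':') -> ["05", "34"]; the hour is index 0
    return parts[0] == "Aujourd'hui" and int(parts[2].split(':')[0]) in range(5, 8)

print(in_window("Aujourd'hui à 05:34"))  # True
print(in_window("Hier à 21:19"))         # False
print(in_window("Aujourd'hui à 08:01"))  # False
```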
I have written a Python script to parse an HTML page, extract some strings and then write them to a MySQL table. I am using the MySQLdb module for the database connection. The strings retrieved are encoded in ISO-8859-7 (Greek), which is the default encoding of the MySQL table as well. The code which produces the exception is the following:

def db_write(list):
    import MySQLdb as sql
    try:
        con = sql.connect(//database info here//)
    except:
        print "could not connect to database"
        exit()
    cur = con.cursor()
    for i in my_range(8, len(list) - 2, 2):
        query = 'INSERT INTO as_doy VALUES (%s,"%s")' % (list[i], list[i+1])
        print query
        try:
            cur.execute(query)
            con.commit()
        except:
            print "failed"
            con.rollback()
    con.close()

The exception I get is

ERROR 1366 (HY000): Incorrect string value: '\xEF\xBF\xBD\xEF\xBF\xBD...'

I have tried encoding the strings in UTF-8, then decoding and re-encoding in ISO-8859-7, but nothing has worked for me yet.
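The bytes in the error message are themselves the clue: '\xEF\xBF\xBD' is the UTF-8 encoding of U+FFFD, the Unicode replacement character, which means the Greek text was already mangled by a wrong decode before it ever reached MySQL. A minimal sketch of the round-trip (the sample word is illustrative):

```python
# Decoding the scraped bytes with the page's real charset keeps the text
# intact; decoding them as UTF-8 turns every Greek byte into U+FFFD, which
# is exactly the byte sequence shown in the MySQL error.
raw = u"Αθήνα".encode("iso-8859-7")       # bytes as they arrive from the page
text = raw.decode("iso-8859-7")           # decode with the correct charset
mangled = raw.decode("utf-8", "replace")  # wrong charset -> U+FFFD garbage
print(text)
print(u"\ufffd".encode("utf-8"))          # the exact bytes from the error
```

On the database side, passing a `charset` to MySQLdb.connect (assumption: the table really is ISO-8859-7, which MySQL calls "greek") and letting `cur.execute(query, params)` substitute the values avoids a second round of mangling and the string-interpolation injection risk.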
Background

I have two Python processes that need to communicate with each other. The communication is handled by a class named Pipe. I made a separate class for this because most of the information that needs to be communicated comes in the form of dictionaries, so Pipe implements a pretty simple protocol for doing this. Here is the Pipe constructor:

def __init__(self, sPath):
    """
    create the fifo. if it already exists just associate with it
    """
    self.sPath = sPath
    if not os.path.exists(sPath):
        try:
            os.mkfifo(sPath)
        except:
            raise Exception('cannot mkfifo at path \n {0}'.format(sPath))
    self.iFH = os.open(sPath, os.O_RDWR | os.O_NONBLOCK)
    self.iFHBlocking = os.open(sPath, os.O_RDWR)

So ideally I would just construct a Pipe in each process with the same path and they would be able to talk nicely. I'm going to skip the details of the protocol because I think they are largely unnecessary here. All read and write operations make use of the following 'base' functions:

def base_read_blocking(self, iLen):
    self.lock()
    lBytes = os.read(self.iFHBlocking, iLen)
    self.unlock()
    return lBytes

def base_read(self, iLen):
    print('entering base read')
    self.lock()
    lBytes = os.read(self.iFH, iLen)
    self.unlock()
    print('exiting base read')
    return lBytes

def base_write_blocking(self, lBytes):
    self.lock()
    safe_write(self.iFHBlocking, lBytes)
    self.unlock()

def base_write(self, lBytes):
    print('entering base write')
    self.lock()
    safe_write(self.iFH, lBytes)
    self.unlock()
    print('exiting base write')

safe_write was suggested in another post:

def safe_write(*args, **kwargs):
    while True:
        try:
            return os.write(*args, **kwargs)
        except OSError as e:
            if e.errno == 35:
                import time
                print(".")
                time.sleep(0.5)
            else:
                raise

Locking and unlocking are handled like this:

def lock(self):
    print('locking...')
    while True:
        try:
            os.mkdir(self.get_lock_dir())
            print('...locked')
            return
        except OSError as e:
            if e.errno != 17:
                raise e

def unlock(self):
    try:
        os.rmdir(self.get_lock_dir())
    except OSError as e:
        if e.errno != 2:
            raise e
    print('unlocked')

The Problem

This sometimes happens:

....in base_read
    lBytes = os.read(self.iFH,iLen)
OSError: [Errno 11] Resource temporarily unavailable

Sometimes it's fine.

The Magical Solution

I seem to have stopped the problem from happening. Please note this is not me answering my own question; my question is explained in the next section. I changed the read functions to look more like this and it sorted things out:

def base_read(self, iLen):
    while not self.ready_for_reading():
        import time
        print('.')
        time.sleep(0.5)
    lBytes = ''.encode('utf-8')
    while len(lBytes) < iLen:
        self.lock()
        try:
            lBytes += os.read(self.iFH, iLen)
        except OSError as e:
            if e.errno == 11:
                import time
                print('.')
                time.sleep(0.5)
        finally:
            self.unlock()
    return lBytes

def ready_for_reading(self):
    lR, lW, lX = select.select([self.iFH,], [], [], self.iTimeout)
    if not lR:
        return False
    lR, lW, lX = select.select([self.iFHBlocking], [], [], self.iTimeout)
    if not lR:
        return False
    return True

The Question

I'm struggling to find out exactly why the resource is temporarily unavailable. The two processes cannot access the actual named pipe at the same time because of the locking mechanism (unless I am mistaken?), so is this due to something more fundamental about fifos that my program is not taking into account? All I really want is an explanation... The solution I found works, but it looks like magic. Can anyone offer an explanation?

System: Ubuntu 12.04, Python 3.2.3
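The EAGAIN can in fact be reproduced with no second process at all: a read on an O_NONBLOCK descriptor fails this way whenever the fifo is simply empty, a condition the directory lock cannot prevent. A minimal sketch (Linux assumed; the path is illustrative):

```python
import errno
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(path)
# O_RDWR means this process counts as a writer, so reads never see EOF;
# O_NONBLOCK means a read on an *empty* fifo fails instead of blocking.
fd = os.open(path, os.O_RDWR | os.O_NONBLOCK)
got_eagain = False
try:
    os.read(fd, 16)  # nothing has been written yet
except OSError as e:
    got_eagain = (e.errno == errno.EAGAIN)
print(got_eagain)  # True: "Resource temporarily unavailable"
os.close(fd)
os.unlink(path)
```

This is why the select()-based rewrite "works": it waits until data is actually present before reading, which the lock alone never guaranteed.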
Background

From the documentation example here, one can easily produce the following contour plot with this code snippet.

import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt

matplotlib.rcParams['xtick.direction'] = 'out'
matplotlib.rcParams['ytick.direction'] = 'out'

delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)

# Create a simple contour plot with labels using default colors.  The
# inline argument to clabel will control whether the labels are drawn
# over the line segments of the contour, removing the lines beneath
# the label
plt.figure()
CS = plt.contour(X, Y, Z)
plt.clabel(CS, inline=1, fontsize=10)
plt.title('Simplest default with labels')

My Goal

I have obtained my contour plot and, along the way, the matplotlib.contour.QuadContourSet instance CS. In the example snippet CS is only used for clabel(). For my case, however, I need to obtain either the equation of the contour line or its coordinate set for further computation. How can I extract the coordinates of the contour lines from the instance CS? Or how can I achieve this some other way? I bet there must be a way of doing so; otherwise, the contour plot would only be a "vase for visualization".
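For reference, the line geometry is stored on the ContourSet itself; a sketch using its `allsegs` attribute (one list of (N, 2) coordinate arrays per contour level), run on a small synthetic surface with a headless backend so it works anywhere:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 50)
X, Y = np.meshgrid(x, x)
CS = plt.contour(X, Y, X**2 + Y**2)

# CS.allsegs[i][j] is the j-th line of contour level i, an (N, 2) array
# whose columns are the x and y coordinates of the polyline's points.
seg = next(s for level_segs in CS.allsegs for s in level_segs)
print(len(CS.allsegs) == len(CS.levels))  # one entry per contour level
print(seg.shape[1])                       # 2
```

There is no closed-form "equation" of a contour line; matplotlib only ever has these sampled polylines.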
When I run this simple code:

from pylab import *
import numpy as np

X = []
Y = []
for i in range(0, 10):
    X.append(i)
    Y.append(i)
plot(X, Y)
show()

I don't get any window. I tried replacing show with draw, with the same result. I'm using Python version 3.2.2. How can I show the window/plot, then (apart from printing it to a file and opening the file)? Note, I'm using this example:
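Separately from the example linked above, a quick way to see what is going on is to check which backend matplotlib picked: with a non-interactive backend there is simply no window for show() to open. A sketch (Agg is forced here to simulate a GUI-less session; the filename is illustrative):

```python
import os
import tempfile
import matplotlib
matplotlib.use("Agg")  # simulate a session with no GUI toolkit installed
import matplotlib.pyplot as plt

plt.plot(range(10), range(10))
# With a non-interactive backend, show() has no window to open; writing
# the figure to a file is the headless fallback.
print(matplotlib.get_backend())
out = os.path.join(tempfile.gettempdir(), "line_demo.png")
plt.savefig(out)
print(os.path.exists(out))  # True
```

If get_backend() reports "agg" in your real session, installing a GUI toolkit (e.g. Tk or Qt) and selecting its backend before importing pyplot should make show() open a window.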
employees = []
for i in range(0, 10):
    emp = Employee(i)
    emp.first_name = "%s-%s" % ("first name", i)
    emp.last_name = "%s-%s" % ("last_name", i)
    emp.desgination = "%s-%s" % ("engineer", i)
    employees.append(emp)

ids = [e.eid for e in employees]

Following is my class definition:

class Employee:
    _fields = {}

    def __init__(self, eid):
        self.eid = eid

    def __getattr__(self, name):
        return self._fields.get(name)

    def __setattr__(self, name, value):
        self._fields[name] = value

    def __str__(self):
        return str(self._fields)

    def __unicode__(self):
        return str(self._fields)

The issue is that when I print ids, it contains 9 ten times, i.e. [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]. It seems that the same emp variable is being overwritten, and I am not sure what is going wrong. Though I am a Java coder, I thought I had a fair idea of Python as well.
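The loop is fine; the culprit is that `_fields = {}` is a class attribute, shared by all ten instances, and the custom `__setattr__` routes every write (including eid) into that one shared dict, so each new Employee overwrites the previous one's values. Giving each instance its own dict fixes it (a sketch):

```python
class Employee:
    def __init__(self, eid):
        # bypass our own __setattr__ once, to give THIS instance its own dict
        object.__setattr__(self, '_fields', {})
        self.eid = eid

    def __getattr__(self, name):
        # only called when normal lookup fails, i.e. for the stored fields
        return self._fields.get(name)

    def __setattr__(self, name, value):
        self._fields[name] = value

    def __str__(self):
        return str(self._fields)

emps = [Employee(i) for i in range(10)]
ids = [e.eid for e in emps]
print(ids)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```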
I'm attempting to use the Python logging module to do complex things. I'll leave the motivation for this design out because it would greatly lengthen the post, but I need to have a root logger that spams a regular log file for our code and for libraries that use logging, plus a collection of other loggers that go to different log files. The overall setup should look like this; I will send everything to stdout in this example to simplify the code.

import logging, sys

root = logging.getLogger('')
top = logging.getLogger('top')
bottom = logging.getLogger('top.bottom')

class KillFilter(object):
    def filter(self, msg):
        return 0

root_handler = logging.StreamHandler(sys.stdout)
top_handler = logging.StreamHandler(sys.stdout)
bottom_handler = logging.StreamHandler(sys.stdout)
root_handler.setFormatter(logging.Formatter('ROOT'))
top_handler.setFormatter(logging.Formatter('TOP HANDLER'))
bottom_handler.setFormatter(logging.Formatter("BOTTOM HANDLER"))
msg_killer = KillFilter()
root.addHandler(root_handler)
top.addHandler(top_handler)
bottom.addHandler(bottom_handler)
top.addFilter(msg_killer)

root.error('hi')
top.error('hi')
bottom.error('hi')

This outputs

ROOT
BOTTOM HANDLER
ROOT

The second ROOT line should not appear, because according to the logging documentation the msg_killer should stop the message from going up to the root logger. Obviously the documentation could use improvement.

Edit: removed my "in the moment" harsh words for Python logging.
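The behaviour is actually documented, if tersely: a logger's filters are consulted only for records logged directly on that logger. A record propagating up from `top.bottom` skips `top`'s filters entirely; only the handlers along the way run. Attaching the filter to the handler catches both the direct and the propagated case. A self-contained sketch with a list-collecting handler (the names are illustrative):

```python
import logging

class KillFilter(logging.Filter):
    def filter(self, record):
        return False  # drop every record this filter sees

records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

top = logging.getLogger('demo.top')
bottom = logging.getLogger('demo.top.bottom')
top.propagate = False            # keep this demo out of the root logger
handler = ListHandler()
handler.addFilter(KillFilter())  # filter on the *handler*, not the logger
top.addHandler(handler)

top.error('direct')    # dropped by the handler's filter
bottom.error('child')  # propagates to top's handler and is dropped there too
print(records)         # []
```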
I am wrapping a library written in C++ into a Python API: libwebqq. There is a type defined with Boost.Function:

typedef boost::function<void (std::string)> EventListener;

The Python level can define "EventListener" callbacks. There is also a map structure at the C++ level, event_map, in class Adapter. The key type of event_map is the QQEvent enum and the value type is the class "Action", which wraps an EventListener.

class Action {
    EventListener _callback;
public:
    Action() { n_actions++; }
    Action(const EventListener & cb) { setCallback(cb); }
    virtual void operator()(std::string data) { _callback(data); }
    void setCallback(const EventListener & cb) { _callback = cb; }
    virtual ~Action() {
        std::cout << "Destruct Action" << std::endl;
        n_actions--;
    }
    static int n_actions;
};

class Adapter {
    std::map<QQEvent, Action> event_map;
public:
    Adapter();
    ~Adapter();
    void trigger(const QQEvent &event, const std::string data);
    void register_event_handler(QQEvent event, EventListener callback);
    bool is_event_registered(const QQEvent & event);
    void delete_event_handler(QQEvent event);
};

"register_event_handler" in class Adapter is the API for registering a callback for the related event, and the C++ back end will call it if the event happens. But we need to implement the callbacks at the Python level.
And I wrapped the callback type in "callback.i". The problem is that when I call register_event in the test Python script, a type error always occurs:

Traceback (most recent call last):
  File "testwrapper.py", line 44, in <module>
    worker = Worker()
  File "testwrapper.py", line 28, in __init__
    a.setCallback(self.on_message)
  File "/home/devil/linux/libwebqq/wrappers/python/libwebqqpython.py", line 95, in setCallback
    def setCallback(self, *args): return _libwebqqpython.Action_setCallback(self, *args)
TypeError: in method 'Action_setCallback', argument 2 of type 'EventListener const &'
Destruct Action

Please help me figure out the root cause of this type error and a solution to the problem.
Here is one simple way. def isComposedOf(A, B): bset = set(B) for c in A: if c not in bset: return False return True This algorithm walks each string once, so it runs in O(len(A) + len(B)) time. When the answer is yes, you cannot do better than len(A) comparisons even in the best case, because no matter what you must check every letter. And in the worst case one of the characters in A is hidden very deep in B. So O(len(A) + len(B)) is optimal, as far as worst-case performance is concerned. Similarly: when the answer is no, you cannot do better than len(B) comparisons even in the best case; and in the worst case the character that isn't in B is hidden very deep in A. So O(len(A) + len(B)) is again optimal. You can reduce the constant factor by using a better data structure for bset. You can avoid scanning all of B in some (non-worst) cases where the answer is yes by building it lazily, scanning more of B each time you find a character in A that you haven't seen before.
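The same check can also lean on Python's built-in set operators: `<=` on sets is subset testing, with the same O(len(A) + len(B)) bound and the constant factor handled in C (a sketch):

```python
def is_composed_of(A, B):
    # every distinct character of A must appear among B's characters
    return set(A) <= set(B)

print(is_composed_of("cat", "concatenate"))  # True
print(is_composed_of("dog", "concatenate"))  # False
```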
I want to write the text (which I get from AJAX) to a file, and then read it.

The following code reads the content from a file:

handle = open('file', 'r+')
var = handle.read()
print var

The following code writes the content to the file:

handle1 = open('file.txt', 'r+')
handle1.write("I AM NEW FILE")
handle1.close()

If you want to use this in a Django view, try something like this:

def some_view(request):
    text = request.POST.get("text", None)
    if text is not None:
        f = open('some_file.txt', 'w+')
        f.write(text)
        f.close()
    return HttpResponse()

f = open('filename.txt', 'w+')
f.write('text')
f.close()

f = open('filename.txt', 'r')
for line in f:
    print(line)
f.close()
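One small addition to the snippets above: the `with` statement closes the file even if an exception occurs mid-write, which matters in a view that can fail. A sketch of the same round-trip (the filename is illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "demo_ajax.txt")

# the file is closed automatically when the block exits, even on error
with open(path, "w") as f:
    f.write("I AM NEW FILE")

with open(path) as f:
    content = f.read()
print(content)  # I AM NEW FILE
```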
Basically, I want a shape moving vertically which, when it reaches the upper border of the window, reappears at the bottom border. This is what I have:

from utalcanvas import *
import time

show()
window_size(500, 500)
window_style("Mi Ventana", "black")
window_coordinates(0, 0, 1000, 1000)
circulo = create_filled_circle(500, 20, 50, "green")
for x in range(100):
    y = 10
    x = 0
    time.sleep(0.0150)
    move(circulo, -x, y)
    window_update()

Any ideas? Thanks!
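utalcanvas is a course-specific library, so here is only the wrap-around arithmetic the loop is missing, with plain numbers: track the y position yourself and apply the modulo operator when it passes the top border, so the shape re-enters at the bottom.

```python
HEIGHT = 1000   # window_coordinates height from the snippet above
y = 20          # starting y of the circle
dy = 10         # vertical step per tick

positions = []
for _ in range(150):
    # reaching the top border wraps back to the bottom automatically
    y = (y + dy) % HEIGHT
    positions.append(y)

print(max(positions) < HEIGHT)  # True: y always stays inside the window
```

In the original loop, `move(circulo, dx, dy)` would then be called with the *difference* between the new wrapped y and the previous one, rather than a constant.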
Curve fitting is not always that straightforward. The curve_fit algorithm is based on least-squares curve fitting and usually needs an initial guess for the input parameters. Depending on the kind of function you want to fit, your initial guess has to be a good one. Even though you tried an initial guess, I would say you have an additional problem, which has to do with your sampling frequency and the frequency of your wave. For further information, you can have a look at the Nyquist-Shannon sampling theorem on Wikipedia. In simple words, the frequency of your wave is 1.759 / (2 * pi) = 0.28, which turns out to be very close to the sampling frequency of your x array (~0.33). Another issue that might arise is having too many oscillations to fit to your function. For your code to work, I would suggest you either increase the frequency of your wave (a > 4 * 0.33) or increase your sampling frequency and reduce the length of your space vector x. I ran the following code and obtained the results as illustrated here:

# -*- coding: utf-8 -*-
import numpy as np
import pylab as pl
from scipy.optimize import curve_fit

def mysine(x, a):
    return 1. * np.sin(a * x)

a = 1.759  # Wave frequency
x = np.linspace(0, 10, 100)  # <== This is what I changed
y = np.sin(a * x) + 0. * np.random.normal(size=len(x))

# Runs curve fitting with initial guess.
popt, pcov = curve_fit(mysine, x, y, p0=[1.5])

# Calculates the fitted curve
yf = mysine(x, *popt)

# Plots results for comparison.
pl.ion()
pl.close('all')
fig = pl.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, '-', c=[0.5, 0.5, 0.5])
ax.plot(x, yf, 'k-', linewidth=2.0)
ax.text(0.97, 0.97, ur'a=%.4f, ã=%.4f' % (a, popt[0]), ha='right',
        va='top', fontsize=14, transform=ax.transAxes)
fig.savefig('stow_curve_fit.png')
How do I do that? I'm trying NotesAdministrationProcess, but examples are lacking in the help file, and searching the web is difficult since these methods are seldom documented. Currently I'm using RenameNotesUser to rename the user, but the change only happens when I run 'tell adminp process new' on the server. How do I automate the rename after sending the rename request? Also, I've had no luck changing the user's OU/department. Which method should I use? Currently I use RecertifyUser, but it raises an error about the certifier ID not being an ancestor of something. For example, I want to move a user from the Technical department to the Sales department (John/Technical/ACME to John/Sales/ACME). Recertifying John manually in Domino Administrator with the Sales ID works fine. I also want the change of the user's OU to take effect immediately, without telling the Domino console to process it. The above two processes don't have to run at the same time, because a user wouldn't have a name change and an OU change at once. Below is the test code/agent I use (I comment out the rename code when I want to run the recertify code, and vice versa):

Dim s As New NotesSession
Dim db As NotesDatabase
Dim vw As NotesView, doc As NotesDocument
Dim adminp As NotesAdministrationProcess
Dim svr$, path$, cert$, pwd$, staffid$, newlastname$
svr="" 'server1/ACME
path="" 'names.nsf
cert="" 'for rename(C:\tech.id), for recertify(C:\sales.id)
pwd="" 'for rename(tech), for recertify(sales)
staffid="" 'A0001(John's ID)
newlastname="" 'James
Set db=s.Getdatabase(svr, path, False)
Set adminp=s.Createadministrationprocess(svr)
adminp.Certifierfile=cert
adminp.Certifierpassword=pwd
Set vw=db.Getview("People\by Staff Number")
Set doc=vw.Getdocumentbykey(staffid, True)
'for rename. our company only uses the last name as name
Call adminp.Renamenotesuser(doc.FullName(0), newlastname)
'for change ou
Call adminp.Recertifyuser(doc.FullName(0))
pyxshell (pronounce it however you can) is a Python module that lets you chain stream-processing functions with a "pipe" operator (|), just as in a shell. If, like me, you regularly have to hand-analyse variously structured text data on different environments, and if you are fond of the command line without being crazy about Bash syntax, you will appreciate having everything at hand next to your favourite Python tools. For example:

>>> out=[]
>>> (random.randint(0,2) for i in range(10)) | map(lambda x: ["Oui ","nous sommes ","tous différents "][x]) | sort | uniq | tee(sys.stdout) > out
Oui
tous différents
nous sommes
>>> print(out)
['Oui ', 'tous différents ', 'nous sommes ']

Put differently, pyxshell lets you sequentially call generators on iterable data structures with an infix notation. It descends from heaven with a set of functions reminiscent of the GNU toolchain, mainly for processing textual verses line by line. The module allows redirecting streams to files and/or variables with the venerable > and >> operators. It also lets you extend pipe usage to your own glorious functions, with a dedicated decorator: @pipe. It further offers august operators to combine streams (with +) or take the Cartesian product of their blessed elements (with *). pyxshell is revealed to you under the "(Un)license".
More examples drawn from the scriptures:

>>> import sys
>>> from pyxshell.common import *
>>> list( ['Vous', 'êtes', 'tous','des','individus'] | grep(r'[aeo]') )
['Vous', 'êtes', 'tous', 'des']
>>> ["Je dis que vous êtes le Messie","Et je m'y connais","J'en ai suivi un paquet"] | map(lambda x: "— "+x+"\n") | grep("'") | cut([-2,-1]) | traverse | glue > sys.stdout
m'y connais
un paquet

For the unbelievers who would like to use their shell commands the same way, from an interpreter, but without resorting to pure-Python reimplementations, the Plumbum module will suit them better.
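For the curious, the infix-pipe trick itself fits in a few lines of Python. This toy reimplementation (not pyxshell's actual code) shows the mechanism: a wrapper class whose __ror__ method receives the left-hand operand of |.

```python
class pipe:
    """Wrap a function so an iterable can be piped into it with `|`."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args

    def __call__(self, *args):
        # grep('a') returns a new pipe with the extra arguments bound
        return pipe(self.fn, *args)

    def __ror__(self, iterable):
        # `iterable | self` lands here because lists don't define `|` for us
        return self.fn(iterable, *self.args)

@pipe
def grep(lines, pattern):
    return [l for l in lines if pattern in l]

@pipe
def upper(lines):
    return [l.upper() for l in lines]

result = ['spam', 'ham', 'eggs'] | grep('a') | upper
print(result)  # ['SPAM', 'HAM']
```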
I am getting a KeyError in my partial pipeline when I try to register with Twitter accounts, while Facebook accounts work fine. This is odd because the same function works fine with Facebook users. The error message is as below:

KeyError at /myapp/
'partial_pipeline'

at 'myapp_auth_form', and my code is:

settings.py

SOCIAL_AUTH_ENABLED_BACKENDS = ('facebook', 'twitter',)
SOCIAL_AUTH_DEFAULT_USERNAME = 'new_social_auth_user'
...
TWITTER_CONSUMER_KEY = 'mytwitterconsumerkey'
TWITTER_CONSUMER_SECRET = 'mytwitterconsumersecret'
LOGIN_URL = '/login/'
LOGIN_REDIRECT_URL = '/'
LOGIN_ERROR_URL = '/login-error/'
SOCIAL_AUTH_PIPELINE = (
    'social_auth.backends.pipeline.social.social_auth_user',
    'social_auth.backends.pipeline.misc.save_status_to_session',
    'myapp.pipeline.has_email',
    'myapp.pipeline.check_by_email',
    'myapp.pipeline.redirect_to_form',
    'myapp.pipeline.get_username',
    'myapp.pipeline.create_user',
    'social_auth.backends.pipeline.social.associate_user',
    'social_auth.backends.pipeline.social.load_extra_data',
    'social_auth.backends.pipeline.user.update_user_details'
)

myapp/pipeline.py

from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse
from social_auth.models import UserSocialAuth
from registration.models import UserProfile

def has_email(details, user=None, *args, **kwargs):
    """Check if email is provided and ask for it otherwise"""
    if user:
        return None
    email = details.get('email')
    if email:
        kwargs['request'].session['saved_email'] = email
    else:
        session = kwargs['request'].session
        email = session.get('saved_email')
        if not email:
            return HttpResponseRedirect(reverse('myapp_email_form'))

def check_by_email(details, user=None, user_exists=UserSocialAuth.simple_user_exists, *args, **kwargs):
    """Check if there's a user with the same email address and ask for its password to associate"""
    if user:
        return None
    session = kwargs['request'].session
    email = session.get('saved_email')
    if email:
        if user_exists(username=email):
            return HttpResponseRedirect(reverse('myapp_auth_form'))

def redirect_to_form(*args, **kwargs):
    """Redirect to get password if user is None"""
    session = kwargs['request'].session
    if not session.get('saved_password') and not session.get('saved_nickname') and not session.get('saved_sex') and kwargs.get('user') is None:
        return HttpResponseRedirect(reverse('social_auth_form'))

def get_username(details, user=None, *args, **kwargs):
    """Return a username for a new user. Return the current user's username if a user was given. Returns the email address since myapp uses email for username"""
    if user:
        return {'username': user.username}
    username = details.get('email') or ''
    return {'username': username}

def create_user(backend, details, response, uid, username, user=None, *args, **kwargs):
    """Create user and user profile. Depends on the get_username pipeline."""
    if user:
        return {'user': user}
    if not username:
        return None
    request = kwargs['request']
    password = request.session.get('saved_password') or ''
    user = UserSocialAuth.create_user(username=username, email=username, password=password)
    nickname = request.session.get('saved_nickname') or ''
    sex = request.session.get('saved_sex') or 'F'
    profile = UserProfile.objects.create(user=user, nickname=nickname, sex=sex)
    referee_nickname = request.session.get('saved_referee') or False
    # if there was a recommender
    if referee_nickname:
        try:
            referee_profile = UserProfile.objects.get(nickname=referee_nickname)
            profile.referee = referee_profile.user
            profile.save()
        except UserProfile.DoesNotExist:
            pass
    return {'user': user, 'is_new': True}

views.py

from social_auth.utils import setting
from django.contrib.auth import authenticate, login

def myapp_email_form(request):
    # if user is logged in already, redirect the user to home
    if request.user.is_authenticated():
        if request.GET.get('mobile', False):
            return HttpResponseRedirect(reverse('m_home'))
        return HttpResponseRedirect(reverse('home'))
    """If email is unprovided, ask for it"""
    if request.method == 'POST':
        form = EmailForm(request.POST)
        if form.is_valid():
            email = form.cleaned_data['email']
            name = setting('SOCIAL_AUTH_PARTIAL_PIPELINE_KEY', 'partial_pipeline')
            request.session['saved_email'] = email
            backend = request.session[name]['backend']
            return redirect('socialauth_complete', backend=backend)
    else:
        form = EmailForm()
    email = request.session.get('saved_email') or ''
    variables = RequestContext(request, {
        'form': form,
        'email': email,
    })
    if request.is_mobile or request.GET.get('mobile', False):
        return render_to_response('mobile/registration/social/email_form.html', variables, context_instance=RequestContext(request))
    return render_to_response('.../email_form.html', variables, context_instance=RequestContext(request))

def myapp_auth_form(request):
    # if user is logged in already, redirect the user to home
    if request.user.is_authenticated():
        if request.GET.get('mobile', False):
            return HttpResponseRedirect(reverse('m_home'))
        return HttpResponseRedirect(reverse('home'))
    """If the user's email is already registered with myapp, ask the user for its password"""
    if request.method == 'POST':
        form = LoginForm(request.POST)
        if form.is_valid():
            email = form.cleaned_data['username']
            user = authenticate(username=email, password=form.cleaned_data['password'])
            if user is not None:
                if user.is_active:
                    login(request, user)
                    name = setting('SOCIAL_AUTH_PARTIAL_PIPELINE_KEY', 'partial_pipeline')
                    request.session['saved_user'] = user
                    ############################################
                    backend = request.session[name]['backend']  # <- I'm getting the KeyError at this point
                    ############################################
                    return redirect('socialauth_complete', backend=backend)
                else:
                    return HttpResponseRedirect(reverse('inactive_user'))
            else:
                form.non_field_errors = _('A user with such email and password does not exist.')
    else:
        form = LoginForm()
    email = request.session.get('saved_email') or ''
    variables = RequestContext(request, {
        'form': form,
        'email': email,
    })
    if request.is_mobile or request.GET.get('mobile', False):
        return render_to_response('mobile/registration/social/auth_form.html', variables, context_instance=RequestContext(request))
    return render_to_response('.../auth_form.html', variables, context_instance=RequestContext(request))

def social_auth_form(request):
    # if user is logged in already, redirect the user to home
    if request.user.is_authenticated():
        if request.GET.get('mobile', False):
            return HttpResponseRedirect(reverse('m_home'))
        return HttpResponseRedirect(reverse('home'))
    """Remedy form taking missing information during social authentication"""
    nickname = ''
    sex = 'F'
    if request.method == 'POST':
        form = SocialRegistrationForm(request.POST)
        if form.is_valid():
            nickname = form.cleaned_data['nickname']
            sex = form.cleaned_data['sex']
            name = setting('SOCIAL_AUTH_PARTIAL_PIPELINE_KEY', 'partial_pipeline')
            request.session['saved_nickname'] = nickname
            request.session['saved_sex'] = sex
            request.session['saved_password'] = form.cleaned_data['password1']
            backend = request.session[name]['backend']
            return redirect('socialauth_complete', backend=backend)
    else:
        form = SocialRegistrationForm()
        nickname = request.session.get('saved_username') or ''
        sex = request.session.get('saved_gender') or 'F'
        if sex == 'male':
            sex = 'M'
        elif sex == 'female':
            sex = 'F'
    variables = RequestContext(request, {
        'form': form,
        'nickname': nickname,
        'sex': sex,
    })
    if request.is_mobile or request.GET.get('mobile', False):
        return render_to_response('mobile/registration/social/social_form.html', variables, context_instance=RequestContext(request))
    return render_to_response('.../auth_form.html', variables, context_instance=RequestContext(request))
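A note on the error itself: the KeyError means request.session has no 'partial_pipeline' entry by the time the Twitter flow reaches myapp_auth_form, i.e. save_status_to_session never ran (or the session was lost) before the user landed on the form. The exact cause isn't visible in the code shown, but a defensive `.get()` turns the crash into a condition you can redirect on. The failure mode in miniature, with a plain dict standing in for the Django session:

```python
# Twitter flow: the pipeline state was never written to the session.
session = {}
name = 'partial_pipeline'
missing_state = session.get(name)        # None instead of a KeyError
print(missing_state)                     # None -> redirect to an error page

# Facebook flow: what a healthy pipeline run stores before the redirect.
session[name] = {'backend': 'facebook'}
print(session[name]['backend'])          # facebook
```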
DJ Raging-Bull (Re : Conky Control (Live Voyager)): I was able to launch a few of them, but some do not work, and above all, when I close the .scripts folder, whichever conky was launched disappears. To bring it back I have to do a killall conky and then relaunch the conky via the script. Last edited by DJ Raging-Bull (16/08/2013 at 07:57).
Desktop: Intel Core i7 2600K - 16 Go DDR3 - GeForce GTX 590 - Asus P8P67 - Sound Blaster X-Fi Titanium 7.1 - SSD 64 Go Laptop: Lenovo ThinkPad X201 - Intel Core i5 520M - 4 Go DDR3 - Kingstone SSD 60 Go HTPC: Intel Core i5 4570S - 4 Go DDR3 - Asus H81M - Asus Xonar DX - Kingstone SSD 60 Go - 2 x WD Caviar Green 4To OS: GNU/Linux Ubuntu 14.04 LTS Trusty Tahr & Microsoft Windows 7 Édition Intégrale
Offline
ragamatrix (Re : Conky Control (Live Voyager)): @Didier-T Hello, I would like to use 'conky-control' under Openbox, but the 'conky' script works with Xfce, so I would like to change the script's parameters for Openbox's autostart. conky: I believe the part to modify could be this one:

Encoding=UTF-8
Version=0.9.4
Type=Application
Name=Conky $i (${conky[$i]})
Comment=
Exec=sh -c \"sleep 10; conky -c ${nom[$i]};\"
StartupNotify=false
Terminal=false
Hidden=false" > ~/.config/autostart/Conky\ $i\.desktop
else
if [ -f ~/.config/autostart/Conky\ $i\.desktop ]; then
rm ~/.config/autostart/Conky\ $i\.desktop &
fi
fi
fi
done
fi
}

An example of the autostart file in Openbox, /home/raphix/.config/openbox/autostart:

## Openbox autostart.sh
## ====================
## When you login to your CrunchBang Openbox session, this autostart script
## will be executed to set-up your environment and launch any applications
## you want to run at startup.
##
## Note*: some programs, such as 'nm-applet' are run via XDG autostart.
## Run '/usr/lib/openbox/openbox-xdg-autostart --list' to list any
## XDG autostarted programs.
##
## More information about this can be found at:
## http://openbox.org/wiki/Help:Autostart
##
## If you do something cool with your autostart script and you think others
## could benefit from your hack, please consider sharing it at:
## http://crunchbang.org/forums/
##
## Have fun & happy CrunchBangin'! :)

## GNOME PolicyKit and Keyring
eval $(gnome-keyring-daemon -s --components=pkcs11,secrets,ssh,gpg) &

## Set root window colour
hsetroot -solid "#2E3436" &
feh --bg-fill ~/images/fondsdebureaux/gris.jpg &
##nitrogen --restore &&
xcompmgr -c -C -f -I.16 -O.16 -t-6 -l-8 -r6 -o.7 &
##cb-compositor --start &&
tint2 &

## Volume control for systray
(sleep 2s && pnmixer) &
## Volume keys daemon
xfce4-volumed &

## Enable power management
##xfce4-power-manager &

## Disable
xset s off -dpms &

## Start Thunar Daemon
thunar --daemon &

## Detect and configure touchpad. See 'man synclient' for more info.
if egrep -iq 'touchpad' /proc/bus/input/devices; then
    synclient VertEdgeScroll=1 &
    synclient TapButton1=1 &
fi

## Start xscreensaver
##xscreensaver -no-splash &

## Start Clipboard manager
(sleep 3s && clipit) &

## Set keyboard settings - 250 ms delay and 25 cps (characters per second) repeat rate.
## Adjust the values according to your preferances.
xset r rate 250 25 &

## Turn on/off system beep
xset b off &

## The following command runs hacks and fixes for #! LiveCD sessions.
## Safe to delete after installation.
cb-cowpowers &

## cb-welcome - post-installation script, will not run in a live session and
## only runs once. Safe to remove.
(sleep 10s && cb-welcome --firstrun) &

## cb-fortune - have Waldorf say a little adage
#(sleep 120s && cb-fortune) &

## Run the conky
##conky -q &

##scripts externes pour xplanet
/usr/share/xplanet/nuages.sh &
/usr/local/bin/xplanet-bg &

# Make GTK apps look and behave how they were set up in the XFCE config tools
#elif which xfsettingsd >/dev/null; then
##xfsettingsd &
#fi

Should I change the path to Openbox's autostart, or add the conky script to it? I don't know how to go any further... How should I go about this? Thanks!

Hi Didier-T! Well, I found the solution for Openbox users, thanks to arpinux from the crunchbang.fr forum, whom I thank: conky-tool. Bye!
Offline

Blek85 Re: Conky Control (Live Voyager)
Hello, and first of all thank you to those who give us such an interesting system and tools. I'm testing Voyager 13.10 on a new PC (but I still have my old machine). I must have "personalized" my account a bit too much, because ConkyControl, which worked at first from the desktop context menu, now has no effect. If I run the command python3 ConkyControl.py in a terminal, here is what I get:

└─ $ ▶ cd ~/.scripts/Conky/
└─ $ ▶ python3 ConkyControl.py
Conky contrôle Version 1.5
Traceback (most recent call last):
  File "ConkyControl.py", line 544, in <module>
    app = application()
  File "ConkyControl.py", line 532, in __init__
    mBar = MenuBar(self)
  File "ConkyControl.py", line 140, in __init__
    self.BasesConky, self.ConkyLances, self.TempLat = fonctions.initialisation(fonctions())
  File "ConkyControl.py", line 72, in initialisation
    Latence=findall('-?\d+', line)[0]
IndexError: list index out of range

I haven't modified the contents of ConkyControl.py, but I did remove some features, such as multiple desktops (4 workspaces is 3 too many for me), a kind of dashboard on the right side of the screen, and maybe other things too.
If you have any idea about the why and how, I promise I won't do it again. Thanks in advance. Alain
Last edited by Blek85 (24/03/2014 at 18:41)
Offline

Blek85 Re: Conky Control (Live Voyager)
Hello everyone, hello Didier. Thanks for your quick reply. I wondered afterwards whether I hadn't posted in a thread rather reserved for this project's active contributors. Here is the requested listing:

└─ $ ▶ ls -al .scripts/Conky/
total 244
drwxr-xr-x  2 alain alain  4096 mars 22 19:19 .
drwxr-xr-x 22 alain alain  4096 mars 22 19:19 ..
-rwx--x--x  1 alain alain  3811 mars 22 19:19 conky
-rw-rw-r--  1 alain alain 10623 mars 22 19:19 conkycontrol-3904.gif
-rw-rw-r--  1 alain alain 16876 mars 22 19:19 conkycontrol-3908.gif
-rw-rw-r--  1 alain alain 13594 mars 22 19:19 conkycontrol-3909.gif
-rw-rw-r--  1 alain alain 30814 mars 22 19:19 conkycontrol-390.gif
-rw-rw-r--  1 alain alain 16001 mars 22 19:19 conkycontrol-390.gif1
-rw-rw-r--  1 alain alain 56336 mars 22 19:19 conkycontrol-390.gif2
-rw-rw-r--  1 alain alain  6759 mars 22 19:19 conkycontrol-390.gif3
-rw-rw-r--  1 alain alain 24154 mars 22 19:19 ConkyControl.py
-rw-rw-r--  1 alain alain 12575 mars 22 19:19 ConkyControl.py1
-rwx--x--x  1 alain alain     0 mars 22 19:19 conky_liste
-rwx--x--x  1 alain alain   883 mars 22 19:19 conky_liste1
-rwx--x--x  1 alain alain   723 mars 22 19:19 conky_liste2
-rwx--x--x  1 alain alain   941 mars 22 19:19 conky_liste3
-rwx--x--x  1 alain alain   332 mars 22 19:19 conky_liste4
-rwx--x--x  1 alain alain   688 mars 22 19:19 conky_liste5
-rwxrwxr-x  1 alain alain    94 mars 23 23:35 DemConky.sh
-rwx--x--x  1 alain alain  2584 mars 22 19:19 GesConkyControl
-rwx--x--x  1 alain alain   144 mars 22 19:19 lanceur

Best regards, Alain
EDIT, 14:23: Well, in the end I know why, and maybe how...
After examining the files modified since the installation, I found this difference, from one profile to the other, in the file DemConky.sh.

In the profile where it works:

#!/bin/bash
# License GPL
sleep 10; sh -c "conky -c ~/.conky/conky-extra/conky12/conkyrc &"

In the profile where an error is generated:

#!/bin/bash
# License GPL
sleep None; sh -c "conky -c ~/.conky/conky-extra/conky12/conkyrc &"

Where does this "None" come from? I didn't modify this file (not directly, anyway). Probably an unintended manipulation by the user (yes, that's me) in the settings interface, something the program didn't anticipate and couldn't digest. Sorry for the trouble, but perhaps this experience will be useful to others. Thanks again.
Last edited by Blek85 (25/03/2014 at 15:48)
Offline

rodofr Re: Conky Control (Live Voyager)
Hello Didier, yes, a small question about Conky Control: is it possible to add an icon option in the "Réglage aspect fenêtres" (window appearance) settings? In yad it's --window-icon=.... I haven't found how to do it in Python, or whether it's even possible. Thanks again. You see, I'm redoing all the styles; with yad and wall I can embed a thumbnail icon indicator, but not for Conky Control, which takes a standard one. Also, for the options I made a French/English mix, like "démarrage auto" paired with "Autostart", "terminer" with "finish", etc., because foreign users don't always understand the options. Anyway, your Conky Control in Python is still top-notch for Voyager 14.04.
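Blek85's discovery also explains the earlier IndexError: ConkyControl reads the delay back out of DemConky.sh with `Latence=findall('-?\d+', line)[0]`, and a line saying `sleep None` contains no digits, so `findall` returns an empty list and indexing `[0]` raises. A small sketch (my illustration, not ConkyControl's actual code) of a more forgiving parse that falls back to a default delay instead of crashing:

```python
import re

def read_latency(line, default=10):
    """Extract the sleep delay from a DemConky.sh line, tolerating a corrupt value."""
    matches = re.findall(r'-?\d+', line)
    return int(matches[0]) if matches else default

print(read_latency('sleep 10;'))    # 10
print(read_latency('sleep None;'))  # 10 (fallback instead of IndexError)
```

With a guard like this, a mangled settings file would degrade to the default delay rather than preventing the GUI from opening at all.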
See you soon, and thanks.
Last edited by rodofr (27/03/2014 at 01:58)
Live DVD/USB Voyager — another way of seeing Ubuntu
Offline

ricorde Re: Conky Control (Live Voyager)
Hello everyone. I installed conky on Ubuntu 12.04 — the time-and-date one, number 11, and number 19. I'm running dual screens: a computer monitor connected via DisplayPort and a TV connected via HDMI. My configuration designates the computer monitor as the primary screen. Conky 19 is indeed on that screen, but number 11 goes onto the other one. I would like to know what to change in the file so that they are both on the two screens. Thanks in advance. Also, is it possible to display the graphics card temperature on the desktop?
Processor: Intel Core i7-3930K 3200.0 MHz Socket R (LGA2011-0)
Graphics card: GIGABYTE GeForce GTX 680
Motherboard: ASUS SABERTOOTH X79
Memory: Kingston DDR3 SDRAM 666.7 MHz, 2 x 8 GB
Offline

ricorde Re: Conky Control (Live Voyager)
Conky 19 lets you see the state of your connection when on Wi-Fi; right now I'm on 3G. Is there a modification to make in order to see the throughput of that connection? I don't know if I've been clear. Thanks.
Offline

ricorde Re: Conky Control (Live Voyager)
Good evening kennyzz69. For ConkyControl, here is the latest version. Unpack it into your home directory; it will add two hidden directories, Conky and Scripts. In Scripts you'll find the Conky subdirectory, and in the conky directory it's the lanceur file that interests you.
I added the latest version of the conkys on Ubuntu 14.04 and followed the procedure, but it doesn't work. Thanks in advance. In case it's useful, here is the output of this command:

ls ~/.conky/
conky  conky-bar  conky-cover  conky-extra  conky-perso
alma@ubuntu:~$

Last edited by ricorde (26/04/2014 at 23:16)
Offline

loulou54 Re: Conky Control (Live Voyager)
Hello, I've just discovered Conky Control in my Voyager 14.04. I immediately installed the weather conky, and it displays fine on my desktop but gives me no information at all, as if I weren't receiving the data. What more do I need to do? Thanks a lot.
Offline

Didier-T Re: Conky Control (Live Voyager)
Good evening loulou54, I think there is a problem with the installation shipped with Voyager. The simplest fix is to follow my signature and grab the base pack there. Delete the directories
~/.conky/conky-meteo/meteo
~/.conky/conky-meteo/meteo_lua_2
then install the .deb; you'll have a new application installed under Accessories (meteo-lua), which will let you choose your city. After that, all that's left is to launch the conky.
Offline

loulou54 Re: Conky Control (Live Voyager)
Arffff, thanks, that's really kind, but I'm a beginner and I'm afraid I won't manage the steps. I don't know where to find the directories, let alone how to install the .deb.
Offline

metalux Re: Conky Control (Live Voyager)
Hello loulou54, it's quite simple, don't worry — Didier-T has made us a turnkey tool. The tilde (~) corresponds to your home directory; you'll see it regularly in command lines or on the forum to refer to that directory.
Either you delete them graphically: press Ctrl+H to show hidden files and folders, then proceed as you did on your old system. Or you simply copy the two lines below into a terminal:

rm -r ~/.conky/conky-meteo/meteo
rm -r ~/.conky/conky-meteo/meteo_lua_2

As for "let alone how to install the .deb": just double-click it and it will offer to install itself. Then you launch meteo-lua and set your city. All that's left is to launch the conky with the command

conky -c ~/.conky/conky-meteo/meteo/conkyrc

or, simpler still, you add it in Conky Control. Don't forget to save in Conky Control afterwards, so it's still there when the machine reboots.
Xubuntu 14.04 LTS on HP Pavilion t728.fr — Precise Pangolin 64-bit on Acer Aspire 5738ZG — Voyager 13.04 upgraded to 14.04 on TOSHIBA Satellite C870-196. Update your PPAs automatically.
Offline

loulou54 Re: Conky Control (Live Voyager)
OK, I'll try tomorrow, but I can't promise I'll get it right on the first attempt, hihi.
Offline

loulou54 Re: Conky Control (Live Voyager)
Well, I managed it, but without the instructions, lol.
Offline

loulou54 Re: Conky Control (Live Voyager)
Yes, yes, of course. In fact, these command lines didn't work in the terminal:
rm -r ~/.conky/conky-meteo/meteo
rm -r ~/.conky/conky-meteo/meteo_lua_2
The Ctrl+H shortcut didn't work either, go figure..... (maybe because of Voyager 14.04....)
I couldn't find the conky folder — well, actually I found it, but without the meteo folder inside, and not directly in the tree. So I told myself: they're telling me to install a .deb (a program, let's say) — maybe it will sort everything out. I redid the steps in the conky and, bam, after a few seconds my weather appeared, with my own city. There you go.
Offline

enebre Re: Conky Control (Live Voyager)
Hello Didier-T, I have a small problem with Conky Control, which refuses to open. Here is the error the terminal returns when launching the script; I'm on a freshly installed Voyager 14.04:

$ ▶ python3 ~/.scripts/Conky/ConkyControl.py
Conky contrôle Version 1.6
Traceback (most recent call last):
  File "/home/marc/.scripts/Conky/ConkyControl.py", line 551, in <module>
    app = application()
  File "/home/marc/.scripts/Conky/ConkyControl.py", line 535, in __init__
    mBar = MenuBar(self)
  File "/home/marc/.scripts/Conky/ConkyControl.py", line 143, in __init__
    self.BasesConky, self.ConkyLances, self.TempLat = fonctions.initialisation(fonctions())
  File "/home/marc/.scripts/Conky/ConkyControl.py", line 75, in initialisation
    Latence=findall('-?\d+', line)[0]
IndexError: list index out of range

Offline

Didier-T Re: Conky Control (Live Voyager)
Hello enebre, I can't give you an answer right now because I'm away from home, but either you're missing a file (the one used for the conkys' automatic startup) or it's corrupted. We'll look at it this weekend.
Offline

enebre Re: Conky Control (Live Voyager)
Hello, how could I write the disk-usage line for a removable drive so that when it isn't mounted, the conky stops querying the drive every second?
${color}ESPACE DISQUE:${color3}${hr}
${color3}/ ${color2}${goto 50}${fs_used /} / ${fs_size /} ${goto 165}${fs_used_perc /}% ${goto 195}${color4}${fs_bar /}
${color3}D-n-g ${color2}${goto 50}${fs_used /media/marc/Drive-n-go} / ${fs_size /media/marc/Drive-n-go} ${goto 165}${fs_used_perc /media/marc/Drive-n-go}% ${goto 195}${color4}${fs_bar /media/marc/Drive-n-go}

Last edited by enebre (28/07/2014 at 09:28)
Offline

ljere Re: Conky Control (Live Voyager)
Look here — it uses conky's if function: http://conky.pitstop.free.fr/wiki/index … s_%28fr%29
Online

enebre Re: Conky Control (Live Voyager)
Hello ljere, sorry, that doesn't work!

${color3}D-n-g ${color2}${goto 50}${fs_used /media/marc/Drive-n-go} / ${fs_size /media/marc/Drive-n-go} ${goto 165}${fs_used_perc /media/marc/Drive-n-go}% ${goto 195}${color4}${fs_bar /media/marc/Drive-n-go}${if_existing /media/marc/Drive-n-go}${endif}
${color3}data ${color2}${goto 50}${fs_used /media/marc/data} / ${fs_size /media/marc/data} ${goto 165}${fs_used_perc /media/marc/data}% ${goto 195}${color4}${fs_bar /media/marc/data}${if_existing /media/marc/Drive-n-go}${endif}

Conky: statfs64 '/media/marc/Drive-n-go': No such file or directory
Conky: statfs64 '/media/marc/Drive-n-go': No such file or directory
Conky: statfs64 '/media/marc/Drive-n-go': No such file or directory
Conky: statfs64 '/media/marc/Drive-n-go': No such file or directory
Conky: statfs64 '/media/marc/Drive-n-go': No such file or directory
Conky: statfs64 '/media/marc/Drive-n-go': No such file or directory
Conky: statfs64 '/media/marc/Drive-n-go': No such file or directory
Conky: statfs64 '/media/marc/Drive-n-go': No such file or directory

I think the error may come from the fact that no media is mounted in /mnt, and in the proposed conky the line is actually:

${color2}Root:$color ${fs_used /}/${fs_size /} - ${alignr}${fs_free_perc /}% Free${if_existing /media/disk} ${color2}Disk:$color ${fs_used /media/disk}/${fs_size /media/disk| -
${alignr}${fs_free_perc /media/disk}% Free${endif}${if_existing /mnt/Votre_Volume}

The ${if_existing /mnt/Votre_Volume} test can never be satisfied, since on my machine nothing ever appears in /mnt... so what should I do?
Last edited by enebre (28/07/2014 at 11:22)
Offline
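A hedged note on the attempt quoted above: in conky, ${if_existing path} has to open *before* the ${fs_…} variables it is meant to guard, and ${endif} has to close *after* them; appending an empty ${if_existing}${endif} pair at the end of the line guards nothing, which is why the statfs64 errors keep coming. A sketch of the removable-drive line with the test wrapped around it (the /media/marc/Drive-n-go mount point is taken from the post; behaviour may still vary by conky version):

```
${color}ESPACE DISQUE:${color3}${hr}
${color3}/ ${color2}${goto 50}${fs_used /} / ${fs_size /} ${goto 165}${fs_used_perc /}% ${goto 195}${color4}${fs_bar /}
${if_existing /media/marc/Drive-n-go}${color3}D-n-g ${color2}${goto 50}${fs_used /media/marc/Drive-n-go} / ${fs_size /media/marc/Drive-n-go} ${goto 165}${fs_used_perc /media/marc/Drive-n-go}% ${goto 195}${color4}${fs_bar /media/marc/Drive-n-go}${endif}
```

When the drive is unmounted, the guarded part of the line is skipped instead of being polled on every update interval. This mirrors the placement used in the wiki example ljere linked, just with the poster's own mount point instead of /mnt/Votre_Volume.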
Shortly after I wrote my news article on the Python wiki program MoinMoin, Jürgen Hermann announced a new version with this notice: "This is a security update, which explains the short release cycle. It replaces some exec() calls by __import__(), which is much safer (or actually, safe in contrast to totally unsafe). IF YOU HAVE INSTALLED RELEASE 0.5 OR 0.6, UPGRADE NOW!" Because of the short release cycle, the code couldn't be very different between 0.6 and 0.7. That would make the changes easy to spot. I love learning opportunities. I got a copy of 0.6 to compare with 0.7.

There are several places in MoinMoin where Hermann wanted to use the same code but choose between different libraries of functions to be used by that code. This is called a plug-in. Hermann uses plug-ins to select formatters based on MIME type, or to select a particular parser or extension macro set based on the type of information submitted from a form. The name of the plug-in module to use is taken from information passed by CGI. For example, here is the code used to import a formatter:

exec("from formatter.%s import Formatter" % (string.replace(mimetype, '/', '_'),))

However, mimetype is taken from information passed to the program from an untrustworthy outside source. This may not seem like a big security leak, but any leak is a dangerous thing. Someone could monkey with the string passed to mimetype. The exec function will execute whatever code is passed to it. Using exec, you might unwittingly execute a dangerous block of code. Here is how Hermann fixed it. He replaced these exec strings with this utility function:

def importName(modulename, name):
    """ Import a named object from a module in the context of this function,
        i.e. use fully qualified module paths.
        Return None on failure.
    """
    try:
        module = __import__(modulename, globals(), locals(), [name])
    except ImportError:
        return None
    return vars(module)[name]

The __import__ function only imports the specified code.
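To see why this closes the hole, here is a minimal sketch of importName in use (my illustration, not from the MoinMoin source; I use `.get` where Hermann indexes directly, so a missing attribute also yields None). A well-formed module name imports cleanly; a tampered mimetype is no longer executable text — it is merely a module name that fails to import, so __import__ raises ImportError (ModuleNotFoundError in modern Python is a subclass) and the function returns None:

```python
def import_name(modulename, name):
    """Import `name` from `modulename`; return None on any import failure."""
    try:
        module = __import__(modulename, globals(), locals(), [name])
    except ImportError:  # ModuleNotFoundError is a subclass in Python 3
        return None
    return vars(module).get(name)

# Legitimate plug-in lookup: fetch json.dumps by name.
dumps = import_name("json", "dumps")
print(dumps({"spam": 1}))            # {"spam": 1}

# Tampered input: under exec() this string could have smuggled in code;
# under __import__ it is just an unimportable module name.
print(import_name("os; import os", "system"))  # None
```

The key design point is that __import__ treats its argument as data (a dotted module path) rather than as source code, so the worst a hostile string can do is fail to resolve.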
If the mimetype (passed to this function as modulename) were manipulated now, you would only get an import error, not a surprise intruder. It's errors like these that the taint mode in Perl was designed to catch. In that paranoid mode, information received from outside the program, like the string assigned to mimetype in this case, is rejected as tainted unless it is first checked by a regular expression. It throws in an extra step designed to make you stop and think before leaving the barn door open. While Python doesn't have an equivalent to taint mode, it does have a module to help you restrict what might be dangerous code: the restricted execution module, RExec. The RExec module is not needed in this instance, because Hermann only needs to import installed code, not evaluate user-supplied code. For this purpose, the __import__ function does nicely. If, however, you work with a lot of CGI, you should study up on RExec.

Stephen Figgins administers Linux servers for Sunflower Broadband, a cable company. Read more Python News columns.
Copyright © 2009 O'Reilly Media, Inc.
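Perl's taint discipline — untainting a value only after it matches a regular expression — is easy to imitate by hand in Python. A small sketch (my illustration, not from the article or from MoinMoin) that whitelists the shape a MIME-type-derived module name may take before it is ever handed to __import__:

```python
import re

# Only dotted identifiers built from word characters may name a plug-in module.
_SAFE_MODULE = re.compile(r'^[A-Za-z_]\w*(\.[A-Za-z_]\w*)*$')

def untaint_module_name(mimetype):
    """Turn 'text/plain' into 'text_plain' only if the result looks like a module name."""
    candidate = mimetype.replace('/', '_')
    if not _SAFE_MODULE.match(candidate):
        raise ValueError("tainted module name: %r" % mimetype)
    return candidate

print(untaint_module_name("text/plain"))  # text_plain
```

Anything that is not a plain dotted identifier — spaces, semicolons, quotes — is rejected before it reaches the import machinery, which is exactly the "stop and think" step taint mode enforces.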
oliver2004 — Wi-Fi problem with an HP nx6125 laptop...
Hi everyone, I've just installed Kubuntu 7.10 without too much trouble on my HP Compaq nx6125 laptop (I'm lucky, it isn't BIOS-tattooed...). Apparently everything on the machine works... except the Wi-Fi. I don't need it right now, but I surely will, and in any case it's nice when everything works. The card itself is functional, since there's no problem under Windows, but nothing I do gets it working under Kubuntu. I followed the tutorial at http://doc.ubuntu-fr.org/materiel/liste … paq_nx6125 but it didn't help. Here is what I get with the commands indicated in the pinned topic:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=7.10
DISTRIB_CODENAME=gutsy
DISTRIB_DESCRIPTION="Ubuntu 7.10"

olivier@kubuntu:~$ lsusb
Bus 003 Device 001: ID 0000:0000
Bus 002 Device 005: ID 05a4:9862 Ortek Technology, Inc.
Bus 002 Device 004: ID 05e3:1205 Genesys Logic, Inc. Afilias Optical Mouse H3003
Bus 002 Device 003: ID 05a4:9837 Ortek Technology, Inc.
Bus 002 Device 002: ID 08ff:2580 AuthenTec, Inc.
Bus 002 Device 001: ID 0000:0000
Bus 001 Device 001: ID 0000:0000

olivier@kubuntu:~$ lspci
00:00.0 Host bridge: ATI Technologies Inc RS480 Host Bridge (rev 01)
00:01.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge
00:04.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge
00:05.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge
00:13.0 USB Controller: ATI Technologies Inc IXP SB400 USB Host Controller
00:13.1 USB Controller: ATI Technologies Inc IXP SB400 USB Host Controller
00:13.2 USB Controller: ATI Technologies Inc IXP SB400 USB2 Host Controller
00:14.0 SMBus: ATI Technologies Inc IXP SB400 SMBus Controller (rev 11)
00:14.1 IDE interface: ATI Technologies Inc Standard Dual Channel PCI IDE Controller
00:14.3 ISA bridge: ATI Technologies Inc IXP SB400 PCI-ISA Bridge
00:14.4 PCI bridge: ATI Technologies Inc IXP SB400 PCI-PCI Bridge
00:14.5 Multimedia audio controller: ATI Technologies Inc IXP SB400 AC'97 Audio Controller (rev 02)
00:14.6 Modem: ATI Technologies Inc SB400 AC'97 Modem Controller (rev 02)
00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
01:05.0 VGA compatible controller: ATI Technologies Inc Radeon XPRESS 200M 5955 (PCIE)
02:01.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5788 Gigabit Ethernet (rev 03)
02:02.0 Network controller: Broadcom Corporation BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller (rev 02)
02:04.0 CardBus bridge: Texas Instruments PCIxx21/x515 Cardbus Controller
02:04.2 FireWire (IEEE 1394): Texas Instruments OHCI Compliant IEEE 1394 Host Controller
02:04.3 Mass storage controller: Texas Instruments PCIxx21 Integrated FlashMedia Controller
02:04.4 Generic system peripheral [0805]:
Texas Instruments PCI6411/6421/6611/6621/7411/7421/7611/7621 Secure Digital Controller

olivier@kubuntu:~$ sudo lshw -C network
[sudo] password for olivier:
  *-network:0
       description: Ethernet interface
       product: NetXtreme BCM5788 Gigabit Ethernet
       vendor: Broadcom Corporation
       physical id: 1
       bus info: pci@0000:02:01.0
       logical name: eth0
       version: 03
       serial: 00:0f:b0:b9:7c:f6
       size: 100MB/s
       capacity: 1GB/s
       width: 32 bits
       clock: 66MHz
       capabilities: pm vpd msi bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.77 duplex=full firmware=5788-v3.26 ip=192.168.1.21 latency=64 link=yes mingnt=64 module=tg3 multicast=yes port=twisted pair speed=100MB/s
  *-network:1 UNCLAIMED
       description: Network controller
       product: BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller
       vendor: Broadcom Corporation
       physical id: 2
       bus info: pci@0000:02:02.0
       version: 02
       width: 32 bits
       clock: 33MHz
       capabilities: bus_master
       configuration: latency=64

olivier@kubuntu:~$ lsmod
Module                  Size  Used by
sg                     41384  0
ipv6                  317192  8
af_packet              28172  2
rfcomm                 47656  2
l2cap                  28672  11 rfcomm
bluetooth              63876  4 rfcomm,l2cap
radeon                129824  2
drm                   106408  3 radeon
ppdev                  11272  0
powernow_k8            16608  0
cpufreq_stats           8160  0
cpufreq_ondemand       10896  0
cpufreq_powersave       3072  0
cpufreq_userspace       6048  0
freq_table              6464  3 powernow_k8,cpufreq_stats,cpufreq_ondemand
cpufreq_conservative    9608  0
video                  21140  0
battery                12424  0
sbs                    21520  0
container               6400  0
button                 10400  0
dock                   12264  0
ac                      7304  0
ndiswrapper           233632  0
sbp2                   27144  0
lp                     15048  0
joydev                 13440  0
pcmcia                 46232  0
snd_atiixp             24084  2
snd_seq_dummy           5380  0
tifm_7xx1              10112  0
tifm_core              13832  1 tifm_7xx1
sdhci                  21004  0
mmc_core               33416  1 sdhci
pcspkr                  4608  0
xpad                   11400  0
parport_pc             41896  1
parport                44172  3 ppdev,lp,parport_pc
serio_raw               9092  0
psmouse                45596  0
k8temp                  7680  0
snd_seq_oss            36864  0
yenta_socket           30220  1
rsrc_nonstatic         14208  1 yenta_socket
pcmcia_core            46628  3
pcmcia,yenta_socket,rsrc_nonstatic
snd_seq_midi           11008  0
snd_rawmidi            29824  1 snd_seq_midi
snd_atiixp_modem       19468  1
snd_ac97_codec        122200  2 snd_atiixp,snd_atiixp_modem
ac97_bus                4096  1 snd_ac97_codec
snd_seq_midi_event      9984  2 snd_seq_oss,snd_seq_midi
snd_pcm_oss            50048  0
snd_mixer_oss          20096  1 snd_pcm_oss
snd_seq                62496  6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_seq_device         10260  5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
snd_pcm                94344  4 snd_atiixp,snd_atiixp_modem,snd_ac97_codec,snd_pcm_oss
snd_timer              27272  2 snd_seq,snd_pcm
snd                    69288 17 snd_atiixp,snd_seq_oss,snd_rawmidi,snd_atiixp_modem,snd_ac97_codec,snd_pcm_oss,snd_mixer_oss,snd_seq,snd_seq_device,snd_pcm,snd_timer
soundcore              10272  1 snd
snd_page_alloc         12560  3 snd_atiixp,snd_atiixp_modem,snd_pcm
i2c_piix4              11020  0
i2c_core               30208  1 i2c_piix4
shpchp                 38300  0
pci_hotplug            36612  1 shpchp
evdev                  13056  7
ext3                  146576  2
jbd                    69360  1 ext3
mbcache                11272  1 ext3
ide_cd                 35488  0
cdrom                  41768  1 ide_cd
ide_disk               20352  6
ata_generic             9988  0
libata                138928  1 ata_generic
scsi_mod              172856  3 sg,sbp2,libata
usbhid                 32576  0
hid                    33408  1 usbhid
ohci1394               38984  0
ieee1394              109528  2 sbp2,ohci1394
tg3                   118788  0
atiixp                  7824  0 [permanent]
ide_core              141200  3 ide_cd,ide_disk,atiixp
ehci_hcd               40076  0
ohci_hcd               25092  0
usbcore               161584  6 ndiswrapper,xpad,usbhid,ehci_hcd,ohci_hcd
thermal                16528  0
processor              36232  2 powernow_k8,thermal
fan                     6920  0
fuse                   52528  5
apparmor               47008  0
commoncap               9472  1 apparmor

olivier@kubuntu:~$ iwconfig
lo        no wireless extensions.
eth0      no wireless extensions.
olivier@kubuntu:~$ ifconfig
eth0      Lien encap:Ethernet  HWaddr 00:0F:B0:B9:7C:F6
          inet adr:192.168.1.21  Bcast:192.168.1.255  Masque:255.255.255.0
          adr inet6: fe80::20f:b0ff:feb9:7cf6/64 Scope:Lien
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Packets reçus:252387 erreurs:0 :0 overruns:0 frame:0
          TX packets:177429 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 lg file transmission:1000
          Octets reçus:275552682 (262.7 MB)  Octets transmis:18439639 (17.5 MB)
          Interruption:10

lo        Lien encap:Boucle locale
          inet adr:127.0.0.1  Masque:255.0.0.0
          adr inet6: ::1/128 Scope:Hôte
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          Packets reçus:20 erreurs:0 :0 overruns:0 frame:0
          TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 lg file transmission:0
          Octets reçus:1620 (1.5 KB)  Octets transmis:1620 (1.5 KB)

olivier@kubuntu:~$ iwlist scan
lo        Interface doesn't support scanning.
eth0      Interface doesn't support scanning.

olivier@kubuntu:~$ uname -r -m
2.6.22-14-generic x86_64

olivier@kubuntu:~$ cat /etc/network/interfaces
auto lo
iface lo inet loopback

So apparently, if I've understood correctly, the driver is installed but the card isn't recognized? Could someone give me a lead?? Thanks in advance :)
At the office: Ubuntu Server Edition 10.04. My laptop: Kubuntu 14.04 on a DELL Inspiron 14 (Ubuntu 12.04 when purchased). Other office machines: Kubuntu 14.04 on 2 Compaq CQ42, Kubuntu 14.04 on HP 550, Kubuntu 14.04 on DELL Inspiron 14, Lubuntu 12.04 on a Compaq CQ10-600la netbook... On a very old tower: Lubuntu 12.04... it all runs smoothly.
Offline

willy78 Re: Wi-Fi problem with an HP nx6125 laptop...
lspci:
00:0b.0 Network controller: Broadcom Corporation BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller (rev 02)

All of this with an active connection. Download the appropriate driver (bcmwl5):

cd
wget http://dlsvr03.asus.com/pub/ASUS/wireless/WL-100g-03/Driverv3100640.zip
unzip Driverv3100640.zip

Then install ndiswrapper:

sudo apt-get install ndiswrapper-utils-1.9

Load the driver into ndiswrapper:

sudo ndiswrapper -i ~/Driver/WinXP/bcmwl5.inf

and check that it loaded correctly (before doing anything else):

ndiswrapper -l   # that's a lowercase L

If it prints something that looks like this:

bcmwl5 : driver installed
device (14E4:4318) present (alternate driver: bcm43xx)

you can continue! To unload the module loaded automatically by Gutsy:

sudo rmmod bcm43xx

To load ndiswrapper:

sudo ndiswrapper -m
sudo modprobe ndiswrapper

Blacklist the module Gutsy loads automatically:

echo 'blacklist bcm43xx' | sudo tee -a /etc/modprobe.d/blacklist

Add ndiswrapper to the list of modules loaded at boot:

echo 'ndiswrapper' | sudo tee -a /etc/modules

Edit the interfaces file:

echo -e 'auto lo\niface lo inet loopback\n' | sudo tee /etc/network/interfaces

Then connect with network-manager (the two little screens at the top right, next to the clock)!
Offline

oliver2004 Re: Wi-Fi problem with an HP nx6125 laptop...
Hi, thanks for the reply. :) Here's where I am; it's getting stuck somewhere:

olivier@kubuntu:~$ wget http://dlsvr03.asus.com/pub/ASUS/wireless/WL-100g-03/Driverv3100640.zip
--10:49:00--  http://dlsvr03.asus.com/pub/ASUS/wireless/WL-100g-03/Driverv3100640.zip
           => `Driverv3100640.zip'
Résolution de dlsvr03.asus.com... 122.213.112.90
Connexion vers dlsvr03.asus.com|122.213.112.90|:80... connecté.
requête HTTP transmise, en attente de la réponse...
200 OK
Longueur: 926 968 (905K) [application/x-zip-compressed]
100%[====================================>] 926 968  141.47K/s  ETA 00:00
10:49:07 (141.27 KB/s) - « Driverv3100640.zip » sauvegardé [926968/926968]

--> File download: OK.

olivier@kubuntu:~$ unzip Driverv3100640.zip
Archive:  Driverv3100640.zip
   creating: Driver/
   creating: Driver/WinXP/
  inflating: Driver/WinXP/bcm43xx.cat
  inflating: Driver/WinXP/bcmwl5.inf
  inflating: Driver/WinXP/bcmwl5.sys
   creating: Driver/WinME/
 extracting: Driver/WinME/bcm43xxa.cat
  inflating: Driver/WinME/bcmwl5.sys
  inflating: Driver/WinME/bcmwl5a.inf
   creating: Driver/Win98/
 extracting: Driver/Win98/bcm43xxa.cat
  inflating: Driver/Win98/bcmwl5.sys
  inflating: Driver/Win98/bcmwl5a.inf
   creating: Driver/Win2K/
  inflating: Driver/Win2K/bcm43xx.cat
  inflating: Driver/Win2K/bcmwl5.inf
  inflating: Driver/Win2K/bcmwl5.sys

--> Unzipping the file: OK.

olivier@kubuntu:~$ sudo apt-get install ndiswrapper-utils-1.9
[sudo] password for olivier:
Lecture des listes de paquets... Fait
Construction de l'arbre des dépendances
Lecture des informations d'état... Fait
ndiswrapper-utils-1.9 est déjà la plus récente version disponible.
Les paquets suivants ont été installés automatiquement et ne sont plus nécessaires :
  python-cairo python-gtk2 python-numeric
Veuillez utiliser « apt-get autoremove » pour les supprimer.
0 mis à jour, 0 nouvellement installés, 0 à enlever et 0 non mis à jour.

--> Installing ndiswrapper-utils-1.9: OK.

olivier@kubuntu:~$ sudo ndiswrapper -i ~/Driver/WinXP/bcmwl5.inf
installing bcmwl5 ...
forcing parameter IBSSGMode from 0 to 2
(the same line printed eleven times)

olivier@kubuntu:~$ ndiswrapper -l
bcmwl5 : driver installed
	device (14E4:4318) present (alternate driver: bcm43xx)
bcmwl5a : driver installed
	device (14E4:4318) present (alternate driver: bcm43xx)

--> Driver loaded and verified: OK.

olivier@kubuntu:~$ sudo rmmod bcm43xx
ERROR: Module bcm43xx does not exist in /proc/modules    #### ERROR

In fact, I opened the /proc/modules file with Kate and it's empty — there's nothing in it.

olivier@kubuntu:~$ sudo ndiswrapper -m
module configuration already contains alias directive    ### There is already a configuration directive

olivier@kubuntu:~$ sudo modprobe ndiswrapper
olivier@kubuntu:~$ echo 'blacklist bcm43xx' | sudo tee -a /etc/modprobe.d/blacklist
blacklist bcm43xx

--> OK, bcm43xx is properly blacklisted.... The /proc/modules file stays empty, nothing in it. Is that normal? I then tried to connect through KNetworkManager, but the Network tab stays inactive. :/
Last edited by oliver2004 (03/02/2008 at 13:02)
Offline

willy78 Re: Wi-Fi problem with an HP nx6125 laptop...
There's one driver too many — hence the problem!
sudo ndiswrapper -e bcmwl5a

Reboot, then post the output of:

sudo lshw -C network
iwlist scanning

As for /proc/modules: it doesn't matter! Since bcm43xx was already blacklisted after your attempt with the bcmwl5a driver, it's normal that it isn't loaded — and that's just as well!
Last edited by willy78 (03/02/2008 at 13:43)
Offline

oliver2004 Re: Wi-Fi problem with an HP nx6125 laptop...
Here is the output:

[sudo] password for olivier:
  *-network:0
       description: Ethernet interface
       product: NetXtreme BCM5788 Gigabit Ethernet
       vendor: Broadcom Corporation
       physical id: 1
       bus info: pci@0000:02:01.0
       logical name: eth0
       version: 03
       serial: 00:0f:b0:b9:7c:f6
       size: 100MB/s
       capacity: 1GB/s
       width: 32 bits
       clock: 66MHz
       capabilities: pm vpd msi bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.77 duplex=full firmware=5788-v3.26 ip=192.168.1.21 latency=64 link=yes mingnt=64 module=tg3 multicast=yes port=twisted pair speed=100MB/s
  *-network:1 UNCLAIMED
       description: Network controller
       product: BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller
       vendor: Broadcom Corporation
       physical id: 2
       bus info: pci@0000:02:02.0
       version: 02
       width: 32 bits
       clock: 33MHz
       capabilities: bus_master
       configuration: latency=64

olivier@kubuntu:~$ iwlist scanning
lo        Interface doesn't support scanning.
eth0      Interface doesn't support scanning.

Apparently it's still not working. Hmm, I see this: "width: 32 bits" — I'm on 64-bit, couldn't that be the error?
Last edited by oliver2004 (03/02/2008 at 13:53)
Offline
Offline

willy78
Re: wifi problem with an HP nx6125 laptop...

sudo modprobe -r ndiswrapper
sudo modprobe ndiswrapper

then post the output of:

sudo lshw -C network
iwlist scanning

Offline

oliver2004
Re: wifi problem with an HP nx6125 laptop...

olivier@kubuntu:~$ sudo modprobe -r ndiswrapper
olivier@kubuntu:~$ sudo modprobe ndiswrapper
olivier@kubuntu:~$ sudo lshw -C network
*-network:0
     description: Ethernet interface
     product: NetXtreme BCM5788 Gigabit Ethernet
     vendor: Broadcom Corporation
     physical id: 1
     bus info: pci@0000:02:01.0
     logical name: eth0
     version: 03
     serial: 00:0f:b0:b9:7c:f6
     size: 100MB/s
     capacity: 1GB/s
     width: 32 bits
     clock: 66MHz
     capabilities: pm vpd msi bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
     configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.77 duplex=full firmware=5788-v3.26 ip=192.168.1.21 latency=64 link=yes mingnt=64 module=tg3 multicast=yes port=twisted pair speed=100MB/s
*-network:1 UNCLAIMED
     description: Network controller
     product: BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller
     vendor: Broadcom Corporation
     physical id: 2
     bus info: pci@0000:02:02.0
     version: 02
     width: 32 bits
     clock: 33MHz
     capabilities: bus_master
     configuration: latency=64
olivier@kubuntu:~$ iwlist scanning
lo        Interface doesn't support scanning.
eth0      Interface doesn't support scanning.

Still at the same point, I think.

Offline

willy78
Re: wifi problem with an HP nx6125 laptop...

Darn, I hadn't noticed you are on 64 bits, so that's a dead end for ndiswrapper; too bad, it's the best solution!
You will have to go through the proprietary drivers, so remove bcm43xx from the blacklist file:

sudo gedit /etc/modprobe.d/blacklist

and delete all the lines containing "blacklist bcm43xx". Then go to the menu System > Administration > Restricted Drivers Manager and tick the driver for your card (you still need an ethernet connection!). If that doesn't work, switch to 32 bits (64 bits is not at all mature yet, and not only under Linux; better to wait a few years before choosing that route!)

Offline

oliver2004
Re: wifi problem with an HP nx6125 laptop...

Ah darn... so I will no doubt have to redo the install in 32 bits? Well, it's a pain but never mind, and I had just managed to get my system configured... 32 bits is probably more stable anyway. In the meantime, I tried to enable the modem driver, but the message was: "the software sources for the package sl-modem-daemon are not enabled". Where can I find these sources to enable them?

Offline

willy78
Re: wifi problem with an HP nx6125 laptop...

I don't know which modem you are talking about, but try this just in case, you never know:

sudo apt-get install sl-modem-daemon

Offline

oliver2004
Re: wifi problem with an HP nx6125 laptop...

Well, it's the only thing left to install among the proprietary drivers, so I assumed that was it; but after a bit of research I see it is the fax modem, so nothing to do with the wifi... I am therefore planning to switch to 32 bits... the download has actually already finished, but there is no blank CD within reach...
I thought it was better to switch to 64 bits to take advantage of all the system's resources, but if it isn't mature yet, I might as well stay on 32 bits... :) I'll redo the wifi install afterwards and will keep you posted. Thanks in any case for your help.

Offline

willy78
Re: wifi problem with an HP nx6125 laptop...

OK, bye, and see you again on this topic if needed. With 64 bits you need luck with the hardware, otherwise it's a real ordeal. On top of that, software isn't really ported to 64 bits yet, it's just 32-bit code compiled for 64 bits, so optimization-wise it gains you nothing, 4 to 10% better at most!

Offline

oliver2004
Re: wifi problem with an HP nx6125 laptop...

Understood; indeed, it's not much use yet, especially if it blocks on certain things... so I'm switching to 32 bits, and 64 bits can wait... Thanks willy78

Offline

oliver2004
Re: wifi problem with an HP nx6125 laptop...

Hello again, :) I have switched to the 32-bit distribution and redone the steps for the wifi card. No errors appear, and I can now turn the laptop's wifi button on and off; before, it stayed on or off without changing... However, the wifi doesn't work any better...
The Network tab in KNetworkManager is still not enabled. Did I miss something?

olivier@kubuntu:~$ sudo lshw -C network
*-network:0
     description: Ethernet interface
     product: NetXtreme BCM5788 Gigabit Ethernet
     vendor: Broadcom Corporation
     physical id: 1
     bus info: pci@0000:02:01.0
     logical name: eth0
     version: 03
     serial: 00:0f:b0:b9:7c:f6
     size: 100MB/s
     capacity: 1GB/s
     width: 32 bits
     clock: 66MHz
     capabilities: pm vpd msi bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
     configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.77 duplex=full firmware=5788-v3.26 ip=192.168.1.21 latency=64 link=yes mingnt=64 module=tg3 multicast=yes port=twisted pair speed=100MB/s
*-network:1
     description: Wireless interface
     product: BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller
     vendor: Broadcom Corporation
     physical id: 2
     bus info: pci@0000:02:02.0
     logical name: eth1
     version: 02
     serial: 00:14:a5:2a:02:4d
     width: 32 bits
     clock: 33MHz
     capabilities: bus_master ethernet physical wireless
     configuration: broadcast=yes driver=ndiswrapper+bcmwl5 driverversion=1.45+ASUS,02/11/2005, 3.100.64.0 latency=64 link=no module=ndiswrapper multicast=yes wireless=IEEE 802.11g
olivier@kubuntu:~$ iwlist scanning
lo        Interface doesn't support scanning.
eth0      Interface doesn't support scanning.
eth1      Scan completed :
          Cell 01 - Address: 00:1A:6B:C1:51:D1
                    ESSID:"Livebox-6547"
                    Protocol:IEEE 802.11g
                    Mode:Managed
                    Frequency:2.457 GHz (Channel 10)
                    Quality:78/100  Signal level:-46 dBm  Noise level:-96 dBm
                    Encryption key:on
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
                              9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s
                              48 Mb/s; 54 Mb/s
                    Extra:bcn_int=100
                    Extra:atim=0
          Cell 02 - Address: 00:14:A4:6C:AD:EE
                    ESSID:"WANADOO-B330"
                    Protocol:IEEE 802.11g
                    Mode:Managed
                    Frequency:2.412 GHz (Channel 1)
                    Quality:40/100  Signal level:-70 dBm  Noise level:-96 dBm
                    Encryption key:on
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s
                              24 Mb/s; 36 Mb/s; 54 Mb/s; 6 Mb/s; 9 Mb/s
                              12 Mb/s; 48 Mb/s
                    Extra:bcn_int=100
                    Extra:atim=0
          Cell 03 - Address: 00:11:24:62:10:1D
                    ESSID:"Cabinet Bonnel"
                    Protocol:IEEE 802.11g
                    Mode:Managed
                    Frequency:2.437 GHz (Channel 6)
                    Quality:60/100  Signal level:-57 dBm  Noise level:-96 dBm
                    Encryption key:on
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
                              9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s
                              48 Mb/s; 54 Mb/s
                    Extra:bcn_int=100
                    Extra:atim=0
                    IE: WPA Version 1
                        Group Cipher : WEP-40
                        Pairwise Ciphers (1) : WEP-40
                        Authentication Suites (1) : PSK

So there is progress, but something is still missing... Do you have any leads? :P

Last edited by oliver2004 (06/02/2008, 13:56)

Offline

willy78
Re: wifi problem with an HP nx6125 laptop...

Ah, now that works, it scans the networks! Has the following been done?
Blacklist the module loaded automatically by Gutsy:

echo 'blacklist bcm43xx' | sudo tee -a /etc/modprobe.d/blacklist

Put ndiswrapper in the list of modules loaded at boot:

echo 'ndiswrapper' | sudo tee -a /etc/modules

Reset the interfaces file:

echo -e 'auto lo\niface lo inet loopback\n' | sudo tee /etc/network/interfaces

and connect with network-manager (the two little screens at the top right, next to the clock)!

Last edited by willy78 (06/02/2008, 13:59)

Offline

oliver2004
Re: wifi problem with an HP nx6125 laptop...

Hi, yes, all of that has been done. Actually I think I made a blunder... I foolishly installed wilan, which apparently uninstalled knetworkmanager... I uninstalled wilan again and reinstalled knetworkmanager... so it is installed, and I enabled eth1, which corresponds to the wifi... but now when I open knetworkmanager no tab is active any more except the first one, whereas before all of them were active except the wifi one... Under Device it shows: No active device... hmm, not easy.
Testing again:

*-network:0
     description: Ethernet interface
     product: NetXtreme BCM5788 Gigabit Ethernet
     vendor: Broadcom Corporation
     physical id: 1
     bus info: pci@0000:02:01.0
     logical name: eth0
     version: 03
     serial: 00:0f:b0:b9:7c:f6
     size: 100MB/s
     capacity: 1GB/s
     width: 32 bits
     clock: 66MHz
     capabilities: pm vpd msi bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
     configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.77 duplex=full firmware=5788-v3.26 ip=192.168.1.21 latency=64 link=yes mingnt=64 module=tg3 multicast=yes port=twisted pair speed=100MB/s
*-network:1
     description: Wireless interface
     product: BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller
     vendor: Broadcom Corporation
     physical id: 2
     bus info: pci@0000:02:02.0
     logical name: eth1
     version: 02
     serial: 00:14:a5:2a:02:4d
     width: 32 bits
     clock: 33MHz
     capabilities: bus_master ethernet physical wireless
     configuration: broadcast=yes driver=ndiswrapper+bcmwl5 driverversion=1.45+ASUS,02/11/2005, 3.100.64.0 latency=64 link=no module=ndiswrapper multicast=yes wireless=IEEE 802.11g
olivier@kubuntu:~$ iwlist scanning
lo        Interface doesn't support scanning.
eth0      Interface doesn't support scanning.
eth1      Scan completed :
          Cell 01 - Address: 00:1A:6B:C1:51:D1
                    ESSID:"Livebox-6547"
                    Protocol:IEEE 802.11g
                    Mode:Managed
                    Frequency:2.457 GHz (Channel 10)
                    Quality:81/100  Signal level:-44 dBm  Noise level:-96 dBm
                    Encryption key:on
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
                              9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s
                              48 Mb/s; 54 Mb/s
                    Extra:bcn_int=100
                    Extra:atim=0
          Cell 02 - Address: 00:14:A4:6C:AD:EE
                    ESSID:"WANADOO-B330"
                    Protocol:IEEE 802.11g
                    Mode:Managed
                    Frequency:2.412 GHz (Channel 1)
                    Quality:37/100  Signal level:-72 dBm  Noise level:-96 dBm
                    Encryption key:on
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s
                              24 Mb/s; 36 Mb/s; 54 Mb/s; 6 Mb/s; 9 Mb/s
                              12 Mb/s; 48 Mb/s
                    Extra:bcn_int=100
                    Extra:atim=0
          Cell 03 - Address: 00:11:24:62:10:1D
                    ESSID:"Cabinet Bonnel"
                    Protocol:IEEE 802.11g
                    Mode:Managed
                    Frequency:2.437 GHz (Channel 6)
                    Quality:64/100  Signal level:-55 dBm  Noise level:-96 dBm
                    Encryption key:on
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
                              9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s
                              48 Mb/s; 54 Mb/s
                    Extra:bcn_int=100
                    Extra:atim=0
                    IE: WPA Version 1
                        Group Cipher : WEP-40
                        Pairwise Ciphers (1) : WEP-40
                        Authentication Suites (1) : PSK
          Cell 04 - Address: 00:13:46:6F:C7:84
                    ESSID:"FPH"
                    Protocol:IEEE 802.11g
                    Mode:Managed
                    Frequency:2.457 GHz (Channel 10)
                    Quality:6/100  Signal level:-92 dBm  Noise level:-96 dBm
                    Encryption key:on
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
                              12 Mb/s; 24 Mb/s; 36 Mb/s; 9 Mb/s; 18 Mb/s
                              48 Mb/s; 54 Mb/s
                    Extra:bcn_int=100
                    Extra:atim=0

So if I understand correctly, the different "Cells" are the different wifi networks in the area? My Livebox, in any case, is properly detected. Or does some step also need to be done on the Livebox itself?
Last edited by oliver2004 (06/02/2008, 15:09)

Offline

oliver2004
Re: wifi problem with an HP nx6125 laptop...

Up?

Offline

willy78
Re: wifi problem with an HP nx6125 laptop...

You say "I enabled eth1"; meaning what? Anyway, try this on the command line:

sudo iwconfig eth1 essid Livebox-6547
sudo iwconfig eth1 key Your_secret_key
sudo dhclient eth1

and post the output of the last command!

Offline

oliver2004
Re: wifi problem with an HP nx6125 laptop...

Hi willy78. By "enabling eth1" I mean in knetworkmanager, manual configuration, network interfaces. eth1 was disabled, so I figured it couldn't work like that... I therefore enabled the interface, but with no further results. Here is the output of the last command:

olivier@kubuntu:~$ sudo dhclient eth1
Internet Systems Consortium DHCP Client V3.0.5
Copyright 2004-2006 Internet Systems Consortium.
All rights reserved.
For info, please visit http://www.isc.org/sw/dhcp/

Listening on LPF/eth1/00:14:a5:2a:02:4d
Sending on   LPF/eth1/00:14:a5:2a:02:4d
Sending on   Socket/fallback
DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 5
DHCPOFFER from 192.168.1.1
DHCPREQUEST on eth1 to 255.255.255.255 port 67
DHCPACK from 192.168.1.1
bound to 192.168.1.10 -- renewal in 249034 seconds.
There does seem to be a response; however, I still don't have internet over wifi...

Last edited by oliver2004 (07/02/2008, 10:33)

Offline

willy78
Re: wifi problem with an HP nx6125 laptop...

That is exactly what you must not do (do not use manual configuration). It is incompatible with knetworkmanager: the /etc/network/interfaces file must contain only this:

auto lo
iface lo inet loopback

Manual configuration writes entries into that file, and then knetworkmanager can no longer manage the connection!

"bound to 192.168.1.10 -- renewal in 249034 seconds." means you are connected, but manually, so the link itself works. This command:

echo -e 'auto lo\niface lo inet loopback\n' | sudo tee /etc/network/interfaces

restores what the interfaces file should contain. After running it, wait a minute, then left-click on knetworkmanager (under KDE I believe it is at the bottom right, next to the clock) and click on your network, which should appear!

Offline
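For reference, a minimal sketch of the /etc/network/interfaces file being described above, with only the loopback stanza left in place (the comments are mine; any extra eth0/eth1 stanzas written by manual configuration are exactly what take those interfaces away from knetworkmanager):

```
# /etc/network/interfaces -- loopback only.
# Every interface NOT listed here is left for
# knetworkmanager/NetworkManager to manage.
auto lo
iface lo inet loopback
```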
I can't get a basic shader program working in PyQt. I think this should at least compile the shader code correctly (I'm no expert here), but addShaderFromSourceFile() always returns false no matter what I try. The shader program log is always empty too. I'm on Ubuntu 12.04, and I can compile and run GLSL shader programs in C++, so I don't think it's a system issue.

File shader.vert:

void main(void)
{
    gl_Position = ftransform();
}

File shader.frag:

void main(void)
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

File test_shaders.py:

from OpenGL.GL import *
from OpenGL.GLU import *
from PyQt4 import QtCore, QtGui
from PyQt4.QtOpenGL import *

class ExampleQGLWidget(QGLWidget):

    def __init__(self, parent):
        QGLWidget.__init__(self, parent)
        self.shaderProgram = QGLShaderProgram()
        print self.shaderProgram.addShaderFromSourceFile(QGLShader.Vertex, "shader.vert")
        print self.shaderProgram.addShaderFromSourceFile(QGLShader.Fragment, "shader.frag")
        print self.shaderProgram.log()
        self.shaderProgram.link()
        glViewport(0, 0, 640, 480)

    def paintGL(self):
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        self.shaderProgram.bind()

    def resizeGL(self, w, h):
        glViewport(0, 0, w, h)
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()

    def initializeGL(self):
        glClearColor(0.0, 0.0, 0.0, 1.0)
        glClearDepth(1.0)
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()

class TestContainer(QtGui.QMainWindow):

    def __init__(self):
        QtGui.QMainWindow.__init__(self)
        widget = ExampleQGLWidget(self)
        self.setCentralWidget(widget)

if __name__ == '__main__':
    app = QtGui.QApplication(['Shader Example'])
    window = TestContainer()
    window.show()
    app.exec_()
Latest news: Fedora-Fr at the 15th Rencontres Mondiales du Logiciel Libre

Hello. Some of you knew that we had been working on V5 of the Fedora-Fr site for a while, and a beta version had even been available for some time; now the forum home page and the planet have shifted into fifth gear as well. So what's new?

- A new design, more respectful of the fedoraproject style guide
- A more social version, since you can like Fedora-Fr, follow the Twitter account, follow us on the social networks, etc.
- Technically better, with eZ Publish 4.4, the use of CSS sprites, etc.
- More search, through OpenSearch support: easily add the Fedora-Fr search engines to your browser

There you go; all there is to say is that this was a really big collective effort, and a thank-you page will soon be available to thank everyone who contributed to this V5.

Bravo, fine work! Just one thing: for the home page, would it be possible to translate the banner about the Alpha?

It's SVG, so it can be translated; just click on the image to get to the sources and edit the strings... Anyway, I'll say it again: a fine piece of work has been done here; only the wiki is left to make everything uniform.

"Liberty, because the user is free to do what he wants with the program. Equality, because all users have the same freedoms. Fraternity, because every user has the possibility of sharing the program with the world." Richard Matthew Stallman

Good job! Thanks to everyone involved! All the best, everybody!
F20_64 Gnome-Shell - GA-990FXA-UD3 - Phenom II X6 1100T - NH-D14 - Ati HD 5570 Fanless - 8GB RAM /&/ F20_64 Gnome-Shell - GA-M55S-S3 - Athlon 64 X2 4200+ - GeminII - Ati HD5750 Fanless - 3GB RAM
F20_32 - 939A785GMH/128M - Athlon 64 4000+ - 2GB RAM /&/ F20_32 Gnome-Shell - EeePC 701 - 2GB RAM

"Just one thing: for the home page, would it be possible to translate the banner about the Alpha?"

https://fedoraproject.org/wiki/File:Fed … banner.svg

If you do it, I'll put it online.

I'll repeat what I said in my topic that ended up in Trash (fair enough, it was a duplicate...): design-wise it's very classy, and very beautiful... Congratulations on this magnificent work...

Everything is in everything... and vice versa... What's a chalumeau??? It's a dromedary with two humps... When the wise man points at the moon, the fool looks at the finger...

Renault wrote: "Just one thing: for the home page, would it be possible to translate the banner about the Alpha? https://fedoraproject.org/wiki/File:Fed … banner.svg If you do it, I'll put it online."

If it doesn't suit, it can be changed.

It's online.

First of all, bravo for the work ;) I really like the new overall style of fedora-fr, it's less austere. But on the home page I miss the latest news from the blogs and the planet, and I regret that the forum news has been pushed into the background.

"But on the home page I miss the latest news from the blogs and the planet, and I regret that the forum news has been pushed into the background." +1

It is not because things are difficult that we do not dare; it is because we do not dare that they are difficult!

The planet blog is missing; that's an oversight on my part, I'll add it right away.
I've also just given the Twitter account the V5 look.

Hello. Being sometimes on the laptop and sometimes on my desktop PC, I thought I was hallucinating: I switch from the desktop to the laptop and it's no longer the same site. On top of that I was watching an episode of The Invaders, the full package; I took myself for David Vincent :hammer::hammer: Joking aside, fine work.. good evening, see you

Nothing is ever lost, as long as something remains to be found. (Pierre Dac)

OpenSearch is working. Two remarks:
1 - On the home page there is a doubled word: "Fedora est une distribution Linux basée sur le le système d'exploitation GNU/Linux..."
2 - The F15 banner is now a bit small in its French version. Not easy to read everything.

Woohoo!!! My article from the documentation is on the front page! Consecration... No kidding, I had been testing the v5 from the start and liked it well enough on the forum side; but now, the home page really delivers, I have to say. Well played.

Last edited by Valdes (13/03/2011 21:14:40)

Fedora 19 x86_64 on a Dell Latitude E6400
Take the time to write properly, and we will take the time to answer properly.

Top marks for the new coat of paint. Bravo. But I miss being able to follow the posts in the forum news we used to have on the front page (the 10 latest posts); that was for everyone with problems to deal with, and for those willing to lend a hand in (more or less) real time. You're doing a great job; I will be more than patient.
Fedora 18 - KDE (x86_64) / XP Pro via VirtualBox / Dual boot with Seven Pro
Tower: Gigabyte MA770T-UD3P, Athlon II X3 435, DDR3 2x2 GB, Ati HD4850 512MB
HP ProBook 6550b: Intel Core i3 M370, DDR 4GB - OwnCloud on 1&1

Frankly, I take my hat off to this new version! Thanks to all the people who worked on this project. This renewal is in the image of the site: lively and friendly.

The mind is like a parachute: if it stays closed, you crash (FZ). Music is the Best: The La Radio

Hi. Very nice new version. However, it crashes as soon as I click anywhere with Konqueror. Here is the bug report, in case it comes from the site:

Application: Konqueror (konqueror), signal: Segmentation fault
[Current thread is 1 (Thread 0x7f4945307840 (LWP 8407))]
Thread 7 (Thread 0x7f493ed1a700 (LWP 8408)):
#0 0x0000003b3b0d7283 in poll () from /lib64/libc.so.6
#1 0x0000003b3cc42374 in ?? () from /lib64/libglib-2.0.so.0
#2 0x0000003b3cc42c82 in g_main_loop_run () from /lib64/libglib-2.0.so.0
#3 0x00007f493efc3774 in ?? () from /lib64/libgio-2.0.so.0
#4 0x0000003b3cc69446 in ?? () from /lib64/libglib-2.0.so.0
#5 0x0000003b3b806ccb in start_thread () from /lib64/libpthread.so.0
#6 0x0000003b3b0e0c2d in clone () from /lib64/libc.so.6
Thread 6 (Thread 0x7f4930017700 (LWP 8409)):
#0 0x0000003b3b80b3b4 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x0000003875b287f4 in ?? () from /usr/lib64/libQtWebKit.so.4
#2 0x0000003b3b806ccb in start_thread () from /lib64/libpthread.so.0
#3 0x0000003b3b0e0c2d in clone () from /lib64/libc.so.6
Thread 5 (Thread 0x7f4932aa5700 (LWP 8411)):
#0 0x0000003b3b80b71e in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x000000386aa72d42 in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2 0x000000386aa68d58 in ?? () from /usr/lib64/libQtCore.so.4
#3 0x000000386aa726ee in ??
() from /usr/lib64/libQtCore.so.4 #4 0x0000003b3b806ccb in start_thread () from /lib64/libpthread.so.0 #5 0x0000003b3b0e0c2d in clone () from /lib64/libc.so.6 Thread 4 (Thread 0x7f492d746700 (LWP 8446)): #0 0x0000003b3b80b3b4 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f492d76a26e in queue_processor(void*) () from /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/IcedTeaPlugin.so #2 0x0000003b3b806ccb in start_thread () from /lib64/libpthread.so.0 #3 0x0000003b3b0e0c2d in clone () from /lib64/libc.so.6 Thread 3 (Thread 0x7f492cf45700 (LWP 8447)): #0 0x0000003b3b80b3b4 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f492d76a26e in queue_processor(void*) () from /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/IcedTeaPlugin.so #2 0x0000003b3b806ccb in start_thread () from /lib64/libpthread.so.0 #3 0x0000003b3b0e0c2d in clone () from /lib64/libc.so.6 Thread 2 (Thread 0x7f4927fff700 (LWP 8448)): #0 0x0000003b3b80b3b4 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f492d76a26e in queue_processor(void*) () from /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/IcedTeaPlugin.so #2 0x0000003b3b806ccb in start_thread () from /lib64/libpthread.so.0 #3 0x0000003b3b0e0c2d in clone () from /lib64/libc.so.6 Thread 1 (Thread 0x7f4945307840 (LWP 8407)): [KCrash Handler] #6 0x000000386ab5e625 in QCoreApplication::postEvent(QObject*, QEvent*, int) () from /usr/lib64/libQtCore.so.4 #7 0x00000038758501a8 in ?? () from /usr/lib64/libQtWebKit.so.4 #8 0x0000003875850ab4 in ?? () from /usr/lib64/libQtWebKit.so.4 #9 0x000000386ab7004f in QMetaObject::activate(QObject*, QMetaObject const*, int, void**) () from /usr/lib64/libQtCore.so.4 #10 0x000000386e2b910b in ?? () from /usr/lib64/libkio.so.5 #11 0x000000386e2b9af3 in ?? 
() from /usr/lib64/libkio.so.5 #12 0x000000386ab7004f in QMetaObject::activate(QObject*, QMetaObject const*, int, void**) () from /usr/lib64/libQtCore.so.4 #13 0x000000386bd2d542 in KJob::result(KJob*) () from /usr/lib64/libkdecore.so.5 #14 0x000000386bd2d580 in KJob::emitResult() () from /usr/lib64/libkdecore.so.5 #15 0x000000386e2f11fd in KIO::SimpleJob::slotFinished() () from /usr/lib64/libkio.so.5 #16 0x000000386e2f6742 in KIO::TransferJob::slotFinished() () from /usr/lib64/libkio.so.5 #17 0x000000386e2fae11 in KIO::TransferJob::qt_metacall(QMetaObject::Call, int, void**) () from /usr/lib64/libkio.so.5 #18 0x000000386ab7004f in QMetaObject::activate(QObject*, QMetaObject const*, int, void**) () from /usr/lib64/libQtCore.so.4 #19 0x000000386e3990a1 in KIO::SlaveInterface::dispatch(int, QByteArray const&) () from /usr/lib64/libkio.so.5 #20 0x000000386e395fd3 in KIO::SlaveInterface::dispatch() () from /usr/lib64/libkio.so.5 #21 0x000000386e389696 in KIO::Slave::gotInput() () from /usr/lib64/libkio.so.5 #22 0x000000386e389cac in KIO::Slave::qt_metacall(QMetaObject::Call, int, void**) () from /usr/lib64/libkio.so.5 #23 0x000000386ab7004f in QMetaObject::activate(QObject*, QMetaObject const*, int, void**) () from /usr/lib64/libQtCore.so.4 #24 0x000000386e2c2867 in ?? 
() from /usr/lib64/libkio.so.5
#25 0x000000386e2c291d in KIO::Connection::qt_metacall(QMetaObject::Call, int, void**) () from /usr/lib64/libkio.so.5
#26 0x000000386ab6fb4a in QObject::event(QEvent*) () from /usr/lib64/libQtCore.so.4
#27 0x000000386c3b78c4 in QApplicationPrivate::notify_helper(QObject*, QEvent*) () from /usr/lib64/libQtGui.so.4
#28 0x000000386c3bc3da in QApplication::notify(QObject*, QEvent*) () from /usr/lib64/libQtGui.so.4
#29 0x000000386d620596 in KApplication::notify(QObject*, QEvent*) () from /usr/lib64/libkdeui.so.5
#30 0x000000386ab5b7ac in QCoreApplication::notifyInternal(QObject*, QEvent*) () from /usr/lib64/libQtCore.so.4
#31 0x000000386ab5ef95 in QCoreApplicationPrivate::sendPostedEvents(QObject*, int, QThreadData*) () from /usr/lib64/libQtCore.so.4
#32 0x000000386ab86723 in ?? () from /usr/lib64/libQtCore.so.4
#33 0x0000003b3cc41e33 in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#34 0x0000003b3cc42610 in ?? () from /lib64/libglib-2.0.so.0
#35 0x0000003b3cc428ad in g_main_context_iteration () from /lib64/libglib-2.0.so.0
#36 0x000000386ab868bf in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#37 0x000000386c45c59e in ?? () from /usr/lib64/libQtGui.so.4
#38 0x000000386ab5ab42 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#39 0x000000386ab5ad8c in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#40 0x000000386ab5f24b in QCoreApplication::exec() () from /usr/lib64/libQtCore.so.4
#41 0x00000038704b2a6a in kdemain () from /usr/lib64/libkdeinit4_konqueror.so
#42 0x0000003b3b01ee5d in __libc_start_main () from /lib64/libc.so.6
#43 0x00000000004007c9 in _start ()

"The planet blog is missing; that's an oversight on my part, I'll add it right away."

Thanks ;)

"But I miss being able to follow the posts in the forum news we used to have on the front page (the 10 latest posts)."
"That was for everyone with problems to deal with, and for those willing to lend a hand in (more or less) real time. You're doing a great job; I will be more than patient."

+1. It was partly in that spirit that I had posted my first message.

About the Konqueror crash: it happens in the konqueror+WebKit case. konqueror+KHTML apparently does not crash, at least not here.

If you don't come to Fedora, Fedora will come to you!

Borisea wrote: "But I miss being able to follow the posts in the forum news we used to have on the front page (the 10 latest posts). That was for everyone with problems to deal with, and for those willing to lend a hand in (more or less) real time. You're doing a great job; I will be more than patient. +1. It was partly in that spirit that I had posted my first message."

I am thinking of making a subdomain, more of a portal, with all the news.

"Top marks for the new coat of paint. Bravo. But I miss being able to follow the posts in the forum news we used to have on the front page (the 10 latest posts). That was for everyone with problems to deal with, and for those willing to lend a hand in (more or less) real time. You're doing a great job; I will be more than patient."

Otherwise, do as I do: go to the forum, section "show recent posts", and refresh from time to time.

[PapsOu@Home] $ su -lc 'yum update breizh'
I think the main problem here is that the browser settings don't actually affect the navigator.language property obtained via JavaScript. What they do affect is the HTTP Accept-Language header, but it appears this value is not available through JavaScript at all. (Probably why @anddoutoi states he can't find a reference for it that doesn't involve the server side.)

I have coded a workaround: I've knocked up a Google App Engine script at http://ajaxhttpheaders.appspot.com that will return you the HTTP request headers via JSONP.

(Note: this is a hack only to be used if you do not have a back end available that can do this for you. In general you should not be making calls to third-party hosted javascript files in your pages unless you have a very high level of trust in the host.)

I intend to leave it there in perpetuity, so feel free to use it in your code.

Here's some example code (in jQuery) for how you might use it:

$.ajax({
    url: "http://ajaxhttpheaders.appspot.com",
    dataType: 'jsonp',
    success: function(headers) {
        language = headers['Accept-Language'];
        nowDoSomethingWithIt(language);
    }
});

Hope someone finds this useful.

Edit: I have written a small jQuery plugin on github that wraps this functionality: https://github.com/dansingerman/jQuery-Browser-Language

Edit 2: As requested, here is the code that is running on App Engine (super trivial really):

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        headers = self.request.headers
        callback = self.request.get('callback')

        if callback:
            self.response.headers['Content-Type'] = 'application/javascript'
            self.response.out.write(callback + "(")
            self.response.out.write(headers)
            self.response.out.write(")")
        else:
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.out.write("I need a callback=")

application = webapp.WSGIApplication([('/', MainPage)], debug=False)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
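One follow-up detail: whichever way you obtain the Accept-Language value, it is a comma-separated list of language tags with optional q-values, so you still have to rank them before picking one. A minimal, self-contained sketch in Python (the header string below is just an example value, not anything a particular browser is guaranteed to send):

```python
def parse_accept_language(header):
    """Parse an Accept-Language header value into (tag, q) pairs, best first."""
    langs = []
    for item in header.split(","):
        parts = item.strip().split(";")
        tag = parts[0].strip()
        q = 1.0  # default quality when no q= parameter is present
        for param in parts[1:]:
            param = param.strip()
            if param.startswith("q="):
                try:
                    q = float(param[2:])
                except ValueError:
                    q = 0.0
        if tag:
            langs.append((tag, q))
    # Sort by descending quality; Python's sort is stable, so entries
    # with equal q keep their original header order.
    langs.sort(key=lambda pair: -pair[1])
    return langs

print(parse_accept_language("en-GB,en;q=0.8,fr;q=0.9"))
# → [('en-GB', 1.0), ('fr', 0.9), ('en', 0.8)]
```

This deliberately ignores wildcards and malformed q-values beyond treating them as 0; a production version would follow the HTTP spec more closely.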
nknico — A new official repository for Ubuntu (commercial software)
A new official repository has just opened. It contains commercial software. For now, Opera and RealPlayer are available. In short, to set it up you need to:
* Enable the dapper updates
* Update your dapper :-)
* Via gnome-app-install (available in the menu for all members of the admin group), enable the option to show commercial software
* Install opera.
For those using apt-get: add this line to /etc/apt/sources.list
deb http://archive.canonical.com dapper-commercial main
The announcement of the agreement between Canonical and Opera: http://opera.com/pressreleases/en/2006/07/06/02/
Last edited by nknico (09/07/2006 at 15:41) Nico Offline
Mornagest — Re: A new official repository for Ubuntu (commercial software)
Which is handy for RealPlayer, whose Windows version makes you register and so on, all just to play a .ra video (I think)... not very practical, really ^^ Thanks for the info, nknico; I feel some updates coming on tonight (Kubuntu, here I come again) Offline
fredbezies — Re: A new official repository for Ubuntu (commercial software)
RealPlayer? That bloated thing? Well, this will be interesting to watch.
michel2652 — Re: A new official repository for Ubuntu (commercial software)
Thanks for the repository, update done. Bye.
Mornagest — Re: A new official repository for Ubuntu (commercial software)
A quick question: on Kubuntu, where do you find the list of installable commercial applications? I enabled the "proprietary software" option but I can't find either Opera or RealPlayer to install...? Thanks in advance Offline
michel2652 — Re: A new official repository for Ubuntu (commercial software)
A quick question: on Kubuntu, where do you find the list of installable commercial applications?
I enabled the "proprietary software" option but I can't find either Opera or RealPlayer to install...? Thanks in advance — Add the repository nknico gave to your sources.list. Bye.
Mornagest — Re: A new official repository for Ubuntu (commercial software)
Well, that's what I did, but it didn't offer to install the realplayer package, or anything close. sudo apt-get update and upgrade give nothing. What's the application's name? I'm trying with Katapult, but real and realplayer can't be found. I don't think it's installed. It's not dramatic, but I've had a video waiting for six months and I couldn't download that blasted program under Windows... Offline
nknico — Re: A new official repository for Ubuntu (commercial software)
Well, you have to ask it to install them... sudo apt-get install opera / sudo apt-get install realplay Nico Offline
Mornagest — Re: A new official repository for Ubuntu (commercial software)
Ah, with the package name it's immediately simpler. I'm an idiot, I know... I never open Adept to search package names :$ sorry, and a big thank you. It works and opens fine, thanks again ^^ Offline
pascalc — Re: A new official repository for Ubuntu (commercial software)
A dumb question, but better to admit one's ignorance than pretend to know and stay ignorant... How can you find out which software is available in a repository? Here we learn it contains Opera and RealPlayer, but maybe there's something else in there that would interest me. Isn't there a way to get the list of packages in a repository?
Mozilla (my opinions are my own, not my employer's) Offline
Lestat the vampire — Re: A new official repository for Ubuntu (commercial software)
Well, you paste the repository address into Firefox (or another browser), here: http://archive.canonical.com and browse the directories, in particular http://archive.canonical.com/pool/main/ where you get the list of available packages in alphabetical order... for the moment just opera and realplayer, then. Offline
Black_pignouf — Re: A new official repository for Ubuntu (commercial software)
So what's the difference with multiverse??? Offline
Lestat the vampire — Re: A new official repository for Ubuntu (commercial software)
The RealPlayer version in this new repository is the latest (10.0.7), whereas the one in multiverse is version 8 (which has dependency problems under Dapper, is no longer supported or updated by RealNetworks, and is therefore absolutely not recommended except perhaps for old versions); there is also a realplay package in the PLF repository, at version 10.0.6. As for Opera, I haven't looked into it! Last edited by Lestat the vampire (11/07/2006 at 18:21) Offline
darkangel6669 — Re: A new official repository for Ubuntu (commercial software)
By the way, does anyone know how to get Opera in French? Offline
nknico — Re: A new official repository for Ubuntu (commercial software)
You have to download an extra file: http://www.opera.com/download/languagefiles/ The instructions are on the page. Nico Offline
Lestat the vampire — Re: A new official repository for Ubuntu (commercial software)
I took the liberty of adding this important piece of information (which should interest more than a few people!)
to the wiki at this address: http://doc.ubuntu-fr.org//applications/apt/depots#depot_commercial_officiel Offline
nknico — Re: A new official repository for Ubuntu (commercial software)
Yes, very good idea... Nico Offline
darkangel6669 — Re: A new official repository for Ubuntu (commercial software)
OK, thanks nknico Offline
skateinmars — Re: A new official repository for Ubuntu (commercial software)
So what's the difference with multiverse??? The software is packaged in collaboration with the companies. In practice they do the same things; in fact, some multiverse packages will disappear and move into this repository. Offline
batteuryo — Re: A new official repository for Ubuntu (commercial software)
Hello, could someone help me? Oddly, I can't manage to install Opera... I did add the repository to my sources.list and ran an update, but whether on the command line or with Synaptic, opera isn't found... Also, going through "Applications - Add/Remove", I do see Opera (after ticking "show commercial applications"), but when I try to install it I'm asked to "Enable the Canonical repository"... and when I say "OK" the dependency list reloads and I'm asked to enable the repository again... (it goes round in circles). For the record, I'm on Ubuntu 64-bit (K8 kernel), but I haven't read anywhere that the Canonical repositories were only available for the i386 platform... There you go, if anyone can enlighten me, thanks. I wonder what state I'm wandering in... ^^ Offline
coubi64 — Re: A new official repository for Ubuntu (commercial software)
Post your sources.list ....
Offline
batteuryo — Re: A new official repository for Ubuntu (commercial software)
Here is my sources.list:
deb http://archive.ubuntu.com/ubuntu/ dapper main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu/ dapper-security main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ dapper-updates main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu/ dapper main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu/ dapper-updates main restricted universe multiverse
deb-src http://security.ubuntu.com/ubuntu/ dapper-security main restricted universe multiverse
###Canonical###
deb http://archive.canonical.com dapper-commercial main
###Asher256###
# deb http://asher256-repository.tuxfamily.org dapper main dupdate french
# deb http://asher256-repository.tuxfamily.org ubuntu main dupdate french
###Pour BMPx###
###Pour XGL###
# deb http://www.beerorkid.com/compiz/ dapper main
# deb http://xgl.compiz.info dapper main
###Pour listen###
deb http://theli.free.fr/packages/dapper/ ./
## Dépôt Pouit (pour i386 et amd64) ## Plus d'infos sur http://mrpouit.free.fr/blog/Depot-ubuntu
deb http://mrpouit.free.fr/ubuntu/ dapper-misc main restricted universe multiverse non-free openalchemist
# deb-src http://mrpouit.free.fr/ubuntu/ dapper-misc main restricted universe multiverse non-free openalchemist
I wonder what state I'm wandering in... ^^ Offline
batteuryo — Re: A new official repository for Ubuntu (commercial software)
Apparently I'm not the only one with this problem: "Hey, that's true! I'd never noticed... though it's slightly buggy under 64-bit: it wants to add the repository, I authorise it, it refreshes the list, it doesn't find Opera, it wants to add the repository again, and so on." I wonder what state I'm wandering in...
^^ Offline
Lestat the vampire — Re: A new official repository for Ubuntu (commercial software)
I'm not sure the packages currently in this commercial repository are built for 64-bit. Let me explain:
1- If you browse the repository contents (e.g. this directory: http://archive.canonical.com/pool/main/r/realplay/), you find only a single .deb package, and its name carries a nice *i386.deb, with no amd64 in sight.
2- The repository does contain the directory structure needed to hold packages for the amd64 architecture (http://archive.canonical.com/dists/dapper-commercial/main/binary-amd64/), BUT the Packages.gz file there is empty, whereas the same file for 32-bit lists our two packages...
In short, I may be wrong, but a priori I'd say it's normal that your Synaptic can't find opera on amd64, and it's not a sources.list problem!! Perhaps there's a way to hack them onto 64-bit anyway... but I don't know about that. Offline
batteuryo — Re: A new official repository for Ubuntu (commercial software)
Yes, "Lestat the vampire", I did understand it wasn't a sources.list problem; I suspected it came from the 64-bit architecture. Now it remains to be seen whether amd64 packages will soon be available on their server! I wonder what state I'm wandering in... ^^ Offline
I'm beginning Python and I'm trying to use a two-dimensional list that I initially fill up with the same variable in every place. I came up with this:

def initialize_twodlist(foo):
    twod_list = []
    new = []
    for i in range(0, 10):
        for j in range(0, 10):
            new.append(foo)
        twod_list.append(new)
        new = []

It gives the desired result, but feels like a workaround. Is there an easier/shorter/more elegant way to do this?
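For the job the question describes, a nested list comprehension is the usual shorter form. A sketch (not from the original post) that also notes the classic aliasing pitfall:

```python
def initialize_twodlist(foo, rows=10, cols=10):
    # Each pass of the outer comprehension builds a fresh row list,
    # so the rows are independent objects.
    return [[foo] * cols for _ in range(rows)]

# Beware the tempting one-liner [[foo] * cols] * rows: it repeats a
# reference to the *same* row object, so writing to one row changes all.
```

Mutating one cell of the comprehension version leaves the other rows untouched, which is usually what a 2D grid is expected to do.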
Configuring and Managing WebLogic JDBC
In WebLogic Server, you can configure database connectivity by configuring JDBC data sources and multi data sources and then targeting or deploying the JDBC resources to servers or clusters in your WebLogic domain. Each data source that you configure contains a pool of database connections that are created when the data source instance is created—when it is deployed or targeted, or at server startup. Applications look up a data source on the JNDI tree or in the local application context (java:comp/env), depending on how you configure and deploy the object, and then request a database connection. When finished with the connection, the application calls connection.close(), which returns the connection to the connection pool in the data source. Figure 2-1 shows a data source and a multi data source targeted to a WebLogic Server instance. For more information about data sources in WebLogic Server, see Configuring JDBC Data Sources. A multi data source is an abstraction around data sources that provides load balancing or failover processing between the data sources associated with it. Multi data sources are bound to the JNDI tree or local application context just like data sources. Applications look up a multi data source on the JNDI tree or in the local application context (java:comp/env) just as they do for data sources, and then request a database connection. The multi data source determines which data source to use to satisfy the request depending on the algorithm selected in the multi data source configuration: load balancing or failover. For more information about multi data sources, see Configuring JDBC Multi Data Sources. A key to understanding WebLogic JDBC configuration and management is that how a JDBC resource is created, and by whom, determines how that resource can be deployed and modified.
Both WebLogic Administrators and programmers can create JDBC resources: Table 2-1 lists the JDBC module types and how they can be configured and modified. WebLogic JDBC configuration is stored in XML documents that conform to the weblogic-jdbc.xsd schema (available at http://www.bea.com/ns/weblogic/91/weblogic-jdbc.xsd). You create and manage JDBC resources either as system modules or as application modules. JDBC application modules are a WebLogic-specific extension of J2EE modules and can be configured either within a J2EE application or as stand-alone modules. When you create a JDBC resource (data source or multi data source) using the Administration Console or using the WebLogic Scripting Tool (WLST), WebLogic Server creates a JDBC module in the config/jdbc subdirectory of the domain directory, and adds a reference to the module in the domain's config.xml file. The JDBC module conforms to the weblogic-jdbc.xsd schema (available at http://www.bea.com/ns/weblogic/91/weblogic-jdbc.xsd). JDBC resources that you configure this way are considered system modules. System modules are owned by an Administrator, who can delete, modify, or add similar resources at any time. System modules are globally available for targeting to servers and clusters configured in the domain, and therefore are available to all applications deployed on the same targets and to client applications. System modules are also accessible through JMX as JDBCSystemResourceMBeans. Data source system modules are included in the domain's config.xml file as a JDBCSystemResource element, which includes the name of the JDBC module file and the list of target servers and clusters on which the module is deployed. Figure 2-2 shows an example of a data source listing in a config.xml file and the module that it maps to. Similarly, multi data source system modules are included in the domain's config.xml file as a jdbc-system-resource element. 
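To make the reference concrete, a config.xml entry for a data source system module might look roughly like the following. This is a hand-written sketch, not taken from a real domain: the module name and target are invented, and the exact element names should be checked against the domain configuration schema.

```xml
<!-- Hypothetical config.xml fragment referencing a data source module;
     names are illustrative only. -->
<jdbc-system-resource>
  <name>examples-ds</name>
  <target>examplesServer</target>
  <descriptor-file-name>jdbc/examples-ds-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>
```

The descriptor-file-name points at the module file under config/jdbc, and the target element carries the server or cluster list described above.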
The multi data source module includes a data-source-list parameter that maps to the data source modules used by the multi data source. The individual data source modules are also included in the config.xml. Figure 2-3 shows the relationship between elements in the config.xml file and the system modules in the config/jdbc directory. In this illustration, the config.xml file lists three JDBC modules—one multi data source and the two data sources used by the multi data source, which are also listed within the multi data source module. Your application can look up any of these modules on the JNDI tree and request a database connection. If you look up the multi data source, the multi data source determines which of the other data sources to use to supply the database connection, depending on the data sources in the data-source-list parameter, the order in which the data sources are listed, and the algorithm specified in the algorithm-type parameter. For more information about multi data sources, see Configuring JDBC Multi Data Sources. JDBC resources can also be managed as application modules, similar to standard J2EE modules. A JDBC application module is simply an XML file that conforms to the weblogic-jdbc.xsd schema and represents a data source or a multi data source. JDBC modules can be included as part of an Enterprise Application as a packaged module. Packaged modules are bundled with an EAR or exploded EAR directory, and are referenced in all appropriate deployment descriptors, such as the weblogic-application.xml and ejb-jar.xml deployment descriptors. The JDBC module is deployed along with the enterprise application, and can be configured to be available only to the enclosing application or to all applications. Using packaged modules ensures that an application always has access to required resources and simplifies the process of moving the application into new environments. 
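As an illustration of the module file itself, a multi data source module of the kind described above might look roughly like this. This is a sketch, not taken from a WebLogic installation: the module and JNDI names are invented, and the elements should be verified against the weblogic-jdbc.xsd schema.

```xml
<!-- Hypothetical multi data source module (e.g. examples-multi-jdbc.xml);
     names are illustrative, elements to be checked against weblogic-jdbc.xsd. -->
<jdbc-data-source xmlns="http://www.bea.com/ns/weblogic/91">
  <name>examples-multi</name>
  <jdbc-data-source-params>
    <jndi-name>examples-multi</jndi-name>
    <algorithm-type>Load-Balancing</algorithm-type>
    <data-source-list>examples-ds-1,examples-ds-2</data-source-list>
  </jdbc-data-source-params>
</jdbc-data-source>
```

The data-source-list and algorithm-type parameters are the ones the text describes: the list names the member data source modules, and the algorithm type selects load balancing or failover between them.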
With packaged JDBC modules, you can migrate your application and the required JDBC configuration from environment to environment, such as from a testing environment to a production environment, without opening an EAR file and without extensive manual JDBC reconfiguration. In contrast to system resource modules, JDBC modules that are packaged with an application are owned by the developer who created and packaged the module, rather than the Administrator who deploys the module. This means that the Administrator has more limited control over packaged modules. When deploying a resource module, an Administrator can change resource properties that were specified in the module, but the Administrator cannot add or delete modules. (As with other J2EE modules, deployment configuration changes for a resource module are stored in a deployment plan for the module, leaving the original module untouched.) By definition, packaged JDBC modules are included in an enterprise application, and therefore are deployed when you deploy the enterprise application. For more information about deploying applications with packaged JDBC modules, see Deploying Applications to WebLogic Server. A JDBC application module can also be deployed as a stand-alone resource using the weblogic.Deployer utility or the Administration Console, in which case the resource is typically available to the server or cluster targeted during the deployment process. JDBC resources deployed in this manner are called stand-alone modules and can be reconfigured using the Administration Console or a JSR-88 compliant tool, but are unavailable through JMX or WLST. Stand-alone JDBC modules promote sharing and portability of JDBC resources. You can create a data source configuration and distribute it to other developers. Stand-alone JDBC modules can also be used to move JDBC configuration between domains, such as between the development domain and the staging domain.
For more information about JDBC application modules, see Configuring JDBC Application Modules for Deployment. For information about deploying stand-alone JDBC modules, see "Deploying JDBC, JMS, WLDF Application Modules." All WebLogic JDBC module files must end with the -jdbc.xml suffix, such as examples-demo-jdbc.xml. WebLogic Server checks the file name when you deploy the module. If the file does not end in -jdbc.xml, the deployment will fail and the server will not boot. When you use production redeployment (versioning) to deploy a version of an application that includes a packaged JDBC module, WebLogic Server identifies the data source defined in the JDBC module with a name in the following format: If transactions in a retiring version of an application time out and the version of the application is then undeployed, you may have to manually resolve any pending or incomplete transactions on the data source in the retired version of the application. After a data source is undeployed (in this case, with the retired version of the application), the WebLogic Server transaction manager cannot recover pending or incomplete transactions. In support of the modular deployment model for JDBC resources in WebLogic Server 9.1, BEA provides a schema for WebLogic JDBC objects: weblogic-jdbc.xsd. When you create JDBC resource modules (descriptors), the modules must conform to the schema. IDEs and other tools can validate JDBC resource modules based on the schema. The schema is available at http://www.bea.com/ns/weblogic/91/weblogic-jdbc.xsd. When you create JDBC resources using the Administration Console or WLST, WebLogic Server creates MBeans (Managed Beans) for each of the resources. You can then access these MBeans using JMX or the WebLogic Scripting Tool (WLST). See Developing Custom Management Utilities with JMX and WebLogic Scripting Tool for more information. Figure 2-4 shows the hierarchy of the MBeans for JDBC objects in a WebLogic domain. 
The JDBCSystemResourceMBean is a container for the JavaBeans created from a data source module. However, all JMX access for a JDBC data source is through the JDBCSystemResourceMBean; you cannot directly access the individual JavaBeans created from the data source module. WebLogic Server 9.1 JDBC supports JSR-77, which defines the J2EE Management Model. The J2EE Management Model is used for monitoring the runtime state of a J2EE Web application server and its resources. You can access the J2EE Management Model to monitor resources, including the WebLogic JDBC system as a whole, JDBC drivers loaded into memory, and JDBC data sources:

JDBCServiceRuntimeMBean—Represents the JDBC subsystem and provides methods to access the list of JDBCDriverRuntimeMBeans and JDBCDataSourceRuntimeMBeans currently available in the system.
JDBCDriverRuntimeMBean—Represents a JDBC driver that the server loaded into memory.
JDBCDataSourceRuntimeMBean—Represents a JDBC data source deployed on a server or cluster.

For more information about using the J2EE management model with WebLogic Server, see Monitoring and Managing with the J2EE Management APIs. The following WLST script excerpt creates a JDBC system resource from a property file and targets it to a server:

#----------------------------------------------------------------------
# Create JDBC
# The prefix specifies the prefix on property names.
# Example: for property "mypool.Name=mypool", the prefix would be "mypool."
#----------------------------------------------------------------------
import sys
from java.lang import System
from java.util import Properties   # needed below for the driver properties

print "@@@ Starting the script ..."

global props

url = sys.argv[1]
usr = sys.argv[2]
password = sys.argv[3]

connect(usr, password, url)
edit()
startEdit()

servermb = getMBean("Servers/examplesServer")
if servermb is None:
    print '@@@ No server MBean found'
else:
    def addJDBC(prefix):
        print("")
        print("*** Creating JDBC with property prefix " + prefix)

        # Create the connection pool. The system resource will have a
        # generated name of <PoolName> + "-jdbc"
        myResourceName = props.getProperty(prefix + "PoolName")
        print("Here is the Resource Name: " + myResourceName)
        jdbcSystemResource = wl.create(myResourceName, "JDBCSystemResource")
        myFile = jdbcSystemResource.getDescriptorFileName()
        print("HERE IS THE JDBC FILE NAME: " + myFile)
        jdbcResource = jdbcSystemResource.getJDBCResource()
        jdbcResource.setName(props.getProperty(prefix + "PoolName"))

        # Create the DataSource params
        dpBean = jdbcResource.getJDBCDataSourceParams()
        myName = props.getProperty(prefix + "JNDIName")
        dpBean.setJNDINames([myName])

        # Create the driver params
        drBean = jdbcResource.getJDBCDriverParams()
        drBean.setPassword(props.getProperty(prefix + "Password"))
        drBean.setUrl(props.getProperty(prefix + "URLName"))
        drBean.setDriverName(props.getProperty(prefix + "DriverName"))
        propBean = drBean.getProperties()
        driverProps = Properties()
        driverProps.setProperty("user", props.getProperty(prefix + "UserName"))
        e = driverProps.propertyNames()
        while e.hasMoreElements():
            propName = e.nextElement()
            myBean = propBean.createProperty(propName)
            myBean.setValue(driverProps.getProperty(propName))

        # Create the connection pool params
        ppBean = jdbcResource.getJDBCConnectionPoolParams()
        ppBean.setInitialCapacity(int(props.getProperty(prefix + "InitialCapacity")))
        ppBean.setMaxCapacity(int(props.getProperty(prefix + "MaxCapacity")))
        ppBean.setCapacityIncrement(int(props.getProperty(prefix + "CapacityIncrement")))
        if not props.getProperty(prefix + "ShrinkPeriodMinutes") == None:
            ppBean.setShrinkFrequencySeconds(int(props.getProperty(prefix + "ShrinkPeriodMinutes")))
        if not props.getProperty(prefix + "TestTableName") == None:
            ppBean.setTestTableName(props.getProperty(prefix + "TestTableName"))
        if not props.getProperty(prefix + "LoginDelaySeconds") == None:
            ppBean.setLoginDelaySeconds(int(props.getProperty(prefix + "LoginDelaySeconds")))

        # Add KeepXaConnTillTxComplete to help with in-doubt transactions.
        xaParams = jdbcResource.getJDBCXAParams()
        xaParams.setKeepXaConnTillTxComplete(1)

        # Add target
        jdbcSystemResource.addTarget(wl.getMBean("/Servers/examplesServer"))
. . .

For more information, see Navigating and Editing MBeans in the WebLogic Scripting Tool. You can target or deploy JDBC resources to a cluster to improve the availability of cluster-hosted applications. For information about JDBC objects in a clustered environment, see "JDBC Connections" in Using WebLogic Server Clusters. Multi data sources are supported for use in clusters. However, note that multi data sources can only use data sources in the same JVM; multi data sources cannot use data sources from other cluster members.
#0 Re: -1 » Temporary network » 31/08/2013 at 21:53 — NicoZic56, 9 replies
Hello. Yes, there is a trick... You have to go through a suid script. Change your script to:
#!/bin/ksh
service network-manager start
sleep 600
service network-manager stop
Then the script must belong to root, and you have to set the suid bit (which lets a user take on the rights of the file's owner):
sudo chmod 750 monscript
sudo chmod +s monscript
That should do the job. Careful, this opens a security hole. Follow the instructions on that page carefully.
#1 Re: -1 » Temporary network » 01/09/2013 at 22:40 — NicoZic56, 9 replies
First of all, there's an error in the script: the first line should be removed. Next, as I mentioned, setting the SUID bit on a script opens too many security holes, so it has been disabled. Also, ksh is no longer needed (I just realised it isn't installed by default on Ubuntu). Rereading your request, I see you're willing to type your admin password, so there's a simple solution. Create the script in /root (sudo gedit /root/a.sh):
#!/bin/sh
service network-manager start
sleep 600
service network-manager stop
(edit... I'd forgotten)
sudo chmod 700 /root/a.sh
The script to put on the desktop:
#!/bin/sh
gksudo /root/a.sh
Give it execute permission. If you double-click it, it should ask for admin rights and run the script. (I tested under XFCE and it works with a similar script.)
#2 Re: -1 » Temporary network » 02/09/2013 at 21:29
#3 Re: -1 » Temporary network » 07/09/2013 at 09:59 — NicoZic56, 29 replies
Hello. There's an anomaly in your status:
iwconfig
wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off
Try this:
sudo iwconfig wlan0 txpower auto
That should fix the problem.
NicoZic56, 29 replies — Here I am again!
I've searched a lot, but I haven't found a similar case, even on the English-language forums. I suggest you try a few more things, and if we get nowhere, test with an older kernel version. I think the problem does revolve around hardware activation, and that's what we should tinker with.
1) In the BIOS: see if there is anything to configure the F12 key (either to disable it, or to change its default activation value). Watch for any changes with rfkill list, which should no longer report "Hard blocked".
2) Toggle that key and check (still with the rfkill command) whether anything changes.
3) Check whether a private driver command might enable the wifi (I don't have much faith in this, but still...). On my machine it gives:
nicolas@picogiga:~$ sudo iwpriv wlan0
wlan0 no private ioctls.
Which means I have no private commands for the driver. What about on yours?
4) Try a backported kernel version. If you type:
apt-cache search linux|grep image|grep backport
you should see a list of backported wifi modules (that's the case on 12.04). You'll need to pick a version (don't take the pae one, you're on 64-bit) and install it with a command line along these lines (to be completed with the output of the previous command):
sudo apt-get install linux-backports-modules-cw-?.?-raring-generic
NicoZic56, 29 replies — So the result of rfkill would depend on the card?
On mine, I get:
root@picogiga:~# rfkill list
1: acer-wireless: Wireless LAN Soft blocked: no Hard blocked: no
2: phy0: Wireless LAN Soft blocked: no Hard blocked: no
18: hci0: Bluetooth Soft blocked: no Hard blocked: no
root@picogiga:~# rfkill block wifi
root@picogiga:~# rfkill list
1: acer-wireless: Wireless LAN Soft blocked: yes Hard blocked: no
2: phy0: Wireless LAN Soft blocked: yes Hard blocked: no
18: hci0: Bluetooth Soft blocked: no Hard blocked: no
root@picogiga:~# rfkill unblock wifi
Same question for airplane mode? It can also sometimes be fixed with iwconfig wlan0 txpower on (here)... but in that case iwconfig reports it... Anyway, to come back to simbad83's original problem, you can try the command...
NicoZic56, 29 replies — I think that's a good sign... for a Ralink card, apparently, it's not wlan0 but ra0. For my information, can you confirm that it's the rfkill command that created the interface? From what I've seen elsewhere, it should work now... Does it? Next, you should repost all the commands from the beginning (the ones you already posted), or at least all those whose output has changed.
NicoZic56, 29 replies — Right now I'm at work (not much time to look at this in detail). Picking up on: "This blocking is specific to 13.04. We'd need to install another driver... But I couldn't manage to install the rt5592sta_fix_64bit_3.8.patch." I found neither the patch nor what it is supposed to fix. Where did you get it? This suggests two things to me: 1) maybe the 64-bit driver is buggy; would you be willing to try a pae kernel? 2) I have compiled drivers for Linux before; if we're sure it will fix the problem, tell me how far you've got, I think I could help you out...
NicoZic56, 29 replies — Argh... Does the rfkill list command still report a hardware block? If so, unfortunately we haven't made much progress.
Have you tried the other leads I already gave you (32-bit pae kernel, BIOS, sudo iwpriv ra0)?
#14 Re: -1 » Equivalent of XCOPY on Linux... » 03/09/2013 at 21:32 — NicoZic56, 3 replies
Hello. One solution (as good as any other): use tar (archive creation). Create the archive:
tar cvzf /tmp/tmp.tgz $(find . -name '*.pdf')
Extract it where you want (after moving to the directory with cd):
tar xvzf /tmp/tmp.tgz
The drawback: it goes through a temporary file that isn't really needed.
#15 Re: -1 » Repeated wifi disconnections » 30/08/2013 at 22:39 — NicoZic56, 3 replies
Hello! I think you need to change your box's wifi configuration. I think it shows up first in the scan, and that you aren't broadcasting the SSID (that's why it doesn't appear: ESSID:"<hidden>"). You'd need to connect to your box's administration interface. Change the channel number: all the nearby access points are transmitting on channel 11. Switch to channel 1, it should work much better. You could also tick "broadcast SSID" (or network name), which would simplify any further diagnosis.
#16 Re: -1 » Repeated wifi disconnections » 31/08/2013 at 21:07 — NicoZic56, 3 replies
That's quite consistent with the channel problem I suggested. Some boxes pick a channel automatically, but in my experience that doesn't always work very well. It only took one of them rebooting for the problems to disappear. If you run another
sudo iwlist wlan0 scanning
you should see the other channels. If the problem comes back, the advice to change the channel number still stands.
#18 Re : -1 » Wifi card problem on Zbox HD-ID12 » 21/08/2013 at 13:35 NicoZic56 Replies: 4
Hello, you should have posted in the dedicated wifi forum (http://forum.ubuntu-fr.org/viewforum.php?id=82), which provides a procedure for the commands to post (http://forum.ubuntu-fr.org/viewtopic.php?id=1089311). It could indeed be a hardware problem. Or (and this would be my preferred hypothesis): the PC is located somewhere with no wifi coverage (or where the signal is jammed by something). The output of those commands would make things clearer.

#19 Re : -1 » Wifi card problem on Zbox HD-ID12 » 21/08/2013 at 21:15 NicoZic56 Replies: 4
Hello, it's a bit odd that the card is enumerated as wlan1; normally it should be wlan0. Is it the same on the other machines? Also, it's not a good sign that the card returns nothing... The parameters look correct to me. You should keep trying the command sudo iwlist scan. If it returns nothing, I think there really is a hardware problem...

#20 Re : -1 » Hello, cannot enable Ralink corp. Device [1814:3290] despite s » 21/08/2013 at 13:42 NicoZic56 Replies: 1
Hello!

0: hp-wifi: Wireless LAN
	Soft blocked: no
	Hard blocked: yes

Normally this means you have a hardware wifi switch that disables the wifi. On laptops, people sometimes press it without realizing.

#21 Re : -1 » [Solved] Script slower because of the pipe??? » 21/08/2013 at 09:14 NicoZic56 Replies: 9
Er... you just need to add the call to zenity to the line. Did you leave it out when copy-pasting?
mencoder "/home/$USER/video.vob" -nosound -ovc frameno -sid 9 -vobsubout "/home/$USER/soustitrage" -vobsuboutindex 0 -o /dev/null | awk -vRS='\r' -F [%\(] '/%)/ {printf "\n%d\n",$2 ; printf "\n#%d %\n",$2 ;fflush();}' | awk '!x[$0]++' | zenity --progress --auto-close

#22 Re : -1 » [Solved] Script slower because of the pipe??? » 21/08/2013 at 09:40
#23 Re : -1 » [Solved] Script slower because of the pipe??? » 21/08/2013 at 10:26 NicoZic56 Replies: 9
I ran some more tests, and it's a problem with the awk command that removes the duplicate lines: it does not flush its output on each new line. The duplicate-removal command would need to be configured to flush, and I don't know how to do that in awk. (The preceding awk command does it, thanks to the fflush() in the code passed as a parameter.) I do know how to manage it with Python. Create a file "remove_dup.py" in a directory on your PATH, containing:

#!/usr/bin/python
import sys
l1 = ""
while 1:
    l2 = sys.stdin.readline()
    if not l2:
        break
    if l2 != l1:
        sys.stdout.write(l2)
        sys.stdout.flush()
    l1 = l2

Then replace awk '!x[$0]++' with remove_dup.py in the command line.
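The remove_dup.py filter above boils down to this loop; here it is as a Python 3 function working on arbitrary file-like objects, so it can be tested in isolation (the function name is mine):

```python
import io

def drop_adjacent_duplicates(src, dst):
    """Stream lines from src to dst, skipping consecutive duplicates and
    flushing after every write, so the next stage of a pipeline (zenity's
    progress dialog here) sees each new line immediately."""
    prev = None
    for line in src:
        if line != prev:
            dst.write(line)
            dst.flush()
            prev = line
```

The explicit flush() after each write is the whole point: without it, stdout is block-buffered when connected to a pipe, which is exactly why the plain awk '!x[$0]++' stage appeared to stall.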
#24 Re : -1 » wifi blocked by hardware » 20/08/2013 at 22:52 NicoZic56 Replies: 28
OK, here is the tutorial for compiling the module. Copy and paste the following code into an editor, and save it as /tmp/nico.patch

diff -rupN acerhk/acerhk.c acerhk/acerhk.c
--- acerhk/acerhk.c	2009-07-02 23:48:23.000000000 +0200
+++ acerhk/acerhk.c	2013-08-19 22:43:21.556956104 +0200
@@ -37,6 +37,8 @@
 #ifndef AUTOCONF_INCLUDED
 #include <linux/config.h>
+#else
+#include <generated/autoconf.h>
 #endif
 
 /* This driver is heavily dependent on the architecture, don't let
@@ -2827,7 +2829,7 @@ static void acerhk_proc_cleanup(void)
 
 /* {{{ file operations */
 
-static int acerhk_ioctl( struct inode *inode, struct file *file,
+static long acerhk_ioctl( /*struct inode *inode,*/ struct file *file,
                          unsigned int cmd, unsigned long arg )
 {
   int retval;
@@ -2938,7 +2940,7 @@ static int acerhk_resume(struct platform
 static struct file_operations acerhk_fops = {
   owner:          THIS_MODULE,
-  ioctl:          acerhk_ioctl,
+  unlocked_ioctl: acerhk_ioctl,
   open:           acerhk_open,
 #ifdef ACERDEBUG
   write:          acerhk_write,
diff -rupN acerhk/Makefile acerhk/Makefile
--- acerhk/Makefile	2009-07-02 23:48:23.000000000 +0200
+++ acerhk/Makefile	2013-08-20 21:55:09.912536657 +0200
@@ -1,11 +1,17 @@
 # change KERNELSRC to the location of your kernel build tree only if
 # autodetection does not work
 #KERNELSRC=/usr/src/linux
-KERNELSRC?=/lib/modules/`uname -r`/build
+#KERNELSRC=/lib/modules/`uname -r`/build
+
+KERNELSRC=/usr/src/linux-headers-`uname -r`/
+
 # Starting with 2.6.18, the kernel version is in utsrelease.h instead of version.h, accomodate both cases
-KERNELVERSION=$(shell awk -F\" '/REL/ {print $$2}' $(shell grep -s -l REL $(KERNELSRC)/include/linux/version.h $(KERNELSRC)/include/linux/utsrelease.h))
+KERNELVERSION=$(shell uname -r | sed 's/-.*//' )
 KERNELMAJOR=$(shell echo $(KERNELVERSION)|head -c3)
+#KERNELVERSION=3.2.0
+#KERNELMAJOR=3.2
+
 # next line is for kernel 2.6, if you integrate the driver in the kernel tree
 # /usr/src/linux/drivers/acerhk - or something similar
 # don't forget to add the following line to the parent dir's Makefile:
@@ -14,9 +20,12 @@ KERNELMAJOR=$(shell echo $(KERNELVERSION
 CONFIG_ACERHK?=m
 obj-$(CONFIG_ACERHK) += acerhk.o
-EXTRA_CFLAGS+=-c -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe
+CFLAGS+=-c -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe
 INCLUDE=-I$(KERNELSRC)/include
+CFLAGS+=-D AUTOCONF_INCLUDED
+INCLUDE+=-I$(KERNELSRC)/arch/x86/include
+
 ifeq ($(KERNELMAJOR), 2.6)
 TARGET := acerhk.ko
 else
@@ -33,6 +42,7 @@ help:
 	@echo -e install\\t- copies module binary to /lib/modules/$(KERNELVERSION)/extra/
 	@echo -e clean\\t- removes all binaries and temporary files
+
 # this target is only for me, don't use it yourself (Olaf)
 export:
 	sh export.sh

From the home dir:

wget http://ftp.fr.debian.org/debian/pool/main/a/acerhk/acerhk-source_0.5.35-8_all.deb
sudo dpkg -i acerhk-source_0.5.35-8_all.deb

And then the compilation:

sudo -i
cd /usr/src
tar xjf acerhk.tar.bz2
cd modules
patch -p0 < /tmp/nico.patch
cd acerhk
make
make install
modprobe acerhk
echo 1 > /proc/driver/acerhk/wirelessled
exit

Enjoy!
I'm trying to compile an OpenCL program on Ubuntu with an NVIDIA card that worked once before.

#include <CL/cl.h>
#include <iostream>
#include <vector>

using namespace std;

int main() {
    cl_platform_id platform;
    cl_device_id device;
    cl_context context;
    cl_command_queue command_queue;
    cl_int error;

    if(clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        cout << "platform error" << endl;
    }
    if(clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        cout << "device error" << endl;
    }
    context = clCreateContext(NULL, 1, &device, NULL, NULL, &error);
    if(error != CL_SUCCESS) {
        cout << "context error" << endl;
    }
    command_queue = clCreateCommandQueue(context, device, 0, &error);
    if(error != CL_SUCCESS) {
        cout << "command queue error" << endl;
    }
    return 0;
}

I compile it like so:

g++ -I/usr/local/cuda/include -L/usr/lib/nvidia-current -lOpenCL opencl.cpp

and I get this result:

/tmp/ccAdS9ig.o: In function `main':
opencl.cpp:(.text+0x1a): undefined reference to `clGetPlatformIDs'
opencl.cpp:(.text+0x3d): undefined reference to `clGetDeviceIDs'
opencl.cpp:(.text+0x65): undefined reference to `clCreateContext'
opencl.cpp:(.text+0x85): undefined reference to `clCreateCommandQueue'
collect2: ld returned 1 exit status

but nm -D /usr/lib/nvidia-current/libOpenCL.so tells me that libOpenCL.so at least contains clGetPlatformIDs:

0000000000002400 T clGetKernelWorkGroupInfo
0000000000002140 T clGetMemObjectInfo
0000000000002e80 T clGetPlatformIDs
0000000000002de0 T clGetPlatformInfo
0000000000002310 T clGetProgramBuildInfo
00000000000022f0 T clGetProgramInfo
00000000000021f0 T clGetSamplerInfo

Am I missing something?
The Problem

After posting to my Django view, the code seems to just stop halfway through. I have an AJAX post to a Django view that is hard-coded to render a specific response (see below). When I post to that view it should always return that response, but for some reason the view outputs all of the print statements, but not the query or the render. I know I'm being redirected by my server log (shown below). Any idea what could be causing this behavior?

Server log:

127.0.0.1 - - [26/Aug/2013 21:27:23] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [26/Aug/2013 21:27:23] "GET /static/css/style.css HTTP/1.1" 304 -
127.0.0.1 - - [26/Aug/2013 21:27:23] "GET /static/js/map.js HTTP/1.1" 200 -
127.0.0.1 - - [26/Aug/2013 21:27:23] "GET /static/js/home_jquery.js HTTP/1.1" 304 -
127.0.0.1 - - [26/Aug/2013 21:27:23] "GET /static/js/jquery_cookie/jquery.cookie.js HTTP/1.1" 304 -
127.0.0.1 - - [26/Aug/2013 21:27:23] "GET /favicon.ico HTTP/1.1" 404 -
I've been posted. Here are my values
<QueryDict: {u'name': [u'Mission Chinese Food'], u'address': [u'154 Orchard Street Manhattan']}>
Mission Chinese Food
154 Orchard Street Manhattan
Fish
127.0.0.1 - - [26/Aug/2013 21:27:32] "POST /results/ HTTP/1.1" 200 -

Simple Django view:

def results(request):
    if request.method == 'POST':
        print "I've been posted. Here are my values"
        print request.POST
        print request.POST.get('name')
        print request.POST.get('address')
        restaurant = Restaurant.objects.filter(name='Fish', address='280 Bleecker St')[0]
        print restaurant
        return render(request, "stamped/restaurant.html", {'restaurant': restaurant})

Simple AJAX post:

var send_data = {
    'name': place.name,
    'address': address
};
var csrftoken = $.cookie('csrftoken');
alert(csrftoken);
alert(send_data);

function csrfSafeMethod(method) {
    // these HTTP methods do not require CSRF protection
    return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method));
}

$.ajaxSetup({
    crossDomain: false, // obviates need for sameOrigin test
    beforeSend: function(xhr, settings) {
        if (!csrfSafeMethod(settings.type)) {
            xhr.setRequestHeader("X-CSRFToken", csrftoken);
        }
    }
});

$.ajax({
    url: '/results/',
    type: 'POST',
    data: send_data,
    success: function(response) {
        console.log("everything worked!");
    },
    error: function(obj, status, err) {
        alert(err);
        console.log(err);
    }
});
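One thing worth noting for AJAX endpoints like this: the POST in the log returned 200, and rendering an HTML template only hands the HTML string to the $.ajax success callback; nothing in the browser navigates anywhere. A common alternative is returning JSON that the callback can act on. A minimal, framework-free sketch of building such a payload (the key names are illustrative; in the view it would typically be wrapped as HttpResponse(payload, content_type='application/json')):

```python
import json

def restaurant_payload(name, address):
    """Build the JSON body an AJAX success callback could consume,
    instead of a fully rendered HTML page."""
    return json.dumps({"name": name, "address": address})
```

The success handler would then read response.name / response.address (with dataType: 'json') instead of receiving an opaque HTML blob.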
I have a single template that is serviced by multiple view functions. As an example, the read_posts() view returns all posts with a GET, and the add_post() view adds a new post with a POST. I may have other post actions on the same page, needing more view functions. Now, each of these view functions needs to pass different arguments to the template. E.g. each form might require a different form argument to be passed. What is the best practice for organizing the multiple arguments to a single template from multiple view functions? As an example, posts.html is the template I use:

<html>
<head>
    <title>My Django Blog</title>
</head>
<body>
    <form action="{% url 'blog:add_post' %}" method="post">
        {% csrf_token %}
        <p><input type="text" name="title" id="title" /></p>
        <p><input type="textarea" name="text" id="text" /></p>
        <input type="submit" value="Submit" />
    </form>
    {% for post in posts %}
        <h1>{{ post.title }}</h1>
        <h3>{{ post.pub_date }}</h3>
        {{ post.text }}
    {% endfor %}
</body>
</html>

Here are the views I use:

def display_posts(request):
    # All posts
    posts = Post.objects.all()
    sorted_posts = posts.order_by('-pub_date')
    context = { 'posts' : sorted_posts }
    return render(request, 'blog/posts.html', context)

def add_post(request):
    if request.method == 'POST':
        form = PostForm(request.POST)
        #return HttpResponse('Hello World')
        if form.is_valid():
            post = Post()
            post.title = form.cleaned_data['title']
            post.text = form.cleaned_data['text']
            post.pub_date = datetime.now()
            post.save()
            return HttpResponseRedirect(reverse('blog:display_posts'))
    else:
        form = PostForm() # An unbound form
    return render(request, "blog:display_posts")

As you may see, display_posts() is the default GET when the page is requested, and add_post() handles the HTTP POST when a new post is created. Each function handles a different piece of the page's functionality, and they need different context variables passed to the template.
(Note I used context only for display_posts just yet.) How do I make sure each function sends a different context to the page, and how do I organize them properly in the template? When you handle multiple forms on a page, do you use partial templates for them and {% include %} them in the main page? Thanks.
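One common pattern for the situation described above is a small context-builder shared by every view that renders the template: it supplies a default for each variable posts.html expects, and each view overrides only what it cares about. A framework-free sketch (the helper name and defaults are my illustration, not an established Django API):

```python
def base_context(**overrides):
    """Default context for posts.html; each view overrides what it needs.
    In the real views, 'posts' would be Post.objects.order_by('-pub_date')
    and 'form' a PostForm instance (unbound for GET, bound on invalid POST)."""
    context = {
        "posts": [],   # the post listing shown by the {% for %} loop
        "form": None,  # the add-post form rendered at the top of the page
    }
    context.update(overrides)
    return context

# display_posts would end with something like:
#   render(request, 'blog/posts.html', base_context(posts=sorted_posts, form=PostForm()))
# and add_post, on invalid input, would re-render with the bound form:
#   render(request, 'blog/posts.html', base_context(posts=sorted_posts, form=form))
```

This way the template can rely on every variable existing, no matter which view rendered it, and no view has to know about the others' arguments.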
This is from the docs:

if __name__ == "__main__":
    # Interactive mode
    run(host='localhost', port=8049, debug=True)

This is the error I get. What did I miss?

Bottle server starting up (using WSGIRefServer(debug=True))...
Listening on http://localhost:8049/
Hit Ctrl-C to quit.

Shutdown...
Traceback (most recent call last):
  File "/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbUwsgiBidderServer/uwsgiBidderServer.py", line 1239, in <module>
    run(host='localhost', port=8049, debug=True)
  File "/usr/local/lib/python2.7/dist-packages/bottle-0.10.11-py2.7.egg/bottle.py", line 2426, in run
    server.run(app)
  File "/usr/local/lib/python2.7/dist-packages/bottle-0.10.11-py2.7.egg/bottle.py", line 2123, in run
    srv = make_server(self.host, self.port, handler, **self.options)
TypeError: make_server() got an unexpected keyword argument 'debug'
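The traceback fits a pattern you can see in the quoted bottle.py line: run() collects unknown keyword arguments and the WSGIRefServer adapter forwards them verbatim to wsgiref's make_server, which has no debug parameter. The usual workaround in Bottle 0.10 is to enable debugging separately, e.g. call bottle.debug(True) before run(host='localhost', port=8049). The forwarding mechanism itself can be reproduced in plain Python (stand-in names; this is a sketch of the failure mode, not Bottle's actual code):

```python
def make_server(host, port, handler):
    # stands in for wsgiref.simple_server.make_server, which accepts
    # no extra keyword arguments
    return (host, port, handler)

def run(host="localhost", port=8080, **options):
    # Bottle-0.10-style run(): unrecognized keywords land in **options
    # and are forwarded blindly to the server factory
    return make_server(host, port, "wsgi-handler", **options)

# run(host="localhost", port=8049) works;
# run(host="localhost", port=8049, debug=True) raises
# TypeError: make_server() got an unexpected keyword argument 'debug'
```

So the docs snippet and the installed Bottle version disagree about where debug=True may be passed.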
I'm using Python 3.2.3's urllib.request module to download Google search results, but I'm getting an odd error: urlopen works with links to Google search results, but not Google Scholar. In this example, I'm searching for "JOHN SMITH". This code successfully prints HTML:

from urllib.request import urlopen, Request
from urllib.error import URLError

# Google
try:
    page_google = '''http://www.google.com/#hl=en&sclient=psy-ab&q=%22JOHN+SMITH%22&oq=%22JOHN+SMITH%22&gs_l=hp.3..0l4.129.2348.0.2492.12.10.0.0.0.0.154.890.6j3.9.0...0.0...1c.gjDBcVcGXaw&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.,cf.osb&fp=dffb3b4a4179ca7c&biw=1366&bih=649'''
    req_google = Request(page_google)
    req_google.add_header('User Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120427 Firefox/15.0a1')
    html_google = urlopen(req_google).read()
    print(html_google[0:10])
except URLError as e:
    print(e)

but this code, doing the same for Google Scholar, raises a URLError exception:

from urllib.request import urlopen, Request
from urllib.error import URLError

# Google Scholar
try:
    page_scholar = '''http://scholar.google.com/scholar?hl=en&q=%22JOHN+SMITH%22&btnG=&as_sdt=1%2C14'''
    req_scholar = Request(page_scholar)
    req_scholar.add_header('User Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120427 Firefox/15.0a1')
    html_scholar = urlopen(req_scholar).read()
    print(html_scholar[0:10])
except URLError as e:
    print(e)

Traceback:

Traceback (most recent call last):
  File "/home/ak5791/Desktop/code-sandbox/scholar/crawler.py", line 6, in <module>
    html = urlopen(page).read()
  File "/usr/lib/python3.2/urllib/request.py", line 138, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.2/urllib/request.py", line 369, in open
    response = self._open(req, data)
  File "/usr/lib/python3.2/urllib/request.py", line 387, in _open
    '_open', req)
  File "/usr/lib/python3.2/urllib/request.py", line 347, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.2/urllib/request.py", line 1155, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "/usr/lib/python3.2/urllib/request.py", line 1138, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno -5] No address associated with hostname>

I obtained these links by searching in Chrome and copying the link from there. One commenter reported a 403 error, which I sometimes get as well. I presume this is because Google doesn't support scraping of Scholar. However, changing the User Agent string doesn't fix this or the original problem, since I get URLErrors most of the time.
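Two side issues are worth separating from the question itself. First, Errno -5 ("No address associated with hostname") is a name-resolution failure, i.e. a local DNS problem with scholar.google.com, not Google rejecting the request (a rejection would come back as an HTTP error such as 403). Second, the snippets set a header named 'User Agent' with a space, while the real header is 'User-Agent'; how urllib.request stores headers can be checked entirely offline:

```python
from urllib.request import Request

req = Request("http://scholar.google.com/scholar")
# the standard header name is 'User-Agent'; urllib stores header names
# capitalized internally, so it is looked up as 'User-agent'
req.add_header("User-Agent", "Mozilla/5.0 (X11; Linux x86_64)")

# a header added as 'User Agent' (with a space) would be stored under a
# different key and never recognized as the User-Agent header by servers
```

So even once DNS works, the 'User Agent' spelling above means the requests go out without any real User-Agent header at all.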
« Back to Material for Readers

There are some small errors in our book that we think readers may want to know about. Any code errors listed here have been corrected in the source code downloads. If you find anything else, please let us know!

Page 66, first code sample: The correct code for Django 1.0 is slightly different. Specifically, you will want to uncomment lines 2, 3 and 14 (and probably 11 as well) in the generated urls.py, so it looks like this:

# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    # Example:
    # (r'^mysite/', include('mysite.foo.urls')),

    # Uncomment the admin/doc line below and add 'django.contrib.admindocs'
    # to INSTALLED_APPS to enable admin documentation:
    (r'^admin/doc/', include('django.contrib.admindocs.urls')),

    # Uncomment the next line to enable the admin:
    (r'^admin/(.*)', admin.site.root),
)

Page 101, second code sample: Everything below class Meta should be indented one more stop (four spaces).

Page 126, first sentence of third paragraph: HttpRequest should be HttpResponse.

Page 189, 2nd sentence of 3rd paragraph after "Controlling Which Stories Are Viewed" header: says the "Custom Managers" section is in Chapter 4. This is incorrect: custom managers are covered in Chapter 11 instead.

Page 192, second code sample: The "cms-search" urlpattern needs to occur before the "cms-story" urlpattern, or else search is never reached.

Page 199, code sample: there should be a Q where the line-wrap marker is printed.

Page 217, Javascript code sample, second-to-last line: remove the double-quote after "update" (but keep the space and the single quote).

Page 222, code sample: In the definition of the timestamp field, replace default=datetime.datetime.now with auto_now_add=True.

Page 224, before form code: Explanation for "seeming redundancy in the path name" was omitted.
In short, the reason is: the pastebin app uses generic views that expect certain template naming conventions (they want to find the templates in a directory on our template search path that's named after the app).
clang

LLVM's clang (at least 3.1) can easily be used via -Dcc=clang. The benefits: your generated code will be faster for debugging builds (optimized builds not so much), compile + link times are much faster and use much less memory, the diagnostics are better, and because its AST does not simplify the code beyond repair (as gcc does) it is easy to add various code-checking passes and diagnostics such as ASan. I found several warnings in my code which I had previously ignored.

Storable.xs:5400:2: warning: expression result unused [-Wunused-value]
        SvREFCNT_inc(sv);       /* XXX seems to be necessary */
        ^~~~~~~~~~~~~~~~
../../sv.h:233:2: note: expanded from macro 'SvREFCNT_inc'
        _sv; \
        ^~~
Storable.xs:5440:2: warning: null passed to a callee which requires a non-null argument [-Wnonnull]
Socket.xs:837:47: warning: conversion specifies type 'int' but the argument has type 'STRLEN' (aka 'unsigned long') [-Wformat]
        croak("Bad arg length for %s, length is %d, should be %d",
                                                ~^
                                                %lu

Timings

/usr/src/perl $ grep scripts=21 build-5.15.*/log.test

debugging -O0 -g3:
build-5.15.4d-nt@24ad6161/log.test:u=9.37 s=1.54 cu=723.64 cs=28.08 scripts=2152 tests=486474
build-5.15.4d-nt@7bb3c074/log.test:u=9.26 s=1.39 cu=699.95 cs=26.65 scripts=2152 tests=489756
build-5.15.4d-nt@8e711f0d/log.test:u=9.61 s=1.05 cu=720.40 cs=25.88 scripts=2152 tests=486457
build-5.15.4d-nt@dbc6546a/log.test:u=9.32 s=1.50 cu=730.39 cs=32.75 scripts=2151 tests=484891
build-5.15.5d-nt/log.test:u=9.10 s=1.38 cu=709.83 cs=27.04 scripts=2154 tests=486849
build-5.15.5d-nt-git-clang/log.test:u=6.59 s=1.27 cu=485.97 cs=26.38 scripts=2165 tests=489595
build-5.15.5d-nt-git-llvm/log.test:u=11.92 s=1.36 cu=808.02 cs=28.55 scripts=2166 tests=487765

non-debugging -O2:
build-5.15.5-nt@5e141575/log.test:u=3.89 s=1.28 cu=331.86 cs=24.34 scripts=2166 tests=487755
build-5.15.5-nt-git-clang/log.test:u=4.37 s=1.32 cu=349.79 cs=24.24 scripts=2166 tests=487823

llvm is llvm-gcc-4.5, which is the slowest; clang is clang 3.1, which is fast; cc is gcc-4.6.1, which is a bit faster at -O2.

address-sanitizer

And then there is Google's address-sanitizer (ASan), which detects invalid pointer accesses (read+write) to stack, heap and globals. It also works via shadow memory maps, like Dr.Memory and DynamoRIO, just much faster than any other memory checker: it's only 2x slower than unchecked, compared to 20x slower with valgrind and 10x with drmemory. And it needs much less memory. http://code.google.com/p/address-sanitizer/wiki/AddressSanitizer Google checks chromium with it.

-Dcc='~/address-sanitizer/asan_clang_Linux/bin/clang' -Accflags=-faddress-sanitizer -Accflags='-mllvm\ -asan-blacklist=asan_blacklist.ignore' -Aldflags=-faddress-sanitizer -Doptimize='-g3\ -O1'

So I added such a perl-5.15.5d-nt-asan to my debugging test suite. But I had to create a custom asan_blacklist.ignore list to exclude lots of early asan bugs/limitations (most of them are now fixed).

Notes:

An existing old clang in your path will harm the build process. I first couldn't build on Linux, even clean, only on Darwin.

-m32 support was missing in my libc, so make lib64 install did the trick.

Use -Doptimize=-O2 (or use Alex's -O0 patch). The problem was that current ASan is not yet properly initialized with -O0, so our Configure probes failed. I patched it but the developers didn't like it, though it worked for me to create a miniperl and, with -O0, a perl and most CPAN modules; I just bypassed ASan. You really need -O1 or -O2 to use ASan. Can we persuade Merijn to use -O1 just for ASan? For sure not. Update: Alex created a better patch to support -O0 and this looks fine now. Great! See issue 11.

On one system I got a linker problem with -fstack-protector, so I removed that from the makefile and config.sh. We do not want to check that twice anyway. On my debian box and my fixed post-configure clang setup it worked okay with -fstack-protector, though.

There's still a Darwin init problem somewhere.
Even with DYLD_NO_PIE=1 I had to force init IO, with something like DYLD_PRINT_OPTS=1 ./miniperl -Dv -Ilib configpm to get past initial ctor crashes. export DYLD_PRINT_OPTS=1 && make, and sometimes even make MINIPERL="./miniperl -Dv -Ilib", was needed. The problem was not debuggable as it worked okay from the debugger. My darwin seems to load the wrong malloc hook. And eventually it led to the first worthwhile problem to inspect, an invalid write in a threaded miniperl. valgrind did not detect this.

$ ./miniperl -Ilib configpm
Expected a Configure variable header or another paragraph of description at configpm line 1010, <GLOS> chunk 1035.
written lib/Config.pod
=================================================================
==2079== ERROR: AddressSanitizer unknown-crash on address 0x7f1ec37f92f0 at pc 0x42e546 bp 0x7ffff98ab790 sp 0x7ffff98ab770
WRITE of size 8 at 0x7f1ec37f92f0 thread T0
    #0 0x42e546 (build-5.15.5d-asan@a7d2e0/miniperl+0x42e546)
    #1 0x47b262 (build-5.15.5d-asan@a7d2e0/miniperl+0x47b262)
    #2 0x7f1ec4958ead (/lib/x86_64-linux-gnu/libc-2.13.so+0x1eead)
    #3 0x41da69 (build-5.15.5d-asan@a7d2e0/miniperl+0x41da69)
0x7f1ec37f92f0 is located 624 bytes inside of 2912-byte region [0x7f1ec37f9080,0x7f1ec37f9be0)
allocated by thread T0 here:
    #0 0x7b93d7 (build-5.15.5d-asan@a7d2e0/miniperl+0x7b93d7)
    #1 0x41dc50 (build-5.15.5d-asan@a7d2e0/miniperl+0x41dc50)
    #2 0x47b1b0 (build-5.15.5d-asan@a7d2e0/miniperl+0x47b1b0)
    #3 0x7f1ec4958ead (/lib/x86_64-linux-gnu/libc-2.13.so+0x1eead)
==2079== ABORTING
Shadow byte and word:
  0x1fe3d86ff25e: 0
  0x1fe3d86ff258: 00 00 00 00 00 00 00 00
More shadow bytes:
  0x1fe3d86ff238: 00 00 00 00 00 00 00 00
  0x1fe3d86ff240: 00 00 00 00 00 00 00 00
  0x1fe3d86ff248: 00 00 00 00 00 00 00 00
  0x1fe3d86ff250: 00 00 00 00 00 00 00 00
=>0x1fe3d86ff258: 00 00 00 00 00 00 00 00
  0x1fe3d86ff260: 00 00 00 00 00 00 00 00
  0x1fe3d86ff268: 00 00 00 00 00 00 00 00
  0x1fe3d86ff270: 00 00 00 00 00 00 00 00
  0x1fe3d86ff278: 00 00 00 00 00 00 00 00

miniperl+0x42e546 is what? It cannot resolve the syms in the backtrace yet. There is an external tool scripts/asan_symbolize.py, but these should better be rewritten in perl to be more stable. I've now written such a symbolizer tool at https://gist.github.com/1392123 and put the full results on perl514.cpanel.net. So far I prefer objdump with manual macro expansion. Adding symbolizing to asan would be on my wishlist, as it sees the expanded macro also.

$ objdump -S -d --start-address=0x42e546 miniperl | less

int
perl_run(pTHXx)
{
    dVAR;
    I32 oldscope;
    int ret = 0;
    dJMPENV;

    PERL_ARGS_ASSERT_PERL_RUN;
#ifndef MULTIPLICITY
    PERL_UNUSED_ARG(my_perl);
#endif

000000000042e546 <perl_run+0x1026>:
    oldscope = PL_scopestack_ix;
#ifdef VMS
    VMSISH_HUSHED = 0;
#endif

    JMPENV_PUSH(ret);
  42e546:  40 88 fa              mov    %dil,%dl
  42e549:  80 e2 07              and    $0x7,%dl
  42e54c:  38 ca                 cmp    %cl,%dl
  42e54e:  0f 8c 18 f1 ff ff     jl     42d66c <perl_run+0x14c>
  42e554:  e8 c7 ef 37 00        callq  7ad520 <__asan_report_store1>
  42e559:  44 89 e9              mov    %r13d,%ecx
  42e55c:  83 e1 07              and    $0x7,%ecx
  42e55f:  83 c1 03              add    $0x3,%ecx
  42e562:  38 c1                 cmp    %al,%cl
  42e564:  0f 8c 3c f1 ff ff     jl     42d6a6 <perl_run+0x186>

    case 0: /* normal completion */

JMPENV_PUSH(ret) =>

$ make perl.i
$ edit perl.i

(void)( { cur_env.je_prev = (my_perl->Itop_env); (void)0;
  cur_env.je_ret = __sigsetjmp (((cur_env.je_buf)), ((0))); (void)0;
  (my_perl->Itop_env) = &cur_env;
  cur_env.je_mustcatch = (0);
  (ret) = cur_env.je_ret; } );

To check which line in this macro failed, I usually rename perl.i to .c, do a line break as above, fix the line number before it, and recompile.

$ mv perl.i perl.c   # I'm in a symlinked buildtree!
$ make
....
ASAN:SIGSEGV
==22000== ERROR: AddressSanitizer crashed on unknown address 0x3ae4f05642e0 (pc 0x00000042dd4d sp 0x7fff9807a7e0 bp 0x7fff9807a990 ax 0x000000000003 T0)
    #0 0x42dd4d (build-5.15.5d-asan@a7d2e0/miniperl+0x42dd4d)

$ objdump -S -d --start-address=0x42dd4d miniperl | less

    cur_env.je_ret = __sigsetjmp (((cur_env.je_buf)), ((0)));
  42dd4d:  80 3a 00              cmpb   $0x0,(%rdx)

So either cur_env.je_buf or cur_env.je_ret is wrong. Now we really have to use the debugger, if we are lucky and the error is reproducible within gdb. In my case it was not. Alternatively, add a printf to this line. Recompiling while instructing the linker to use the same flags as cc helped here: I added -g -O2 -faddress-sanitizer to all LDFLAGS in the makefile (there are three). perl Configure sucks big time with its cc-driver centrism, ignoring ld. This time it compiled fine and I found what looks like a real core bug:

$ ./perl -f -Ilib pod/buildtoc
=================================================================
==30266== ERROR: AddressSanitizer global-buffer-overflow on address 0x7ff6ca5d8d8b at pc 0x7ff6c9e9d2dc bp 0x7fff2362e2c0 sp 0x7fff2362e2a0
READ of size 1 at 0x7ff6ca5d8d8b thread T0
    #0 0x7ff6c9e9d2dc (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0x1002dc)
    #1 0x7ff6c9e9b440 (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0xfe440)
    #2 0x7ff6ca5cea42 (build-5.15.5d-nt-asan@a7d2e0/lib/auto/List/Util/Util.so+0x3a42)
    #3 0x7ff6ca025b00 (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0x288b00)
    #4 0x7ff6c9fa5eee (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0x208eee)
    #5 0x7ff6c9e90209 (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0xf3209)
    #6 0x7ff6c9e86e10 (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0xe9e10)
    #7 0x7ff6c9e67c2d (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0xcac2d)
    #8 0x7ff6c9e5a01e (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0xbd01e)
    #9 0x7ff6c9e561f8 (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0xb91f8)
    #10 0x7ff6c9f1fa83 (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0x182a83)
    #11 0x7ff6c9e8d4b8 (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0xf04b8)
    #12 0x7ff6c9e88120 (build-5.15.5d-nt-asan@a7d2e0/libperl.so+0xeb120)
    #13 0x404d9e (build-5.15.5d-nt-asan@a7d2e0/perl+0x404d9e)
    #14 0x7ff6c8f5fead (/lib/x86_64-linux-gnu/libc-2.13.so+0x1eead)
    #15 0x404b59 (build-5.15.5d-nt-asan@a7d2e0/perl+0x404b59)
0x7ff6ca5d8d8b is located 0 bytes to the right of global variable '.str27' (0x7ff6ca5d8d80) of size 11
'.str27' is ascii string 'List::Util'
==30266== ABORTING
Shadow byte and word:
  0x1ffed94bb1b1: 3
  0x1ffed94bb1b0: 00 03 f9 f9 f9 f9 f9 f9
More shadow bytes:
  0x1ffed94bb190: f9 f9 f9 f9 00 00 00 00
  0x1ffed94bb198: f9 f9 f9 f9 00 00 00 00
  0x1ffed94bb1a0: 00 00 00 04 f9 f9 f9 f9
  0x1ffed94bb1a8: 03 f9 f9 f9 f9 f9 f9 f9
=>0x1ffed94bb1b0: 00 03 f9 f9 f9 f9 f9 f9
  0x1ffed94bb1b8: 00 07 f9 f9 f9 f9 f9 f9
  0x1ffed94bb1c0: 00 00 00 00 00 00 00 00
  0x1ffed94bb1c8: 00 00 00 00 05 f9 f9 f9
  0x1ffed94bb1d0: f9 f9 f9 f9 00 04 f9 f9

Now this really looks like an invalid read past the trailing 0-byte of the gv name. (Size 11 for 'List::Util' sounds like the 0 was allocated: 4 + 2 + 4 + 1 = 11.)

$ objdump -Sd --start-address=0x1002dc libperl.so | less

00000000001002dc <Perl_gv_name_set+0x19c>:
    if (!(flags & GV_ADD) && GvNAME_HEK(gv)) {
        unshare_hek(GvNAME_HEK(gv));
    }
    PERL_HASH(hash, name, len);
  1002dc:  89 fa                 mov    %edi,%edx
  1002de:  83 e2 07              and    $0x7,%edx
  1002e1:  83 c2 03              add    $0x3,%edx
  1002e4:  38 ca                 cmp    %cl,%dl
  1002e6:  7c 70                 jl     100358 <Perl_gv_name_set+0x218>

These macro expansions are a bit longer, so I'll spare you the details. Same procedure as above. 0xfe440 is in Perl_gv_init_pvn, which was called from List/Util.so, which probably defined the global name of the module. The bug really was there, in ListUtil.xs:

    if (SvTYPE(rmcgv) != SVt_PVGV)
        gv_init(rmcgv, lu_stash, "List::Util", 12, TRUE);

12 is clearly off by two: a classical copy&paste error from 3 lines above. What worries me is that no other compiler or tool found this. Filed as rt.cpan.org #72700. And why does valgrind not complain?
Because valgrind cannot find global OOB (out-of-bounds) accesses, nor stack OOB, only heap OOB, and here we have the global variable '.str27'. valgrind only found these known leaks (full details with --leak-check=full):

Warning: bad signal number 0 in sigaction()
HEAP SUMMARY:
     in use at exit: 4,840,027 bytes in 44,571 blocks
   total heap usage: 722,897 allocs, 678,326 frees, 158,660,281 bytes allocated
Searching for pointers to 44,571 not-freed blocks
Checked 9,191,152 bytes
LEAK SUMMARY:
   definitely lost: 1,557 bytes in 82 blocks
   indirectly lost: 0 bytes in 0 blocks
     possibly lost: 0 bytes in 0 blocks
   still reachable: 4,838,470 bytes in 44,489 blocks
        suppressed: 0 bytes in 0 blocks
Rerun with --leak-check=full to see details of leaked memory
ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 4 from 4)
used_suppression: 4 dl-hack3-cond-1

An invalid read is certainly more important than a minor leak. And valgrind is so slow that it is only used occasionally. ASan is so fast and so much better that I compile it in and use it all the time now in my debugging perl. BTW, the leak is:

1,557 bytes in 82 blocks are definitely lost in loss record 1,284 of 1,582
   at 0x4C2779D: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
   by 0x4EE9CE6: Perl_safesysmalloc (util.c:100)
   by 0x4EEB7CC: Perl_savepv (util.c:1103)
   by 0x4E78989: Perl_newXS_len_flags (op.c:7045)
   by 0x4E77E92: Perl_newCONSTSUB_flags (op.c:6947)
   by 0x4E8D893: Perl_gv_init_pvn (gv.c:373)
   by 0x4E9237C: Perl_gv_fetchpvn_flags (gv.c:1691)
   by 0x4E93D55: Perl_gv_fetchsv (gv.c:1395)
   by 0x4E7B187: Perl_ck_rvconst (op.c:7667)
   by 0x4E6DE69: Perl_newUNOP (op.c:3687)
   by 0x4EA6C4B: Perl_yylex (toke.c:6690)
   by 0x4EB934B: Perl_yyparse (perly.c:434)

BTW: I really like the concept of shadow memory maps. See AddressSanitizerAlgorithm or the developer's thesis paper about DynamoRIO at http://www.burningcutlery.com/derek/phd.html. Much easier than huge guard pages, as with electric fence.
Summary of found perl core bugs With the old asan rev 144800 I found 32+13 new unique perl core problems, unthreaded. Most of them look security relevant. Just filed one bug report for now. Will have to automate this somehow. scripts/asan_symbolize.py works only on Linux for me, and I wrote a better symbolizer asan_addr2dis. Some of the problems seem to be asan problems not perl. address-sanitizer: I love you! There are some minor bugs still, it is currently being merged into llvm proper, but it's usable. $ perl -lne'BEGIN{$/=q/ERROR: AddressSan/}; print join " ",$1,$2,$3 if /tizer (.+?) on address.* ((?:READ|WRITE) of size \d+).*? is located (at offset \d+ in .*?) of T0/s' log.test-5.15.5d-nt-asan\@a7d2e0| sort -u stack-buffer-overflow READ of size 1 <Perl_pp_entereval> stack-buffer-overflow READ of size 8 <Perl_sv_vcatpvfn> stack-buffer-overflow WRITE of size 1 <Perl_gv_stashpvn> stack-buffer-underflow WRITE of size 1 <Perl_gv_stashpvn> stack-buffer-overflow WRITE of size 1 <Perl_gv_fetchfile_flags> bogus stack-buffer-overflow WRITE of size 1 <S_study_chunk> stack-buffer-overflow WRITE of size 1 <Perl_call_sv> stack-buffer-overflow WRITE of size 8 <Perl_call_sv> stack-buffer-underflow WRITE of size 1 <Perl_call_sv> stack-buffer-overflow WRITE of size 1 <Perl_amagic_call> stack-buffer-overflow WRITE of size 4 <S_find_byclass> stack-buffer-overflow WRITE of size 4 <Perl_pregcomp> stack-buffer-overflow WRITE of size 4 <Perl_re_compile> stack-buffer-overflow WRITE of size 8 <Perl_re_compile> stack-buffer-underflow WRITE of size 8 <Perl_re_compile> stack-buffer-overflow WRITE of size 8 <Perl_sighandler> stack-buffer-overflow WRITE of size 8 <Perl_call_list> stack-buffer-overflow WRITE of size 8 <Perl_regexec_flags> stack-buffer-underflow WRITE of size 8 <Perl_regexec_flags> stack-buffer-overflow WRITE of size 8 <Perl_die_unwind> stack-buffer-underflow WRITE of size 4 <Perl_die_unwind> stack-buffer-underflow READ of size 8 <Perl_sv_vcatpvfn> stack-buffer-underflow 
WRITE of size 1 <Perl_die_unwind>
stack-buffer-underflow WRITE of size 1 <Perl_newATTRSUB>
stack-buffer-underflow WRITE of size 1 <Perl_Gv_AMupdate>
stack-buffer-underflow WRITE of size 4 <Perl_pp_die>
stack-buffer-underflow WRITE of size 4 <Perl_croak>
stack-buffer-underflow WRITE of size 8 <Perl_pp_entersub>

$ perl -lne'BEGIN{$/=q/ERROR: AddressSan/}; print join " ",$1,$2,$3 if /tizer (.+?) on address.*((?:READ|WRITE) of size \d+).*? is( located \d bytes .*? \()/s' log.test-5.15.5d-nt-asan\@a7d2e0| sort -u

global-buffer-overflow READ of size 1 to the right of global variable '.str'
global-buffer-overflow READ of size 1 to the right of global variable '.str69'
heap-buffer-overflow READ of size 1 at 16-byte region
heap-buffer-overflow READ of size 1 at 16-byte region
heap-buffer-overflow READ of size 8 at 16-byte region
heap-buffer-overflow READ of size 8 at 16-byte region
heap-buffer-overflow READ of size 8 at 19-byte region
heap-buffer-overflow READ of size 8 7 bytes to the right of 9-byte region
heap-buffer-overflow READ of size 8 8 bytes to the right of 8-byte region
heap-buffer-overflow READ of size 8 8 bytes to the right of 8-byte region
heap-buffer-overflow READ of size 8 8 bytes to the right of 8-byte region
heap-buffer-overflow READ of size 8 8 bytes to the right of 8-byte region
heap-buffer-overflow READ of size 8 8 bytes to the right of 8-byte region

The developer notes about those OOB reads: "Please be aware that some of the out-of-bound reads may be caused by over-optimizations in string processing functions. For example, a function may read 8 bytes at a time if it knows that the strings are 8-aligned and NULL-terminated. Theoretically this is still an error, but in practice it should not cause any problems." I have to check all of them manually and keep the benign ones in an asan perl blacklist, which is a suppression file.
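The perl one-liners above split the ASan log on the `ERROR: AddressSan` marker and pull out the error kind, the access, and the location. A rough Python equivalent, extended with the blacklist idea from the last paragraph, might look like this (the blacklist contents and the sample log are made up for illustration):

```python
import re

# Functions whose reports turned out benign (e.g. over-optimized 8-byte string
# reads); a stand-in for the "asan perl blacklist" suppression file above.
BLACKLIST = {"Perl_sv_vcatpvfn"}

def parse_asan_log(text):
    """Yield (kind, access, function) triples from raw ASan output."""
    # Split on the same record marker the perl one-liners use.
    for chunk in text.split("ERROR: AddressSan")[1:]:
        m = re.search(
            r"itizer (\S+) on address.*?((?:READ|WRITE) of size \d+)"
            r".*?<(\w+)>", chunk, re.S)
        if m:
            yield m.groups()

log = """ERROR: AddressSanitizer stack-buffer-overflow on address 0xdead
READ of size 8 ... in frame <Perl_sv_vcatpvfn>
ERROR: AddressSanitizer stack-buffer-overflow on address 0xbeef
WRITE of size 1 ... in frame <Perl_gv_stashpvn>"""

# Drop reports whose innermost frame is blacklisted as a known false positive.
reports = [r for r in parse_asan_log(log) if r[2] not in BLACKLIST]
print(reports)
```

This keeps only the Perl_gv_stashpvn report; a real suppression file would of course match on more than the function name.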
After some days analyzing most of these reports I came to the conclusion that only the very first report caught a perl bug; the rest were false positives, caused either by asan not detecting local pointer updates or by the control flow being mangled with longjmp. Since then asan has been included in llvm trunk, and miniperl can be compiled out of the box. See the asan HowToBuild instructions; for configure I used:

-D'cc=/usr/src/llvm/projects/compiler-rt/lib/asan_clang_linux/bin/clang' -A'ccflags=-faddress-sanitizer' -A'ldflags=-faddress-sanitizer'

with llvm rev 146046. Now only those tests failed:

op/taint.t op/tie.t re/pat_re_eval.t re/pat_rt_report.t re/reg_mesg.t re/regexp.t re/regexp_noamp.t re/regexp_notrie.t re/regexp_qr.t re/regexp_qr_embed.t re/regexp_trielist.t run/fresh_perl.t uni/method.t uni/parser.t uni/readline.t

With problems in those functions:

$ perl -lne'BEGIN{$/=q/ERROR: AddressSan/}; print join " ",$1,$2,$3 if /tizer (.+?) on address.*((?:READ|WRITE) of size \d+).*? is located (at offset \d+ in .*?)
of T0/s' log.test | sort -u

stack-buffer-overflow READ of size 1 <Perl_sv_compile_2op_is_broken>
stack-buffer-overflow READ of size 4 <Perl_vmess>
stack-buffer-overflow READ of size 8 <Perl_vcroak>
stack-buffer-overflow WRITE of size 1 <Perl_amagic_call>
stack-buffer-overflow WRITE of size 1 <Perl_call_sv>
stack-buffer-overflow WRITE of size 1 <Perl_gv_stashpvn>
stack-buffer-overflow WRITE of size 1 <Perl_re_compile>
stack-buffer-overflow WRITE of size 1 <Perl_vcroak>
stack-buffer-overflow WRITE of size 1 <S_incline>
stack-buffer-overflow WRITE of size 1 <S_re_croak2>
stack-buffer-overflow WRITE of size 4 <Perl_re_compile>
stack-buffer-overflow WRITE of size 4 <S_pack_rec>
stack-buffer-overflow WRITE of size 8 <Perl_call_sv>
stack-buffer-overflow WRITE of size 8 <perl_destruct>
stack-buffer-overflow WRITE of size 8 <Perl_die_unwind>
stack-buffer-overflow WRITE of size 8 <Perl_Gv_AMupdate>
stack-buffer-overflow WRITE of size 8 <Perl_hv_common>
stack-buffer-overflow WRITE of size 8 <Perl_re_compile>
stack-buffer-overflow WRITE of size 8 <Perl_regexec_flags>
stack-buffer-underflow WRITE of size 1 <Perl_die_unwind>
stack-buffer-underflow WRITE of size 8 <Perl_regexec_flags>

Still investigating; they look like false alarms to me. But I already detected some more CPAN errors, like #73118 in DBI and #73111 in JSON::XS.

Static analysis with clang-analyzer

clang comes with scan-build, which uses ccc-analyzer to statically analyze C/C++ code. http://clang-analyzer.llvm.org/

scan-build ./Configure ...
scan-build -V -k make
scan-build -V -k make test

This generates a lot of html reports in /tmp/scan-build-*. Have a look and you will be surprised. Mostly it's no big deal, but Perl could definitely benefit from more compiler attributes, especially noreturn, and more defensive code.
Update (May 2014): Please note that these instructions are outdated. While it is still possible (and in fact easier) to blog with the Notebook, the exact process has changed now that IPython has an official conversion framework. However, Blogger isn't the ideal platform for that (though it can be made to work). If you are interested in using the Notebook as a tool for technical blogging, I recommend looking at Jake van der Plas's Pelican support or Damián Avila's support in Nikola. Update: made a full github repo for blog-as-notebooks, and updated the instructions on how to more easily configure everything and use the newest nbconvert for a more streamlined workflow. Since the notebook was introduced with IPython 0.12, it has proved to be very popular, and we are seeing great adoption of the tool and the underlying file format in research and education. One persistent question we've had since the beginning (even prior to its official release) was whether it would be possible to easily write blog posts using the notebook. The combination of easy editing in markdown with the notebook's ability to contain code, figures and results makes it an ideal platform for quick authoring of technical documents, so being able to post to a blog is a natural request. Today, answering a query about this from a colleague, I decided to check again on the status of our conversion pipeline, and I'm happy to report that with a bit of elbow grease, at least on Blogger, things work pretty well! The purpose of this post is to quickly provide a set of instructions on how I got it to work, and to test things out. Please note: this requires code that isn't quite ready for prime time and is still under heavy development, so expect some assembly. Converting your notebook to html with nbconvert The first thing you will need is our nbconvert tool, which converts notebooks across formats.
The README file in the repo contains the requirements for nbconvert (basically python-markdown, pandoc, docutils from SVN, and pygments). Once you have nbconvert installed, you can convert your notebook to Blogger-friendly html with:

nbconvert -f blogger-html your_notebook.ipynb

This will leave two files on your computer, one named your_notebook.html and one named your_notebook_header.html; it may also create a directory called your_notebook_files if ancillary files are needed. The first file contains the body of your post and can be pasted wholesale into the Blogger editing area. The second file contains the CSS and Javascript material needed for the notebook to display correctly; you should only need to use it once, to configure your Blogger setup (see below):

# Only one notebook so far
(master)longs[blog]> ls
120907-Blogging with the IPython Notebook.ipynb  fig/  old/

# Now run the conversion:
(master)longs[blog]> nbconvert.py -f blogger-html 120907-Blogging\ with\ the\ IPython\ Notebook.ipynb

# This creates the header and html body files
(master)longs[blog]> ls
120907-Blogging with the IPython Notebook_header.html  fig/
120907-Blogging with the IPython Notebook.html         old/
120907-Blogging with the IPython Notebook.ipynb

Configuring your Blogger blog to accept notebooks

The notebook uses a lot of custom CSS for formatting input and output, as well as Javascript from MathJax to display mathematical notation. You will need all this CSS and the Javascript calls in your blog's configuration for your notebook-based posts to display correctly:

Once authenticated, go to your blog's overview page by clicking on its title.
Click on templates (left column) and customize using the Advanced options.
Scroll down the middle column until you see an "Add CSS" option.
Copy the entire contents of the _header file into the CSS box.

That's it, and you shouldn't need to do anything else as long as the CSS we use in the notebooks doesn't drastically change.
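The convert-then-locate-two-files step above is easy to script. Here is a small helper that shells out to the old `nbconvert -f blogger-html` command shown in the post; the tool name and flag come from the post itself, while the helper function, its dry-run behavior, and the filename pattern are illustrative assumptions:

```python
import shutil
import subprocess
from pathlib import Path

def convert_for_blogger(notebook, run=False):
    """Run `nbconvert -f blogger-html` on one notebook (if requested and the
    tool is on PATH) and return the body/header paths the post says it emits."""
    nb = Path(notebook)
    cmd = ["nbconvert", "-f", "blogger-html", str(nb)]
    if run and shutil.which(cmd[0]):  # skip quietly when nbconvert is absent
        subprocess.run(cmd, check=True)
    # Per the post: <name>.html holds the post body, <name>_header.html the CSS/JS.
    body = nb.with_suffix(".html")
    header = nb.with_name(nb.stem + "_header.html")
    return body, header

# Dry run: just compute where the output files would land.
body, header = convert_for_blogger(
    "120907-Blogging with the IPython Notebook.ipynb")
print(body.name)
print(header.name)
```

The body file is what gets pasted into Blogger's raw-html editor; the header file is only needed once, for the template CSS step described next.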
This customization of your blog needs to be done only once. While you are at it, I recommend you change the width of your blog so that cells have enough space for clean display; in experimenting I found that the default template was too narrow to properly display code cells, producing a lot of text wrapping that impaired readability. I ended up using a layout with a single column for all blog contents, putting the blog archive at the bottom. Otherwise, if I kept the right sidebar, code cells got too squished in the post area. I also had problems using some of the fancier templates available from 'Dynamic Views', in that I could never get inline math to render. But sticking to those from the Simple or 'Picture Window' categories worked fine, and they still allow for a lot of customization. Note: if you change blog templates, Blogger does destroy your custom CSS, so you may need to repeat the above steps in that case. Adding the actual posts Now, whenever you want to write a new post as a notebook, simply convert the .ipynb file to blogger-html and copy its entire contents to the clipboard. Then go to the 'raw html' view of the post, remove anything Blogger may have put there by default, and paste. You should also click on the 'options' tab (right hand side) and select both Show HTML literally and Use <br> tag, else your paragraph breaks will look all wrong. That's it! What can you put in? I will now add a few bits of code, plots, math, etc., to show which kinds of content can be put in and work out of the box. These are mostly bits copied from our example notebooks, so the actual content doesn't matter; I'm just illustrating the kind of content that works.
# Let's initialize pylab so we can plot later
%pylab inline

With pylab loaded, the usual matplotlib operations work:

x = linspace(0, 2*pi)
plot(x, sin(x), label=r'$\sin(x)$')
plot(x, cos(x), 'ro', label=r'$\cos(x)$')
title(r'Two familiar functions')
legend()

The notebook, thanks to MathJax, has great LaTeX support, so that you can type inline math $(1,\gamma,\ldots, \infty)$ as well as displayed equations: $$ e^{i \pi}+1=0 $$ but by loading the sympy extension, it's easy to showcase math output from Python computations, where we don't type the math expressions in text, and instead the results of code execution are displayed in mathematical format:

%load_ext sympyprinting
import sympy as sym
from sympy import *
x, y, z = sym.symbols("x y z")

From simple algebraic expressions:

Rational(3,2)*pi + exp(I*x) / (x**2 + y)
eq = ((x+y)**2 * (x+1))
eq
expand(eq)

To calculus:

diff(cos(x**2)**2 / (1+x), x)

You can easily include formatted text and code with markdown. You can italicize, boldface, build lists and embed code meant for illustration instead of execution in Python:

def f(x):
    """a docstring"""
    return x**2

or other languages:

for (i=0; i<n; i++) {
    printf("hello %d\n", i);
    x += 4;
}

And since the notebook can store displayed images in the file itself, you can show images which will be embedded in your post:

from IPython.display import Image
Image(filename='fig/img_4926.jpg')

You can embed YouTube videos using the IPython object; this is my recent talk at SciPy'12 about IPython:

from IPython.display import YouTubeVideo
YouTubeVideo('iwVvqwLDsJo')

Including code examples from other languages

Using our various script cell magics, it's easy to include code in a variety of other languages:

%%ruby
puts "Hello from Ruby #{RUBY_VERSION}"

%%bash
echo "hello from $BASH"

And tools like the Octave and R magics let you interface with entire computational systems directly from the notebook; this is the Octave magic, for which our example notebook contains more details:

%load_ext octavemagic
%%octave -s
500,500
# butterworth filter, order 2, cutoff pi/2 radians
b = [0.292893218813452 0.585786437626905 0.292893218813452];
a = [1 0 0.171572875253810];
freqz(b, a, 32);

The rmagic extension does a similar job, letting you call R directly from the notebook, passing variables back and forth between Python and R.

%load_ext rmagic

Start by creating some data in Python:

X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])

Which can then be manipulated in R, with results available back in Python (in XYcoef):

%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
XYcoef

And finally, in the same spirit, the cython magic extension lets you call Cython code directly from the notebook:

%load_ext cythonmagic
%%cython -lm
from libc.math cimport sin
print 'sin(1)=', sin(1)

Keep in mind, this is still experimental code! Hopefully this post shows that the system is already useful to communicate technical content in blog form with a minimal amount of effort. But please note that we're still in heavy development of many of these features, so things are susceptible to changing in the near future. By all means join the IPython dev mailing list if you'd like to participate and help us make IPython a better tool!
lukophron
Re : [python]Monsieur Cinéscript

Ah, forum bug: I was seeing my last message in triplicate, I tried to delete the duplicates and it deleted everything. Anyway, what does "I wanted to see what it would do" mean? And it's a good thing I said "don't run the update"... The problem, when you do things like that, is that I waste my time hunting for the errors by hand and fixing them. The tags that didn't go through: I'll have to spot which ones by comparing the films found in the topic with the films in the list, then remove those tags from the list so the update picks them up again. And the more you update, the more topic pages I have to plow through by hand. Oh well, let's say my script just isn't grandpa-friendly enough.

Notes to self:
- send the terminal output to pastebin
- ask for confirmation before the online edit (that way, in case of doubt, we refuse and avoid making bad edits)

As I said in my deleted message, IMDb regularly changes its web layout. Each time, that will cause bugs. So I'm also thinking about another system, based on the IMDb list of film titles (to be continued, your ideas are welcome). I'll try to get back to this as quickly as possible, but I'm a bit swamped with work right now. Until I post otherwise here: PLEASE DO NOT UPDATE (thanks for being patient; if it isn't updated for 3-4 days, it's not the end of the world).

Last edited by lukophron (09/07/2013 at 02:37)
The danger with acorns is that they take root. Corneille
Offline

spinoziste
Re : [python]Monsieur Cinéscript

@S.O.D: nice one, that last one. Say, how about changing Monsieur Cinéma's face while we're at it? @lukophron: maybe you could program Monsieur Cinéma to warn us in the thread when it fails to update from IMDb?

Last edited by spinoziste (09/07/2013 at 02:38)
We all die.
Offline

lukophron
Re : [python]Monsieur Cinéscript

Say, how about changing Monsieur Cinéma's face while we're at it?

Like this? If you ask me, there's going to be a fight!
The danger with acorns is that they take root. Corneille
Offline

lukophron
Re : [python]Monsieur Cinéscript

@lukophron: maybe you could program Monsieur Cinéma to warn us in the thread when it fails to update from IMDb?

Can you spell out the benefit? (and the benefit compared to warning the user directly?)
The danger with acorns is that they take root. Corneille
Offline

spinoziste
Re : [python]Monsieur Cinéscript

As for the photo, though, we'd better agree first to avoid an inter-ciné squabble; let's just put the Lumière brothers on it and be done with it. No, not the user, the thread. I don't know, say you can't update the script for a while; could Monsieur Cinéma post a message in the thread with the date or the film it failed to update? /me tries to think through his own nonsense... (please wait...)
We all die.
Offline

SODⒶ
Re : [python]Monsieur Cinéscript

Say, how about changing Monsieur Cinéma's face while we're at it? If you ask me, there's going to be a fight!

Despite my admiration for Kubrick, he doesn't symbolize cinema in general as well as Pierre Tchernia does.
The sacred exists only to be pissed on, spat on, burned, and made kitsch and mercantile! Idols exist only to be toppled! yikes The past exists only to amuse a few historians and to be manipulated, twisted, and sold by everyone else!
Online

lukophron
Re : [python]Monsieur Cinéscript

Fight! @spino: no, no, not a bad idea. Actually, tying in with S.O.D.'s request, I could see a topic dedicated to the list of films found. The first message would be edited with the complete list, and a new post would flag the details of the latest update.
The danger with acorns is that they take root. Corneille
Offline

SODⒶ
Re : [python]Monsieur Cinéscript

Fight! Anyway, I'm the graphic designer. Assuming we have a server available, would there be a way to run a JavaScript that tells you whether a film has already been played? It's a bit hard; for the moment, the only thing I've found is searching the page for the IMDb number.
The sacred exists only to be pissed on, spat on, burned, and made kitsch and mercantile! Idols exist only to be toppled! yikes The past exists only to amuse a few historians and to be manipulated, twisted, and sold by everyone else!
Online

lukophron
Re : [python]Monsieur Cinéscript

Anything is possible, but I don't see the difference between:
- "Ctrl+F" and typing the film's name (or its number)
- "click on the Java app" and typing the film's name (or its number)
Text search is built into every browser; what more would we be doing? Otherwise, about the server, I have a question: if we automate everything, we risk ending up with things like [*]tt0125451[/*] hidden all over the place, or bogus tags caused by a typo, and nobody would notice. Well, right now we have system saying yes to "(DVD release)" titles, which is no better. But we might as well change for the better. How would you see things?

Last edited by lukophron (09/07/2013 at 07:56)
The danger with acorns is that they take root. Corneille
Offline

SystemeD
Re : [python]Monsieur Cinéscript

Ah, forum bug: I was seeing my last message in triplicate, I tried to delete the duplicates and it deleted everything. Anyway, what does "I wanted to see what it would do" mean? And it's a good thing I said "don't run the update"... The problem, when you do things like that, is that I waste my time hunting for the errors by hand and fixing them. The tags that didn't go through: I'll have to spot which ones by comparing the films found in the topic with the films in the list, then remove those tags from the list so the update picks them up again. And the more you update, the more topic pages I have to plow through by hand. Oh well, let's say my script just isn't grandpa-friendly enough. Notes to self: send the terminal output to pastebin; ask for confirmation before the online edit (that way, in case of doubt, we refuse and avoid making bad edits). As I said in my deleted message, IMDb regularly changes its web layout. Each time, that will cause bugs. So I'm also thinking about another system, based on the IMDb list of film titles (to be continued, your ideas are welcome). I'll try to get back to this as quickly as possible, but I'm a bit swamped with work right now. Until I post otherwise here: PLEASE DO NOT UPDATE (thanks for being patient; if it isn't updated for 3-4 days, it's not the end of the world).

When I run an update, it's completely at random; I don't go check whether you said "do not update"! If it's just to get blamed, I'll leave you your script and you can do your own updates.
Offline

SODⒶ
Re : [python]Monsieur Cinéscript

@ 6steme1: nobody's yelling at you, we're just discussing. In any case, the script should soon be running on a server.
The sacred exists only to be pissed on, spat on, burned, and made kitsch and mercantile! Idols exist only to be toppled! yikes The past exists only to amuse a few historians and to be manipulated, twisted, and sold by everyone else!
Online

SystemeD
Re : [python]Monsieur Cinéscript

@ 6steme1: nobody's yelling at you, we're just discussing. In any case, the script should soon be running on a server.

Excellent idea.
Offline

SODⒶ
Re : [python]Monsieur Cinéscript

the script should soon be running on a server.

I'm looking for a way to buy a domain name without my personal details showing up in the DNS records. If anyone knows how to do that, cheaply, thanks.
The sacred exists only to be pissed on, spat on, burned, and made kitsch and mercantile! Idols exist only to be toppled! yikes The past exists only to amuse a few historians and to be manipulated, twisted, and sold by everyone else!
Online

lukophron
Re : [python]Monsieur Cinéscript

Thanks to the M. Ciné folks for checking and reporting any problem.

python verifajour.py
mise à jour en cours... v0.9.6b
url du topic = http://forum.ubuntu-fr.org/viewtopic.php?id=1299441
id du topic = 1299441
récupération du message-liste
première page récupérée http://forum.ubuntu-fr.org/viewtopic.php?id=1299441
message récupéré pour mise à jour
page en cours = 1
recherche des codes imdb sur la page 1 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=1
['tt1298650', 'tt0053146']
recherche des codes imdb sur la page 2 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=2
['tt0115693', 'tt0078908']
recherche des codes imdb sur la page 3 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=3
['tt0783233', 'tt0059079', 'tt0477139', 'tt0290673']
recherche des codes imdb sur la page 4 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=4
[]
recherche des codes imdb sur la page 5 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=5
['tt2084989']
recherche des codes imdb sur la page 6 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=6
['tt1614989']
recherche des codes imdb sur la page 7 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=7
['tt2248731', 'tt0120669',
'tt0064423']
recherche des codes imdb sur la page 8 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=8
['tt0093870', 'tt0000417', 'tt1219289']
recherche des codes imdb sur la page 9 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=9
['tt1528769', 'tt0053221', 'tt0097523']
recherche des codes imdb sur la page 10 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=10
[]
recherche des codes imdb sur la page 11 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=11
['tt1570728', 'tt0026564']
recherche des codes imdb sur la page 12 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=12
['tt1830499', 'tt0108551', 'tt0074916']
recherche des codes imdb sur la page 13 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=13
['tt0050260', 'tt0050539']
recherche des codes imdb sur la page 14 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=14
['tt1821641', 'tt1583421']
recherche des codes imdb sur la page 15 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=15
['tt1736633']
recherche des codes imdb sur la page 16 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=16
['tt2334879']
recherche des codes imdb sur la page 17 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=17
[]
page atteinte : 17
recherche des titres des nouveaux films sur imdb
appuyer sur 'o' ou 'n' pour accepter ou refuser un film
['tt0000417', 'tt0059079', 'tt1830499', 'tt1583421', 'tt2334879', 'tt0050539', 'tt1821641', 'tt2084989', 'tt1736633', 'tt0097523', 'tt2248731', 'tt1614989', 'tt0115693', 'tt0053221', 'tt0783233', 'tt0074916', 'tt0050260', 'tt0053146', 'tt0120669', 'tt1298650', 'tt1219289', 'tt1570728', 'tt0026564', 'tt0064423', 'tt0093870', 'tt0290673', 'tt0078908', 'tt1528769', 'tt0477139', 'tt0108551']
Balise : tt0000417 --- Accepter le titre : (Le) voyage dans la lune (1902) ?o titre ajouté
Balise : tt0059079 --- Accepter le titre : Da zui xia (1966) ?o titre ajouté
Balise : tt1830499 --- Accepter le titre : Beur sur la ville (2011) ?o titre ajouté
Balise : tt1583421 --- Accepter le titre : G.I. Joe: Retaliation (2013) ?o titre ajouté
Balise : tt2334879 --- Accepter le titre : White House Down (2013) ?o titre ajouté
Balise : tt0050539 --- Accepter le titre : (The) Incredible Shrinking Man (1957) ?o titre ajouté
Balise : tt1821641 --- Accepter le titre : (The) Congress (2013) ?o titre ajouté
Balise : tt2084989 --- Accepter le titre : Upstream Color (2013) ?o titre ajouté
Balise : tt1736633 --- Accepter le titre : Oslo, 31. august (2011) ?o titre ajouté
Balise : tt0097523 --- Accepter le titre : Honey, I Shrunk the Kids (1989) ?o titre ajouté
Balise : tt2248731 --- Accepter le titre : Crapuleuses 2012) ?o titre ajouté
Balise : tt1614989 --- Accepter le titre : Hodejegerne (2011) ?o titre ajouté
Balise : tt0115693 --- Accepter le titre : Hak hap (1996) ?o titre ajouté
Balise : tt0053221 --- Accepter le titre : Rio Bravo (1959) ?o titre ajouté
Balise : tt0783233 --- Accepter le titre : Atonement (2007) ?o titre ajouté
Balise : tt0074916 --- Accepter le titre : Mr. Klein (1976) ?o titre ajouté
Balise : tt0050260 --- Accepter le titre : Comme un cheveu sur la soupe (1957) ?o titre ajouté
Balise : tt0053146 --- Accepter le titre : Orfeu Negro (1959) ?o titre ajouté
Balise : tt0120669 --- Accepter le titre : Fear and Loathing in Las Vegas (1998) ?o titre ajouté
Balise : tt1298650 --- Accepter le titre : Pirates of the Caribbean: On Stranger Tides (2011) ?o titre ajouté
Balise : tt1219289 --- Accepter le titre : Limitless (2011) ?o titre ajouté
Balise : tt1570728 --- Accepter le titre : Crazy, Stupid, Love. (2011) ?o titre ajouté
Balise : tt0026564 --- Accepter le titre : (La) kermesse héroïque (1935) ?o titre ajouté
Balise : tt0064423 --- Accepter le titre : Heureux qui comme Ulysse... (1970) ?o titre ajouté
Balise : tt0093870 --- Accepter le titre : RoboCop (1987) ?o titre ajouté
Balise : tt0290673 --- Accepter le titre : Irréversible (2002) ?o titre ajouté
Balise : tt0078908 --- Accepter le titre : (The) Brood (1979) ?o titre ajouté
Balise : tt1528769 --- Accepter le titre : Kommandør Treholt & ninjatroppen (2010) ?o titre ajouté
Balise : tt0477139 --- Accepter le titre : Wristcutters: A Love Story (2006) ?o titre ajouté
Balise : tt0108551 --- Accepter le titre : What's Love Got to Do with It (1993) ?o titre ajouté
recherche imdb terminée
message modifié, à envoyer
sauvegarde locale ok, liste.bak mis à jour
modification du post en cours... opening http://forum.ubuntu-fr.org/edit.php?id=13804791
modification faite ... mise à jour terminée

The danger with acorns is that they take root. Corneille
Offline

lukophron
Re : [python]Monsieur Cinéscript

@S.O.D free option http://www.ovh.com/fr/domaines/service_owo.xml
The danger with acorns is that they take root. Corneille
Offline

SystemeD
Re : [python]Monsieur Cinéscript

So, who runs the script? A bot? I've got the previous version; what do I do with it, trash?
Offline

monsieur cinéma
Re : [python]Monsieur Cinéscript

Le script a débuté sur sam. 13 juil. 2013 08:13:51 UTC
mise à jour en cours...
v0.9.7
url du topic = http://forum.ubuntu-fr.org/viewtopic.php?id=1299441
id du topic = 1299441
récupération du message-liste
première page récupérée http://forum.ubuntu-fr.org/viewtopic.php?id=1299441
message récupéré pour mise à jour
page en cours = 17
recherche des codes imdb sur la page 17 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=17
['tt1321870', 'tt0086993']
recherche des codes imdb sur la page 18 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=18
['tt2186812']
recherche des codes imdb sur la page 19 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=19
['tt0043918', 'tt0085454', 'tt0079945', 'tt0045555']
recherche des codes imdb sur la page 20 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=20
[]
page atteinte : 20
recherche des titres des nouveaux films sur imdb
appuyer sur 'o' ou 'n' pour accepter ou refuser un film
['tt1321870', 'tt0086993', 'tt0085454', 'tt0045555', 'tt0079945', 'tt2186812', 'tt0043918']
Balise : tt1321870 --- Accepter le titre : Gangster Squad (2013) ?o titre ajouté
Balise : tt0086993 --- Accepter le titre : (The) Bounty (1984) ?o titre ajouté
Balise : tt0085454 --- Accepter le titre : Don Camillo (1984) ?o titre ajouté
Balise : tt0045555 --- Accepter le titre : (The) Big Heat (1953) ?o titre ajouté
Balise : tt0079945 o--- Accepter le titre : Star Trek: The Motion Picture (1979) ?o
appuyer sur O ou Y pour accepter, N pour refuser
--- Accepter le titre : Star Trek: The Motion Picture (1979) ?o titre ajouté
Balise : tt2186812 --- Accepter le titre : 20 ans d'écart (2013) ?o titre ajouté
Balise : tt0043918 --- Accepter le titre : Don Camillo (1952) ?o titre ajouté
recherche imdb terminée
nouvelle liste créée
Voulez-vous modifier la liste en local?o
ok, liste.bak mis à jour
Voulez-vous modifier la liste en ligne?o
modification du post en cours... opening http://forum.ubuntu-fr.org/edit.php?id=13804791
modification faite ... mise à jour terminée
Script terminé sur sam. 13 juil.
2013 08:14:45 UTC Hors ligne monsieur cinéma Re : [python]Monsieur Cinéscript prêt pour le bot note pour nous-mm : pour enlever un film, il faut l'enlever de la liste brute Hors ligne SystemeD Re : [python]Monsieur Cinéscript prêt pour le bot note pour nous-mm : pour enlever un film, il faut l'enlever de la liste brute Donc, je mets le script à la corbeille ? Hors ligne loutre Re : [python]Monsieur Cinéscript Salut, Je suis développeur Python à mes heures perdues. Je me permets de faire quelques remarques sur le style. En Python, on aime développer de façon "pythonne", on aime le style. try/except : Même si Python le permet, toujours mettre la classe de l'erreur après un except try : tid = url.split('=')[1].rstrip('&p') except : tid = 'ERROR : url invalide ?' est à éviter. Il est préférable qu'une exception qui n'est pas anticipée fasse tout simplement planter le programme. opérateur ternaire : Plutôt que : if new_imdb : imdb_str = ' '.join(new_imdb) else : imdb_str = '' préfère imdb_str = ' '.join(new_imdb) if new_imdb else ''; ou (ancien style) : imdb_str = new_imdb and ' '.join(new_imdb) or ''; Ça ne laisse pas de doute sur l'alimentation de la variable, et c'est plus concis. Fichiers : Plutôt que : file = open('tchernia', 'r') tchernia = file.readlines() file.close() penche toi sur l'utilisation du mot clef 'with'. Tu verras que c'est plus sûr, et plus joli ! Fonctions lambda : Plus concis également et très Python-style, tu peux remplacer : def cleanString(s): if isinstance(s,str): s = unicode(s,"utf-8","replace") s=unicodedata.normalize('NFD',s) return s par : cleanString = lambda s: unicode(s,"utf-8","replace") if isinstance(s,str) else unicodedata.normalize('NFD',s) mais c'est du détail... (et toujours préférer la lisibilité) Tuples et variables : On aime bien en python utiliser le dépaquetage de tuples pour alimenter les variables. 
For example, instead of

a = a.lstrip()
b = b.lstrip()

you can write:

a, b = a.lstrip(), b.lstrip()

Always simpler: rather than

liste_alpha = []
for i in range(28):
    liste_alpha.append([])

prefer xrange over range when it only drives a loop (xrange returns an iterator; range builds a whole list, so it uses more memory). 'i' is an unused variable here, and in Python the custom is to name that kind of variable '_'. Which gives:

liste_alpha = []
for _ in xrange(28):
    liste_alpha.append([])

But in this specific case you can go even simpler:

liste_alpha = 28*[[]]

That's my two cents. More on Python style here: PEP 8. You are not wasting your time learning Python. Offline lukophron Re: [python]Monsieur Cinéscript salamat loutre! Thanks for the feedback! "try/except: even though Python allows it, always put the exception class after an except" — yep, it's not clean; I dealt with bugs as they came up. "prefer imdb_str = ' '.join(new_imdb) if new_imdb else '' or (old style): imdb_str = new_imdb and ' '.join(new_imdb) or ''. It leaves no doubt about how the variable gets set, and it is more concise." — hmmm, I see the point, but as a matter of personal taste I don't like "if else" on a single line. Same for the functions the real Pythonistas are so fond of: it is certainly concise, but it is no longer readable to me; I still have to get used to lambdas... "look into the 'with' keyword. You'll see it's safer, and prettier!" — yes, I didn't know about it when I started coding; it's good to be reminded that I should improve things now that I know a bit more (same for the tuples). OK for xrange and the naming of i, good to know. But: liste_alpha = 28*[[]] did not work with the rest of it.
I remember testing it every which way (on Python 2.6, I don't know whether that matters); in any case I could not get it to work with this:

alfab = 'abcdefghijklmnopqrstuvwxyz'
for i in dicfilm.items():
    if i[1] in alfab:
        liste_alpha[alfab.index(i[1])+1].append(i[0])
    elif i[1].isdigit():
        liste_alpha[0].append(i[0])
    else:
        liste_alpha[27].append(i[0])

Your two cents were appreciated. The danger with acorns is that they take root. Corneille Offline Jules Petibidon Re: [python]Monsieur Cinéscript @loutre: Careful, multiplying lists is not good at all:

>>> L = 5*[[]]
>>> L[0].append(1)
>>> L
[[1], [1], [1], [1], [1]]

Prefer something like:

>>> L = [[] for i in range(5)]
>>> L[0].append(1)
>>> L
[[1], [], [], [], []]

Offline loutre Re: [python]Monsieur Cinéscript (quoting Jules Petibidon's example above) Correct. A basic trap, though. Offline monsieur cinéma Re: [python]Monsieur Cinéscript Manual fix for the duplicate "Le locataire" and the filing of "L'homme qui aimait les femmes". Which reminds me: I could fairly easily run a script over the whole list to produce a consistent alphabetical sort, and we would keep its principle for future updates. The problem is the "principle" itself: what do we count as the first word? (drop all determiners? other ideas?) We can also just not care ^ ^ Offline monsieur cinéma Re: [python]Monsieur Cinéscript Le script a débuté sur sam. 20 juil. 2013 01:54:54 UTC mise à jour en cours...
v0.9.7 url du topic = http://forum.ubuntu-fr.org/viewtopic.php?id=1299441 id du topic = 1299441 récupération du message-liste première page récupérée http://forum.ubuntu-fr.org/viewtopic.php?id=1299441 message récupéré pour mise à jour page en cours = 20 recherche des codes imdb sur la page 20 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=20 ['tt0812243'] recherche des codes imdb sur la page 21 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=21 ['tt0069747'] recherche des codes imdb sur la page 22 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=22 ['tt0061402'] recherche des codes imdb sur la page 23 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=23 ['tt0046572', 'tt0120804', 'tt0061402'] recherche des codes imdb sur la page 24 http://forum.ubuntu-fr.org/viewtopic.php?id=1299441&p=24 [] page atteinte : 24 recherche des titres des nouveaux films sur imdb appuyer sur 'o' ou 'n' pour accepter ou refuser un film ['tt0046572', 'tt0069747', 'tt0120804', 'tt0061402', 'tt0812243'] Balise : tt0046572 --- Accepter le titre : L'étrange désir de Monsieur Bard (1954) ?o titre ajouté Balise : tt0069747 --- Accepter le titre : (Les) aventures de Rabbi Jacob (1973) ?o titre ajouté Balise : tt0120804 --- Accepter le titre : Resident Evil (2002) ?o titre ajouté Balise : tt0061402 --- Accepter le titre : (The) Big Shave (1968) ?o titre ajouté Balise : tt0812243 --- Accepter le titre : Ex Drummer (2007) ?o titre ajouté recherche imdb terminée nouvelle liste créée Voulez-vous modifier la liste en local?o ok, liste.bak mis à jour Voulez-vous modifier la liste en ligne?o modification du post en cours... opening http://forum.ubuntu-fr.org/edit.php?id=13804791 modification faite ... mise à jour terminée Script terminé sur sam. 20 juil. 2013 01:56:41 UTC Hors ligne
I have an object subclass which implements a dynamic-dispatch __iter__ using a caching generator (I also have a method for invalidating the iter cache) like so:

def __iter__(self):
    print("iter called")
    if self.__iter_cache is None:
        iter_seen = {}
        iter_cache = []
        for name in self.__slots:
            value = self.__slots[name]
            iter_seen[name] = True
            item = (name, value)
            iter_cache.append(item)
            yield item
        for d in self.__dc_list:
            for name, value in iter(d):
                if name not in iter_seen:
                    iter_seen[name] = True
                    item = (name, value)
                    iter_cache.append(item)
                    yield item
        self.__iter_cache = iter_cache
    else:
        print("iter cache hit")
        for item in self.__iter_cache:
            yield item

It seems to be working... Are there any gotchas I may not be aware of? Am I doing something ridiculous?
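One gotcha worth knowing about with this pattern: because __iter__ is a generator, the cache is only saved when a caller iterates all the way to the end. A partial iteration (breaking out of a loop early, or calling next() a few times) abandons the generator before the cache assignment runs, so the expensive path repeats next time. A stripped-down sketch (hypothetical class, not the original) showing the effect:

```python
# Stripped-down sketch (hypothetical class, not the original) of a caching
# generator: the cache is only saved if the generator is run to completion.
class Cached:
    def __init__(self, items):
        self._items = list(items)
        self._cache = None
        self.builds = 0  # how many times the expensive path ran

    def __iter__(self):
        if self._cache is None:
            self.builds += 1
            cache = []
            for item in self._items:   # stands in for the expensive walk
                cache.append(item)
                yield item
            # Only reached if the caller exhausted the generator!
            self._cache = cache
        else:
            for item in self._cache:
                yield item

c = Cached([1, 2, 3])
next(iter(c))     # partial iteration: generator abandoned before caching
list(c)           # cache was still None, so the expensive path runs again
list(c)           # finally served from the cache
print(c.builds)   # -> 2, not 1
```

A related consequence of the same shape: two interleaved iterations that both start with a cold cache will each run the expensive path, and whichever finishes last overwrites the cache.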
I would like to iterate in a for loop using 3 (or any number of) lists with any number of elements, for example:

from itertools import izip
for x in izip(["AAA", "BBB", "CCC"], ["M", "Q", "S", "K", "B"], ["00:00", "01:00", "02:00", "03:00"]):
    print x

but it gives me:

('AAA', 'M', '00:00') ('BBB', 'Q', '01:00') ('CCC', 'S', '02:00')

I want:

('AAA', 'M', '00:00') ('AAA', 'M', '01:00') ('AAA', 'M', '02:00') .. ('CCC', 'B', '03:00')

Actually I want this:

for word, letter, hours in [cartesian product of the 3 lists above]:
    if myfunction(word, letter, hours):
        var_word_letter_hours += 1
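What the question describes is the Cartesian product, which the standard library already provides as itertools.product (Python 2.6+); it nests the loops for you. A sketch using the three lists from the question:

```python
from itertools import product

words = ["AAA", "BBB", "CCC"]
letters = ["M", "Q", "S", "K", "B"]
hours = ["00:00", "01:00", "02:00", "03:00"]

count = 0
for word, letter, hour in product(words, letters, hours):
    # myfunction(word, letter, hour) would be called here; we just count
    count += 1

print(count)  # 3 * 5 * 4 = 60 combinations
```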
If I want to move to C++ and SDL in the future, is Python and pygame a good way to learn SDL? Python+PyGame is a really great idea for learning SDL. I wrote a somewhat popular game that way. Python/PyGame seems much more advanced than the SDL bindings for any other language, and one huge advantage compared to C++ is that you don't have to compile code, and with some simple hacking you can even modify a running program and see the feedback live. It makes a huge difference - like using a GUI vector graphics program vs writing SVG in a text editor. Unfortunately you don't get this out of the box, because you need to adapt your program a bit to see it. As for other advantages of PyGame, jrpg ran with very small changes on Linux, Windows, and OSX. I had to do some tweaks to fullscreen mode switching and double buffering, as there were some differences between OSes, but no recompilation was ever necessary. If you have any problems you can get a good stack trace and debug your problems live; that's not really possible with C++ once you get a memory corruption or a segfault. I don't really know how easy or hard it would be to mix C++ and Python for your SDL games. I think it cannot be too hard, as PyGame is a pretty straightforward but very nicely made wrapper for SDL, and Python/C++ mixing is supposed to be easy enough. You can learn some techniques, ways to implement game logic etc. in an SDL-based environment, but after moving to C++/SDL you will have to use the SDL functions directly; the helper functions/objects from PyGame will be completely useless. pygame abstracts the SDL interface quite a lot, therefore I don't think there's much of an advantage carried over. Of course. You can write an SDL game or tools in much less time.
You can start with this code, which displays data/chimp.bmp in a 468x60 window:

import pygame, sys, os
from pygame.locals import *

pygame.init()
window = pygame.display.set_mode((468, 60))
pygame.display.set_caption('Monkey Fever')
screen = pygame.display.get_surface()

monkey_head_file_name = os.path.join("data", "chimp.bmp")
monkey_surface = pygame.image.load(monkey_head_file_name)

screen.blit(monkey_surface, (0, 0))
pygame.display.flip()

def input(events):
    for event in events:
        if event.type == QUIT:
            sys.exit(0)
        else:
            print event

while True:
    input(pygame.event.get())

When you are familiar with SDL objects you can easily move to C++ (if you still want to :p - Pygame is fast and you can make a complex game with it). You could try pyglet if you are targeting OpenGL. It's a much better thought-out library than pygame. But then, if you want to move to C++ and SDL in the future, do it now. That way you actually learn SDL. But before doing such an irresponsible thing, it would perhaps be a good idea to check out pyglet first, if only to learn to design your apps properly in C++ as well, no matter how bad your libraries are. Python won't prevent you from learning. For your purpose PySDL2 is better than pygame. It imports the SDL2 API almost directly. I wouldn't consider Python (or any managed or interpreted language, for that matter) a good way to learn any complex task, because it insulates the programmer from the workings of the system too much. As a friend of mine put it, "Python loves you and wants you to be happy." And that's all well and good if you already know the fundamentals, but if you want to learn those fundamentals in the first place, that insulation works against you. You'll learn the what very quickly, but not the why; and then when something goes badly wrong (and it will eventually, in any non-trivial project) you'll be left with no idea what's happening or why.
I am debugging some code and I want to find out when a particular dictionary is accessed. Well, it's actually a class that subclasses dict and implements a couple of extra features. Anyway, what I would like to do is subclass dict myself and override __getitem__ and __setitem__ to produce some debugging output. Right now, I have

class DictWatch(dict):
    def __init__(self, *args):
        dict.__init__(self, args)

    def __getitem__(self, key):
        val = dict.__getitem__(self, key)
        log.info("GET %s['%s'] = %s" % (str(dict.get(self, 'name_label')), str(key), str(val)))
        return val

    def __setitem__(self, key, val):
        log.info("SET %s['%s'] = %s" % (str(dict.get(self, 'name_label')), str(key), str(val)))
        dict.__setitem__(self, key, val)

'name_label' is a key which will eventually be set that I want to use to identify the output. I have then changed the class I am instrumenting to subclass DictWatch instead of dict and changed the call to the superconstructor. Still, nothing seems to be happening. I thought I was being clever, but I wonder if I should be going a different direction. Thanks for the help!
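One likely reason for seeing no output with this approach is that in CPython many dict methods bypass the overridden hooks: get(), update(), and the dict constructor go straight to the C implementation and never call your __getitem__/__setitem__. A minimal sketch (hypothetical class, not the asker's exact one) demonstrating this:

```python
# Hypothetical logging dict: records which accesses actually went through
# the overridden hooks.
class LoggedDict(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self.accesses = []  # record of hook calls, for demonstration

    def __getitem__(self, key):
        self.accesses.append(("GET", key))
        return dict.__getitem__(self, key)

    def __setitem__(self, key, val):
        self.accesses.append(("SET", key, val))
        dict.__setitem__(self, key, val)

d = LoggedDict()
d["a"] = 1         # goes through __setitem__
_ = d["a"]         # goes through __getitem__
_ = d.get("a")     # bypasses __getitem__ entirely
d.update(b=2)      # bypasses __setitem__ entirely
print(d.accesses)  # [('SET', 'a', 1), ('GET', 'a')] -- get/update left no trace
```

If you need every access instrumented, collections.UserDict (or wrapping a plain dict) routes the other methods through your hooks more consistently than subclassing dict does.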
I've just started learning Python. I am using it to write a script to calculate a salt-inflow rolling average. I have data like this:

Date        A4260502_Flow  A4261051_Flow  A4260502_EC  A4261051_EC
25/02/1970  1304           0              411          0            1304
26/02/1970  1331           0              391          0            1331
27/02/1970  0              0              420          411          0
28/02/1970  0              0              400          391          0
1/03/1970   0              0              0            420          0
2/03/1970   1351           1304           405          400          1327.5
3/03/1970   2819           1331           415          405          2075
4/03/1970   2816           0              413          0            2816
5/03/1970   0              1351           0            415          1351
6/03/1970   0              0              0            0            0
7/03/1970   0              2819           0            413          2819
8/03/1970   0              0              0            0            0
9/03/1970   0              2816           0            412          2816

And my script is:

inputfilename = "output.csv"
outputfilename = "SI_calculation.csv"

# Open files
infile = open(inputfilename, "r+")
outfile = open(outputfilename, 'w')

# Initialise variables
EC_conversion = 0.000525
rolling_avg = 5
flow_avg_list = []
SI_list = []
SI_ra_list = []
SI_ra1 = []

# Import modules
import csv
import numpy  # L20

table = []
reader = csv.reader(infile)  # read
for row in csv.reader(infile):
    table.append(row)
infile.close()

for r in range(1, len(table)):
    for c in range(1, len(row)):  # L30
        table[r][c] = float(table[r][c])

# Calculating flow average
for r in range(1, len(table)):
    flow1 = table[r][1]
    flow2 = table[r][2]
    if flow1 == 0.0:
        flow_avg = flow2  # L40
    elif flow2 == 0.0:
        flow_avg = flow1
    else:
        flow_avg = (flow1 + flow2) / 2
    flow_avg_list.append(flow_avg)

# Calculating salt inflow
for r in range(1, len(table)):
    s1 = table[r][3]
    s2 = table[r][4]  # L50
    if s1 == 0.0 or s2 == 0.0 or flow_avg_list[r-1] == 0.0:
        SI = 0.0
    else:
        SI = EC_conversion * flow_avg_list[r-1] * (s2 - s1)
    SI_list.append(SI)
print SI_list

# Calculating rolling average salt inflow
for r in range(1, len(table)):
    if r < 5:  # rolling_avg = 5
        for i in range(0, r+5):  # L60
            S = SI_list[i]
            SI_ra1.append(S)
        SI_ra = numpy.mean(SI_ra1)
        SI_ra_list.append(SI_ra)
    elif r > (len(table) - 4):
        for i in range(r-5, len(table)-1):
            S = SI_list[i]
            SI_ra1.append(S)
        SI_ra = numpy.mean(SI_ra1)
        SI_ra_list.append(SI_ra)  # L70
    else:
        for i in range(r-5, r+5):
            S = SI_list[i]  # line 73
            SI_ra1.append(S)
        SI_ra = numpy.mean(SI_ra1)
        SI_ra_list.append(SI_ra)
print SI_ra_list

When I ran the script it gave me the error: Line 73: list index out of range. Does anyone know what the cause could be? Sorry this is a long script. I don't know how to shorten it yet.
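For what it's worth, the IndexError comes from the else branch: SI_list has len(table)-1 entries, but range(r-5, r+5) indexes up to r+4, which runs past the end for rows near the tail that the elif does not catch. A second bug is that SI_ra1 is never cleared, so every window's mean also includes all earlier windows. A sketch of a clamped, per-window rolling mean (hypothetical helper, not the original script):

```python
def rolling_mean(values, window=9):
    # Centred rolling mean; the window is clamped at both ends so indices
    # can never run past the list, and each window is a fresh slice
    # (nothing accumulates between iterations).
    half = window // 2
    out = []
    for r in range(len(values)):
        lo = max(0, r - half)
        hi = min(len(values), r + half + 1)
        chunk = values[lo:hi]
        out.append(sum(chunk) / float(len(chunk)))
    return out

print(rolling_mean([1.0, 2.0, 3.0, 4.0, 5.0], window=3))
# [1.5, 2.0, 3.0, 4.0, 4.5]
```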
I am learning Python, and I am having trouble saving the output of a small function to a file. My Python function is the following:

#!/usr/local/bin/python
import subprocess
import codecs

airport = '/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport'

def getAirportInfo():
    arguments = [airport, "--scan", "--xml"]
    execute = subprocess.Popen(arguments, stdout=subprocess.PIPE)
    out, err = execute.communicate()
    print out
    return out

airportInfo = getAirportInfo()
outFile = codecs.open('wifi-data.txt', 'w')
outFile.write(airportInfo)
outFile.close()

I guess that this would only work on a Mac, as it references some PrivateFrameworks. Anyway, the code almost works as it should. The print statement prints a huge XML file that I'd like to store in a file for future processing. And here the problems start. In the version above, the script executes without any errors; however, when I try to open the file, I get an error message along the lines of "error with utf-8 encoding". Ignoring this opens the file, and most things look fine, except for a couple of issues: some SSIDs have non-ASCII characters, like ä, ö and ü. When printed on screen, they are correctly displayed as \xc3\xa4 and so on. When I open the file they are displayed incorrectly, the usual random garbage. Some of the XML values look like this when printed on screen: Data("\x00\x11WLAN-0024FE056185\x01\x08\x82\x84\x8b\x96\x0c\ … x10D\x00\x01\x02") but like this when read from file: //8AAAAAAAAAAAAAAAAAAA== I thought it could be an encoding error (seeing as the umlauts have problems, the error message says something about the utf-8 encoding being messed up, and the text contains \x-style characters), and I tried looking here for possible solutions.
However, no matter what I do, there are still errors: adding the additional argument 'utf-8' to codecs.open yields a UnicodeDecodeError: 'ascii' codec can't decode byte 0x9a in position 24227: ordinal not in range(128), and the generated file is empty. Explicitly encoding to utf-8 with outFile.write(airportInfo.encode('utf-8')) before saving results in the same error. Not being an expert, I tried decoding it — maybe I was just doing the exact opposite of what needed to be done — but I got a UnicodeDecodeError: 'utf8' codec can't decode byte 0x8a in position 8980: invalid start byte. The only thing that worked (unsurprisingly) was writing the repr() of the string to file, but that is just not what I need, and I also can't make a nice .plist out of a file full of escape symbols. So please, can somebody help me? What am I missing? If it helps, the type that gets saved in airportInfo is str (as in type(airportInfo) == str) and not unicode.
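Since subprocess hands back raw bytes (UTF-8 text here, with some embedded binary Data blobs), the usual fix is to write those bytes untouched in binary mode and decode only when actually processing the text. A sketch of both steps, with a small hypothetical byte string standing in for the scan output:

```python
import os
import tempfile

# Hypothetical stand-in for the airport scan output: UTF-8 encoded bytes.
raw = b"SSID: M\xc3\xbcnchen-WLAN\n"

path = os.path.join(tempfile.gettempdir(), "wifi-data.txt")

# 1) Store the bytes as-is: binary mode involves no codec, so nothing can fail.
with open(path, "wb") as f:
    f.write(raw)

# 2) Decode only when processing the text; errors='replace' survives any
#    embedded binary junk instead of raising.
text = raw.decode("utf-8", "replace")

# Round-trip check: the file holds exactly the bytes the tool produced.
with open(path, "rb") as f:
    assert f.read() == raw
print(text.strip())
```

The original error is the classic Python 2 trap: codecs.open in write mode expects unicode, so handing it a str triggers an implicit ascii decode, which is where 'ascii' codec can't decode comes from.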
Use reduction to show that the following function is not computable, where P is any Python program that takes a single input x:

sotrue(P) = True, if P(x) returns True for every value of x
sotrue(P) = False, otherwise (if P(x) returns False or does not halt for at least one value of x)

The proof is a proof by contradiction, and the goal is to find a way to compute halt, given a supposed algorithm for sotrue(P). Assuming that sotrue can be computed, I want to reach the contradiction that halt is computable. Here is the algorithm I created:

def halt(f, i):
    # ...code for the assumed sotrue goes here...
    def ff(x):
        f(i)
        return True
    return sotrue(ff)

So halt(f, i) computes halt! This is a contradiction, since halt is known to not be computable. By contradiction, sotrue(P) is also not computable.

MY QUESTION IS: The algorithm I made seems too simple to be true. Could anyone point out whether there is some problem with my algorithm, or suggest a way to include the case when P(x) returns False, since my proof does not cover it but rather relies only on the "does not halt" part of the problem?
Pierre Thibault — How pipes and shell commands work with Python

Hello, I'm having trouble understanding how redirections work with shell commands and Python. For example, in a bash shell I can type:

aa | bb

If I type the command above, it means that the output of command 'aa' (stdout) becomes the input (stdin) of command 'bb', right? My problem is with this line of Python code:

subprocess.call(["xsel", "-b", "-i"], stdin=tmp_file)

The input for the 'xsel' command comes from the file tmp_file. This command works perfectly. What I don't understand is why I had to use the '-i' option, which the xsel documentation defines as follows:

-i, --input
    Read standard input into the selection

My question is: why do I have to use this option? For example, if I type in a bash shell:

echo xx | xsel

xsel reads its input from standard input, which is fed by the 'echo' command, right? Why don't I need the -i option there, as I do in my line of Python code? Offline

Maisondouf — Re: How pipes and shell commands work with Python

Because the shell is not Python. Each has its own syntax and its own way of working. "By default, this program outputs the selection without modification if both standard input and standard output are terminals (ttys). Otherwise, the current selection is output if standard output is not a terminal (tty), and the selection is set from standard input if standard input is not a terminal (tty)." So indeed, from Python there is no notion of a pipe in the shell sense; it is the Python call that "pours" the contents of the file into the command's input.
As if, in the shell, you had done:

cat tmp_file | xsel

ASUS M5A88-V EVO with AMD FX(tm)-8120 Eight-Core Processor, main OS Precise 12.04.1 LTS, 63½ bits. Handyman, liar, ignoramus, social misfit and mythomaniac, at least according to some... "the secret of my form is summed up in two words, no sport" (Winston Churchill) Offline

Pierre Thibault — Re: How pipes and shell commands work with Python

"So indeed, from Python there is no notion of a pipe in the shell sense; it is the Python call that pours the contents of the file into the command's input. As if, in the shell, you had done: cat tmp_file | xsel" I don't understand. If I run that command in the terminal, I don't need the -i option, unlike in my Python code. Does that mean there is an error in the xsel documentation? Offline

pingouinux — Re: How pipes and shell commands work with Python

Hello, As the excerpt from man xsel quoted above (Maisondouf, #2) indicates, standard input is treated differently when it is a terminal. These three commands will read standard input:

cat tmp_file | xsel
xsel <tmp_file
xsel -i   # reads from the terminal

but this one will not:

xsel

Offline

Pierre Thibault — Re: How pipes and shell commands work with Python

OK. That's what I'm seeing. So a program can detect whether an input or output is a terminal; I didn't know that. Still, this doesn't seem to agree with the documentation, which says: "Otherwise, the current selection is output if standard output is not a terminal (tty), and the selection is set from standard input if standard input is not a terminal (tty)." That says it reads standard input, so I don't see why I have to use the -i option. Offline

aurelien.noce — Re: How pipes and shell commands work with Python

Indeed, your story is odd, and above all I can't reproduce the bug here... what are your python/xsel versions?
At the same time, the author himself says that his defaulting logic is rather borderline (src) and that it is better to use the explicit options, like -i here. But I'm still curious to understand the why of it. Just a shot in the dark: are you sure your tmp_file isn't actually sys.stdin? Offline

Pierre Thibault — Re: How pipes and shell commands work with Python

"Indeed, your story is odd, and above all I can't reproduce the bug here... what are your python/xsel versions?"

> python --version && xsel --version
Python 2.7.3
xsel version 1.2.0 by Conrad Parker <conrad@vergenet.net>

"At the same time, the author himself says that his defaulting logic is rather borderline (src) and that it is better to use the explicit options, like -i here." OK, that would explain things. "But I'm still curious to understand the why of it. Just a shot in the dark: are you sure your tmp_file isn't actually sys.stdin?" No, I don't think so. Here is the Python code of my little script:

#!/usr/bin/env python
# Add 4 spaces at the start of each line of the current selection;
# store the result in the clipboard.
import subprocess
import tempfile

sel = subprocess.check_output("xsel")
end_new_line = sel.endswith("\n")
sel = sel.replace("\n", "\n    ", sel.count("\n") - int(end_new_line))
sel = "    " + sel
with tempfile.TemporaryFile(mode='w+') as tmp_file:
    tmp_file.write(sel)
    tmp_file.seek(0)
    subprocess.call(["xsel", "-b", "-i"], stdin=tmp_file)

Offline

aurelien.noce — Re: How pipes and shell commands work with Python

Your code gives me an idea: try leaving the default mode (read+write and binary) on your tempfile, and try calling .flush() on your file descriptor to make sure everything is written before launching the process. With that it should work better — can you confirm? "OK, that would explain things." Actually, no...
The source code is very clear: xsel will read standard input iff isatty(0) returns True, and after a quick test here that is not the case when calling from subprocess.call() with the stdin option set to a file. Hence the idea that it fails because the file is not ready. Last edited by aurelien.noce (11/01/2013 at 11:43) Offline

Pierre Thibault — Re: How pipes and shell commands work with Python

Here is the new version of the code:

#!/usr/bin/env python
# Add 4 spaces at the start of each line of the current selection;
# store the result in the clipboard.
import subprocess
import tempfile

sel = subprocess.check_output("xsel")
end_new_line = sel.endswith("\n")
sel = sel.replace("\n", "\n    ", sel.count("\n") - int(end_new_line))
sel = "    " + sel
with tempfile.TemporaryFile() as tmp_file:
    tmp_file.write(sel)
    tmp_file.flush()
    tmp_file.seek(0)
    subprocess.call(["xsel", "-b", "-i"], stdin=tmp_file)

It still works, but if I remove the "-i" option, it no longer works. Offline
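A side note on the technique discussed in this thread: the temporary file can be avoided entirely by letting subprocess create the pipe itself, via stdin=PIPE plus communicate(), which feeds the data to the child exactly like a shell pipe would. A minimal sketch, using a generic child process in place of xsel so it runs anywhere:

```python
import subprocess
import sys

data = "    some indented text\n"

# A generic child stands in for xsel here: it just upper-cases its stdin.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
out, _ = child.communicate(data.encode("utf-8"))
print(out.decode("utf-8"))
```

With xsel the child's stdin is a pipe rather than a regular file, but either way isatty(0) is false in the child, which is the condition its defaulting logic tests.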
As requested I'm posting this as an answer. I wrote a short Sage script to check the primality of numbers of the form $10^n+333$ where $n$ is in the range $[4,2000]$. I found that the following values of $n$ give rise to prime numbers: $$4,5,6,12,53,222,231,416.$$ Edit 3: I stopped my laptop's search between 2000 and 3000, since it hadn't found anything in 20 minutes. I wrote a quick program to check numbers of the form $10^n+3\cdot 10^i+33$. Here are a couple:

100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000030000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000033
100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000300033
100000000000000000000000000000000000000000000000000000300000000000000000000000000000000000000033
100000000000000000000000000000000000000000000000030000000000000000000000000000000000000000000033
100000000000000000000000000000000000000000000030000000000000000000000000000000000000000000000033
10000000000000000000000000000000003000000033
10000000000000000000000000000030000000000033
10000000000000000000000030000000000000000033
10000000003000000000000000000000000000000033

There seemed to be plenty of numbers of this form, and presumably I could find more if I checked some of the other possible forms as outlined by dr jimbob. Note: I revised the post a bit after jimbob pointed out I was actually looking for primes that didn't quite fit the requirements. Edit 4: As requested, here are the Sage scripts I used. To check if $10^n+333$ was prime:

for n in range(0, 500):
    k = 10^n + 333
    if is_prime(k):
        print n

And to check for numbers of the form $10^n+3\cdot 10^i+33$:

for n in range(0, 500):
    k = 10^n + 33
    for i in range(2, n):
        l = k + 3*10^i
        if is_prime(l):
            print l
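For readers without Sage, the small end of the same search can be reproduced in plain Python with naive trial division (fine for single-digit exponents; for the larger $n$ in the list above you would want a real primality test such as sympy.isprime):

```python
def is_prime(n):
    # Naive trial division: adequate for modest n, far too slow for 10**400+.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Reproduce the small end of the search for primes of the form 10**n + 333.
hits = [n for n in range(4, 8) if is_prime(10**n + 333)]
print(hits)  # [4, 5, 6] -- the start of the list found above
```

(10**7 + 333 drops out because it is divisible by 7: it equals 7 × 1428619.)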
Copyright Chris McCormick, 2009-2010. (Google code page) (Launchpad project page) http://mccormick.cx/ http://podsix.com.au/ PodSixNet mailing list (great for getting help) PodSixNet is a lightweight network layer designed to make it easy to write multiplayer games in Python. It uses Python's built-in asyncore library and rencode.py (included) to asynchronously serialise network events and arbitrary data structures, and deliver them to your high-level classes through simple callback methods. Each class within your game client which wants to receive network events subclasses the ConnectionListener class and then implements Network_* methods to catch specific user-defined events from the server. You don't have to wait for buffers to fill, or check sockets for waiting data, or anything like that; just do connection.Pump() once per game loop and the library will handle everything else for you, passing off events to all classes that are listening. Sending data back to the server is just as easy, using connection.Send(mydata). Likewise on the server side, events are propagated to Network_* method callbacks and data is sent back to clients with the client.Send(mydata) method. If you find a bug, please report it on the mailing list or the Google code issues page. For users of the Construct game making environment for Windows, there is a tutorial on doing multiplayer networking with PodSixNet, here. Thanks to Dave Chabo for contributing this tutorial. First make sure you have Python 2.4 or greater installed. Next you'll want to get the PodSixNet source. You can either check the latest cutting-edge code out of the bzr repository: Or if you prefer SVN, check it out of the Google code project: Or you can download a tarball of the latest release (version 78). The module is found inside a subdirectory called PodSixNet within the top level folder.
There's an __init__.py inside there, so you can just copy or symlink the PodSixNet sub-directory into your own project and then do import PodSixNet, or else you can run sudo setup.py install to install PodSixNet into your Python path. Use sudo setup.py develop if you want to stay up to date with the cutting edge and still be able to svn/bzr up every now and then. By default PodSixNet uses a binary encoder to transfer data over the network, but it can optionally use the JSON format or other formats supported by a serialiser which has 'dumps' and 'loads' methods. If you want to serialise your data using JSON you can change the first line of Channel.py to 'from simplejson import dumps, loads' or use the built-in json library in Python 2.6 or higher. This will allow you to write game clients in languages that can't read the 'rencode' binary format, such as Javascript.

Chat example:

python examples/ChatServer.py
python examples/ChatClient.py

Whiteboard example:

python examples/WhiteboardServer.py
python examples/WhiteboardClient.py

LagTime example (measures round-trip time from the server to the client):

python examples/LagTimeServer.py
python examples/LagTimeClient.py

You will need to subclass two classes in order to make your own server. Each time a client connects, a new Channel-based object will be created, so you should subclass Channel to make your own server-side representation of a client, like this:

from PodSixNet.Channel import Channel

class ClientChannel(Channel):
    def Network(self, data):
        print data

    def Network_myaction(self, data):
        print "myaction:", data

Whenever the client does connection.Send(mydata), the Network() method will be called. The method Network_myaction() will only be called if your data has a key called 'action' with a value of "myaction". In other words if it looks something like this:

data = {"action": "myaction", "blah": 123, ...
}

Next you need to subclass the Server class like this:

from PodSixNet.Server import Server

class MyServer(Server):
    channelClass = ClientChannel

    def Connected(self, channel, addr):
        print 'new connection:', channel

Set channelClass to the channel class that you created above. The method Connected() will be called whenever a new client connects to your server. See the example servers for an idea of what you might do each time a client connects. You need to call Server.Pump() every now and then, probably once per game loop. For example:

myserver = MyServer()
while True:
    myserver.Pump()
    sleep(0.0001)

When you want to send data to a specific client/channel, use the Send method of the Channel class:

channel.Send({"action": "hello", "message": "hello client!"})

To have a client connect to your new server, you should use the Connection module. See pydoc Connection for more details, but here's a summary: Connection.connection is a singleton Channel which connects to the server. You'll only have one of these in your game code, and you'll use it to connect to the server and send messages to the server.

from Connection import connection

# connect to the server - optionally pass hostname and port like: ("mccormick.cx", 31425)
connection.Connect()
connection.Send({"action": "myaction", "blah": 123, "things": [3, 4, 3, 4, 7]})

You'll also need to put the following code once somewhere in your game loop:

connection.Pump()

Any time you have an object in your game which you want to receive messages from the server, subclass ConnectionListener.
For example:

from Connection import ConnectionListener

class MyNetworkListener(ConnectionListener):
    def Network(self, data):
        print 'network data:', data

    def Network_connected(self, data):
        print "connected to the server"

    def Network_error(self, data):
        print "error:", data['error'][1]

    def Network_disconnected(self, data):
        print "disconnected from the server"

    def Network_myaction(self, data):
        print "myaction:", data

Just like in the server case, the network events are received by Network_* callback methods, where you should replace '*' with the value of the 'action' key you want to catch. You can implement as many or as few of the above as you like. For example, NetworkGUI would probably only want to listen for the _connected, _disconnected, and _error network events. The data for _error always comes in the form of network exceptions, like (111, 'Connection refused') - these are passed straight from the socket layer and are standard socket errors. Another class might implement custom methods like Network_myaction(), which will receive any data that gets sent from the server with an 'action' key that has the value 'myaction'. For example, the server might send a message with the number of players currently connected like so:

channel.Send({"action": "numplayers", "players": 10})

And the listener would look like this:

from Connection import ConnectionListener

class MyPlayerListener(ConnectionListener):
    def Network_numplayers(self, data):
        # update gui element displaying the number of currently connected players
        print data['players']

You can subclass ConnectionListener as many times as you like in your application, and every class you make which subclasses it will receive the network events via named Network callbacks. You should call the Pump() method on each object you instantiate once per game loop:

gui = MyPlayerListener()
while 1:
    connection.Pump()
    gui.Pump()

PodSixNet is licensed under the terms of the LGPL v3.0 or higher. See the file called COPYING for details.
This basically means that you can use it in most types of projects (commercial or otherwise), but if you make changes to the PodSixNet code you must make the modified code available with the distribution of your software. Hopefully you'll tell us about it so we can incorporate your changes. I am not a lawyer, so please read the license carefully to understand your rights with respect to this code. Twisted is a fantastic library for writing robust network code. I have used it in several projects in the past, and it was quite nice to work with. That said, Twisted has its downsides for this particular use case, and these are some of the reasons why I decided to write a library that is lightweight, has no dependencies except Python, and is dedicated 100% to the task of multiplayer game networking.
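The ConnectionListener callback convention described above boils down to name-based dispatch: look up a Network_<action> method matching the message's 'action' key, call it if one exists, and always call the catch-all Network() method as well. Here is a self-contained sketch of that idea in plain Python. The class names are hypothetical, and this stands in for, rather than reproduces, PodSixNet's internals:

```python
class Dispatcher:
    """Sketch of PodSixNet-style callback routing (hypothetical names)."""

    def dispatch(self, data):
        # Look for a handler named after the message's 'action' key...
        handler = getattr(self, "Network_" + data.get("action", ""), None)
        if handler is not None:
            handler(data)
        # ...and always invoke the catch-all Network() as well.
        self.Network(data)

    def Network(self, data):
        pass


class PlayerListener(Dispatcher):
    def __init__(self):
        self.players = None
        self.seen = []

    def Network(self, data):
        self.seen.append(data)  # the catch-all sees every message

    def Network_numplayers(self, data):
        self.players = data["players"]


listener = PlayerListener()
listener.dispatch({"action": "numplayers", "players": 10})
print(listener.players)  # -> 10
```

The real library delivers queued messages through Pump() rather than a direct dispatch() call, but the name-based routing is the same idea.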
You should apply your function along axis=1. The function will receive a row as an argument, and anything it returns will be collected into a new Series object:

df.apply(your_function, axis=1)

Example:

>>> df = pd.DataFrame({'a': np.arange(3), 'b': np.random.rand(3)})
>>> df
   a         b
0  0  0.880075
1  1  0.143038
2  2  0.795188
>>> def func(row):
...     return row['a'] + row['b']
>>> df.apply(func, axis=1)
0    0.880075
1    1.143038
2    2.795188
dtype: float64

As for the second part of the question: row-wise operations, even optimised ones using pandas apply, are not the fastest solution there is. They are certainly a lot faster than a Python for loop, but not the fastest. You can verify that by timing the operations, and you'll see the difference. Some operations can be converted to column-oriented ones (the one in my example could easily be converted to just df['a'] + df['b']), but others cannot, especially if you have a lot of branching, special cases, or other logic that has to be performed per row. In that case, if apply is too slow for you, I would suggest "Cython-izing" your code. Cython plays really nicely with the NumPy C API and will give you the maximal speed you can achieve. Or you can try numba. :)
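To make the row-wise vs. column-wise point concrete, here is a small self-contained comparison (the data values are arbitrary, chosen for the illustration). The two approaches produce identical results, but the vectorized one avoids calling a Python function once per row:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.arange(3), 'b': [0.5, 0.25, 0.125]})

# Row-wise: a Python function is invoked once per row (flexible, slower).
row_wise = df.apply(lambda row: row['a'] + row['b'], axis=1)

# Column-wise: a single vectorized operation over whole columns (fast).
col_wise = df['a'] + df['b']

print(row_wise.equals(col_wise))  # -> True
```

On a few rows the difference is negligible, but on millions of rows the vectorized form is typically orders of magnitude faster; %timeit in IPython is an easy way to measure it on your own data.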
Alex0000 - Disconnection on Battery [Samsung]

Hello everyone. I have a recurring problem with my wifi, whatever the OS or version I'm running. After a short while on battery (between 5 min and 1 h) it disconnects, and on top of that the connection manager no longer shows any wifi networks at all. On Windows, the troubleshooting tool restarts the card and everything goes back to normal, the disconnection aside... I haven't found the command to restart the card, nor a lasting solution. On the other hand, if I enter: sudo /etc/init.d/networking restart it makes everything crash under 12.10, and under 11.10 nothing happened at all... Thanks in advance

>> cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.10 DISTRIB_CODENAME=quantal DISTRIB_DESCRIPTION="Ubuntu 12.10" >> lsusb Bus 001 Device 002: ID 0ac8:c33f Z-Star Microelectronics Corp. Webcam Bus 008 Device 002: ID 0a5c:2151 Broadcom Corp. Bluetooth Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub >> lspci -k -nn | grep -A 3 -i net 02:00.0 Network controller [0280]: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) [168c:002b] (rev 01) Subsystem: Askey Computer Corp. Device [144f:7167] Kernel driver in use: ath9k Kernel modules: ath9k 04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd.
RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02) Subsystem: Samsung Electronics Co Ltd Device [144d:c060] Kernel driver in use: r8169 Kernel modules: r8169 >> sudo lshw -C network *-network description: Interface réseau sans fil produit: AR9285 Wireless Network Adapter (PCI-Express) fabriquant: Atheros Communications Inc. identifiant matériel: 0 information bus: pci@0000:02:00.0 nom logique: wlan0 version: 01 numéro de série: 00:26:b6:66:d3:d6 bits: 64 bits horloge: 33MHz fonctionnalités: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.5.0-19-generic firmware=N/A ip=192.168.1.113 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn ressources: irq:16 mémoire:f6000000-f600ffff *-network description: Ethernet interface produit: RTL8101E/RTL8102E PCI Express Fast Ethernet controller fabriquant: Realtek Semiconductor Co., Ltd. identifiant matériel: 0 information bus: pci@0000:04:00.0 nom logique: eth0 version: 02 numéro de série: 00:24:54:18:83:2b taille: 10Mbit/s capacité: 100Mbit/s bits: 64 bits horloge: 33MHz fonctionnalités: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s ressources: irq:43 portE/S:3000(taille=256) mémoire:f4000000-f4000fff mémoire:f2000000-f200ffff mémoire:f2020000-f203ffff >> lsmod Module Size Used by samsung_laptop 14532 0 rfcomm 46619 12 parport_pc 32688 0 bnep 18140 2 ppdev 17073 0 binfmt_misc 17500 1 snd_hda_codec_hdmi 32007 1 snd_hda_codec_realtek 77876 1 ip6t_REJECT 12574 1 xt_hl 12521 6 ip6t_rt 12558 3 nf_conntrack_ipv6 14054 7 nf_defrag_ipv6 13158 1 nf_conntrack_ipv6 ipt_REJECT 12541 1 xt_LOG 17349 10 xt_limit 12711 13 xt_tcpudp 12603 18 xt_addrtype 12635 4 xt_state 12578 14 joydev 17457 0 ip6table_filter 12815 1 
ip6_tables 27207 2 ip6t_rt,ip6table_filter nf_conntrack_netbios_ns 12665 0 nf_conntrack_broadcast 12589 1 nf_conntrack_netbios_ns nf_nat_ftp 12649 0 nf_nat 25254 1 nf_nat_ftp arc4 12529 2 nf_conntrack_ipv4 14480 9 nf_nat nf_defrag_ipv4 12729 1 nf_conntrack_ipv4 nf_conntrack_ftp 13359 1 nf_nat_ftp snd_hda_intel 33491 5 ath9k 131308 0 snd_hda_codec 134212 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel mac80211 539908 1 ath9k i915 520629 3 nf_conntrack 82633 8 nf_conntrack_ipv6,xt_state,nf_conntrack_netbios_ns,nf_conntrack_broadcast,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp snd_hwdep 13602 1 snd_hda_codec snd_pcm 96580 4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec ath9k_common 14055 1 ath9k snd_seq_midi 13324 0 drm_kms_helper 46784 1 i915 snd_rawmidi 30512 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61521 2 snd_seq_midi,snd_seq_midi_event coretemp 13400 0 snd_timer 29425 2 snd_pcm,snd_seq ath9k_hw 395218 2 ath9k,ath9k_common snd_seq_device 14497 3 snd_seq_midi,snd_rawmidi,snd_seq drm 275528 4 i915,drm_kms_helper snd 78734 19 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device iptable_filter 12810 1 ath 23827 3 ath9k,ath9k_common,ath9k_hw ip_tables 26995 1 iptable_filter microcode 22803 0 psmouse 95552 0 btusb 22474 0 i2c_algo_bit 13413 1 i915 lp 17759 0 serio_raw 13215 0 lpc_ich 17061 0 video 19335 2 samsung_laptop,i915 soundcore 15047 1 snd cfg80211 206566 3 ath9k,mac80211,ath bluetooth 209199 22 rfcomm,bnep,btusb parport 46345 3 parport_pc,ppdev,lp snd_page_alloc 18484 2 snd_hda_intel,snd_pcm x_tables 29711 13 ip6t_REJECT,xt_hl,ip6t_rt,ipt_REJECT,xt_LOG,xt_limit,xt_tcpudp,xt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables mac_hid 13205 0 r8169 61650 0 >> iwconfig wlan0 IEEE 802.11bgn ESSID:"homell" Mode:Managed Frequency:2.412 GHz Access Point: 00:C0:49:DA:DA:08 Bit Rate=48 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS 
thr:off Fragment thr:off Power Management:off Link Quality=42/70 Signal level=-68 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:1246 Invalid misc:82226 Missed beacon:0 >> ifconfig -a eth0 Link encap:Ethernet HWaddr 00:24:54:18:83:2b UP BROADCAST MULTICAST MTU:1500 Metric:1 Packets reçus:0 erreurs:0 :0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 Octets reçus:0 (0.0 B) Octets transmis:0 (0.0 B) lo Link encap:Boucle locale inet adr:127.0.0.1 Masque:255.0.0.0 adr inet6: ::1/128 Scope:Hôte UP LOOPBACK RUNNING MTU:16436 Metric:1 Packets reçus:12999 erreurs:0 :0 overruns:0 frame:0 TX packets:12999 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 Octets reçus:1715972 (1.7 MB) Octets transmis:1715972 (1.7 MB) wlan0 Link encap:Ethernet HWaddr 00:26:b6:66:d3:d6 inet adr:192.168.1.113 Bcast:192.168.1.255 Masque:255.255.255.0 adr inet6: fe80::226:b6ff:fe66:d3d6/64 Scope:Lien UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Packets reçus:636704 erreurs:0 :0 overruns:0 frame:0 TX packets:793819 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 Octets reçus:458508453 (458.5 MB) Octets transmis:447953157 (447.9 MB) >> sudo iwlist scan wlan0 Scan completed : Cell 01 - Address: 00:C0:49:DA:DA:08 Channel:1 Frequency:2.412 GHz (Channel 1) Quality=46/70 Signal level=-64 dBm Encryption key:on ESSID:"homell" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 22 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=00000003723316ba Extra: Last beacon: 20ms ago IE: Unknown: 0006686F6D656C6C IE: Unknown: 010582848B962C IE: Unknown: 030101 IE: Unknown: 0706455520010D14 IE: Unknown: 2A0100 IE: Unknown: 32080C1218243048606C IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (1) : TKIP Authentication Suites (1) : PSK IE: Unknown: DD0A0800280101000200FF0F Cell 02 - Address: 74:EA:3A:C8:2A:BA 
Channel:1 Frequency:2.412 GHz (Channel 1) Quality=26/70 Signal level=-84 dBm Encryption key:on ESSID:"TP LINK" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 12 Mb/s; 24 Mb/s; 36 Mb/s Bit Rates:9 Mb/s; 18 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000003784ac6f181 Extra: Last beacon: 240ms ago IE: Unknown: 00075450204C494E4B IE: Unknown: 010882848B960C183048 IE: Unknown: 030101 IE: Unknown: 050400010000 IE: Unknown: 2A0100 IE: Unknown: 32041224606C IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : TKIP CCMP Authentication Suites (1) : PSK Preauthentication Supported Cell 03 - Address: 00:0D:93:7E:5A:84 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=33/70 Signal level=-77 dBm Encryption key:on ESSID:"vax" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=0000021d69900189 Extra: Last beacon: 2036ms ago IE: Unknown: 0003766178 IE: Unknown: 010482848B96 IE: Unknown: 030106 IE: Unknown: 050401030000 IE: Unknown: 2A0100 IE: Unknown: 2F0100 IE: Unknown: 32080C1218243048606C IE: Unknown: DD0700039301030000 IE: Unknown: DD06001018020300 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (1) : TKIP Authentication Suites (1) : PSK Cell 04 - Address: 00:30:F1:FB:1D:A0 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=29/70 Signal level=-81 dBm Encryption key:on ESSID:"Kamerdelle 83 (philips)" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 22 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000002849f813122 Extra: Last beacon: 1996ms ago IE: Unknown: 00174B616D657264656C6C6520383320287068696C69707329 IE: Unknown: 010582848B962C IE: Unknown: 030106 IE: Unknown: 2A0100 IE: Unknown: 32080C1218243048606C >> uname -r -m 3.5.0-19-generic x86_64 >> cat /etc/network/interfaces # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback >> nm-tool NetworkManager Tool State: 
connected (global) - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: r8169 State: unavailable Default: no HW Address: 00:24:54:18:83:2B Capabilities: Carrier Detect: yes Wired Properties Carrier: off - Device: wlan0 [homell] ------------------------------------------------------ Type: 802.11 WiFi Driver: ath9k State: connected Default: yes HW Address: 00:26:B6:66:D3:D6 Capabilities: Speed: 48 Mb/s Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points (* = current AP) *homell: Infra, 00:C0:49:DA:DA:08, Freq 2412 MHz, Rate 54 Mb/s, Strength 51 WPA TP LINK: Infra, 74:EA:3A:C8:2A:BA, Freq 2412 MHz, Rate 54 Mb/s, Strength 19 WPA2 Kamerdelle 83 (philips): Infra, 00:30:F1:FB:1D:A0, Freq 2437 MHz, Rate 54 Mb/s, Strength 30 WEP vax: Infra, 00:0D:93:7E:5A:84, Freq 2437 MHz, Rate 54 Mb/s, Strength 39 WPA IPv4 Settings: Address: 192.168.1.113 Prefix: 24 (255.255.255.0) Gateway: 192.168.1.20 DNS: 192.168.1.1 DNS: 192.168.1.2 >> sudo rfkill list 0: hci0: Bluetooth Soft blocked: yes Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: no 2: samsung-wlan: Wireless LAN Soft blocked: no Hard blocked: no

Alex0000 - Re: Disconnection on Battery [Samsung]

up

toutafai - Re: Disconnection on Battery [Samsung]

Good evening. Did your laptop go into suspend while it was on battery? This makes me think of a power-management problem; check in the BIOS whether you have any settings at that level... The next time it happens, and before fiddling with anything, post the output of dmesg |grep -e wlan0 -e wireless - e ath9k cat /var/lib/NetworkManager/NetworkManager.state iwconfig

Ubuntu Server 12.04 x32 on an IBM P4. - XP / Seven / Ubuntu 14.04 x64 on a Lenovo ThinkPad. Darn Unity... hard to get used to - Canon MG5350 - Logitech Quickcam 3000 - TNT Intuix S800 - Freebox v6 Révolution (it rocks!!)
Ubuntu user since November 2006. 23 PCs freed thanks to free OSes... and now 24, and what a 24... a PC at work and for work ^^

Alex0000 - Re: Disconnection on Battery [Samsung]

First of all, thank you for taking the time to reply. As for suspend, no, that's not it. It happens whether I've just booted, come out of suspend or hibernation, or danced the polka! But I'll check the BIOS; I hadn't thought of that. I'll post the output as soon as it drops again. Last edited by Alex0000 (16/12/2012, 00:16)

Alex0000 - Re: Disconnection on Battery [Samsung]

So here is the output of the commands, for a disconnection 5 minutes after boot:

oumpa@lap-oumpa:~$ cat /var/lib/NetworkManager/NetworkManager.state [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true WimaxEnabled=true oumpa@lap-oumpa:~$ dmesg |grep -e wlan0 -e wireless - e ath9k (entrée standard):[ 26.150266] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 26.152196] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 28.003299] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 28.011830] wlan0: send auth to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 28.014495] wlan0: authenticated (entrée standard):[ 28.021134] wlan0: associating with AP with corrupt beacon (entrée standard):[ 28.024084] wlan0: associate with 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 28.030491] wlan0: RX AssocResp from 90:94:e4:d2:6a:56 (capab=0xc11 status=0 aid=2) (entrée standard):[ 28.030579] wlan0: associated (entrée standard):[ 28.031443] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready (entrée standard):[ 64.677798] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 134.863515] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 135.261468] wlan0: send auth to 90:94:e4:d2:6a:56 (try 1/3) (entrée
standard):[ 135.464020] wlan0: send auth to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 135.668056] wlan0: send auth to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 135.872048] wlan0: authentication with 90:94:e4:d2:6a:56 timed out (entrée standard):[ 138.554721] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 138.950754] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 139.152018] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 139.356034] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 139.560018] wlan0: authentication with 90:94:e4:d2:6a:56 timed out (entrée standard):[ 142.244187] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 142.641568] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 142.844040] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 143.048044] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 143.252038] wlan0: authentication with 90:94:e4:d2:6a:56 timed out (entrée standard):[ 145.932456] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 146.328310] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 146.532035] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 146.736044] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 146.940047] wlan0: authentication with 90:94:e4:d2:6a:56 timed out (entrée standard):[ 148.041997] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 152.628846] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 153.025956] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 153.228035] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 153.432036] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 153.636059] wlan0: authentication with 90:94:e4:d2:6a:56 timed out grep: e: Aucun fichier ou 
dossier de ce type grep: ath9k: Aucun fichier ou dossier de ce type oumpa@lap-oumpa:~$ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=14 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:on

And I also had the problem when coming out of suspend:

oumpa@lap-oumpa:~$ dmesg |grep -e wlan0 -e wireless - e ath9k (entrée standard):[ 24.374058] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 24.375350] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 93.939308] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 93.946484] wlan0: send auth to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 93.948503] wlan0: authenticated (entrée standard):[ 93.955493] wlan0: associating with AP with corrupt beacon (entrée standard):[ 93.956258] wlan0: associate with 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 93.960312] wlan0: RX AssocResp from 90:94:e4:d2:6a:56 (capab=0xc11 status=0 aid=3) (entrée standard):[ 93.960403] wlan0: associated (entrée standard):[ 93.961271] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready (entrée standard):[ 142.416984] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 267.450960] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 392.484797] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 517.416669] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 642.450021] [UFW BLOCK] IN=wlan0 OUT=
MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 731.526045] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=80.57.232.14 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=113 ID=10412 PROTO=UDP SPT=62076 DPT=59653 LEN=75 (entrée standard):[ 751.099943] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=98.92.180.142 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=107 ID=16706 PROTO=UDP SPT=22859 DPT=59653 LEN=75 (entrée standard):[ 768.406056] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 783.162810] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=124.168.49.52 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=110 ID=3029 PROTO=UDP SPT=29378 DPT=59653 LEN=75 (entrée standard):[ 819.497707] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=217.132.129.178 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=110 ID=26558 PROTO=UDP SPT=27851 DPT=59653 LEN=75 (entrée standard):[ 833.219634] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=62.255.232.114 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=114 ID=6751 PROTO=UDP SPT=44096 DPT=59653 LEN=75 (entrée standard):[ 838.554992] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=31.29.211.30 DST=192.168.1.40 LEN=131 TOS=0x00 PREC=0x00 TTL=110 ID=20939 PROTO=UDP SPT=28782 DPT=59653 LEN=111 (entrée standard):[ 838.961321] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=113 ID=15283 PROTO=UDP SPT=52238 DPT=59653 LEN=75 (entrée standard):[ 838.968753] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 
PREC=0x00 TTL=113 ID=15290 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 842.173131] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=15658 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 848.441516] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=16309 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 893.439484] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 895.807103] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=24.77.54.203 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=116 ID=34953 PROTO=UDP SPT=40189 DPT=59653 LEN=75 (entrée standard):[ 915.616834] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=88.235.52.54 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=51 ID=38481 PROTO=UDP SPT=12408 DPT=59653 LEN=75 (entrée standard):[ 929.149248] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.56.93.51 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=48 ID=3585 PROTO=UDP SPT=44520 DPT=59653 LEN=38 (entrée standard):[ 935.936315] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=25452 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 939.075037] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=25805 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 945.667844] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=26558 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée 
standard):[ 987.611437] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.12.214.227 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=29457 PROTO=UDP SPT=35652 DPT=59653 LEN=38 (entrée standard):[ 990.714691] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.12.214.227 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=30297 PROTO=UDP SPT=35652 DPT=59653 LEN=38 (entrée standard):[ 995.141942] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=97.89.80.143 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=45 ID=0 DF PROTO=UDP SPT=52053 DPT=59653 LEN=38 (entrée standard):[ 998.150317] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=97.89.80.143 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=45 ID=0 DF PROTO=UDP SPT=52053 DPT=59653 LEN=38 (entrée standard):[ 1013.053142] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=220.233.12.78 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=107 ID=29277 PROTO=UDP SPT=49849 DPT=59653 LEN=38 (entrée standard):[ 1019.498528] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 1040.040006] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.207.145.115 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=52 ID=65369 PROTO=UDP SPT=25444 DPT=59653 LEN=75 (entrée standard):[ 1061.804785] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.194.22.124 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=50 ID=44978 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1083.508559] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=8472 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 1102.080120] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=79.176.228.51 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=11246 PROTO=UDP SPT=20094 DPT=59653 LEN=38 (entrée standard):[ 1134.221457] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=86.156.13.168 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=110 ID=2574 PROTO=UDP SPT=63186 DPT=59653 LEN=75 (entrée standard):[ 1141.996297] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=83.86.228.115 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=52 ID=19364 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1172.390827] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.58.123.28 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=51 ID=34736 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1194.933595] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=83.86.228.115 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=52 ID=44641 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1210.708901] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=70.79.66.115 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=50 ID=53868 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1233.959743] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=89.134.77.194 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=23293 PROTO=UDP SPT=16087 DPT=59653 LEN=38 (entrée standard):[ 1250.906090] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=111.92.70.150 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=46 ID=34775 PROTO=UDP SPT=60879 DPT=59653 LEN=38 (entrée standard):[ 1265.827604] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.74.93.30 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=114 ID=47137 PROTO=UDP SPT=53646 DPT=59653 LEN=75 (entrée standard):[ 1300.548513] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=79.176.228.51 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=22631 PROTO=UDP SPT=20094 DPT=59653 LEN=38 (entrée standard):[ 1303.681009] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=79.176.228.51 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=22804 PROTO=UDP SPT=20094 DPT=59653 LEN=38 (entrée standard):[ 1320.598826] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=186.19.46.240 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=4764 PROTO=UDP SPT=22500 DPT=59653 LEN=38 (entrée standard):[ 1340.837248] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.208.16.136 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=53 ID=7458 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1377.800565] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=105.236.1.136 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=112 ID=31017 PROTO=UDP SPT=23024 DPT=59653 LEN=75 (entrée standard):[ 1380.559154] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=84.229.137.34 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=14304 PROTO=UDP SPT=31025 DPT=59653 LEN=38 (entrée standard):[ 1414.874938] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=220.233.104.215 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=108 ID=63320 PROTO=UDP SPT=28335 DPT=59653 LEN=38 (entrée standard):[ 1424.599687] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=220.233.104.215 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=108 ID=64041 PROTO=UDP SPT=28335 DPT=59653 LEN=38 (entrée standard):[ 1443.223070] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=2.28.206.100 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=111 ID=27383 PROTO=UDP SPT=17720 DPT=59653 LEN=75 (entrée standard):[ 1471.324528] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=195.182.146.122 DST=192.168.1.40 LEN=129 TOS=0x00 PREC=0x00 TTL=112 ID=5359 PROTO=UDP SPT=23170 DPT=59653 LEN=109 (entrée standard):[ 1490.903537] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=89.134.77.194 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=11616 PROTO=UDP SPT=16087 DPT=59653 LEN=38 (entrée standard):[ 1501.859829] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=177.102.238.59 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=23378 PROTO=UDP SPT=15444 DPT=59653 LEN=38 (entrée standard):[ 1521.273515] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=10836 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1549.567479] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=76.14.65.234 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=49 ID=27835 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1561.517882] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.58.123.28 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=51 ID=55417 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1584.237946] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=120.61.30.238 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=47 ID=20806 PROTO=UDP SPT=11536 DPT=59653 LEN=75 (entrée standard):[ 1597.574215] wlan0: deauthenticating from 90:94:e4:d2:6a:56 by local choice (reason=3) (entrée standard):[ 1710.513007] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 1715.476678] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 1715.489049] wlan0: send auth to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 1715.491074] wlan0: authenticated (entrée standard):[ 1715.497563] wlan0: associating with AP with corrupt beacon (entrée standard):[ 1715.500070] 
wlan0: associate with 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 1715.504166] wlan0: RX AssocResp from 90:94:e4:d2:6a:56 (capab=0xc11 status=0 aid=3) (entrée standard):[ 1715.504256] wlan0: associated (entrée standard):[ 1715.505204] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready (entrée standard):[ 1730.305546] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 1754.786892] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=801 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 1758.161750] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=1181 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 1759.090101] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=151.32.245.102 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=54523 PROTO=UDP SPT=22496 DPT=59653 LEN=38 (entrée standard):[ 1764.508810] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=1968 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 1764.955625] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=81.98.226.158 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=115 ID=14611 PROTO=UDP SPT=56154 DPT=59653 LEN=38 (entrée standard):[ 1774.519661] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=81.98.226.158 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=115 ID=18039 PROTO=UDP SPT=56154 DPT=59653 LEN=38 (entrée standard):[ 1834.224178] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=17980 PROTO=UDP SPT=23100 
DPT=59653 LEN=38 (entrée standard):[ 1837.336063] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=18272 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1843.812995] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=18884 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1855.397420] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 1873.437599] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=67.83.29.46 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=14608 PROTO=UDP SPT=34519 DPT=59653 LEN=38 (entrée standard):[ 1876.809734] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=67.83.29.46 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=14611 PROTO=UDP SPT=34519 DPT=59653 LEN=38 (entrée standard):[ 1881.611970] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=178.191.186.216 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=52 ID=47871 PROTO=UDP SPT=60000 DPT=59653 LEN=28 (entrée standard):[ 1883.331092] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=67.83.29.46 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=14613 PROTO=UDP SPT=34519 DPT=59653 LEN=38 (entrée standard):[ 1935.277466] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=27555 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1938.567180] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=27898 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1944.954786] [UFW BLOCK] 
IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=28531 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1967.219070] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=86.160.33.28 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=111 ID=4168 PROTO=UDP SPT=12743 DPT=59653 LEN=75 (entrée standard):[ 1980.493396] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2015.497269] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=201.83.130.199 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=26048 PROTO=UDP SPT=55710 DPT=59653 LEN=38 (entrée standard):[ 2020.543353] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=190.231.94.68 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=29440 PROTO=UDP SPT=64609 DPT=59653 LEN=38 (entrée standard):[ 2040.553047] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=67.83.29.46 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=14617 PROTO=UDP SPT=34519 DPT=59653 LEN=38 (entrée standard):[ 2062.151883] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=86.85.156.51 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=52 ID=7982 PROTO=UDP SPT=10633 DPT=59653 LEN=75 (entrée standard):[ 2081.939014] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.175.8.166 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=23359 PROTO=UDP SPT=39465 DPT=59653 LEN=38 (entrée standard):[ 2126.170640] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=91.153.62.95 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=19149 PROTO=UDP SPT=57555 DPT=59653 LEN=38 (entrée standard):[ 2129.301479] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 
SRC=91.153.62.95 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=19183 PROTO=UDP SPT=57555 DPT=59653 LEN=38 (entrée standard):[ 2141.292364] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=76.219.229.48 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=20686 PROTO=UDP SPT=14063 DPT=59653 LEN=38 (entrée standard):[ 2161.947708] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=69.171.135.88 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=109 ID=30662 PROTO=UDP SPT=52394 DPT=59653 LEN=38 (entrée standard):[ 2180.779086] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=86.131.227.154 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=968 PROTO=UDP SPT=37508 DPT=59653 LEN=38 (entrée standard):[ 2210.214811] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=142.161.150.73 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=114 ID=7473 PROTO=UDP SPT=42221 DPT=59653 LEN=75 (entrée standard):[ 2219.820455] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=22000 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 2250.601778] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.77.105.79 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=3264 PROTO=UDP SPT=57921 DPT=59653 LEN=38 (entrée standard):[ 2261.198314] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=31678 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 2281.740059] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=188.4.70.135 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=15661 PROTO=UDP SPT=42914 DPT=59653 LEN=38 (entrée standard):[ 2305.697614] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=190.16.105.104 
DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=43 ID=0 DF PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 2320.476787] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=188.4.36.7 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=2395 DF PROTO=UDP SPT=53094 DPT=59653 LEN=38 (entrée standard):[ 2355.550727] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2360.263186] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.175.8.166 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=824 PROTO=UDP SPT=39465 DPT=59653 LEN=38 (entrée standard):[ 2388.989681] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=197.163.27.6 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=111 ID=3180 PROTO=UDP SPT=13662 DPT=59653 LEN=75 (entrée standard):[ 2419.111319] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=186.52.131.16 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=46 ID=12036 PROTO=UDP SPT=24797 DPT=59653 LEN=38 (entrée standard):[ 2422.129755] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=186.52.131.16 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=46 ID=12037 PROTO=UDP SPT=24797 DPT=59653 LEN=38 (entrée standard):[ 2440.798577] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=69.171.135.88 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=109 ID=6599 PROTO=UDP SPT=52394 DPT=59653 LEN=38 (entrée standard):[ 2480.327857] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=109.186.73.51 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=110 ID=27228 PROTO=UDP SPT=10035 DPT=59653 LEN=75 (entrée standard):[ 2480.482212] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée 
standard):[ 2505.439606] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=92.239.224.137 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=50 ID=16891 PROTO=UDP SPT=24874 DPT=59653 LEN=38 (entrée standard):[ 2520.326512] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=216.8.132.206 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=23722 PROTO=UDP SPT=24252 DPT=59653 LEN=38 (entrée standard):[ 2542.139479] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=227 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 2561.215791] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=178.174.218.92 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=109 ID=21615 PROTO=UDP SPT=44003 DPT=59653 LEN=38 (entrée standard):[ 2582.414098] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=78.56.233.179 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=39904 PROTO=UDP SPT=32656 DPT=59653 LEN=38 (entrée standard):[ 2605.516221] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2622.833575] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=69.255.40.5 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=112 ID=9040 PROTO=UDP SPT=46587 DPT=59653 LEN=75 (entrée standard):[ 2642.591036] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.77.105.79 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=23338 PROTO=UDP SPT=57921 DPT=59653 LEN=38 (entrée standard):[ 2694.101037] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=18158 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 2695.587437] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.241.8.40 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=7133 PROTO=UDP SPT=47908 DPT=59653 LEN=38 (entrée standard):[ 2703.476545] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=19215 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 2730.549805] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2741.555802] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=87.218.129.117 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=114 ID=51510 PROTO=UDP SPT=25900 DPT=59653 LEN=75 (entrée standard):[ 2774.766776] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=190.194.164.229 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=43 ID=29859 PROTO=UDP SPT=60006 DPT=59653 LEN=75 (entrée standard):[ 2796.621132] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.241.8.40 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=50261 PROTO=UDP SPT=47908 DPT=59653 LEN=38 (entrée standard):[ 2799.728894] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.241.8.40 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=54068 PROTO=UDP SPT=47908 DPT=59653 LEN=38 (entrée standard):[ 2819.736079] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=46.12.47.190 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=115 ID=15148 PROTO=UDP SPT=53080 DPT=59653 LEN=38 (entrée standard):[ 2855.583102] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2868.305737] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=188.4.10.101 DST=192.168.1.40 LEN=58 TOS=0x00 
PREC=0x00 TTL=50 ID=36899 PROTO=UDP SPT=51422 DPT=59653 LEN=38 (entrée standard):[ 2895.625459] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=112.207.183.98 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=4325 PROTO=UDP SPT=29276 DPT=59653 LEN=38 (entrée standard):[ 2901.612169] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=88.4.172.58 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 2927.259899] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=177.2.165.29 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=15369 PROTO=UDP SPT=56892 DPT=59653 LEN=38 (entrée standard):[ 2965.894396] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=24.64.205.106 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=25444 PROTO=UDP SPT=11140 DPT=59653 LEN=38 (entrée standard):[ 2968.439507] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=142.68.204.60 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=111 ID=9076 PROTO=UDP SPT=13176 DPT=59653 LEN=75 (entrée standard):[ 2980.515084] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 3019.653204] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.218.101.137 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=49 ID=0 DF PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 3037.295512] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.248.59.144 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=49 ID=7792 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 3040.175927] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.248.59.144 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=49 ID=2659 PROTO=UDP SPT=51413 DPT=59653 LEN=38 
(entrée standard):[ 3070.551776] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=76.181.220.237 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=55424 PROTO=UDP SPT=24417 DPT=59653 LEN=38 (entrée standard):[ 3080.195302] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=76.181.220.237 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=56646 PROTO=UDP SPT=24417 DPT=59653 LEN=38 (entrée standard):[ 3105.548910] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 3130.350131] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.78.37.163 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=47 ID=14963 PROTO=UDP SPT=63724 DPT=59653 LEN=28 (entrée standard):[ 3151.388237] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.78.37.163 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=48 ID=19291 PROTO=UDP SPT=63724 DPT=59653 LEN=28 (entrée standard):[ 3180.574097] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.78.37.163 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=48 ID=26054 PROTO=UDP SPT=63724 DPT=59653 LEN=28 (entrée standard):[ 3183.925917] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.78.37.163 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=48 ID=26776 PROTO=UDP SPT=63724 DPT=59653 LEN=28 (entrée standard):[ 3224.265486] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=177.2.165.29 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=1523 PROTO=UDP SPT=56892 DPT=59653 LEN=38 (entrée standard):[ 3227.701115] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=177.2.165.29 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=2203 PROTO=UDP SPT=56892 DPT=59653 LEN=38 (entrée standard):[ 3241.520140] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=213.57.65.8 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=5524 PROTO=UDP SPT=57708 DPT=59653 LEN=38 (entrée standard):[ 3260.165072] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=24.64.205.106 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=3288 PROTO=UDP SPT=11140 DPT=59653 LEN=38 (entrée standard):[ 3313.381138] wlan0: deauthenticating from 90:94:e4:d2:6a:56 by local choice (reason=3) (entrée standard):[ 3322.761675] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready
grep: e: Aucun fichier ou dossier de ce type
grep: ath9k: Aucun fichier ou dossier de ce type

oumpa@lap-oumpa:~$ cat /var/lib/NetworkManager/NetworkManager.state
[main]
NetworkingEnabled=true
WirelessEnabled=true
WWANEnabled=true
WimaxEnabled=true

oumpa@lap-oumpa:~$ iwconfig
eth0      no wireless extensions.
lo        no wireless extensions.
wlan0     IEEE 802.11bgn  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated  Tx-Power=14 dBm
          Retry long limit:7  RTS thr:off  Fragment thr:off
          Power Management:on

There you go, hoping you can get something out of it.

toutafai
Re: Disconnection on battery [Samsung]
Can you give the result of ping -c3 192.168.1.20?

Alex0000
Re: Disconnection on battery [Samsung]
I'm on a different wifi now, so I doubt the requested IP is the same; I have no idea what it is, but I went and looked up the new IPv4 gateway address. Here is the result of the command for both IPs, noting that the wifi is currently working and that I don't really know what I'm doing.

PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=3.02 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=3.19 ms
64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=3.32 ms

--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 3.020/3.180/3.324/0.140 ms

ping -c3 192.168.1.20
PING 192.168.1.20 (192.168.1.20) 56(84) bytes of data.
From 192.168.1.41 icmp_seq=1 Destination Host Unreachable
From 192.168.1.41 icmp_seq=2 Destination Host Unreachable
From 192.168.1.41 icmp_seq=3 Destination Host Unreachable

--- 192.168.1.20 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2016ms
pipe 3

toutafai
Re: Disconnection on battery [Samsung]
But that's just it: a gateway at 192.168.1.20 is not common... The gateway ("passerelle" in French) designates the machine (the router or PC sharing the connection) that traffic passes through to reach (and come back from) the internet... You're the one who entered that info, is that right?

Alex0000
Re: Disconnection on battery [Samsung]
At home I'm the only Ubuntu machine in a Windows network with, among other things, a VPN and a whole pile of servers I'm not the admin of, which could explain the "strange" IP, since it doesn't come from a home router. I don't quite understand "entered" the info: I simply copy/pasted what "wificheck" produced, and I ran it again for the wifi I'm using now. But my problem very clearly comes from my machine, because it happens to me everywhere.
Alex0000
Re: Disconnection on battery [Samsung]
Well, it seems I've reached a new level: the disconnection now also happens on AC power (something I had never seen in 2 years of use...). The command to restart the wifi card would be quite useful to me.

compte supprimé (deleted account)
Re: Disconnection on battery [Samsung]
Hi, on battery you can try disabling the wifi card's power-saving mode. See here: http://forum.ubuntu-fr.org/viewtopic.php?id=1042031

Alex0000
Re: Disconnection on battery [Samsung]
I meant AC power... not battery. Sorry. I'll still take a look at that thread.
I recently stumbled upon this and am checking here to see whether what I am proposing is indeed feasible and whether it can be considered a breach of privacy. For obvious reasons I am not revealing the website which exhibits this property.

The URLs are of the format:

https://xxxxxxxxyyyyyzzzz/xyz/<6 digit rand>_<17 digit rand>_<10 digit rand>_n.jpg

Requesting the above link will return an image. Now, as you can see, the entropy of the possible URLs is quite large, but note that they are all digits (0-9). This website hosts content from millions of people ;) and my guess is that at least 10% of the URLs within this random-number space will work. Of course, it's just a guess.

My question is: is this feasible? Is my claim true?

My presumption here is that these random numbers may be a non-cryptographic hash of some string. There is no way to confirm that; for the sake of this question, let's assume it is.

My code to generate these links looks like so (just a snippet):

import random
import urllib2

first = str(random.randint(100000, 999999))
second = (str(random.randint(10000, 99999)) + str(random.randint(10000, 99999))
          + str(random.randint(10000, 99999)) + str(random.randint(10, 99)))
third = str(random.randint(10000, 99999)) + str(random.randint(10000, 99999))
test = 'https://<URL>/' + first + '_' + second + '_' + third + '_n.jpg'
try:
    image = urllib2.urlopen(test)
    print len(image.read())
except urllib2.URLError:
    print "fail"

I have not tried to run this for more than tens of requests, for fear of my IP being blocked by the server for excessive requests, and I do not intend to. I just want to clarify whether my understanding is right.

P.S. I am not a Python developer, so please forgive me if my code is ugly (suggestions will be happily taken).
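Whether brute-force guessing is feasible comes down to a back-of-envelope calculation: the three fields together span 33 decimal digits, so even under a very generous assumption about how many valid images exist, the expected hit rate of uniform random guessing is astronomically small. The "one billion images" figure below is purely an assumed number for illustration; the claim of a 10% hit rate could only hold if the fields were drawn from much smaller, structured ranges (e.g. sequential IDs), not from the full digit space.

```python
# Each URL embeds three fields of 6, 17 and 10 decimal digits,
# so there are at most 10**33 candidate URLs.
keyspace = 10 ** (6 + 17 + 10)

# Generous, purely illustrative assumption: one billion valid images exist.
valid_images = 10 ** 9

# Expected number of uniformly random guesses per valid URL found.
requests_per_hit = keyspace // valid_images

print(requests_per_hit)  # 10**24 requests per hit, on average
```

At tens of requests per run, as described above, you would not expect a single hit in the lifetime of the universe, which is some evidence against the 10% claim.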
I'm using Python 2.5 on App Engine and tried to get the Jinja2 ModuleLoader to work. To initialize the (global) Jinja environment I use:

@staticmethod
def get_new():
    # initialize the Jinja environment on first use
    if myEnv._my_env is None:
        path = os.path.join(os.path.dirname(__file__), 'compiled')
        myEnv._my_env = Environment(loader=ModuleLoader(path))
    return myEnv._my_env

'compiled' is a directory in my GAE project, but I receive TemplateNotFound exceptions all the time. I compiled the templates using:

env = Environment(extensions=['jinja2.ext.i18n'])
env.filters['euros'] = Euros
db_runtimes = Runtimes.all()
# the html templates are saved in a db.Blob
for each in db_runtimes.fetch(999):
    key = each.key().name()
    source = db.Blob(each.content).decode('utf-8')
    name = key.split('.')[0]
    # compile and save the .py
    raw = env.compile(source, name=name, filename=name + '.py', raw=True)
    each.compiled = db.Blob(raw.encode('utf-8'))
    each.put()

The resulting code looks fine. Any ideas? I hope you can help me.

This article from Rodrigo Moraes shows that loading templates from Python modules is very fast, but in that 2009 proof of concept he "hacked" the Jinja code to make it run. I think the ModuleLoader should do the same job.
https://groups.google.com/group/pocoo-libs/browse_thread/thread/748b0d2024f88f64

The testmod.py looks like this:

from __future__ import division
from jinja2.runtime import LoopContext, TemplateReference, Macro, Markup, TemplateRuntimeError, missing, concat, escape, markup_join, unicode_join, to_string, identity, TemplateNotFound

name = u'testmod.py'

def root(context, environment=environment):
    if 0: yield None
    yield u'<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"\n"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\n<html xmlns="http://www.w3.org/1999/xhtml">\n<head>\n<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />\n<title>TEST</title>\n</head>\n<body>\n\t<p>test template</p>\n</body>\n</html>'

blocks = {}
debug_info = ''

And the page handler:

def get(self):
    my_env = myEnv.get()
    page = 'testmod.py'
    template = my_env.get_template(page)
    self.response.out.write(template.render({}))

I've also tried to get the template without the .py extension.
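One likely cause of the TemplateNotFound (an assumption, based on reading the Jinja2 source rather than documented behavior): ModuleLoader does not look for a module named after the template. It derives the module name from a SHA-1 hash of the template name, so a hand-compiled file saved as testmod.py will never be found, whereas Environment.compile_templates() (with zip=None) writes modules under the expected names. A sketch of the naming scheme:

```python
import hashlib

def module_loader_key(template_name):
    # Mirrors jinja2.loaders.ModuleLoader.get_template_key as found in the
    # Jinja2 source; this is an internal detail, so treat it as an assumption.
    return 'tmpl_' + hashlib.sha1(template_name.encode('utf-8')).hexdigest()

# ModuleLoader would import compiled/tmpl_<sha1>.py, never compiled/testmod.py.
print(module_loader_key('testmod.py'))
```

So instead of calling env.compile(...) per template and choosing the filenames yourself, letting env.compile_templates('compiled', zip=None) write the modules should produce names that ModuleLoader can resolve.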
I am trying to convert a date string to a date format:

>>> str = "04-18-2002 03:50PM"
>>> time.strptime(str, '%m-%d-%Y %H:%M%p')
time.struct_time(tm_year=2002, tm_mon=4, tm_mday=18, tm_hour=3, tm_min=50, tm_sec=0, tm_wday=3, tm_yday=108, tm_isdst=-1)

However, when the year is in two digits it breaks:

>>> str = "04-18-02 03:50PM"
>>> time.strptime(str, '%m-%d-%Y %H:%M%p')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/_strptime.py", line 454, in _strptime_time
    return _strptime(data_string, format)[0]
  File "/usr/lib/python2.7/_strptime.py", line 325, in _strptime
    (data_string, format))
ValueError: time data '04-18-02 03:50' does not match format '%m-%d-%Y %H:%M'

Any ideas?
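The directives just need to match the data: %Y expects a four-digit year, while %y matches a two-digit one. Note also (visible in the first transcript above, where 03:50PM parsed to tm_hour=3) that %p only has an effect together with the 12-hour directive %I, not the 24-hour %H. A sketch of both fixes:

```python
import time

# %y matches a two-digit year, and %p only takes effect with the
# 12-hour directive %I (with %H the AM/PM marker is parsed but ignored).
t = time.strptime("04-18-02 03:50PM", "%m-%d-%y %I:%M%p")
print(t.tm_year, t.tm_hour)  # 2002 15
```

With %y, values 00-68 are interpreted as 2000-2068 and 69-99 as 1969-1999, per the POSIX convention.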
I'm asked to create a method that returns the number of occurrences of a given item in a list. I know how to write code to find a specific item, but how can I code it so that it counts the occurrences of an arbitrary item? For example, if I have the list [4, 6, 4, 3, 6, 4, 9] and I type something like s1.count(4), it should return 3, and s1.count(6) should return 2. I'm not allowed to use any built-in functions, though.

In a recent assignment, I was asked to count the number of occurrences of the substring "ou" in a given string, and I coded it as:

def count_pattern(astr):
    if len(astr) < 2:
        return 0
    else:
        return (astr[:2] == "ou") + count_pattern(astr[1:])

Would something like this work?

def count(self, item):
    num = 0
    for i in self.s_list:
        if i in self.s_list:
            num[i] += 1

def __str__(self):
    return str(self.s_list)
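The attempt above won't work as written: `if i in self.s_list` is always true (it asks whether the element is somewhere in the list, rather than comparing it to `item`), `num` is an integer so `num[i]` raises a TypeError, and the method never returns anything. A corrected sketch of the same single-pass idea, written as a plain function here so it is easy to run (as a method it would loop over self.s_list the same way):

```python
def count(items, target):
    """Count occurrences of target in items without any built-in helpers."""
    total = 0
    for element in items:
        # Compare the element itself against the item being counted.
        if element == target:
            total += 1
    return total

print(count([4, 6, 4, 3, 6, 4, 9], 4))  # 3
print(count([4, 6, 4, 3, 6, 4, 9], 6))  # 2
```

The recursive substring counter works by a different mechanism (slicing), but the accumulation idea is the same: add 1 for each match, 0 otherwise.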
#2326 On 28/10/2012 at 20:40
nathéo
Re: /* Topic des codeurs [7] */
I have an exam on linked lists too, and also exercises to hand in on them, but for the first one I preferred to use an array, and for the second I was struggling too much with the course material, so I didn't hand anything in. I'll get down to it more seriously starting tomorrow...

@grim: Well, for the moment I'm having a bit of trouble with how you move from one element to the next; otherwise I understand the principle, with the pointer pointing to the next or previous element. Anyway, I'll work on that tomorrow and next week.

For now I'm trying to do it with a simple array, and it seems to be going fairly well: I can handle a set of parentheses, I've already managed to build an ultra-basic calculator, and also a function that converts a "number" (entered as characters) into a "real" number. But building a calculator that handles precedence is a bit more complex, I find; I'm somewhat stuck on it, especially on where to store the result of an operation, because with only function return values it seems a bit tight...

#2327 On 28/10/2012 at 21:40
:!pakman
Re: /* Topic des codeurs [7] */
@nathéo: Are you kidding? They throw you into projects like that without teaching you what a stack is? Also look at postfix expressions, I think they could be useful to you...
Edit: Ah, grim mentioned it; I didn't know the name RPN, I looked it up, it's the same thing.
#2328 On 28/10/2012 at 22:10
:!pakman
Re: /* Topic des codeurs [7] */
Say you have to analyze the following expression:

5 * ( ( ( 9 + 8 ) * ( 4 * 6 ) ) + 7 )

Here is the same expression in postfix form:

5 9 8 + 4 6 * * 7 + *

To get to that form, here is the algorithm. You read your original expression from left to right, and:
- If the symbol is an operand, you write it out.
- If it is an operator, you push it onto a stack (you do not write it out).
- If it is an opening parenthesis "(", you do nothing.
- If it is a closing parenthesis ")", you pop an operator (the last one you pushed, necessarily) and write it out.

Now it's up to you to see how, in your program, you can use this simplified form and the stack mechanism to produce a result. I'll think about it if I find the time; it amuses me ^^ (Maybe even tonight...)

One piece of advice: don't throw yourself headlong into a program without having thought about it on paper first (been there, paid the price). Programming means thinking and using your logic and your knowledge, not just typing on a keyboard, so to speak.

#2329 On 28/10/2012 at 22:41
grim7reaper
Re: /* Topic des codeurs [7] */
If I follow :!pakman's algorithm (I had to add one small thing to make it run at all), it gives:

#!/usr/bin/env python3
from sys import argv
from operator import add, sub, mul, truediv

OP = {'+': add, '-': sub, '*': mul, '/': truediv}

if __name__ == '__main__':
    # Conversion.
    stack = []
    rpn = []
    for x in argv[1]:
        if str.isdigit(x):
            rpn.append(x)
        elif x in '+-*/':
            stack.append(x)
        elif x == ')':
            rpn.append(stack.pop())
    rpn.extend(stack)
    # Evaluation.
    stack = []
    for x in rpn:
        if x in '+-*/':
            b = float(stack.pop())
            a = float(stack.pop())
            stack.append(OP[x](a, b))
        else:
            stack.append(x)
    print(stack.pop())

Note that this implementation is far from correct: it doesn't handle numbers longer than one digit (I don't split the string into tokens), and it doesn't handle precedence (:!pakman's RPN conversion algorithm is a bit too naive). But it gives the basic idea.

#2330 On 29/10/2012 at 00:45
nathéo
Re: /* Topic des codeurs [7] */
My method is a bit long to explain in text, so I'll show the diagram I made to make it more understandable (by the way, I haven't coded anything yet for the moment; I'm taking notes and sketching out my algorithm). Well, it's not super clear, but I hope it's understandable.
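The two caveats grim7reaper lists (single-digit tokens only, no precedence handling) can be addressed by tokenizing with a regular expression and using the full shunting-yard rules, in which an incoming operator first pops any stacked operator of greater or equal precedence. A sketch along those lines, assuming only the four binary operators and non-negative integers:

```python
import operator
import re

# Precedence and implementation for the four binary operators.
OPS = {
    '+': (1, operator.add),
    '-': (1, operator.sub),
    '*': (2, operator.mul),
    '/': (2, operator.truediv),
}

def to_rpn(expr):
    """Convert an infix expression to a list of RPN tokens (shunting-yard)."""
    tokens = re.findall(r'\d+|[-+*/()]', expr)
    output, stack = [], []
    for tok in tokens:
        if tok.isdigit():
            output.append(tok)
        elif tok in OPS:
            # Pop operators of greater or equal precedence first: this is
            # what gives '*' priority over '+' and makes '-' left-associative.
            while stack and stack[-1] in OPS and OPS[stack[-1]][0] >= OPS[tok][0]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == '(':
            stack.append(tok)
        else:  # ')'
            while stack[-1] != '(':
                output.append(stack.pop())
            stack.pop()  # discard the '('
    while stack:
        output.append(stack.pop())
    return output

def eval_rpn(tokens):
    """Evaluate a list of RPN tokens with a value stack."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            right = stack.pop()  # the right operand was pushed last
            left = stack.pop()
            stack.append(OPS[tok][1](left, right))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn(to_rpn('5*(((9+8)*(4*6))+7)')))  # 2075.0
```

This answers nathéo's storage question as well: intermediate results never need a named variable, because the value stack holds them until an operator consumes them.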
Let me add a few clarifications anyway: keep in mind that the array is traversed by a loop of the form "while tab[i] != '\0'", and that m, like i, is incremented by 1 on each pass through the loop (m and i both having been initialized before the loop). If tab[i] contains a closing parenthesis, a loop is started. (Anyway, the rest should be clear enough from the picture, I think.)

@grim: Thanks. As for the code, I'm going to work through it and see whether I get a result. Normally I should be working on the function until at least Tuesday, so by then I think I'll have found a way to build the calculator with both methods (well, I hope so).

:!pakman wrote:
@nathéo: Are you kidding? They throw you into projects like that without teaching you what a stack is?

One of Epitech's concepts is to stimulate the coder's imagination as much as possible, so that you find the solution with a minimum of means. Apart from my_putchar, all the functions we use (or almost all; malloc is one kind of exception) are coded by ourselves.

#2331 On 29/10/2012 at 02:45
Elzen
Re: /* Topic des codeurs [7] */
Progress! First, I read up a bit more on REST and configured my server better than the rushed version we'd learned at university; I now have something resembling a real HTTP server that generates exactly what I want.
As a result, I chose PNG as the image format, and my deplorable drawing skills aside, the thing is starting to look quite nice. If all goes well, and since I'm on holiday, I should be able to release a sort of alpha version by mid-week, if there are people who want to test it and/or try making levels.
Elzen: scamp, polemicist, polymath! (ex-ArkSeth) A script to improve a few things on the forum. The joy of having known you outweighs the pain of having lost you… My quality: I never attack people. My flaw: I often seem to.
Offline
#2332 On 29/10/2012 at 09:26, Le Rouge
Re : /* Topic des codeurs [7] */
nathéo wrote: One of Epitech's concepts is to stimulate the coder's imagination as much as possible, so as to find the solution with a minimum of means.
Sure, presented like that it sounds tempting, but they're still going much faster than the music. Before you can invent your own original solutions, you have to study the solutions others have proposed, so you can see their strong points, their weak points, what can be improved, what comes from compromises, and so on. Only then will you be able to find real solutions. In particular, if you want to know how to code, you have to eat maths for breakfast. Church and Turing were mathematicians who invented computability theory (in two different ways) to solve purely mathematical questions that were very, and I mean very, complicated. As far as I'm concerned, if you work on this on your own and come and ask questions here, I'll answer them as best I can ^^ I do hope they've told you about Turing machines and complexity classes?
Last edited by Le Rouge (29/10/2012, 09:27)
Offline
#2333 On 29/10/2012 at 10:55, :!pakman
Re : /* Topic des codeurs [7] */
[quotes the exchange between nathéo and Le Rouge above in full]
That's true. You (nathéo) are coding data structures, and I think that if they have you code data structures without first talking to you about algorithmic complexity, your structures may well be poorly chosen. When should you use a binary tree? When a linked list? When a stack? And so on. If you find yourself solving a problem with a stack when the most efficient solution is, say, a binary tree, your algorithm will work, but it will be appallingly slow, and proportionally so to the volume of data processed. On a small scale you won't notice, but when you run it on large volumes of data, you will feel it.
Last edited by :!pakman (29/10/2012, 12:44)
...
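A small, hedged Python illustration of :!pakman's point about structure choice: a membership test is linear-time on a list but constant-time on average on a set, and the gap widens with the volume of data. Absolute timings vary by machine; only the ratio matters here.

```python
import timeit

n = 100_000
as_list = list(range(n))   # O(n) membership test: scans the whole list
as_set = set(as_list)      # O(1) average membership test: hashes the key

# Look up the worst-case element (the last one in the list) 100 times each.
t_list = timeit.timeit(lambda: (n - 1) in as_list, number=100)
t_set = timeit.timeit(lambda: (n - 1) in as_set, number=100)

# The list scans, the set hashes; the gap grows linearly with n.
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")
```

The same algorithm, backed by the wrong structure, "works" but scales badly, which is exactly the failure mode described above.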
Offline
#2334 On 30/10/2012 at 18:59, Elzen
Re : /* Topic des codeurs [7] */
Well, I had more or less finished a usable version of my thing, so I figured I'd publish it and see. So I jar everything up, start the server, and bang! Big error. Apparently SOAP doesn't like one of my POJOs. I tested the others, and at first glance they go through without problems; but with the POJO that defines a game (grid size, placeable objects, and positions of the objects already placed, roughly), I get an exception every time I try to exchange it. And an ugly one at that:
java.lang.StackOverflowError
    at java.lang.Integer.toString(Integer.java:129)
    at java.lang.String.valueOf(String.java:2943)
    at com.sun.xml.internal.bind.DatatypeConverterImpl._printInt(DatatypeConverterImpl.java:443)
    at com.sun.xml.internal.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl$18.print(RuntimeBuiltinLeafInfoImpl.java:684)
    at com.sun.xml.internal.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl$18.print(RuntimeBuiltinLeafInfoImpl.java:678)
    at com.sun.xml.internal.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl$StringImpl.writeText(RuntimeBuiltinLeafInfoImpl.java:143)
    at com.sun.xml.internal.bind.v2.runtime.LeafBeanInfoImpl.serializeBody(LeafBeanInfoImpl.java:114)
    at com.sun.xml.internal.bind.v2.runtime.XMLSerializer.childAsXsiType(XMLSerializer.java:687)
    at com.sun.xml.internal.bind.v2.runtime.property.SingleElementNodeProperty.serializeBody(SingleElementNodeProperty.java:141)
    at com.sun.xml.internal.bind.v2.runtime.ClassBeanInfoImpl.serializeBody(ClassBeanInfoImpl.java:321)
    at com.sun.xml.internal.bind.v2.runtime.XMLSerializer.childAsXsiType(XMLSerializer.java:687)
(I'll spare you the end, but it repeats for a good thousand lines… logical for a StackOverflow, in a way.) How can something that works perfectly in normal use cause a StackOverflow when you try to send it over SOAP? :s
Elzen: scamp, polemicist, polymath! (ex-ArkSeth)
Offline
#2335 On 30/10/2012 at 20:01, maxpoulin64
Re : /* Topic des codeurs [7] */
Elzen wrote: [quotes the previous post in full, including the stack trace]
Aren't you trying to serialize an object? I see "XMLSerializer" in the stack. Maybe a reference between objects, something like: A contains B, and B contains a reference back to A (so the serializer loops without ever being able to get out, because it goes A->B->A->B->A->B…)? (I'm probably saying something stupid; I only did the bare minimum in my Java courses.)
Offline
#2336 On 30/10/2012 at 21:14, Shanx
Re : /* Topic des codeurs [7] */
Well, I'll ask again, just in case. I'm looking for Java courses, because I'm a bit behind. I'd like a fairly complete course in French (I already have enough trouble without also struggling with English). You had already recommended developpez.com to me, but there are quite a few tutorials there and I don't really know how to choose…
Offline
#2337 On 30/10/2012 at 21:15, Elzen
Re : /* Topic des codeurs [7] */
Well, the thing is, I'm not trying to serialize anything myself: I let the SOAP API manage on its own, without really knowing what it does. But I found it: it simply doesn't like java.awt.Point. I removed them and replaced them with loose integers, and everything fell back into place (well, I also had a few issues with empty lists turning into null, but that's something else). So it works. (Though I wanted to make something really clean, and in the end it's coded completely slapdash, but oh well…) So the Jar archive is available for anyone who wants to play with it. I'm not posting the link (I don't want it indexed before I change machines), but it's called « StreamsEngine.jar » and sits at the root of my server.
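maxpoulin64's hypothesis above, a cyclic object graph defeating a serializer that has no cycle detection, can be shown in miniature. A hedged Python sketch (the class and function names are purely illustrative, not any real SOAP or JAXB API); the usual fix is to track already-visited objects:

```python
import sys

class Node:
    """Illustrative POJO-like object that can reference another Node."""
    def __init__(self, name):
        self.name = name
        self.ref = None

def naive_serialize(obj):
    """No cycle detection: recurses forever on A -> B -> A -> ..."""
    parts = [obj.name]
    if obj.ref is not None:
        parts.append(naive_serialize(obj.ref))
    return "<" + ",".join(parts) + ">"

def safe_serialize(obj, seen=None):
    """Same traversal, but already-seen objects break the recursion."""
    seen = set() if seen is None else seen
    if id(obj) in seen:
        return "<cycle>"
    seen.add(id(obj))
    parts = [obj.name]
    if obj.ref is not None:
        parts.append(safe_serialize(obj.ref, seen))
    return "<" + ",".join(parts) + ">"

a, b = Node("A"), Node("B")
a.ref, b.ref = b, a          # the cycle: A -> B -> A -> ...

try:
    naive_serialize(a)       # blows the (Python) stack, like the Java trace
except RecursionError:
    print("naive serializer exhausted the stack")

print(safe_serialize(a))     # <A,<B,<cycle>>>
```

This is only an analogy for the Java failure mode discussed in the thread, not a claim about what JAXB does internally.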
You can also play the Web version on port 9999, but there's not much point at the moment, given that there's only one small test level… On the other hand, if you want to make levels, don't hold back. The supplied files serve as examples, but if more explanation is needed, I'll provide it ^^
Edit: Shanx wrote: Well, I'll ask again, just in case. I'm looking for Java courses, because I'm a bit behind. I'd like a fairly complete course in French (I already have enough trouble without also struggling with English).
Well, I don't have a ready-made course to hand, but if you need a hand or an explanation, don't hesitate to ask ^^
Last edited by ArkSeth (30/10/2012, 21:16)
Elzen: scamp, polemicist, polymath! (ex-ArkSeth)
Offline
#2338 On 30/10/2012 at 21:18, Shanx
Re : /* Topic des codeurs [7] */
[quotes the exchange above]
Thanks, but for now I need to get seriously into Java before I can ask any questions.
Offline
#2339 On 30/10/2012 at 21:24, maxpoulin64
Re : /* Topic des codeurs [7] */
[quotes the exchange above]
I think the tutorials on SiteDuZéro can do the job for starting out without too much head-scratching; afterwards you can look for more information elsewhere, such as on developpez. At any rate, it's already better quality than what my teachers managed in two years of wasted time at my school.
Offline
#2340 On 30/10/2012 at 21:31, Elzen
Re : /* Topic des codeurs [7] */
Shanx: this year I'm teaching 2nd-year students who are only just discovering object orientation; you must have a great level compared to them.
Right, a few details I forgot to mention about my game. First, for the Web version, JavaScript is required (I couldn't really see how to do otherwise). Second (this applies to both the Web version and the local GUI version, which work the same way), there is no drag-and-drop: you click one of the objects in the list on the right to select it, then click the board to place it. Clicking that object again removes it from the board (some objects are fixed and cannot be removed). A right click on an object rotates it, where possible.
Elzen: scamp, polemicist, polymath! (ex-ArkSeth)
Offline
#2341 On 30/10/2012 at 21:56, grim7reaper
Re : /* Topic des codeurs [7] */
@ArkSeth: I don't mean to troll about Java, but it takes 5-7 seconds to start oO I don't find that normal; is it? And sometimes when I want to place a piece it's super slow, I don't know why.
And I also get this on the second run (when I load the level):
[Fatal Error] rmirror.xml:23:1: XML document structures must start and end within the same entity.
[Fatal Error] ssource.xml:8:1: XML document structures must start and end within the same entity.
[Fatal Error] lmirror.xml:23:1: XML document structures must start and end within the same entity.
Last edited by grim7reaper (30/10/2012, 22:17)
Offline
#2342 On 31/10/2012 at 00:29, tshirtman
Re : /* Topic des codeurs [7] */
[quotes Shanx's Java-course request]
Thinking in Java is a reference, and the older editions are available on the author's site; for the latest version you have to buy the book: http://www.mindview.net/Books/TIJ/
Offline
#2343 On 31/10/2012 at 00:32, Elzen
Re : /* Topic des codeurs [7] */
grim7reaper wrote: [quotes the startup-time question and the XML errors above]
The loading time, I think, comes from the fact that it fetches the base info and images from my server the first time, and my server isn't exactly powerful either.
The errors, on the other hand, are not normal. Especially if you've already loaded a level: normally it's saved locally afterwards…
Elzen: scamp, polemicist, polymath! (ex-ArkSeth)
Offline
#2344 On 31/10/2012 at 00:33, tshirtman
Re : /* Topic des codeurs [7] */
@arkseth: I don't have much to contribute, except… http://reinout.vanrees.org/weblog/2010/ … -rest.html
Offline
#2345 On 31/10/2012 at 00:50, Elzen
Re : /* Topic des codeurs [7] */
tshirtman wrote: @arkseth: I don't have much to contribute, except… http://reinout.vanrees.org/weblog/2010/ … -rest.html
Yep. Before starting this project, I was naively on the same wavelength as the external lecturer who gave us the AOS labs last year (basically, he told us that REST, which we covered in a flash, as an intro and no more, was the old, limited thing, and that now we had SOAP, which was far better and less limited). Then, digging a little for this project (which I hadn't really taken the time to do back then), I began to realize the monumental level of absurdity of his claim (which, curiously, doesn't surprise me much coming from him⁽¹⁾). In any case, I rather dislike the whole WebService concept (cf. this, for example). I think that one day, when I have the courage, I'll redo this thing a bit more cleanly (say, with a Python interface for the local version, an independent dedicated Web client, and maybe Git for fetching levels), because it's true that something like it is needed; and then, to do a real project in Java, I'll look for something else, maybe with different things in it.
(1) It must be said that we didn't much appreciate each other.
Mostly, it's him who stopped liking me after he told me that whatever-it-was wasn't feasible and I then did it in front of him within the next quarter of an hour ^^
Elzen: scamp, polemicist, polymath! (ex-ArkSeth)
Offline
#2346 On 31/10/2012 at 00:53, The Uploader
Re : /* Topic des codeurs [7] */
REST[ful] is all the rage right now. Especially in Rails, which has been RESTful from the start (or at least since version 2; I didn't know the earlier ones).
Last edited by The Uploader (31/10/2012, 00:56)
Moving from Ubuntu 10.04 to Xubuntu 12.04 LTS. Archlinux + KDE on an ASUS N56VV. ALSA, SysV, DBus, Xorg = Windows 98! systemd, kdbus, ALSA + PulseAudio, Wayland = modern OS (10 years after Windows, but still...)! Deal with it!
Offline
#2347 On 31/10/2012 at 08:09, Mindiell
Re : /* Topic des codeurs [7] */
Then again, REST and SOAP don't have much to do with each other...
Offline
#2348 On 31/10/2012 at 10:59, Shanx
Re : /* Topic des codeurs [7] */
Ah damn, I'd skipped the part where you mention jmdoudoux's course. Thanks, it looks very good. Thanks to the others too.
Offline
#2349 On 31/10/2012 at 14:17, Elzen
Re : /* Topic des codeurs [7] */
Mindiell wrote: Then again, REST and SOAP don't have much to do with each other...
Yep, that's also one of the things I discovered by digging a bit, and it's the opposite of what that lecturer claimed (at my university we had plenty of really good teachers up to the M1 year; just for my M2 year they had to bring in lots of outside lecturers, and the level dropped quite a bit. I hope they've managed to fix that for this year). By the way, I'd gladly take further information on these topics; if anyone has good documentation to hand, it will always be useful.
Elzen: scamp, polemicist, polymath! (ex-ArkSeth)
Offline
#2350 On 31/10/2012 at 14:24, Mindiell
Re : /* Topic des codeurs [7] */
Well, Wikipedia is quite clear about it: REST is an architecture, SOAP is an unholy mess a protocol. It's a bit like when people tell you they're "using XML": XML isn't a format but a specification on top of the format; with XML alone, you do nothing.
Offline
Interpolating between two function tables? I’m building an inharmonic additive synthesizer, where the frequency relations of the partials are given by a breakpoint graph (a function object). I’d like to be able to interpolate between two function tables to get the partials’ frequencies. I can do some kind of interpolation with pattrstorage, but I haven’t been able to figure out how to use it to morph the data from two function tables. Also it seems to lose some of the points of the function table while doing the morph. Any advice? Here’s the main patch: allof the following text. Then, in Max, select New From Clipboard. ----------begin_max5_patcher---------- 3317.3oc2cssiiiaD84d.l+AGi8ofYUXw6LOk7crXw.0tU2iRbK2vV8jY2EY +1CuXYIYKYSqVhRLyf1WzMW0gUcXUkDI+iO+oGV+3tejcX8p+9peY0CO7G5s 7fcals7P0FdX8qo+Xy1zC1Cb8qYGNj9R15ubbmkY+nztiCk6d6zVedWQYQ5q Y187O2mmt8ztJd+07hsYk1KGtZq4OYOzcO9u9Yf07X28dY0ACMu5Gx+c6UGv Inps6N1xe6sLmNsd8pesZeukVt4a4Eu708YaJc6VvMm6JLQXdiPLuBxDzpe0 dN+2O+Iy6529h2nSQ1+QqBmj+umt+DJ7NUvPHDf4WhcouqAuzxx82M.BcAf3 9PP58ifeoweWEM4Jv.fBDx7FSkvzPKxflUmz9LsMx9ri3WE.ZtX6K0J3QCQ. 
4tB1qFX+X82MsSRVBE3bFW+ik.DNgSs6fiwIDESBD8mSbaShkIZ3PKOI5cn+ ucqU+D1l+pl6p16A0t2qWgAQr+dnww2.SCkyA0BQTt0sfKFemiSXz17hLcyq ow9tAIRmfDzCHgGl8+0AJvhMTk0ZxAWN69ACTOucmVtGGt.DYzvBsXkVZ.jG SKd4VTqbKK.yQFfFAPQK8OlMNDjJ5cgIPBSw.o3jg697CaR2Z2IJQUs4uYLn 2ra6t8NL.kHEHlzP4zwmflLiW.04EdBzJaKl1MUYAZQMPWcRO9RaYhq+m3Kc 9I3C0.836kk6JtCSS4snxt.VtMdHkpF3AdTL75g1JebXzU2Eg941hcZ43EBo bzVPc+zieue196W8SvnfTpQFo7BlXXh6soM3vRcnQWBe1PBMXnFbW05P9Hre BYvhu1ZhwoVHT0A2jIt2FQEZggje7ZsxbH86YO8UsTqune0fF4ZVFWJKsBgb y17rhxF+15v+n5+HLCUq.e5m7g0GwyFGKgybGsD3GO5OdDg8FrytzmL9KZjX b5fmDnVSB1FANwRoJTSHw5IHhgFILBGHLBPVdBBXs30IpD.PBkPFGPBBDHQE NCIGgfDOgfzaqpxsbb5BRxGMLxEL8UAJrfZ60g5xrfeI+o8jx1eAc3y4ay9d 19C453wZHclTvqt3L601kzB.XWLAxlWcSnia2s4emY0ezos9T1y22kY2aYE0 mQcF8Uub1glW7ltOAMcdZ4Q4u0uc56aK+ZOvb6C34zMY8e5caC7v5W1m+ztB ifz9bMau5mT6oa8sMu1P5sGRQ5acc5k61s8wz8eO+P9itTGpaUzV2oE4ulVl Ul6DJLp9Lye8s81n5Zd4xJR0WlucXy9ca219p41026ZWsp2WSKlVtXs2wUyM 8FdT8y7bgmEb94zQZYWyC6tRYs2f9nVCXPvcIQbF0TyXBd3DE0vvw1TVsos9 qqvICFgw8gv3qfvvGDguMxxbHKw9FAeVVGiIxNQVnB0hvBEQVFVnjIvBEMqV ntf1hYKTwRvBU3JB07agRGeKz5TmmCKzi4SGyVnrEgEJ.KCKT1DXgxmUKTWM BhYKTxRvBkyYKCKT9DXgRmSKTtfE6VnKhLk3vBISIwDXgNqYJwwwdlR7EQlR LwBISI4DXgNqYJwjwdlR7EQlRL7BISI03agxm0LkXjXOSI9hHSIpbgjoTiGM rwyDcVSUhpZkpDaBg1oxDcQjpDkrPRUBfIvDcVyUhRakqDMBMQWD4JQjKjbk fI31Jwm0jkHJUjyhxVDIKQnKjjkfI39Jwm0rkHLRrahtHxVBqVHYKASvMVhM qoKQPhX2DcQjtDlsTRWZBtyRrYMcILO1SWhsHRWBiVJoKMA2ZI1rltDFXwtI 5hHcIfsTRWZBt2RrYMcIfG6oKQWDoKAvRIcoI3lKwl0zk.bvRW5ZPqwNv7Wi goynYCStZj.nAivVa2urPe+12yF2n72Mx7kJxDdOaNa7Jd1H1ezcnHSiC0sg TkiY5HMk61f0XLLFLHEZL8GLZnJVNSnZqmMKx4SLBQPenXzh3IYW0z37hoHf .1G532EJLq4JqPzPc2usPQ+3Cpumzehu3yMlB.tdmItovDLq501m1lcu9Zla bUsd8hAYnCFY78YifRSTTJ0LxncwZMO.jS9GvHE6Zk5CcKkWwRLS1ERgYT5J 9nZeiw9mYRrougVmUPLGPO3xgcuueSUyX0nga0YxySYGJyKNMlH+kZz37i7a 4O8zYC+PK5m+za6xKJOJk2Cavcq.lAKUbq.hXWAXwtBPhcEH1ch4wtSLO1ch 4wtSLO1ch4wtSLK1chYwtSLK1chYwtSLK1chowtSrofH9o.lhSMLEPA1ZFCt QIsBwN8sIU0LyqFd11.CU0jbYCUSxjAR0P9pZckummpFF2T0.bXTMyH42SUS 
LTUSH3MTMyThZXTMu80DClrPPPMUMLJPpF0aUiLTUiKoMTMtfFHUyaZDwfoQ 3jlzHbbfnQDdSivGLMBS0jFgICDMB2aZD9foQXzlzHLRfnQ3dSivGLMBC0jF gpBDMB2aZD9foQnrlzHTZfnQ3dSivGLMBE0jFgnBDMB2aZD1foQH7lzHDVfn QXdSivFLMBAZRiPPAhFg4MMBavzHXQSZDLOPzHLuoQXClFAiaRifg.Qiv7lF gMXZDPzjFA3AhFg4MMBcvzH.oIMBfCDMh4AKaEn7Q2HCOMaxwYkVgQoTLWZ1 luE.cSNskPPJEIJAgqn0JHJnJnXhKj.ERTTAwtBT3TvppIDHEjOskSPnXIJL fj3ZE7XMEBjBxl3hJPUIRcSHTahVUYg.ofzItzBHRhTmDNp1Dsp9BARAISbA FXhDIUaiVahVUkg.of3ItLC.jHAlPwqUPbPIYfItXCbVhP6DpNoeUEbHP52T WwAPkH3DPVywTU1gvnfposrC5N.Sz8xKohS5WUsGBi9Im3ZOfEIB.wnPs9QC ICiXZK.AQBI5VOLp19rpJDgQ+3SbUHHrDcuDRUcWfUkhHL5GaZKEAVpRzzKL UcOfU0iHL5GchqGAkjvQLbM8RUMIBi5Ql1ZR.JsVzt+upBSDF8COwElfAILF VGyxI8qp5DAp+8It5DlTZwZGvZ5ypRTDF8yuGD.4vuWPmzEcWQAxvT5shQHK xGQCo+UyDE6J.N1U.Zrq.7XWAjQtBvicmXdr6DyicmXdr6DyicmXQr6DKhcm XQr6DKhcmXQr6DKicmXYr6DCKSe.rLxep+wnHWAHyoBzXUH+dWNzu9pf60V8 a8YEx8lKBu8+y14g+x1cOlt83ZY7oez0e70f8ylTZpmzOJ1Uls5mFm0DZrJP qa13iSpEt4driS0nxOz5lcu.TY59WxJ0MdqbShSiANwC0BUuaI9hRk0KB6SO NMVVSzfsJr6rlXVTR.wEJEJWN2BaN0srPOJfjlvtT2yQmpJz0RN+sT0aLC4z sGhzM83bZhKr0DDzPTq1SwR0M8usa6u8mqdKceotAFz+Jq9GGJyR2t59MDnc fNrdPG7caHbH+kByu7oOccCCgakt+3h3BaLfvpIblK6s58WeLa+pcOWgiGta rqKKKZedQnaAdcNuB65UxMmd45h5iZRYU6wQUQilcRdQGywYc5j4VKTcqFxr ozG6mWMNrpDwnQqZPIePGoryonrwDc9K+MyjJY85G0GBhFuX8Na155JXD3lm pbA78AAo6s2G.nSP+OLWyN3V+6Fk9e5M1CzJvNqhhVgGEK..BUvGL2TgJHUS eDZa1lktejvGTnvGmUD1sZdewDw4Pvm1S3lentb.z30mSOyrlcN8ISNhFnQq amre+ozM+4cvQJuUbEclb6w7PbwUwpd8CH2ujlWbOhs3VsWWIlwa2emKValK SBWpolGJkPpf7oTAwTXzUvQzcTwlCuQpi0lhkil2381CNYJRfDKbiALKNMNc f+76Ear0Ts5m86o00B7h7dtoZ2ms9ojH2qUxiJrqmUTiI.zqznW8mGfD1shI RotgIsK0HRyIZz0oO0pVvtN7MuXNCIKgBbNiq4ES.Bmvo1cvw3DhhIAh9yIt sI0lX.17f5mn2g9+1sZC70HC1K4EsNN4n8TW5wVqyKUdUa1kSMQsKQdeOyas uUFcTZ7dJKdmlRdKbLejMEeVjML0Ggi05w4MXxF2WYCGdYyWQCBuno7U1lGW A3ltBhYQ1rYYNEjHLk6oaV3xf10Mb6u4tYAXoctCSbiHN8ViNWT64lFd2LOG kSvMZrZ9sQQrUdK0g24R5Emz7vk6kncN6PnvMujs4A2Td0A848TFJVIjWg1f mEgiH7JzlYQ17Ljv4ouFkW9pp4IdUkvagClGja4FBAZY28.Y7itgC1AUHQRc 
UdQb42llnar0.eoB1dFL4EFLKMwCuj6zSNSXmWziySBV94SbA9FJoi3izQmI gyqXTg4IZFvqjhTyivondibyP.CdgbhYhKAQVxRmeUGDvvLIdfejcj4Q7X+e .3c+wCJv1pBwN9fG4Vo2a+M.gaDPn8acbWA97mzu8+n2tRpG -----------end_max5_patcher----------- And here’s the partial1 patch used by the poly: allof the following text. Then, in Max, select New From Clipboard. ----------begin_max5_patcher---------- 763.3ocyXt0aaCBEG+YWo9c.Yo8RaVkAeM6s84XpJhXSRYxFmYi6RV05m8wE mKMMogNA14kRCfg+mebNGeLub6Md9yqWSZ8AeC7Cfm2Khd7T8I6waaGd9U30 4k3V0D8m2w40L+I8iw5pnrRBWMHbauzB0bqm+yuBgGN25N96lrtO9lUDsR7m iYK8AOtc7UXd9ST1xYMjbtdJggoODLA.ifxFTv1+BdT8L+81ajshlIFaWLxu ExcmV4j0p8x+dvdCXQMiyvUJc5+8FJt7zb.cBNfBtDGjqdK8OpUGhDlz43Ck wu.dfIRhDFEqZPOD6L7rDyIuBPVBRvomARnOMjZoKYxsd2+8gDCAiTnJTtjf DnCcnDJU.rOMtNYrUzYvUvkv0oPPlBAQZRDEsGAaenV7yjhYhMV7LyvbdCUj LPm+vaOE77kCMKutphvzlrudjdJZUZ9k+CTdJOurzAL7LPGd16yYkvytp4jF 63TEZsXPIIlXT97.cJ7TUBqXWlOm+DscUc4lWsBrxrMqtnyCZZlhN5W8Em4P TQYfHqPoDqEacI2HETRT9OgAVLCFtZUIk2UPbXpLAtCsSDbv.waT7T2vawNu niUbPxCmhc6713zgh5vLG4kKYNV9KgY6TfCsBviFHfCyB1WUnkcyWga3Tccp KZH+xod5pJOsC5CsXgmvXzUbgmKJqEl3Ny8YbyNlIOvrCNSRsVcDB8hMrpqd +ZntbTmV00c.3CVoN8XqEyqAkQ.JLw4eG8cuZmaYvZ3wjuZtmOHzTmym7M4k D6vH3XvHXpxGReiLmjQ5Uwujxd2UxoVV4.GQt15tl7sho+3GbPdyBRKmxvbZ M6fIIujE.b+rdhVTPXuI6cEsXUs3SQ5ExGcdZWsk81IMTRKvXtM7hSdyDWVb GaBCj3jWVfAmpoiC4L5TMbTzlQGpwiRfJznngjw4LM5JVaPiBTGEoEaTnvnH sDiCEFAsY5I5HDkN0DsMNI2NtBiy3tEc8lAABudeqvHoMHbXJfS1gn4en.ow kA -----------end_max5_patcher----------- For some reason some of the loadbangs and loadmesses don’t seem to work, so you need to click some of the buttons in the main patch to get it initialized properly… Any help is greatly appreciated, thanks in advance! Have a look at the ej.linterp external from Emmanuel Jourdan : It’s in the ejies 2.0 download. It’s help file shows interpolation between two multisliders with the output displayed on a third multislider. It may help you out. pattrstorage should morph between the two functions just fine IF the number of points is the same in each. 
If not, it will lose points, as it can't do a one-to-one mapping. So to make sure the number of points is the same, just have multiple points close to each other if need be, to fill up the list length. Also, multislider might work for you too, and in some ways is easier to use. pattrstorage will work fine with that as well, again assuming the list lengths are the same. Thanks for the pointers! Those ej externals seem really cool, and ej.linterp does seem to be able to pull off what I'm looking for. However, since this is a class project where the work should be a self-contained folder, I think I have to refrain from using anything that needs installation for now. Later I'll use them for sure, though! What I haven't been able to pull off with pattrstorage yet is how to have two different function objects and interpolate between them. I'd like to have the two function graphs on the screen simultaneously, and a slider with which the user could interpolate between them (much like in the ej.linterp help file). Is this doable with pattrstorage? Sounds like you'd need three functions: the first, the second, and a third showing the interpolation between the two. The "recall $1 $2 $3" message would be used for this, where $1 and $2 are the stored slots you want to interpolate between, and $3 is a value from 0. to 1., where 0. is the same as slot 1, 1. is the same as slot 2, and 0.5 is halfway between. So when you recall slots, they just need to go to the function you want. I'm not sure how to do this with just one pattrstorage, though, and if you have two, I don't think you can interpolate between slots on different pattrstorages. Anyone else want to weigh in on this? It would be good to know. I suppose you could use a placeholder function (invisible) to hold the recalls; then, based on the slot chosen, those values go to one of your two display functions.
Just use a [gate] that toggles back and forth, or choose the destination based on the slot number recalled (e.g. odd slots go to one function, even slots to the other; use %2 to determine odd or even). You probably don't want to try setting the interpolated values back into a function that holds one of the slots: you could run into issues with the set messages and with accidentally creating new slots. Better to use a third function that shows the interpolation result, or just use one function for everything. If you use the "swap back and forth" technique above, you could have the other two functions on top of the real one, with transparent backgrounds, to show all three at once. Only the result one would be clickable. Here's an approach without pattr or externals. It basically makes one list per breakpoint envelope, interpolates between them, and sets a third breakpoint object. By the way, this way you can also extrapolate (with factors beyond 1.). The interpolate-list subpatch may be a bit cumbersome; I'm sure you could do this a lot more efficiently with JavaScript. Copy all of the following text. Then, in Max, select New From Clipboard.
----------begin_max5_patcher---------- 2328.3oc4bs0jihaE9Y2+JHN6CIYc6ft.B1GRUaUadOOj7zTolBaK6lDL3Bv yNY1Z+uGIcDsAavVFCX00zU0FPb6nOctqi32dY17UYekWL24mb9jyrY+1Kyl oZR1vL8wyluO5qqShJTW170Y62ySKmu.NWI+qkp1KNjDW5bHJNun5boG2mcr LgWptSWcqayRKKh+FW1V3xpVODUt9s3zceNmutDnGZPf3zNTpZCx0Usgsz04 equo3MpWc1p+yqn40d7oQ6UO94+bdbTRMxINshZPx198WdQ9yBC65aSxDOi1 6c36s2QB7k8GhumZi2EcN3gW9+NvgaP95iD3ty7UQo6l+90UlGWrNJQcUtKC 0s9lbfYcVRVNb2tKCXtdAAKZcOT6fJ9dAUciq107M6K9isn08Tu4dLVr5XYY VZ6iEnqf5HpDnYH4u3vpe6Fzah00vFxfwVkx+UwC7BApuk3jvSGHtMuPnGyT ceB81baRFsV63z6jo.OnPxt7riGFHPwmpX.HHkvG0+A.EumJnLb7IDp2Pwm3 OAPxddQQzN9EXhTpcgyli6ObcMDlK+fYJnfhka7ASQdWEWZCSXS.lr8X55x3 tzMR0slDmxanjln9aQq6U27PaFpcUnBM.3czFpkaLvXV0+MUzdHKNso4KBF4 4Gtn085v7UPEDrJKeCOuwiycQsepe+sZPKZyFEEo8TBg7WhPLWrhHBT1REcb gFDR.Nz2S1L0m3qZlhQK8B8XDJPxdAJKfBiPgdKYTJzUXr.rp8dYM77wbAbe 3X49rM758hwmSnRLwOvt3DBGONAzRAe.BKGqWFhAFAbvRFlPcUi3XOMHJzmh oB9h.Wlhl8A1FlKaofKfwT9EQnZOj9PyGfCv.e.yt3CPtiGifnqJ3CXH0Xdk Fgvkgh+BTZD7b0CsxXZPKoAgDZ3YbHAHxRgJDFizvc4dwLzg+BkNqbVMTtKP fwYWHVF2a6tfZPcwU7qFMEQxMY9MfYPvqTxC42.B+DclZcBOJefvCBnXfBtY RH8EOHS.dzUlN9W+ie4m+m+cm+v.klChmRIISEKhG9pI4f9DkMNYKwAMPbCz PU.F9pXSY8V33IF7kvl0FPk1.AHjSxE57dw5U5IP9S.qxzDlN0kAgoyd3vzQ rmGqxAGg2B77CYIQk7WShKJGJlFvtKEoPl.lA3SqXSvchMj5jDOWiHZHQ7Th S3egmWHcG8T2Z17SDNRQ2dg.0q348X9mHbYhDSxV+e4apoXc17M7s2yyH6.O 8z0eladtmekwoGx4EB09QkZBu96M5XR4mqOvfvKa87aiVy67laEamMeWd7lr TIQz3NkMW859jCjqp5ZI0WQZzgVt4xrrjUQ4eItHdUBuw3ffINJMdufYrLFn Gge5UmLd+gbA6ZimEOMR7LdqXcdVRRiGEbluzxYZL8B03NZHD0n8qZD9LkuB YHb8yzpk3tEl5zhLVYVBRmA4B4otl5gqK6zgF2YU5WfcVLRvDZDfIrh42IfZ FLgs.Xpgp4lnTtyez80SIRsaEy2OPoCClRgMgshTcqjtqYd3dAR2oAHEV+y4 65FFw8VrD7C.E.Ys22PXbwUfRZOgR7.Bk.Eal7YaLWXkahdtnJCCmYdniYpn yLanNQk9D4cMe.6rp2lYhXsltAXP2kbk9p4BR9cKpLAPw0EgJRhWeJPqgTLp Ztc7.OmPCfXDyBDidFZjpfRHq2ChFo.K.JeLgTnfFtSYTH4ucAJgOUA0GCNv fyjnqpe1bcVumA6uuTZgfziMjJsPnuS0ZUgkCnVKD15wxq5Ze+wRcIRPPsWu D8AKIV.Vdq3Ikv4tx2FgnJ0bmHHGndd2L5apcGWob511NBbdZyrPb2XCiorZ 12tt4VjmcCo7udH24G1hc9Qm+zOrk37p7f+ryeQrAMFguCJJoZOZn3GL9cje 
OgWxjIfmxix0oqd4Xj4HFTwcPcC4GbSYblcyPd7awciRj9muCn3Izbe2mDd0 VURb6BVCraq1aGAgYfYiPfR+kYFndC0kgVtAaPP1Yaz5xr7wvjsKjobpghyX W62j8oZjYPEogpvVOcU8Tjtwb9dA1h9.jl8MYo7QP1FCURg1mbO+GzNMF+cZ h1wtfHbn6fEfHlX8AHVujEFzPDgDzSg5agNHnI0tQyhaNWZ82VidkRAEJUGy kVcnx661ocTWsgP7zO7rNh8s.sg21KmkimaNvz2FPNs.xtJimkGyRwsMD2ej BVHHP.LcXGtNTELfP0oxhQFFaG0dhhlkmucLrH6X95JIB8LE5zj72vKJiSeu Hg9z6ym7YW2awa1zrlbTCRwMJdbn7tu4X98R0RGX9.R1BAGGjATcvEWmQTsb 494IKXEcoVBN4z7HDjgbWW0UhBqVjpCdekZ3HTO6qiDUGXHUyrKwAQnQNXCA a6RbvDvVlVwy5deHT9HCMvhfa4Dr9gjtQFp1TNWc8QWBF6VSuIl3USuY0Qfd Srq2Xq2rEveX6si4njwRyVDcapzbKcvmtNeikJrG51T1aYl.rI1DioauOlJ8 Q1kRero9OZYvs+8vdaQzc38n81hnaOS0BFZUpSvFasLzpvaiYSrKwRrohkH6 JbzVHmNTmXWQ1gLMNZLx9TmXLexPQ25FEc.XYGVD8E9lOKdM70keNprLOd0w RHge0WFh6RxVEknWihumWw2WzcstX8FqU32UNUWKtvWNgBC1hyMtjm6fGpUm qd0mRovrb8HqN2vm2pyszY6v8Uw.VCt5MWVadF+o8qsZl3IrdtG3E4t9avGE 8vKxc7T7sBoqOCDp4a5uJNJOBVS2Nu92FnOIDXMCjeWqn61pRgmAFHZekvtA 7Y+TVvfYNYhCqu31gOqPGg0h.dXvGViuH.HnLBwcAPjG.fTOvlyjDfWmayTi ZWOA+WI3r2ce4RKjsac7hQPSonv5oI1jvEmDJBcCJhNoTDyfQsvIkhZ5ykcP RHCF2XSJIYBHobJEOcnjuIzzzJvgXlhSSmZIifI1jRRMCewnTOL5zD10zgtI jcx0.SJnoUSPyD0ZxD8N9jTn0QRXCnny33lDF7aYU4rQ2IgA+Vzj+jRRlvd2 bt3F+QNjAzzDOvYBCNx097japoISF5NCLuOZRbvu+x+mn5E4V -----------end_max5_patcher----------- That’s a nice approach. I don’t think the subpatch is cumbersome, I like what you did with the iteration and the zl slice/zl reg. Though you could probably have just used [iter] if you’re processing the elements one at a time. That said, anytime I see a new way to do recursion I’m definitely interested, especially with something like a list-interpolation. Maybe the javascript version would be cleaner, but really you’d be doing the same thing: going through all the array elements, finding the difference between corresponding points, and applying the scale factor to determine the interp point. So it seems pretty much the same to me. Actually, using [vexpr] would be better here, process the lists all at once. 
Anyone weigh in on the possible speed differences between these two approaches (max vs. js)? Assuming there are (say) thousands of points, where it would actually matter? Maybe jit.expr would be the best approach for speed with massive lists… I would use vexpr. If you don’t set the scalarmode attribute of the vexpr $f1+$f2, it will automatically adapt to the smallest list. Finally if you want more than 128 points, you’ll have to set the first argument of zl iter 2 to something bigger. allof the following text. Then, in Max, select New From Clipboard. ----------begin_max5_patcher---------- 920.3oc0X01aaBCD9yI+J7h5m1nQXiwurOs86npZhBNsLQfHvYKqU8+9rOCs grzD2RBJqREhOCbO2CO24y7zzIytqZipYF5qnaPSl7zzIS.SVCSZGOY1xjMo EIMvkMqT86p694r.2TZ0FMX9wBTtVUiHcyrJQm9Pd48+nVkpcN.yBmGFfhDT 6IpzdjXLgts8dpVqKTZ8eVob2vrYAl+eY57LvUF2eMNpyOKpJ0kIKg6X12qy SJ1dll7GgYvDiyblKWuLuz3FHbHuZz4bmUqwmmN0dHXX7RQUR1cIk2e.ZgHD .s.7gihNLq.Ou8xJjSCqf2KqfOcrxmtFgme.Jg5XgXmdgLO9XTxBCOq2OmfO mJkOBmrbcgNuoHOSUebJ.GYOIYv.5GJWg24kp5bUoNQmWUZmrKLaTZSbZfn6 AA9Eukm1qzH0PYYI5j2VtbBShznBTZgJo1ihKwb6IdjeEWbO08wZzyYpzIja 9kZypZzUKve4pE9T7k.7Cl4S028wKQWZoSGiW9rgWPeqIMoHodYUlBgOXVWL vK7AxRjSKKspV0zk5tEZYL2JGTn7PX6gsSbuj31VE3f41KtB5KVWlBEUOZrS wbWP6JxF5wZZAsU1C1cc+UU4k5zphp5t51QDbLSFr2escA8s3RQGlKxKU8eX veA68WGc0g9bJs05cU0l0754ksjs8erIYYP.11aJ1RbDLgisWKQXNZWOTNWZ 9SDaMFGxXLNLAkhmSExHJPAj3V7GfDlkSEDImCgif0dGnaOOu0oQPmtXB9x5 st777V2DKqVqs0ANyZA7btULvrWqj3zBDwbNIhFR24UdjoqIJSDxAhg4TN7P 9bJkw4bH.omYc.VHc5fvKKc.N7+bgvEWQgkpllj6U+61.yazYqWt53hDID0w PaHXw6dww3Ktc67wYj1xmR7fXD1XwHfygLoc9fJPnYs2mlZpVWm1A8tOpA50 vKS0nyKeY6Z27xJ1acMOjmko5satk48xQdCUruPh5Ah1A1iAjvmVRhPhrehA Znzdx1YZ2nAhU6Gj4n7GaToOufT73BIrOprQERbOPD4hCQ3wkjrsOfhNsRoH Nj7QHR22D0sc7diDt9BjB6EJflnFTXH8HJdm4nLSSm1nHBfJi55+u2HAbjRg JOB7fiBlG5C4nJOh8QwFN9JVhGUZviFjjdfHxnhHeVfH58gHJFxGhL6bCNIe YzPeg5IVGOIlO8vQG.hLCdd5eMy0B0K -----------end_max5_patcher----------- Wow! What a beautifully elegant solution. 
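The remark about [vexpr] adapting to the smallest list has a direct analogue outside Max: a pairwise operation that stops at the shorter input, which is exactly zip semantics. An illustrative Python translation (not Max code, values invented):

```python
a = [0.0, 10.0, 20.0, 30.0]
b = [5.0, 5.0, 5.0]  # shorter list

# [vexpr $f1+$f2] without scalarmode adapts to the shorter input list,
# which is what zip() does here: the fourth element of 'a' is dropped.
out = [x + y for x, y in zip(a, b)]
print(out)  # → [5.0, 15.0, 25.0]
```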
Emmanuel, you've shown once more that Max patching belongs to the fine arts. I really appreciate that.

Thanks a lot, those are very inspiring pieces of code! They do exactly what I was looking for. However, I've been thinking of another application of this list interpolation for my additive synth, which I'm not entirely sure how to pull off. I'd like to use interpolation between two breakpoint functions for the amplitude envelopes of the partials as well, in the following manner:

- Func1 gives the amplitude envelope for the lowest partial
- Func2 gives the amplitude envelope for the highest partial
- the partials in between would have amplitude envelopes interpolated between these two (i.e. if I had 21 partials, the 2nd partial would have an envelope of 95% f1 and 5% f2, and the 11th partial would have a 50%/50% interpolation of the two functions)

Or something of that sort. I thought it would be a fairly intuitive way of giving the highest partials a different amplitude envelope than the lowest ones. It should generate interesting results. I suppose I could just duplicate, for example, the code by Emmanuel Jourdan 20 times, but I'm sure there's a more elegant way?

Interestingly, I was working on a similar project this summer. It is broken in 5.1, though; in 5.08 it works. I haven't found the time to fix it (maybe I should just send it to support?). It's an additive synth with some interpolation, including complex sustain loop modes… Have a look at it… Another sort of primitive physical model of a bowed string I made for Max for Live also uses some function interpolation derived from my "Yet Another Additive Synth"… Have fun… Stefan
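The per-partial crossfade described above boils down to a single weight that runs from 0 to 1 across the partial index. A small Python sketch of the idea (the envelope values and partial count are made up for illustration):

```python
def partial_envelopes(env_low, env_high, n_partials):
    """Interpolate between two envelopes: partial 1 gets env_low,
    the top partial gets env_high, and the rest a proportional mix."""
    envs = []
    for p in range(n_partials):
        t = p / float(n_partials - 1)  # 0.0 for the lowest partial, 1.0 for the highest
        envs.append([a + (b - a) * t for a, b in zip(env_low, env_high)])
    return envs

envs = partial_envelopes([0.0, 1.0, 0.0], [1.0, 0.0, 1.0], 21)
# With 21 partials, partial 2 (index 1) is a 95%/5% mix and
# partial 11 (index 10) a 50%/50% mix, matching the example in the post.
```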
L4ur3nt
Accessing the DNS320 NAS from all applications on Ubuntu 12.10

Hello, I'm having trouble connecting to my DNS-320 NAS on Ubuntu 12.10. I can reach it by browsing the local network, but I can't manage to "access the NAS from all applications". I've already gone through several older forum topics and tutorials, without success. Here is what I did:

    sudo mkdir /media/nas
    gksudo gedit /etc/fstab

    # Mount NAS
    //192.168.1.4/Volume_1 /media/nas cifs guest,uid=1000,iocharset=utf8,codepage=unicode,unicode 0 0

When I run the command:

    sudo mount -a

I get:

    mount error(22): Invalid argument
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

Note that I installed Fun Plug on the NAS. Do you think that could interfere? On Windows, I can connect to the NAS without any password. But to connect over SSH via WinSCP or Nautilus, I have to use a login and password created right after installing Fun Plug. Does anyone have a lead on solving this problem? Many thanks in advance for your help.

Last modified by L4ur3nt (29/12/2012, 11:47)
Offline

sinbad83
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

Hi, I have the same NAS model. In the absence of any Linux documentation from the manufacturer, I wrote my own. It is at http://coursinforev.org/dokuwiki/doku.php/dns320 You'll see that to mount the NAS, you first have to identify the shares with the command

    showmount -e <IP_NAS>

and then add a line of this kind to your fstab:

    # Entry for Backup on NAS
    192.168.xxx.yyy:/mnt/HD/HD_a2/ /media/NAS nfs defaults,user,auto,noatime,intr,_netdev 0 0

Last modified by sinbad83 (29/12/2012, 11:57)
Knowledge is not a scarce commodity; it must be shared with others.
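A note on the original mount error(22): the option string in the opening post contains entries mount.cifs does not accept (`codepage=unicode` followed by a bare `unicode`), which by itself is enough to produce "Invalid argument". A trimmed-down fstab line to try, assuming guest access really is enabled on the share (as the poster's password-less Windows access suggests):

```
# /etc/fstab -- hypothetical corrected CIFS entry for the DNS-320 share
//192.168.1.4/Volume_1  /media/nas  cifs  guest,uid=1000,iocharset=utf8  0  0
```

After editing fstab, `sudo mount -a` retries all entries; if it still fails, `dmesg | tail` usually shows which option the kernel rejected.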
Linux registered #484707 Site: www.coursinforev.org/doku.php Desktop Quad8800 Ubuntu 14.04.1 and Seven, Samsung N150 U14.04.1 and Seven, HP Pavillon G6 U14.04.1 and Seven, Ubuntu 14.04.1 servers, Proxmox virtualization server
Offline

L4ur3nt
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

Hi sinbad, Thank you very much for your reply; that's great documentation you've written there. I run into a problem when I type the command:

    showmount -e 192.168.1.4
    clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused)

Do you know where the problem might come from? Thanks in advance
Offline

sinbad83
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

Did you create your shares on the NAS beforehand (Network Shares section)? I assume you entered the right IP. Does the NAS respond to ping and to nmap?

Last modified by sinbad83 (29/12/2012, 12:44)
Offline

L4ur3nt
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

Yes, I entered the right IP address. In the NAS administration, under Management > Network Shares, I have just one entry, which I had left at its default:

    Share Name   : Volume_1
    Path         : Volume_1
    Read Only    : -
    Read / Write : All Accounts
    Deny Access  : -
    Oplocks      : yes
    Map Archive  : no
    Comment      :
    Recycle      : no

Is that enough? I tested the two following commands:

    ping 192.168.1.4
    PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
    64 bytes from 192.168.1.4: icmp_req=1 ttl=64 time=0.214 ms
    64 bytes from 192.168.1.4: icmp_req=2 ttl=64 time=0.111 ms
    64 bytes from 192.168.1.4: icmp_req=3 ttl=64 time=0.126 ms
    64 bytes from 192.168.1.4: icmp_req=4 ttl=64 time=0.128 ms
    64 bytes from 192.168.1.4: icmp_req=5 ttl=64 time=0.152 ms

    nmap 192.168.1.4
    Starting Nmap 6.00 ( http://nmap.org ) at 2012-12-29 11:51 CET
    Nmap scan report for 192.168.1.4
    Host is up (0.0010s latency).
    Not shown: 993 closed ports
    PORT     STATE SERVICE
    22/tcp   open  ssh
    80/tcp   open  http
    139/tcp  open  netbios-ssn
    443/tcp  open  https
    445/tcp  open  microsoft-ds
    515/tcp  open  printer
    9091/tcp open  xmltec-xmlmail
    Nmap done: 1 IP address (1 host up) scanned in 0.19 seconds

Offline

sinbad83
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

I think Volume 1 should be enough. You do, however, need to tick all the CIFS, FTP and NFS options in the share. Also, did you install the NFS packages on your Ubuntu? Another question that interests me personally: how did you manage to get port 22 open for SSH? I can no longer get it enabled.

Last modified by sinbad83 (29/12/2012, 13:06)
Offline

L4ur3nt
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

In the NAS administration I created a user with full access to the "Volume_1" share. I also added the FTP and NFS options, which indeed were not ticked.
After rebooting the NAS, I still get the same error:

    showmount -e 192.168.1.4
    clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused)

As for your question, I believe port 22 was opened when I installed Fun Plug on the NAS (in order to install Transmission). I followed this tutorial: http://klseet.com/index.php/d-link-dns- … n-plug/173 How can I install the nfs packages? Thanks

EDIT: I installed the nfs-kernel-server packages.
EDIT2: But nothing helps; I still get the same message when I type the showmount command.
EDIT3: Note that in Nautilus > Connect to server, I can connect over SSH on port 22 with the login and password configured during the Fun Plug installation. Could my problem be tied to that installation, which created an SSH login that did not exist before?

Last modified by L4ur3nt (29/12/2012, 13:19)
Offline

sinbad83
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

- For the NFS packages, I find:

    ~$ dpkg -l | grep nfs
    ii liblockfile1 1.09-3 NFS-safe locking library
    ii libnfsidmap2 0.25-1ubuntu2 NFS idmapping library
    ii nfs-common 1:1.2.5-3ubuntu3.1 NFS support files common to client and server

I gathered that nfs-kernel-server is not compatible with nfs-common.
- If you can get to the NAS through Firefox, that is already something, but it does not cover every need.
- You can also get access through Shortcuts/Network, or through Shortcuts/Connect to a server, then choosing for example Windows share or another protocol.
- Have a look at
Offline

L4ur3nt
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

Running the same command, I find:

    ~$ dpkg -l | grep nfs
    ii libnfsidmap2 0.25-1ubuntu3 amd64 NFS idmapping library
    ii nfs-common 1:1.2.6-3ubuntu2 amd64 NFS support files common to client and server
    ii nfs-kernel-server 1:1.2.6-3ubuntu2 amd64 support for NFS kernel server

I removed nfs-kernel-server. To recap, I can access the NAS in the following ways:
- Firefox: going to 192.168.1.4 takes me to the admin panel
- Nautilus: Files > Connect to server > SSH connection with login and password
- Nautilus: via the shortcuts on the left > Network > Browse network > ...

But what I am after is accessing the NAS from programs such as LibreOffice Writer (when clicking Open/Save), or any other application. For that, I have to add a "link" in Ubuntu's "media" folder, but I cannot manage it. On your side, did you get this working? Do you have a direct link to your NAS in "media"? Thanks in advance,

Last modified by L4ur3nt (29/12/2012, 14:23)
Offline

sinbad83
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

No, I have no problem at all (on one of the NAS boxes). I actually use two NAS units in two associations... On one side, a 12.04 LTS server, and everything is OK. On the other, the server is still on 10.04 LTS; I have to replace it soon. But on that last one, I can no longer mount the NAS, although it used to work perfectly before... If you cannot get NFS working, try something else. In the NAS documentation there are several options (Ubuntu Access section).
Do not forget the "Connect to a server" function (which you can reach from the desktop background via File/Connect to a server).

Last modified by sinbad83 (29/12/2012, 15:35)
Offline

L4ur3nt
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

Hi sinbad, Thanks for your new reply. I followed the part of your documentation about mounting the NAS on Ubuntu. The "Windows share" method matches what I want and works perfectly (connection OK), except that I cannot find an "Add bookmark" box to tick so that the share shows up in the Computer view. Here is the screen I get: Thanks in advance
Offline

sinbad83
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

My mistake. The menus have changed. It used to be really convenient, though.

Last modified by sinbad83 (29/12/2012, 23:33)
Offline

L4ur3nt
Re: Accessing the DNS320 NAS from all applications on Ubuntu 12.10

Darn, that's a shame. Is there no other way to make the NAS appear in the Computer view? Am I condemned to follow the procedure explained on the page: even though it does not work for the DNS 320? Don't you have an alternative solution, by any chance? Thanks again for your help. Good evening, Laurent
Offline
Is there a math expressions parser + evaluator for Python?

I am not the first to ask this question, but answers usually point to eval(). For instance, one could do this:

    >>> safe_list = ['math', 'acos', 'asin', 'atan', 'atan2', 'ceil', 'cos',
    ...              'cosh', 'degrees', 'e', 'exp', 'fabs', 'floor', 'fmod',
    ...              'frexp', 'hypot', 'ldexp', 'log', 'log10', 'modf', 'pi',
    ...              'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'abs']
    >>> safe_dict = dict([(k, locals().get(k, None)) for k in safe_list])
    >>> s = "2+3"
    >>> eval(s, {"__builtins__": None}, safe_dict)
    5

But this is not safe:

    >>> s_badbaduser = """
    ... (lambda fc=(
    ...     lambda n: [
    ...         c for c in
    ...             ().__class__.__bases__[0].__subclasses__()
    ...             if c.__name__ == n
    ...         ][0]
    ...     ):
    ...     fc("function")(
    ...         fc("code")(
    ...             0,0,0,0,"KABOOM",(),(),(),"","",0,""
    ...         ),{}
    ...     )()
    ... )()
    ... """
    >>> eval(s_badbaduser, {"__builtins__": None}, safe_dict)
    Segmentation fault

Also, using eval for parsing and evaluating mathematical expressions just seems wrong to me. I have found PyMathParser, but it also uses eval under the hood and is no better:

    >>> import MathParser
    >>> m = MathParser.PyMathParser()
    >>> m.expression = s_badbaduser
    >>> m.evaluate()
    Segmentation fault

Is there a library available that would parse and evaluate mathematical expressions without using the Python parser?
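One way to get what the question asks for using only the standard library: parse the expression with the ast module and evaluate the tree yourself, accepting only whitelisted node types. This is a sketch, not a hardened sandbox; the operator and function whitelists are illustrative and deliberately small:

```python
import ast
import math
import operator as op

# Whitelisted operators and names; anything outside these is rejected.
_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
        ast.Div: op.truediv, ast.Pow: op.pow, ast.Mod: op.mod,
        ast.USub: op.neg, ast.UAdd: op.pos}
_NAMES = {'pi': math.pi, 'e': math.e}
_FUNCS = {n: getattr(math, n) for n in ('sin', 'cos', 'tan', 'sqrt', 'log', 'exp')}

def safe_eval(expr):
    """Evaluate a math expression by walking its AST -- no eval() involved."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        if isinstance(node, ast.Name) and node.id in _NAMES:
            return _NAMES[node.id]
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in _FUNCS and not node.keywords):
            return _FUNCS[node.func.id](*[walk(a) for a in node.args])
        raise ValueError('disallowed syntax: %s' % type(node).__name__)
    return walk(ast.parse(expr, mode='eval'))

print(safe_eval('2+3'))  # → 5
```

The KABOOM payload above parses fine but is rejected with a ValueError at the first non-whitelisted node, instead of being executed.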
After studying this page: I am hoping to find some setup.py files to study so as to make my own (with the goal of making a Fedora rpm file). Could the S.O. community point me towards some good examples?

Complete walkthrough of writing If you'd like a real-world example, I could point you towards the These aren't simple examples; the tutorial link I gave has those. These are more complex, but also more practical.

You may find the HitchHiker's Guide to Packaging helpful, even though it is incomplete. I'd start with the Quick Start tutorial. Try also just browsing through Python packages on the Python Package Index. Just download the tarball, unpack it, and have a look at the My final suggestion is to just go for it and try making one; don't be afraid to fail. I really didn't understand it until I started making them myself. It's trivial to create a new package on PyPI and just as easy to remove it. So, create a dummy package and play around.

I would recommend getting some understanding of the packaging ecosystem (from the guide pointed to by gotgenes) before attempting mindless copy-pasting. Most of the examples out there on the Internet start with

    from distutils.core import setup

but this, for example, does not support building an egg:

    from setuptools import setup

And the reason is that they are Now, according to the guide, the deprecated setuptools is to be replaced by distutils2, which "will be part of the standard library in Python 3.3". I must say I liked setuptools and eggs and have not yet been completely convinced by the convenience of distutils2. It requires and to install So finally, to quote the clearest message on the state of setup.py itself:

Packaging never was trivial (one learns this by trying to develop a new one), so I assume a lot of things have gone for a reason. I just hope this time it will be done correctly.

A very practical example/implementation of mixing scripts and single Python files into setup.py is given here
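Since the question asks for concrete examples to study: here is a minimal setuptools-based setup.py to use as a starting point. Every value in it (name, version, author, URL, entry point) is a placeholder rather than a real project, and the entry_points line assumes your package ships an `example/cli.py` with a `main()` function:

```python
# setup.py -- minimal illustrative example; all metadata is placeholder
from setuptools import setup, find_packages

setup(
    name='example-pkg',
    version='0.1.0',
    description='A tiny demonstration package',
    author='Jane Doe',
    url='http://example.com/example-pkg',
    packages=find_packages(),
    install_requires=[],  # runtime dependencies go here
    entry_points={
        'console_scripts': ['example = example.cli:main'],
    },
)
```

With this in place, `python setup.py sdist` builds a source tarball and `python setup.py bdist_egg` builds an egg, which is the capability plain distutils lacks.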
Introduction

As noted previously, I recently rehosted this blog on Google's App Engine (GAE). Writing a small, but functional, blog engine is a useful exercise in learning a web framework. I wrote blog software to learn more about Django, and I ported it to App Engine to learn more about App Engine. In this article, I recreate the steps necessary to build a blogging engine that runs under GAE.

Acknowledgments

I'm grateful to the following people for reviewing and commenting on this article before I published it. In addition, the following people sent me some valuable insights and corrections after the article was published:

- Bill Katz, for clarifying the querying of list properties.
- Fernando Correia, for reminding me that unique keys do have associated unique IDs, forcing me to re-read that part of the GAE docs again.
- Alexander Kojevnikov, for clarifying that the GAE user API works with Google accounts, not GMail accounts. (The difference is that a registered Google user need not be a GMail user.) Alexander also pointed out that Query.fetch(1) can be more simply expressed as Query.get().
- Mark Lissaman, for pointing out a semantic error in the picoblog code.

Similar Articles and Software

- Experimenting with Google App Engine, by Bret Taylor. Also describes building a blogging engine on GAE.
- Bloog, Bill Katz's RESTful GAE blogging engine.
- cpedialog, another GAE-based blogging engine.

Caveats

Before I jump into the tutorial, there are a few caveats:

The point of this article is to build an App Engine application, to get to know some GAE internals. If you just want to host a blog on GAE, and you're not interested in understanding the software involved, you might consider installing cpedialog, a blogging engine that will run on GAE.

I am certain there are things about GAE that I could do better. I welcome corrections and suggestions; just drop me an email.
What this Blog Engine Supports

The blog engine outlined in this article is fully functional; this blog runs on similar software. However, it lacks a few features some people might want, such as:

- Image uploading. I just haven't put that in yet. When I do, I'll update this article. In the meantime, I'm able to live without it by uploading the images via GAE's appcfg.py update capability.
- Comments. It's easy enough to drop Disqus into your templates, if you want.
- Integration into blog aggregators like Technorati. (There's a follow-up article on this topic.)

It does have the following features, though:

- Tag handling, including support for generating a tag cloud.
- Support for RSS and Atom feeds.
- Displaying articles by month or tag.
- Template-driven theme customization.
- Unpublished drafts.
- Secured administration screens.
- reStructuredText markup (instead of HTML) for the articles.

In short, it's a serviceable blog engine, with simple, straightforward code you can customize as you see fit.

The Code

The source code for this blog engine is available on GitHub. See the Picoblog web page, at http://software.clapper.org/picoblog/

Get Going with App Engine

Register and Download

First, of course, you have to register with GAE and download the development kit. This article assumes you've already done that.

Create your Application

Next, from your GAE account, create a new application. You'll have to create a unique identifier for the application. In this article, I use the application ID "picoblog". You'll want to use something else. This article is not a tutorial on how to use the App Engine tools and web site; it's an article about building a blog application. So, I'm not going to go into details about how to create your application on the GAE site. Google's wizard is easy enough to follow.

Open the App Engine Docs

You'll want to have the online App Engine documentation available as you develop your App Engine application.
It wouldn't hurt to review the Getting Started section before jumping in.

Create your Local Application Directory

Create a directory called picoblog in which to do your work. In that directory, create a file called app.yaml:

    application: picoblog
    version: 1
    runtime: python
    api_version: 1

    handlers:
    - url: /static
      static_dir: static

    - url: /favicon.ico
      static_files: static/favicon.ico
      upload: static/favicon.ico

    - url: /admin
      script: admin.py
      login: admin

    - url: /admin/.*
      script: admin.py
      login: admin

    - url: /.*
      script: blog.py

This file configures your application. You can treat most of the top of the file as magic. For now, the parts we care about are:

application: The application's registered ID is "picoblog".

handlers: Each url entry is a regular expression GAE will match against incoming URL requests; on a match, it'll run the associated script. In this case, we're saying:

- Any path starting with /static is resolved via the built-in static file handler. This is where we'll put our images. So go ahead and create a static directory under picoblog.
- Since browsers always look for /favicon.ico, and I get tired of seeing all the "not found" messages in the logs, there's an entry for an icon. It's stored in the static directory.
- The administrative screens (for creating and editing blog articles) live under /admin and are secured: only a Google account with administrative privileges on the project is allowed to get to them. They're handled by the admin.py script. We'll be creating that script in the picoblog directory.
- Finally, the published blog itself matches everything else, and it's handled by the blog.py script. That script, too, will end up in the picoblog directory.

Once you've created app.yaml, you can pretty much forget about it for a while.

Create the Data Model

The next step is to decide what data we're storing in the database. For this blog engine, there's a single object type in the database, called an Article.
It has the following properties (which would be columns in a SQL database):

- title: The 1-line title of the article.
- body: The body of the article, which is reStructuredText markup.
- draft: Whether or not the article is a draft (i.e., not yet published). Drafts are only visible in the administration screen.
- published_when: The time the entry was published. In this context, "published" means "goes from being a draft to not being a draft". This time stamp is initialized to the time the article is created, and it's updated when the article is saved as a non-draft. (Toggling the "draft" flag multiple times will continue to update this time; you're obviously free to change that behavior by hacking the code.)
- tags: a list of tags (strings) assigned to the articles. May be empty.
- id: a unique ID assigned to the article.

A note about the unique ID: GAE does not provide support for an automatically incremented integer ID field the same way that Django does. An item in the datastore does have a unique key, accessible via the key() method. Further, that key can be converted to a corresponding unique name or number (depending on how the key was assigned) by calling key().id(). For instance:

    article = Article(title='Test title')
    article.put()
    print article.key().id()

However, you cannot use this ID in a query. Quoting from the Keys and Entity Groups section of the GAE documentation:

    Key names and IDs cannot be used like property values in queries. However,
    you can use a named key, then store the name as a property. You could do
    something similar with numeric IDs by storing the object to assign the ID,
    getting the ID value using obj.key().id(), setting the property with the
    ID, then storing the object again.

So, that's what we're going to do.
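The two-step save the documentation quote describes (save once so the datastore assigns a key, copy key().id() into an ordinary property, then save again) can be sketched without the datastore at all. FakeDatastore below is a stand-in invented for illustration, not part of the GAE API:

```python
import itertools

class FakeDatastore:
    """Stand-in for the GAE datastore: assigns a numeric key on first put()."""
    def __init__(self):
        self._ids = itertools.count(1)

    def put(self, entity):
        if entity.key_id is None:        # first save: the store assigns a key
            entity.key_id = next(self._ids)

class Article:
    def __init__(self, title):
        self.title = title
        self.key_id = None  # the store-assigned key (not queryable in GAE)
        self.id = None      # our own property, which *is* queryable

    def save(self, store):
        store.put(self)              # first put: the key gets assigned
        if self.id is None:
            self.id = self.key_id    # copy the ID into a real property...
            store.put(self)          # ...and resave the object

store = FakeDatastore()
article = Article('Test title')
article.save(store)
print(article.id)  # → 1
```

The real version of this pattern, against the actual db.Model API, is what the model's save() method implements.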
The data model for our Article class looks like this:

    import datetime
    import sys

    from google.appengine.ext import db

    class Article(db.Model):
        title = db.StringProperty(required=True)
        body = db.TextProperty()
        published_when = db.DateTimeProperty(auto_now_add=True)
        tags = db.ListProperty(db.Category)
        id = db.StringProperty()
        draft = db.BooleanProperty(required=True, default=False)

If you're familiar with Django, you'll notice that it's similar to Django's data models, but not exactly the same.

Next, since I like to hide as much of the database API semantics inside the model, I'm going to add a get_all() method that returns all articles, a get() method that returns a single article by ID, and a published() method that returns all non-draft articles. (The published() method will be separated into two methods, so the query itself can be shared. More on that later.)

    @classmethod
    def get_all(cls):
        q = db.Query(Article)
        q.order('-published_when')
        return q.fetch(FETCH_THEM_ALL)

    @classmethod
    def get(cls, id):
        q = db.Query(Article)
        q.filter('id = ', id)
        return q.get()

    @classmethod
    def published_query(cls):
        q = db.Query(Article)
        q.filter('draft = ', False)
        return q

    @classmethod
    def published(cls):
        return Article.published_query().order('-published_when').fetch(FETCH_THEM_ALL)

FETCH_THEM_ALL is an integer constant with a large value, defined at the top of the module.

NOTE: In the original version of the code, and in the zip files posted to the web site, FETCH_THEM_ALL is defined as follows:

    FETCH_THEM_ALL = sys.maxint - 1

On a 64-bit local machine, sys.maxint will evaluate to a 64-bit number. But GAE is a 32-bit environment, so the code may fail on certain machines. The code in the GitHub repository has been corrected.

Finally, let's add a save() method that does two important things:

Copies the GAE-assigned unique ID into our id field, so we can use it in queries.
Updates the time stamp if the article being saved is going from draft to published status.

    def save(self):
        previous_version = Article.get(self.id)
        try:
            draft = previous_version.draft
        except AttributeError:
            draft = False

        if draft and (not self.draft):
            # Going from draft to published. Update the timestamp.
            self.published_when = datetime.datetime.now()

        try:
            obj_id = self.key().id()
            resave = False
        except db.NotSavedError:
            # No key, hence no ID yet. This one hasn't been saved.
            # We'll save it once without the ID field; this first
            # save will cause GAE to assign it a key. Then, we can
            # extract the ID, put it in our ID field, and resave
            # the object.
            resave = True

        self.put()
        if resave:
            self.id = self.key().id()
            self.put()

Okay, that's the model. (See the source code for the complete file.)

Create the Administration Screens

Now let's create the administration screens, so we can edit and create articles. We'll create two screens.

The main administration screen contains three things:

- A button to create a new article. Pressing this button creates an empty article and launches the Edit Article screen to edit it.
- A button to go back to the blog itself.
- A list of the existing articles. The articles will be sorted in reverse chronological order, and each article's date and title will be displayed. The article's date and title will also be a hyperlink to the edit screen for the article. Further, drafts will be shown in red, to distinguish them from published articles.

The Edit Article screen contains:

- A text box for the title
- A text area for the body of the article, which is assumed to be reStructuredText
- A text box for the list of tags (comma-separated)
- A check box to indicate whether or not the item is a draft

To create these screens requires five files:

defs.py will hold some constants that we share between all the blog scripts.
request.py will hold a base request handler class, which is basically a place to hang logic that we need in every script.

admin.py contains the Python code for the admin screens, the equivalent of a Django views.py file for the admin screens.

admin-main.html is the template for the main administration screen.

admin-edit.html is the template for the Edit Article screen.

To keep things organized, we'll store the templates in a templates subdirectory.

The Templates

Let's start with the templates. GAE's default template engine is Django's template engine. If you don't know Django's template language, read the first few sections of the Django template language document. Describing Django templates is beyond the scope of this article.

Main Administration Screen Template

You can see the full template for the main administration screen here. It consists of a link to the style sheet, some Javascript, some standard HTML layout, and this block of template logic:

    <ul>
    {% for article in articles %}
      {% if article.draft %}
      <li class="admin-draft">
      {% else %}
      <li class="admin-published">
      {% endif %}
      {{ article.published_when|date:"j F, Y" }}
      <a href="/admin/article/edit/?id={{ article.id }}">{{ article.title }}</a>
    {% endfor %}
    </ul>

This template code assumes that the variables passed to the template will include a Python list called articles, each element of which is an Article object. We'll see how that's populated in the next section.

The style sheet link looks like this:

    <link href="/static/style.css" rel="stylesheet" type="text/css"/>

Rather than use a template include directive to pull the style sheet file inline at rendering time, we're telling the browser to go get it. We'll be using the same style sheet for all pages; using an external style sheet allows the browser to cache it.

The Style Sheet

To see the style sheet, follow this link.
The style sheet is stored inthe static subdirectory, where it’ll be served by the GAE static filehandler. The Edit Screen Template The template for the edit screen is available here. The edit screen is slightly more complicated, since it has some Javascript to handle the various buttons. But overall, it’s still pretty simple as web screens go. The View Code The view code for the administration screens is in admin.py. It,too, is relatively simple. But first, let’s look at the two otherfiles we’re using to consolidate common logic. defs.py defs.py just contains some common constants: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 BLOG_NAME = 'PicoBlog' BLOG_OWNER = 'Joe Example' TEMPLATE_SUBDIR = 'templates' TAG_URL_PATH = 'tag' DATE_URL_PATH = 'date' ARTICLE_URL_PATH = 'id' MEDIA_URL_PATH = 'static' ATOM_URL_PATH = 'atom' RSS2_URL_PATH = 'rss2' ARCHIVE_URL_PATH = 'archive' MAX_ARTICLES_PER_PAGE = 5 TOTAL_RECENT = 10 We’ll see how they’re used as we get further into this tutorial. request.py request.py contains our base request handler class: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 import os from google.appengine.ext import webapp from google.appengine.ext.webapp import template import defs class BlogRequestHandler(webapp.RequestHandler): """ Base class for all request handlers in this application. This class serves primarily to isolate common logic. """ def get_template(self, template_name): """ Return the full path of the template. :Parameters: template_name : str Simple name of the template :rtype: str :return: the full path to the template. Does *not* ensure that the template exists. """ return os.path.join(os.path.dirname(__file__), defs.TEMPLATE_SUBDIR, template_name) def render_template(self, template_name, template_vars): """ Render a template and write the output to ``self.response.out``. 
        :Parameters:
            template_name : str
                Simple name of the template

            template_vars : dict
                Dictionary of variables to make available to the template.
                Can be empty.
        """
        template_path = self.get_template(template_name)
        return template.render(template_path, template_vars)

As you can see, it just contains some methods to make rendering templates a little simpler.

admin.py

Now we're ready to look at the administration view code. First, we have some imports:

import cgi

from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp import util

from models import *
import request

These are followed by the request handler classes:

class ShowArticlesHandler(request.BlogRequestHandler):
    def get(self):
        articles = Article.get_all()
        template_vars = {'articles' : articles}
        self.response.out.write(self.render_template('admin-main.html',
                                                     template_vars))

class NewArticleHandler(request.BlogRequestHandler):
    def get(self):
        article = Article(title='New article',
                          body='Content goes here',
                          draft=True)
        template_vars = {'article' : article}
        self.response.out.write(self.render_template('admin-edit.html',
                                                     template_vars))

class SaveArticleHandler(request.BlogRequestHandler):
    def post(self):
        title = cgi.escape(self.request.get('title'))
        body = cgi.escape(self.request.get('content'))
        id = int(cgi.escape(self.request.get('id')))
        tags = cgi.escape(self.request.get('tags'))
        published_when = cgi.escape(self.request.get('published_when'))
        draft = cgi.escape(self.request.get('draft'))

        if tags:
            tags = [t.strip() for t in tags.split(',')]
        else:
            tags = []
        tags = Article.convert_string_tags(tags)

        if not draft:
            draft = False
        else:
            draft = (draft.lower() == 'on')

        article = Article.get(id)
        if article:
            # It's an edit of an existing item.
            article.title = title
            article.body = body
            article.tags = tags
            article.draft = draft
        else:
            # It's new.
            article = Article(title=title,
                              body=body,
                              tags=tags,
                              id=id,
                              draft=draft)

        article.save()

        edit_again = cgi.escape(self.request.get('edit_again'))
        edit_again = edit_again and (edit_again.lower() == 'true')
        if edit_again:
            self.redirect('/admin/article/edit/?id=%s' % id)
        else:
            self.redirect('/admin/')

class EditArticleHandler(request.BlogRequestHandler):
    def get(self):
        id = int(self.request.get('id'))
        article = Article.get(id)
        if not article:
            raise ValueError, 'Article with ID %d does not exist.' % id

        article.tag_string = ', '.join(article.tags)
        template_vars = {'article' : article}
        self.response.out.write(self.render_template('admin-edit.html',
                                                     template_vars))

class DeleteArticleHandler(request.BlogRequestHandler):
    def get(self):
        id = int(self.request.get('id'))
        article = Article.get(id)
        if article:
            article.delete()
        self.redirect('/admin/')

The file ends with some initialization logic and the main program:

application = webapp.WSGIApplication(
    [('/admin/?', ShowArticlesHandler),
     ('/admin/article/new/?', NewArticleHandler),
     ('/admin/article/delete/?', DeleteArticleHandler),
     ('/admin/article/save/?', SaveArticleHandler),
     ('/admin/article/edit/?', EditArticleHandler),
    ],
    debug=True)

def main():
    util.run_wsgi_app(application)

if __name__ == "__main__":
    main()

Let's break this down a bit. First, the initialization logic at the bottom (that is, the creation of the webapp.WSGIApplication object) defines what classes (handlers) will handle which URLs within the /admin/ URL space. Recall that the app.yaml file points all /admin URLs to this file. The application variable further breaks those URLs down, so that certain URLs map to certain handlers. The list passed to the WSGIApplication constructor contains tuples; each tuple defines a URL mapping.

The first element of the tuple is a regular expression.
Note that the regular expressions we're using end with /?, allowing the trailing slash to be omitted in the URL. The second element is the name of the class that will handle requests to URLs that match the regular expression.

Next, let's look at some of the handlers. There are basically two kinds of handlers here:

- Handlers that just display a page (i.e., retrieve data from the database and stuff it into a template).
- Handlers that process form submissions.

Handlers that Only Display a Page

ShowArticlesHandler, NewArticleHandler and EditArticleHandler are examples of handlers that simply display a page. Here's the ShowArticlesHandler class again:

class ShowArticlesHandler(request.BlogRequestHandler):
    def get(self):
        articles = Article.get_all()
        template_vars = {'articles' : articles}
        self.response.out.write(self.render_template('admin-main.html',
                                                     template_vars))

First, because it defines only the get() method, it supports just the HTTP GET semantics. (POST is not supported for the associated URL.)

The actual handler is simple: it retrieves all articles, whether draft or published, puts the resulting list in a dictionary, and uses that dictionary to render the template. That's it; that's the entire handler. The NewArticleHandler is similarly simple.

The EditArticleHandler is a little more complicated, only because it has to handle a few additional things:

class EditArticleHandler(request.BlogRequestHandler):
    def get(self):
        id = int(self.request.get('id'))
        article = Article.get(id)
        if not article:
            raise ValueError, 'Article with ID %d does not exist.' % id

        article.tag_string = ', '.join(article.tags)
        template_vars = {'article' : article}
        self.response.out.write(self.render_template('admin-edit.html',
                                                     template_vars))

First, it determines whether the article being edited is in the database or not; if not, it throws an exception, because it should never be invoked on a non-existent article. (If it is, we have a bug.)
Next, it creates a comma-separated string from the list of tags, so the template can simply stuff that string into the tags edit box.

Handlers that Process Forms

The most complicated handler is the SaveArticleHandler class. Let's look at that one again:

class SaveArticleHandler(request.BlogRequestHandler):
    def post(self):
        title = cgi.escape(self.request.get('title'))
        body = cgi.escape(self.request.get('content'))
        id = int(cgi.escape(self.request.get('id')))
        tags = cgi.escape(self.request.get('tags'))
        published_when = cgi.escape(self.request.get('published_when'))
        draft = cgi.escape(self.request.get('draft'))

        if tags:
            tags = [t.strip() for t in tags.split(',')]
        else:
            tags = []
        tags = Article.convert_string_tags(tags)

        if not draft:
            draft = False
        else:
            draft = (draft.lower() == 'on')

        article = Article.get(id)
        if article:
            # It's an edit of an existing item.
            article.title = title
            article.body = body
            article.tags = tags
            article.draft = draft
        else:
            # It's new.
            article = Article(title=title,
                              body=body,
                              tags=tags,
                              id=id,
                              draft=draft)

        article.save()

        edit_again = cgi.escape(self.request.get('edit_again'))
        edit_again = edit_again and (edit_again.lower() == 'true')
        if edit_again:
            self.redirect('/admin/article/edit/?id=%s' % id)
        else:
            self.redirect('/admin/')

That one's a little longer, but what it does is simple enough:

- First, it retrieves all the form variables.
- Next, if there are any tags in the form, it splits the tag string to make it into a list. The tags are actually stored as GAE db.Category objects, not strings, so the code calls a special Article class method to convert the strings from the form into Category objects. (Consult the code for that conversion method; it's trivial, and it's not included here.)
- It processes the Draft checkbox.
- It attempts to load the referenced article. If the article exists, the handler updates its contents.
- Otherwise, it creates a new Article object with the specified ID.
- Then, it saves the article.
- If the edit_again request variable is set, the handler redisplays the edit screen; otherwise, it displays the main administration screen again.

That's it. We've finished our admin screens. Let's take a look at them. To do that, fire up a terminal window, change your working directory to the picoblog directory, and run the following command. (You must have put the root of the unpacked GAE toolkit in your path.)

dev_appserver.py .

You'll see output something like this:

INFO     2008-08-06 02:51:26,336 appcfg.py] Server: appengine.google.com
INFO     2008-08-06 02:51:26,342 appcfg.py] Checking for updates to the SDK.
INFO     2008-08-06 02:51:26,444 appcfg.py] The SDK is up to date.
INFO     2008-08-06 02:51:26,534 dev_appserver_main.py] Running application pico on port 8080: http://localhost:8080

You can now surf to http://localhost:8080/admin/ using your browser. Here's a screen shot of the main screen, showing several articles. The top-most article is a draft; the rest are published. And here's the edit screen for the draft article:

From a stylistic viewpoint, these screens are really simple. However, making them look fancier and slicker is simply a matter of fiddling with the templates and the stylesheet; the Python code doesn't change.

The Markup Language

Rather than force the blogger (i.e., you or me) to enter HTML, I've chosen to use the reStructuredText (RST) markup language. Of course, this means the blog has to translate the RST text into HTML when someone wants to view the blog. We can either do that conversion when we save the article, or convert on the fly when someone visits the blog. Converting the markup when we save the article is more efficient, but it means we have to store the generated HTML and reconvert all previously saved articles whenever we change the templates or the style sheet. It's simpler to convert on the fly.
If this strategy ends up causing a performance problem, we can always go back later and add page caching.

Docutils

To support RST, the first thing we have to do is make the Docutils package available to our running code. The easiest way to do that is to visit the Docutils web site, download the source code, unpack it, and move the docutils subdirectory (and all its contents) into our blog directory. When we later upload the application to GAE, the Docutils code will get uploaded, too.

Docutils also looks for a roman.py file, which isn't present in the GAE Python environment. There's one in the Google App Engine source directory (which you downloaded); copy the roman.py file from there into the top directory of the blog.

Translation code

The code that actually translates RST to HTML is rather simple:

import os
from docutils.core import publish_parts

def rst2html(s):
    settings = {'config' : None}
    # Necessary, because otherwise docutils attempts to read a config file
    # via the codecs module, which doesn't work with AppEngine.
    os.environ['DOCUTILSCONFIG'] = ""
    parts = publish_parts(source=s,
                          writer_name='html4css1',
                          settings_overrides=settings)
    return parts['fragment']

The only wrinkle is the setting of the DOCUTILSCONFIG environment variable. I determined empirically that if you don't set that variable to an empty string, the Docutils package attempts to read a startup file via the codecs module, and the way it calls the codecs.open() method conflicts with how that method is defined in the GAE Python library. (GAE has replaced Python's file handling routines with routines of its own, and they're not always 100% compatible.)

Store this code in file rst.py. We'll then import it in our display code.

Create the Display Screens

Now we're ready to create the display screens. There are six views to support:

- Main shows the top n articles (where n is the value of MAX_ARTICLES_PER_PAGE in the defs.py file).
  This screen is the main blog screen--the one a visitor sees first.
- Show One Article shows a single article. It's used when someone clicks on the link for a single article.
- Show Articles by Tag shows all articles with a specific tag.
- Show Articles by Month shows all articles in a specified month.
- Show Archive lists the titles and dates of all articles in the blog.
- Not Found is a simple screen to display when an article or page isn't found.

We'll also add some query methods to the Article class as we go along.

Base Template

The simplest way to build these screens is to use Django template inheritance, which has the additional benefit of ensuring a consistent look. Most of the HTML goes into a base template, which defines the basic look and feel of the display pages, with various template substitutions. However, the base template also contains template code like the following:

<div id="articles_container">
  {% for article in articles %}
    {% block main %}
    {% endblock %}
  {% endfor %}
</div>

and this:

<div id="right-margin">
  {% block recent_list %}
  {% endblock %}
  {% block date_list %}
  {% endblock %}
</div>

The blocks can be filled in by other templates that inherit from this one.

blog.py

The handlers go in blog.py, which is similar to admin.py. There's an initialization section at the bottom that sets up the URL-to-handler mappings. Let's look at that first:

application = webapp.WSGIApplication([('/', FrontPageHandler),
                                      ('/tag/([^/]+)/*$', ArticlesByTagHandler),
                                      ('/date/(\d\d\d\d)-(\d\d)/?$', ArticlesForMonthHandler),
                                      ('/id/(\d+)/?$', SingleArticleHandler),
                                      ('/archive/?$', ArchivePageHandler),
                                      ('/rss2/?$', RSSFeedHandler),
                                      ('/atom/?$', AtomFeedHandler),
                                      ('/.*$', NotFoundPageHandler),
                                     ],
                                     debug=True)

AbstractPageHandler

At the top of the file, there's a base class that consolidates a lot of the common logic.
The most important method it contains is render_articles():

class AbstractPageHandler(request.BlogRequestHandler):
    def render_articles(self,
                        articles,
                        request,
                        recent,
                        template_name='show-articles.html'):
        url_prefix = 'http://' + request.environ['SERVER_NAME']
        port = request.environ['SERVER_PORT']
        if port:
            url_prefix += ':%s' % port

        self.augment_articles(articles, url_prefix)
        self.augment_articles(recent, url_prefix)

        last_updated = datetime.datetime.now()
        if articles:
            last_updated = articles[0].published_when

        self.adjust_timestamps(articles)
        last_updated = self.adjust_timestamp(last_updated)

        blog_url = url_prefix
        tag_path = '/' + defs.TAG_URL_PATH
        tag_url = url_prefix + tag_path
        date_path = '/' + defs.DATE_URL_PATH
        date_url = url_prefix + date_path
        media_path = '/' + defs.MEDIA_URL_PATH
        media_url = url_prefix + media_path

        template_variables = {'blog_name'    : defs.BLOG_NAME,
                              'blog_owner'   : defs.BLOG_OWNER,
                              'articles'     : articles,
                              'tag_list'     : self.get_tag_counts(),
                              'date_list'    : self.get_month_counts(),
                              'version'      : '0.3',
                              'last_updated' : last_updated,
                              'blog_path'    : '/',
                              'blog_url'     : blog_url,
                              'archive_path' : '/' + defs.ARCHIVE_URL_PATH,
                              'tag_path'     : tag_path,
                              'tag_url'      : tag_url,
                              'date_path'    : date_path,
                              'date_url'     : date_url,
                              'atom_path'    : '/' + defs.ATOM_URL_PATH,
                              'rss2_path'    : '/' + defs.RSS2_URL_PATH,
                              'media_path'   : media_path,
                              'media_url'    : media_url,
                              'recent'       : recent}

        return self.render_template(template_name, template_variables)

This method takes:

- a list of Article objects to be displayed
- the original incoming HTTP request
- a list of recent articles to display (which can be empty)
- the template name to use, which defaults to the show-articles.html template

render_articles() then puts together the list of template variables, renders the specified template, and returns the result.
All the display handlers will use this method, which is why it resides in the base class.

Another method we should examine is augment_articles(), also in the AbstractPageHandler class:

def augment_articles(self, articles, url_prefix, html=True):
    for article in articles:
        if html:
            try:
                article.html = rst2html(article.body)
            except AttributeError:
                article.html = ''
        article.path = '/' + defs.ARTICLE_URL_PATH + '/%s' % article.id
        article.url = url_prefix + article.path

This method renders the HTML for each article to be displayed (if requested), and computes the article's path and URL.

The base class also contains a few other methods used by render_articles():

- get_tag_counts() assembles the list of unique tags, associating an article count with each one. It also determines which CSS class to associate with each tag, based on the tag's relative frequency, for use when rendering the tag cloud; this information is returned in a list of TagCount objects. (TagCount is defined in blog.py. It's not shown here.)
- get_month_counts() returns a list of DateCount objects that record the number of articles in each unique month/year.
- get_recent() gets the most recent articles, making sure the list doesn't exceed the maximum specified in defs.TOTAL_RECENT.

(See the complete file in the source code for details.)

Not Found Page

Next, let's get the Not Found page out of the way. The template is very simple:

{% extends "base.html" %}

{% block main %}
<p class="article_title">Not Found</p>
<p>Sorry, but there's no such page here.</p>
{% endblock %}

It extends the base template and fills in the main block with a simple static message. We'll use this template in a couple of places.
The NotFoundPageHandler class is also simple:

class NotFoundPageHandler(AbstractPageHandler):
    def get(self):
        self.response.out.write(self.render_articles([],
                                                     self.request,
                                                     [],
                                                     'not-found.html'))

Recall that this handler is the last, catch-all handler in the list of URLs in blog.py, so it's automatically invoked if the incoming request doesn't match any of the preceding URLs. That's all we have to do to install a custom "not found" handler.

Main Page

The main screen requires a template and a handler. With the base template and the AbstractPageHandler class in place, both are pretty simple. Here's the template, which resides in show-articles.html:

{% extends "base.html" %}

{% block main %}
{% for article in articles %}
  {% include "article.html" %}
  </td></tr></table>
{% endfor %}
{% endblock %}

{% block recent_list %}
{% if recent %}
<b>Recent:</b>
<ul>
  {% for article in recent %}
  <li><a href="{{ article.path }}">{{ article.title }}</a>
  {% endfor %}
</ul>
{% endif %}
{% endblock %}

{% block date_list %}
{% if date_list %}
<b>By month:</b>
<ul>
  {% for date_count in date_list %}
  <li><a href="{{ date_path }}/{{ date_count.date|date:"Y-m" }}/">
      {{ date_count.date|date:"F, Y" }}</a> ({{ date_count.count }})
  {% endfor %}
</ul>
{% endif %}
{% endblock %}

{% block tag_list %}
{% if tag_list %}
<div id="tag-cloud">
  {% for tag_count in tag_list %}
  <a class="{{ tag_count.css_class }}"
     href="{{ tag_path }}/{{ tag_count.tag }}/">
     {{ tag_count.tag }}({{ tag_count.count }})</a>
  {% if not forloop.last %},{% endif %}
  {% endfor %}
</div>
{% endif %}
{% endblock %}

The template extends the base template, and then just fills in the HTML for each block that's defined in the base template.
Note, in particular, this block:

{% for article in articles %}
  {% include "article.html" %}
{% endfor %}

The actual template that displays an article resides in yet another file, so it can be re-used in different templates. show-articles.html includes it, repeatedly, in a loop that traverses the list of articles to be displayed. The article.html template looks like this:

<table width="100%" border="0" cellspacing="0" cellpadding="0">
  <tr class="article_title_line">
    <td class="article_title">{{ article.title }}</td>
    <td class="timestamp">
      {{ article.published_when|date:"j F, Y \a\t g:i A" }}
    </td>
  </tr>
  <tr>
    {% if article.draft %}
    <td colspan="2" class="article-body-draft">
    {% else %}
    <td colspan="2" class="article-body">
    {% endif %}
    {{ article.html }}
    </td>
  </tr>
  <tr>
    <td colspan="2" class="article-footer">
      <a href="{{ article.path }}" class="reference">Permalink</a> |
      Tags: {{ article.tags|join:", " }}</td>
  </tr>
</table>

It's relatively easy to understand: it assumes the existence of a variable called article that contains the article to be displayed.

The handler for the main page is even simpler:

class FrontPageHandler(AbstractPageHandler):
    def get(self):
        articles = Article.published()
        if len(articles) > defs.MAX_ARTICLES_PER_PAGE:
            articles = articles[:defs.MAX_ARTICLES_PER_PAGE]
        self.response.out.write(self.render_articles(articles,
                                                     self.request,
                                                     self.get_recent()))

It gets the list of published articles, trims it down to the maximum number of articles on the main page, renders the articles to HTML, and dumps the result to the App Engine HTTP response object.

If you did not leave the dev_appserver running, bring it up again. Then, connect to http://localhost:8080/, and check out your main page. It should look something like this:

Show One Article

This screen shows a single article. It's invoked when a reader selects a single post (e.g., http://www.example.org/blog/id/5).
It re-uses the same show-articles.html template, but with just one article in the list. The handler looks like this:

class SingleArticleHandler(AbstractPageHandler):
    def get(self, id):
        article = Article.get(int(id))
        if article:
            template = 'show-articles.html'
            articles = [article]
        else:
            template = 'not-found.html'
            articles = []

        self.response.out.write(self.render_articles(articles=articles,
                                                     request=self.request,
                                                     recent=self.get_recent(),
                                                     template_name=template))

It attempts to retrieve the article with the specified ID. If the article exists in the database, then the code puts it in a single-element list and tells render_articles() to use the show-articles.html template to display it. If the article does not exist, the code uses the not-found.html template we defined earlier to display the generic "not found" screen.

Note that this version of the get() method accepts an id parameter. Where does that come from? Recall the configuration for this handler in the WSGIApplication object at the bottom of the script:

application = webapp.WSGIApplication(
    [ ...
      ('/id/(\d+)/?$', SingleArticleHandler),
      ...
    ],

Note that the regular expression, '/id/(\d+)/?$', contains a group, (\d+). Like Django, GAE maps each group into a parameter to the get() or post() method. In this case, the string that matches the regular expression group (the article's numeric ID) is passed as the first parameter to the get() method.

Show Articles By Tag

The ArticlesByTagHandler class again re-uses the show-articles.html template:

class ArticlesByTagHandler(AbstractPageHandler):
    def get(self, tag):
        articles = Article.all_for_tag(tag)
        self.response.out.write(self.render_articles(articles,
                                                     self.request,
                                                     self.get_recent()))

Note, however, that it's calling a class method called all_for_tag() in the Article class. We have to extend Article to support this query method.
That method turns out to be trivial:

@classmethod
def all_for_tag(cls, tag):
    return Article.published_query()\
                  .filter('tags = ', tag)\
                  .order('-published_when')\
                  .fetch(FETCH_THEM_ALL)

My original version of this method loaded all published articles and manually searched through their tags. However, in an email to the Google App Engine mailing list, Bill Katz pointed me to something I missed in the GAE docs:

    In a query, comparing a list property to a value performs the test against the list members: list_property = value tests whether the value appears anywhere in the list.

This is convenient and more efficient than my original solution.

Show Articles By Month

By now, you should be getting the hang of this. Next, we have to write a handler that'll produce a page of posts for a given month. As with the tag handler, the month handler is trivial:

class ArticlesForMonthHandler(AbstractPageHandler):
    def get(self, year, month):
        articles = Article.all_for_month(int(year), int(month))
        self.response.out.write(self.render_articles(articles,
                                                     self.request,
                                                     self.get_recent()))

Again, though, it calls an Article class method we have yet to write:

@classmethod
def all_for_month(cls, year, month):
    start_date = datetime.date(year, month, 1)
    if start_date.month == 12:
        next_year = start_date.year + 1
        next_month = 1
    else:
        next_year = start_date.year
        next_month = start_date.month + 1
    end_date = datetime.date(next_year, next_month, 1)
    query = Article.published_query()\
                   .filter('published_when >=', start_date)\
                   .filter('published_when <', end_date)\
                   .order('-published_when')
    return query.fetch(FETCH_THEM_ALL)

This method chains query filters to the query returned by Article.published_query(). The filters ensure that the returned articles are the ones published within the specified year and month.

Show Archive

This page shows the titles of all published articles, in reverse chronological order.
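Before building the archive page, one aside on all_for_month() above: its month-boundary arithmetic is easy to get wrong (December must roll over into January of the next year), and it can be checked in isolation. Here's a minimal, standalone sketch of the same logic; the helper name month_range is mine, not part of the blog's code:

```python
import datetime

def month_range(year, month):
    """Return (start, end) dates bracketing a month; end is exclusive."""
    start = datetime.date(year, month, 1)
    if month == 12:
        # December rolls over into January of the next year.
        end = datetime.date(year + 1, 1, 1)
    else:
        end = datetime.date(year, month + 1, 1)
    return start, end

# A query can then select articles with start <= published_when < end.
print(month_range(2008, 12))
print(month_range(2008, 7))
```

Using a half-open interval (>= start, < end) matches the filters in all_for_month() and avoids having to compute the last day of the month.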
I chose to make this page even simpler than the other pages: it lacks the tag cloud, recent posts, and posts-by-month sections in the margin. The template is trivial:

{% extends "base.html" %}

{% block main %}
<span class="heading">Complete Archive:</span>
{% if articles %}
<ul>
  {% for article in articles %}
  <li><a href="{{ article.path }}">{{ article.title }}</a>
      ({{ article.timestamp|date:"j F, Y" }})
  {% endfor %}
</ul>
{% else %}
<p>This blog is empty. (Someone want to fix that?)
{% endif %}
{% endblock %}

And the handler is, once again, trivial:

class ArchivePageHandler(AbstractPageHandler):
    def get(self):
        articles = Article.published()
        self.response.out.write(self.render_articles(articles,
                                                     self.request,
                                                     [],
                                                     'archive.html'))

Note that ArchivePageHandler passes an empty list for the "recent" posts (since it won't be used) and the archive template. Here's what the archive page looks like with our two articles in the archive:

RSS Feed

Any decent blog supplies an RSS feed, so we should do that, too. Of course, that's simply a matter of writing a template and a small handler.
By now, the handler should look pretty familiar:

class RSSFeedHandler(AbstractPageHandler):
    def get(self):
        articles = Article.published()
        self.response.headers['Content-Type'] = 'text/xml'
        self.response.out.write(self.render_articles(articles,
                                                     self.request,
                                                     [],
                                                     'rss2.xml'))

The template is simple, too:

<?xml version="1.0" encoding="utf-8" ?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>{{ blog_name }}</title>
    <link>{{ blog_url }}</link>
    <description>{{ blog_name }}</description>
    <pubDate>{{ last_updated|date:"D, d M Y H:i:s T" }}</pubDate>
    {% for article in articles %}
    <item>
      <title>{{ article.title }}</title>
      <link>{{ article.url }}</link>
      <guid>{{ article.url }}</guid>
      <pubDate>{{ article.timestamp|date:"D, d M Y H:i:s T" }}</pubDate>
      <description>{{ article.html|escape }}</description>
      <author>{{ blog_owner }}</author>
    </item>
    {% endfor %}
  </channel>
</rss>

Deploy the Application

Test the application using dev_appserver; when you think it's ready, it's time to deploy it. From within the blog's top-level source directory, run this command:

appcfg.py update .

That'll upload your application to your Google App Engine account. If the application name is picoblog, then your live application will appear at http://picoblog.appspot.com/.

Handling Static Files such as Images

Of course, a blog should be able to display images. Since our new blog software doesn't support image upload, how can we use images? The answer is simple, if slightly clunky: put any images you want to use in your picoblog/static directory. Then, use appcfg.py to update the live application; appcfg.py will copy those images up to the Google App Engine server, where you can use them.

For instance, assume you have a picture called foo.png that you want to use in a blog article. Here's how you might deploy it:

$ cd picoblog
$ cp ~/foo.png static
$ appcfg.py update .
Then, you can use the reStructuredText .. image:: directive to pull it into an article:

.. image:: /static/foo.png
   :width: 180
   :height: 150

Previewing a Draft

There's one last feature to add: the ability to preview a draft article without publishing it.

With this software in place, you can already do that by using a separate browser window or frame. For instance, suppose you're editing a new article, and its ID happens to be 53. In another window, you can surf to that ID directly, using the URL http://picoblog.appspot.com/id/53/.

But it might also be nice to preview the article in the same window where you're doing your editing. That turns out to be trivial to implement: merely go back to `The Edit Screen Template`_, and add these lines right after the end of the form:

<h1 class="admin-page-title">Preview:</h1>
<div style="border-top: 1px solid black">
  <iframe src="/id/{{ article.id }}"
          width="97%" scrolling="auto" height="750" frameborder="0">
  </iframe>
</div>

Now, you'll always have a preview frame underneath the edit controls.

Enhancements

Now that you have the basic blog in place, you can start to add other enhancements, such as:

- Support for Pygments syntax coloring.
- Support for Google Analytics, which is useful for analyzing logs and traffic.
- Image uploading.
- A more individual theme.
- etc.

In Closing

In this (long) tutorial, we built a simple blog using Python and Google's App Engine. The code represented in this article is very similar to the code that runs this very blog; it's certainly effective, even if it lacks certain bells and whistles right now. With any luck, you now have a better understanding of what it means to build an application on App Engine.

Feedback

I welcome feedback. Feel free to submit a comment, below, or drop me an email with comments or corrections. I'll update this article with any good stuff I receive.

Related Software

Update: 28 November, 2010

Related Brizzled Articles

Additional Reading

- Experimenting with Google App Engine, by Bret Taylor.
- Building Scalable Web Applications with Google App Engine (presentation), by Google's Brett Slatkin.
- Google App Engine documentation
Manipulating sys.modules

You can manipulate the modules cache directly, making modules available or unavailable as you wish:

>>> import sys
>>> import ham
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ham

# Make the 'ham' module available -- as a non-module object even!
>>> sys.modules['ham'] = 'ham, eggs, sausages and spam.'
>>> import ham
>>> ham
'ham, eggs, sausages and spam.'

# Now remove it again.
>>> sys.modules['ham'] = None
>>> import ham
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ham

This works even for modules that are available, and to some extent for modules that are already imported:

>>> import os

# Stop future imports of 'os'.
>>> sys.modules['os'] = None
>>> import os
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named os

# Our old imported module is still available.
>>> os
<module 'os' from '/usr/lib/python2.5/os.pyc'>

As the last line shows, changing sys.modules only affects future import statements, not past ones, so if you want to affect other modules it's important to make these changes before you give them a chance to try and import the modules -- typically, before you import them.

None is a special value in sys.modules, used for negative caching (indicating the module was not found the first time, so there's no point in looking again). Any other value will be the result of the import operation -- even when it is not a module object. You can use this to replace modules with objects that behave exactly like you want. Deleting the entry from sys.modules entirely causes the next import to do a normal search for the module, even if it was already imported before.
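The point that "any other value will be the result of the import operation" is the basis of a common stubbing trick: install a stand-in module before anything imports the real thing. Here's a small, self-contained sketch; the module name mathstub is invented for this example:

```python
import sys
import types

# Build a stand-in module object; 'mathstub' is a made-up name for this example.
stub = types.ModuleType('mathstub')
stub.answer = 42

# Install it in the cache *before* anything imports 'mathstub'.
sys.modules['mathstub'] = stub

# The import is now satisfied straight from sys.modules -- no file search.
import mathstub

print(mathstub is stub)   # True
print(mathstub.answer)    # 42
```

This is the same mechanism test code can use to substitute fakes for heavyweight dependencies; deleting the sys.modules entry afterwards restores normal import behavior.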
When trying to refresh a materialized view based on a query that uses an MSSQL table (using the SQL Server gateway), I ran into the following error: ORA-12008: error in materialized view refresh path ORA-01400: cannot insert NULL into (%s) ORA-02063: preceding line from MSSQLDB1 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2537 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2743 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2712 After some investigation I found this was caused by a "NOT NULL" constraint on the mview table. A small scenario: STEP 1: Create a Materialized View SQL> CREATE MATERIALIZED VIEW COUNTY_IS_NULL 2 REFRESH FORCE ON DEMAND 3 AS 4 SELECT a.agreementnum 5 ,a.county county_id 6 ,a.createddate agree_cre_date 7 FROM bmssa.EXU_AGREEMENTTABLE@mssqldb1 a 8 WHERE a.agreementnum = ' E00000001'; Materialized view created. STEP 2: View Content of the Materialized View (County_Id is null!) 
SQL> SQL> SELECT * FROM county_is_null; AGREEMENTN COUNTY_ID AGREE_CRE ---------- ---------- --------- E00000001 05-JUL-07   SQL> SQL> SELECT NVL(county_id, 0) FROM county_is_null; NVL(COUNTY ---------- 0 STEP 3: Refresh the Materialized View SQL> SQL> BEGIN 2 dbms_mview.refresh('COUNTY_IS_NULL'); 3 END; 4 / BEGIN * ERROR at line 1: ORA-12008: error in materialized view refresh path ORA-01400: cannot insert NULL into (%s) ORA-02063: preceding line from MSSQLDB1 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2537 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2743 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2712 ORA-06512: at line 2 STEP 4: Disable the NOT NULL Constraint on COUNTY_ID SQL> SQL> SELECT constraint_name, search_condition FROM user_constraints c WHERE c.table_name='COUNTY_IS_NULL'; CONSTRAINT_NAME SEARCH_CONDITION ------------------------------ ------------------------------------------------- SYS_C0029024 "AGREEMENTNUM" IS NOT NULL SYS_C0029025 "COUNTY_ID" IS NOT NULL SYS_C0029026 "AGREE_CRE_DATE" IS NOT NULL SQL> SQL> ALTER TABLE county_is_null MODIFY CONSTRAINT sys_c0029025 DISABLE; Table altered. STEP 5: Refresh the Materialized View SQL> SQL> BEGIN 2 dbms_mview.refresh('COUNTY_IS_NULL'); 3 END; 4 / PL/SQL procedure successfully completed. SQL>
Phoen1x Re: [HOW TO] adesklets: configuring desklets I need a hand: I wanted to install the Weather Forecast and System Monitor adesklets. The problem is that I hadn't seen the tutorial for installing adesklets. I fiddled with Weather for a long while because it wouldn't launch at startup. I then followed the tutorial, but now it won't display any more; even when I run the test, it no longer shows up. Can you help me, please? When it launches, it appears for a fraction of a second and then disappears. Last edited by Phoen1x (03/02/2006 at 08:40) Offline Haazheel Re: [HOW TO] adesklets: configuring desklets "When it launches, it appears for a fraction of a second and then disappears" Normally you should get a message in the launch console; copy-paste it for us if you can. Otherwise copy-paste your config.txt so we can see whether something is wrong. Offline itoine Re: [HOW TO] adesklets: configuring desklets Quick question in passing: why adesklets and not gdesklets? Offline Haazheel Re: [HOW TO] adesklets: configuring desklets Quick answer: http://forum.ubuntu-fr.org/viewtopic.php?id=19135 Offline toma222 Re: [HOW TO] adesklets: configuring desklets I was about to say the same. I haven't retested the newer versions of gdesklets, but back when the tutorial was written there was no contest. In my opinion the main strength of adesklets is its low resource consumption. Its only small flaw is that it takes a little more time to learn, but the result is worth it, and it's a pleasure for tinkerers. Offline Phoen1x Re: [HOW TO] adesklets: configuring desklets So in fact, I've noticed that the desklet launches at startup and appears; then the desktop and the icons appear in turn, hiding the desklet. And when I end my session, the graphical environment shuts down and the program is still there. 
It then shuts down in turn. It's as if the desklet were behind my desktop wallpaper, not the other way round. The config.txt file — is it the Weather Forecast one you want? What is the launch console? I've only been on Linux for 5 days ^^ Thanks for your help. Last edited by Phoen1x (03/02/2006 at 16:39) Offline toma222 Re: [HOW TO] adesklets: configuring desklets Hi, I think the problem is that when you compiled adesklets you didn't pass the --enable-legacy-fake-root-window-detection option to ./configure (see: http://forum.ubuntu-fr.org/viewtopic.php?id=19135&p=1). Last edited by toma222 (03/02/2006 at 16:43) Offline Phoen1x Re: [HOW TO] adesklets: configuring desklets Er, normally no; I think I followed the tutorial. But now that I've found out there was a proper tutorial, should I redo the installation? How do I remove the packages? Last edited by Phoen1x (03/02/2006 at 17:39) Offline toma222 Re: [HOW TO] adesklets: configuring desklets To uninstall it, do: cd adesklets-0.5.0 sudo make uninstall Otherwise, you can also try launching it this way: adesklets --nautilus Offline Phoen1x Re: [HOW TO] adesklets: configuring desklets Well, I uninstalled and reinstalled; it launches and positions itself in the right place. But the transparency isn't there: the transparent part is replaced by a black box. What do I do to get transparency instead of black? It's Weather Forecast. I also launched System Monitor, but it doesn't run when I test it. What should I do? That's all my questions for now. Thanks again for your help. Last edited by Phoen1x (03/02/2006 at 18:55) Offline toma222 Re: [HOW TO] adesklets: configuring desklets A quick question: which desktop environment are you using? 
Offline Phoen1x Re: [HOW TO] adesklets: configuring desklets GNOME, with the 'chaotic' theme. It was running normally, with the transparency and everything, but I didn't know how to launch it directly. And I made some bad moves, deleting the ./adesklets. Then I found the post that explained how to install the package correctly. But now I have a few problems. Last edited by Phoen1x (03/02/2006 at 18:57) Offline toma222 Re: [HOW TO] adesklets: configuring desklets Ah, that's odd then. Is there anything unusual in your configuration? For System Monitor, did you install libstatgrab and pystatgrab correctly? Last edited by toma222 (03/02/2006 at 18:59) Offline Phoen1x Re: [HOW TO] adesklets: configuring desklets No, nothing special. Offline Phoen1x Re: [HOW TO] adesklets: configuring desklets Well, I installed them during my first installation of adesklets. Since I didn't remove the packages, I assume they are still fine. Offline toma222 Re: [HOW TO] adesklets: configuring desklets Hmm, this is getting sticky. Did you get any unusual messages during compilation? Have you tried other desklets (Yab, for example)? Offline Phoen1x Re: [HOW TO] adesklets: configuring desklets No, I don't think so; I hadn't tried any other desklets before. Offline toma222 Re: [HOW TO] adesklets: configuring desklets Try Yab (it generally works without any particular configuration). If you get the same problem, I admit I'm stumped. Offline Enelos Re: [HOW TO] adesklets: configuring desklets Good evening everyone. So, I've installed aDesklets 0.5.0 and now I'm struggling to get System Monitor working the way I want. 
In fact I can't get the transparency to work: I did try making empty images for the background, and tinkering with the line id0 = {'background colour': (210, 210, 210, 130), but no, it won't cooperate; I still have this wretched black frame around my desklet that I can't get rid of... (the same thing happens with Weather Forecast, by the way). My config.txt: id0 = {'background colour': (210, 210, 210, 130), 'background images': ['images/shared/bg_top.png', 'images/shared/bg_middle.png', 'images/shared/bg_bottom.png'], 'meters': [('CPUMeter', {'horizontal padding': 8, 'icon': 'images/icons/cpu.png', 'krell': 'images/shared/krell.png', 'meter font name': 'Vera', 'meter font size': 8, 'trough': 'images/shared/trough.png', 'vertical padding': 8}), ('MemoryMeter', {'horizontal padding': 8, 'icon': 'images/icons/memory.png', 'krell': 'images/shared/krell.png', 'meter font name': 'Vera', 'meter font size': 8, 'trough': 'images/shared/trough.png', 'update speed': 10, 'vertical padding': 8}), ('SwapMeter', {'horizontal padding': 8, 'icon': 'images/icons/swap.png', 'krell': 'images/shared/krell_blue.png', 'meter font name': 'Vera', 'meter font size': 8, 'trough': 'images/shared/trough.png', 'update speed': 30, 'vertical padding': 8}), ('DiskSpaceMeter', {'horizontal padding': 8, 'icon': 'images/icons/diskfree.png', 'krell': 'images/shared/krell_blue_small.png', 'meter font name': 'Vera', 'meter font size': 6, 'trough': 'images/shared/trough_small.png', 'update speed': 60, 'vertical padding': 2}), ('TemperatureMeter', {'file': '/proc/acpi/thermal_zone/THRM/temperature', 'horizontal padding': 8, 'icon': 'images/icons/temperature.png', 'krell': 'images/shared/krell_red.png', 'max_temp': 100.0, 'meter font name': 'Vera', 'meter font size': 8, 'trough': 'images/shared/trough.png', 'update_speed': 30, 'vertical padding': 8})], 'text colour': (0, 0, 0, 200), 'update speed': 1} If anyone has an idea... I'm all ears. 
Offline toma222 Re: [HOW TO] adesklets: configuring desklets Hi, apparently you have the same problem as Phoen1x. For the moment I can't see where the problem could be coming from. Offline Phoen1x Re: [HOW TO] adesklets: configuring desklets Yes, that's exactly it; I too have this black frame where the transparency should be. I thought it might be an image-format issue, but maybe not, lol. In any case, my System Monitor doesn't launch at all. I have no file in the thermal_zone directory. Is that normal? Last edited by Phoen1x (03/02/2006 at 23:05) Offline Enelos Re: [HOW TO] adesklets: configuring desklets Actually, I tried the 0.4.8 version from the Ubuntu repositories, and with that version the transparency works... A bug in 0.5.0, or a configuration problem at compile time? Offline Haazheel Re: [HOW TO] adesklets: configuring desklets "A bug in 0.5.0, or a configuration problem at compile time?" A compile problem, given that I have transparency with 0.5. If it can help, here is my config.txt (it's a desktop PC, not a laptop): id0 = {'background colour': (210, 210, 210, 130), 'background images': ['images/shared/bg_top.png', 'images/shared/bg_middle.png', 'images/shared/bg_bottom.png'], 'meters': [('CPUMeter', {'horizontal padding': 8, 'icon': 'images/icons/cpu.png', 'krell': 'images/shared/krell.png', 'meter font name': 'VeraBd', 'meter font size': 8, 'trough': 'images/shared/trough.png', 'vertical padding': 16}), ('MemoryMeter', {'horizontal padding': 8, 'icon': 'images/icons/memory.png', 'krell': 'images/shared/krell.png', 'meter font name': 'VeraBd', 'meter font size': 8, 'trough': 'images/shared/trough.png', 'update speed': 10, 'vertical padding': 8}), ('NetworkMeter', {'horizontal padding': 8, 'icon': 'images/icons/network.png', 'interface name': 'eth0', 'krell': ['images/shared/krell_green_small.png', 'images/shared/krell_red_small.png'], 'max down speed': 400, 'max up speed': 150, 'meter font name': 'Vera', 'meter font size': 7, 'trough': 
'images/shared/trough.png', 'vertical padding': 0}), ('DiskSpaceMeter', {'horizontal padding': 8, 'icon': 'images/icons/diskfree.png', 'krell': 'images/shared/krell_blue_small.png', 'meter font name': 'Vera', 'meter font size': 8, 'trough': 'images/shared/trough_small.png', 'update speed': 60, 'vertical padding': 0})], 'text colour': (0, 0, 0, 200), 'update speed': 1} Offline Phoen1x Re: [HOW TO] adesklets: configuring desklets So I've just reinstalled Ubuntu, installed adesklets 0.5.0 and Weather Forecast. I still have the transparency problem. How do I install System Monitor? Here are the errors it gives me when I try to launch it: phoenix@MGD:~/.desklets/SystemMonitor-0.1.3$ ls config.txt COPYING images SystemMonitor.py config.txt~ example_theme.tar.gz README phoenix@MGD:~/.desklets/SystemMonitor-0.1.3$ ./SystemMonitor.py Do you want to (r)egister this desklet or to (t)est it? t Now testing... ============================================================ If you do not see anything (or just an initial flicker in the top left corner of your screen), try `--help', and see the FAQ: `info adesklets'. ============================================================ Traceback (most recent call last): File "./SystemMonitor.py", line 1107, in ? 
EventHandler(dirname(__file__)).pause() File "./SystemMonitor.py", line 974, in __init__ adesklets.Events_handler.__init__(self) File "usr/lib/python2.4/site-packages/adesklets/events_handler.py", line 157, in __init__ File "./SystemMonitor.py", line 1004, in ready self.meters[-1].create((8,tmp_height), self.basedir, meter[1]) File "./SystemMonitor.py", line 631, in create AbstractMeter.create(self, location, basedir, config_dictionary) File "./SystemMonitor.py", line 480, in create self._create_fonts(config_dictionary) File "./SystemMonitor.py", line 329, in _create_fonts self._meter_font = adesklets.load_font(self._meter_font_name + "/" + \ File "/usr/lib/python2.4/commands.py", line 706, in load_font File "usr/lib/python2.4/site-packages/adesklets/commands_handler.py", line 103, in out adesklets.error_handler.ADESKLETSError: adesklets command error - font 'VeraBd/8' could not be loaded phoenix@MGD:~/.desklets/SystemMonitor-0.1.3$ Last edited by Phoen1x (04/02/2006 at 15:11) Offline Enelos Re: [HOW TO] adesklets: configuring desklets Phoen1x, I had the same error at first: adesklets command error - font 'VeraBd/8' could not be loaded. It can't find the VeraBd font. So go into System Monitor's config.txt and replace every VeraBd with Vera. That font it should find without trouble. Afterwards it worked for me; I no longer got this error. Offline
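The fix suggested above can also be applied in one go. Here is a hedged sketch (the config path is a guess based on the shell prompt in the thread; adjust it to your own desklet directory) that backs up the config and rewrites every 'VeraBd' font reference to 'Vera':

```python
from pathlib import Path

def fix_fonts(config_text, missing="VeraBd", fallback="Vera"):
    # 'VeraBd' only appears inside quoted font names in these configs,
    # so a plain string replacement is safe enough here.
    return config_text.replace(missing, fallback)

# Hypothetical location of the SystemMonitor config, per the thread.
config = Path.home() / ".desklets/SystemMonitor-0.1.3/config.txt"
if config.exists():
    text = config.read_text()
    config.with_suffix(".txt.bak").write_text(text)  # keep a backup first
    config.write_text(fix_fonts(text))
```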
Using Python PIL, I'm trying to adjust the hue of a given image. I'm not very comfortable with the jargon of graphics, so what I mean by "adjusting hue" is doing the Photoshop operation called "Hue/Saturation": changing the color of the image uniformly, as shown below: Original: With hue adjusted to +180 (red): With hue adjusted to -78 (green): FYI, Photoshop uses a scale of -180 to +180 for this hue setting (where -180 equals +180), which presumably corresponds to the HSL hue scale (expressed in 0-360 degrees). What I'm looking for is a function that, given a PIL image and a float hue within [0, 1] (or an int within [0, 360], it doesn't matter), returns the image with its hue shifted by hue as in the example above. What I've done so far is ridiculous and obviously doesn't give the desired result. It just half-blends my original image with a color-filled layer. import Image im = Image.open('tweeter.png') layer = Image.new('RGB', im.size, 'red') # "hue" selection is done by choosing a color... output = Image.blend(im, layer, 0.5) output.save('output.png', 'PNG') (Please-don't-laugh-at-) result: Thanks in advance! Solution: here is unutbu's code, updated so it fits exactly what I've described. import Image import numpy as np import colorsys rgb_to_hsv = np.vectorize(colorsys.rgb_to_hsv) hsv_to_rgb = np.vectorize(colorsys.hsv_to_rgb) def shift_hue(arr, hout): r, g, b, a = np.rollaxis(arr, axis=-1) h, s, v = rgb_to_hsv(r, g, b) h = hout r, g, b = hsv_to_rgb(h, s, v) arr = np.dstack((r, g, b, a)) return arr def colorize(image, hue): """ Colorize PIL image `original` with the given `hue` (hue within 0-360); returns another PIL image. """ img = image.convert('RGBA') arr = np.array(np.asarray(img).astype('float')) new_img = Image.fromarray(shift_hue(arr, hue/360.).astype('uint8'), 'RGBA') return new_img
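The per-pixel math behind this can be sketched with only the standard library (no PIL or NumPy needed). Note one difference from the solution above: `shift_hue` there *sets* the hue outright (a colorize), whereas the Photoshop-style adjustment described in the question *adds* an offset to each pixel's hue, which is what this hypothetical helper does:

```python
import colorsys

def shift_pixel_hue(r, g, b, degrees):
    """Shift the hue of one 0-255 RGB pixel by the given angle in degrees."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h = (h + degrees / 360.0) % 1.0   # hue wraps around the color wheel
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return int(r2 * 255), int(g2 * 255), int(b2 * 255)

print(shift_pixel_hue(255, 0, 0, 120))  # pure red shifted 120 degrees -> green
```

Mapping this function over every pixel reproduces the hue-rotation effect, just far more slowly than the vectorized NumPy version.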
I'm using Cloud9 (which from what I've read uses Python 2.6 currently, not Python 3) to write a Django app. I'm trying to read a CSV file with DictReader and use each column in the CSV to create a new instance of a model and populate the model fields. views.py class GenerateFromCSV(CreateView): model = group template_name = "my_app/csv_generate.html" def form_valid(self, form): new_group = form.save() the_csv = open(new_group.group_csv, 'rbU') fieldnames = ['c_type', 'f_name', 'q_type', 'ans', 'n_questions', 'bucket'] data_file = csv.DictReader(the_csv, fieldnames = fieldnames, delimiter=',', dialect=csv.excel) for row in data_file: new_card = Card( name = 'card', card_type = row['c_type'], file_name = row['f_name'], question_type = row['q_type'], answer = row['ans'], num_questions = row['n_questions'], bucket = row['bucket'], exam = new_exam) new_card.save() models.py class Group(models.Model): name = models.CharField(max_length=255, blank = True) subject = models.CharField(max_length=255, choices = SUBJECT, blank = True) num_questions = models.IntegerField(default=0, blank = True, null = True) group_csv = models.FileField(upload_to='csv', blank = True, null = True) def __unicode__(self): return self.name class Card(models.Model): name = models.CharField(max_length=255, blank = True) #ordered same as column order in CSV card_type = models.CharField(max_length=255, choices = CARDTYPE, blank = True) file_name = models.CharField(max_length=255, blank = True) question_type = models.IntegerField(default = 0, blank = True, null = True) answer = models.IntegerField(max_length = 1, choices = ANSWERS, blank = True, null = True) num_questions = models.IntegerField(default = 0, blank = True, null = True) bucket = models.CharField(max_length=255, blank = True) exam = models.ForeignKey(Exam) def __unicode__(self): return self.name or 'card' With the code as it is above, I get a TypeError (coercing to Unicode: need string or buffer, FieldFile found) when I call open() on the 
CSV. If I remove the call to open(), I get the error: 'new-line character seen in unquoted field - do you need to open the file in universal-newline mode?' My CSV is in the format (not every column contains data in every row): 3,the_file_name.png,0,"00001",,Equations What is the correct syntax for this? Edit: Here's my stacktrace: Traceback: File "/usr/libexec/openshift/cartridges/c9-0.1/root/python2.6.6/site-packages/Django-1.5-py2.6.egg/django/core/handlers/base.py" in get_response 115. response = callback(request, *callback_args, **callback_kwargs) File "/usr/libexec/openshift/cartridges/c9-0.1/root/python2.6.6/site-packages/Django-1.5-py2.6.egg/django/views/generic/base.py" in view 68. return self.dispatch(request, *args, **kwargs) File "/usr/libexec/openshift/cartridges/c9-0.1/root/python2.6.6/site-packages/Django-1.5-py2.6.egg/django/views/generic/base.py" in dispatch 86. return handler(request, *args, **kwargs) File "/usr/libexec/openshift/cartridges/c9-0.1/root/python2.6.6/site-packages/Django-1.5-py2.6.egg/django/views/generic/edit.py" in post 199. return super(BaseCreateView, self).post(request, *args, **kwargs) File "/usr/libexec/openshift/cartridges/c9-0.1/root/python2.6.6/site-packages/Django-1.5-py2.6.egg/django/views/generic/edit.py" in post 165. return self.form_valid(form) File "/var/lib/stickshift/52a55ef4e0b8cde0ff000036/app-root/data/705411/zamrdjango/zamr/views.py" in form_valid 35. with new_exam.exam_csv.open('rbU') as the_csv: Exception Type: AttributeError at /import/ Exception Value: 'NoneType' object has no attribute '__exit__'
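The TypeError comes from passing a Django FieldFile to the built-in open(), which expects a path string. One hedged way around both that and the newline complaint (field names follow the question; the parsing helper is hypothetical) is to read the field's bytes yourself and hand csv.DictReader a normalized in-memory stream, which also tolerates the bare-\r line endings that trip csv's universal-newline check:

```python
import csv
import io

def parse_cards(raw_bytes, fieldnames):
    """Parse CSV bytes into dicts, tolerating \r, \n or \r\n line endings."""
    text = raw_bytes.decode("utf-8")
    # splitlines() normalizes every newline convention before csv sees it
    normalized = "\n".join(text.splitlines())
    return list(csv.DictReader(io.StringIO(normalized), fieldnames=fieldnames))

fieldnames = ['c_type', 'f_name', 'q_type', 'ans', 'n_questions', 'bucket']
raw = b'3,the_file_name.png,0,"00001",,Equations\r'
rows = parse_cards(raw, fieldnames)
print(rows[0]['f_name'])
```

In the view this would be called as something like `parse_cards(new_group.group_csv.read(), fieldnames)` instead of the open() call; that detail is an assumption, since the question's model stores the upload in `group_csv`.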
What are the criteria by which you'll be judging whether it's feasible or not? It certainly is possible. The QProcess class provides everything that you would need for running and interacting with external processes inside a Qt application. At its core, it can do everything that subprocess can do (albeit less conveniently). Here's a contrived usage example: button = QPushButton('start') textedit = QTextEdit() process = QProcess() def on_clicked(): process.readyReadStandardOutput.connect(read_ready) process.start('/bin/sh', ('-c', "while /bin/true; do echo hello world ; sleep 1; done")) def read_ready(): chunk = process.readAllStandardOutput() textedit.append(str(chunk)) button.clicked.connect(on_clicked) Since you're still at the planning stage, why not consider a tool such as zenity for the GUI part? It could save you a lot of work. Getting a list of checkboxes and sending the output of a command to a textarea becomes a matter of: parameters=$( zenity --list --text "Test parameters:" \ --checklist --column "Check" --column "Parameter" \ TRUE "One" TRUE "Two" TRUE "Three" FALSE "Four" \ --separator=":"); # parameters -> One:Two:Three ./instrument-test.py $parameters | zenity --text-info Best of luck with your project!
I have an Atom feed generator for my blog, which runs on AppEngine/Python. I use the Django 1.2 template engine to construct the feed. My template looks like this: <?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en" xml:base="http://www.example.org"> <id>urn:uuid:4FC292A4-C69C-4126-A9E5-4C65B6566E05</id> <title>Adam Crossland's Blog</title> <subtitle>opinions and rants on software and...things</subtitle> <updated>{{ updated }}</updated> <author> <name>Adam Crossland</name> <email>adam@adamcrossland.net</email> </author> <link href="http://blog.adamcrossland.net/" /> <link rel="self" href="http://blog.adamcrossland.net/home/feed" /> {% for each_post in posts %}{{ each_post.to_atom|safe }} {% endfor %} </feed> Note: if you use any of this, you'll need to create your own uuid to go into the id node. The updated node should contain the time and date on which the contents of the feed were last updated, in RFC 3339 format. Fortunately, Python has a library to take care of this for you. An excerpt from the controller that generates the feed: from rfc3339 import rfc3339 posts = Post.get_all_posts() self.context['posts'] = posts # Initially, we'll assume that there are no posts in the blog and provide # an empty date. self.context['updated'] = "" if posts is not None and len(posts) > 0: # But there are posts, so we will pick the most recent one to get a good # value for updated. self.context['updated'] = rfc3339(posts[0].updated(), utc=True) response.content_type = "application/atom+xml" Don't worry about the self.context['updated'] stuff. That's just how my framework provides a shortcut for setting template variables. The important part is that I encode the date that I want to use with the rfc3339 function. Also, I set the content_type property of the Response object to application/atom+xml. 
The only other missing piece is that the template uses a method called to_atom to turn the Post object into Atom-formatted data: def to_atom(self): "Create an ATOM entry block to represent this Post." from rfc3339 import rfc3339 url_for = self.url_for() atom_out = "<entry>\n\t<title>%s</title>\n\t<link href=\"http://blog.adamcrossland.net/%s\" />\n\t<id>%s</id>\n\t<summary>%s</summary>\n\t<updated>%s</updated>\n </entry>" % (self.title, url_for, self.slug_text, self.summary_for(), rfc3339(self.updated(), utc=True)) return atom_out That's all that is required as far as I know, and this code does generate a perfectly-nice and working feed for my blog. Now, if you really want to do RSS instead of Atom, you'll need to change the format of the feed template, the Post template and the content_type, but I think that is the essence of what you need to do to get a feed generated from an AppEngine/Python application.
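If you would rather not depend on the third-party rfc3339 package, the standard library can produce an equivalent timestamp for the updated node; a minimal sketch (the helper name is invented, and naive datetimes are assumed to be UTC):

```python
from datetime import datetime, timezone

def to_rfc3339(dt):
    """Format a datetime as an RFC 3339 / Atom <updated> timestamp in UTC."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)  # assume naive datetimes are UTC
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(to_rfc3339(datetime(2010, 11, 28, 9, 30, 0)))  # -> 2010-11-28T09:30:00Z
```

The trailing Z is the RFC 3339 shorthand for a +00:00 offset, which is what Atom readers expect for a UTC feed.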
After you've made your .scheme or .schemedef file (see Add Support for Your Language) there's still something missing from that authentic professional feeling when programming with your newly defined language. For that you need to create an auto-indenter. Please note: you need to have PyPN installed for this to work! See Install PyPN for further instructions. To create an auto-indenter you must make a new Python script that PN recognizes as an indenter. Throughout this tutorial I'm going to use The Elder Scrolls scripting language for Oblivion as an example (the tes.schemedef file should go here, but the site doesn't allow me to upload it). More information about it can be found at http://cs.elderscrolls.com/constwiki/index.php/Portal:Scripting. Every auto-indenter must import at least the following (in most cases these are enough too): import scintilla from pypn.decorators import indenter After the imports we define the indenter function. Let's see the start of that definition from the example before further explanation: @indenter("tes") def tes_indent(c, doc): The first line tells PyPN that the following function is meant to be an indenter. The string in parentheses must match the name of the scheme or schemedef of the language we're trying to indent. The second line is regular Python. You can name the function whatever you like, but descriptive names like ''tes_indent'' have advantages. The function must have two parameters - their naming doesn't matter, but as PyPN already uses ''c'' and ''doc'' for them there's no reason to change. In Oblivion scripts, indentation is driven by keywords. There are a few keywords that start a block that gets indented, and there are other keywords that end such blocks and cause unindentation. The easiest way to define them in Python is to use a list. I find it best to define these outside of the function. # Keywords that cause indentation. i_kws = ['scriptname', 'scn', 'begin', 'if', 'elseif'] # Keywords that cause unindentation. 
u_kws = ['end', 'endif'] Lines starting with ''#'' are Python comments. Variable names are again inconsequential - just use what you consider descriptive. In Python, lists are surrounded by brackets and list items are separated by commas. As Oblivion scripts are not case-sensitive, it's easiest to type the keywords in lowercase. People like different tabwidths for indentation, and such preferences often vary from language to language. To ease that we define one more variable before we move on to the function itself (I've used a tabwidth of two because that's how most of the code in their Wiki is indented, but in most cases I'd prefer four): # Tabwidth used for indentation. tab = 2 To be able to auto-indent we need quite a bit of data from our document. Depending on how your language is indented you may need more or less than we do in our example, but you should get the idea. Let's have the next bit of code before further explanation (I've included some of the stuff from earlier so that you know where to place this): @indenter("tes") def tes_indent(c, doc): sci = scintilla.Scintilla(doc) pos = sci.CurrentPos l_cur = sci.LineFromPosition(pos) t_start = sci.PositionFromLine(l_cur - 1) t_end = pos - 1 txt = sci.GetText(t_start, t_end) kw = txt.split()[0].lower() In almost every PyPN script we have a variable that is set to ''scintilla.Scintilla(doc)'' (or more often actually ''scintilla.Scintilla(pn.CurrentDoc())''). We need it to use Scintilla's functions through PyPN. For our needs we need to know the following about our document: the current caret position, the number of the line it is on, and the text of the previous line (from which we take the first word, lowercased). With this information at hand we can move on. First, let's figure out what we're trying to do. Here's the logic in pseudo language: if [first word on previous line] == [a block starting keyword] -> indent current line elseif [first word on previous line] == [a block ending keyword] -> unindent previous and current line So obviously we're going to need some if statements. 
Let's see some code before further explanation: if kw in i_kws: c_ind = sci.GetLineIndentation(l_cur) p_ind = sci.GetLineIndentation(l_cur-1) if c_ind == p_ind or c_ind == 0: c_ind += tab sci.IndentLine(l_cur, c_ind) This is the if clause for indentation. First we check whether the word ''kw'' is among those that start an indented block. Then we get the indentation levels for the current and previous line with ''sci.GetLineIndentation()'' and check that either both lines are equally indented or that the current line is not indented at all. If either condition holds, the current line is indented by the amount defined in our variable ''tab''. elif kw in u_kws: c_ind = sci.GetLineIndentation(l_cur) p_ind = sci.GetLineIndentation(l_cur-2) if c_ind == p_ind: c_ind -= tab sci.IndentLine(l_cur, c_ind) sci.IndentLine(l_cur-1, c_ind) The code for unindentation is very similar. The most notable differences are that we unindent both the current and previous line, and because of that we use the line before the previous one when comparing indentation levels (otherwise we'd unindent twice on Windows, where the EOL is marked by ''\r\n''). Also, ''kw'' is obviously looked up in the list containing the block-ending keywords. import scintilla from pypn.decorators import indenter # Keywords that cause indentation. i_kws = ['scriptname', 'scn', 'begin', 'if', 'elseif'] # Keywords that cause unindentation. u_kws = ['end', 'endif'] # Tabwidth used for indentation. 
tab = 2 @indenter("tes") def tes_indent(c, doc): sci = scintilla.Scintilla(doc) pos = sci.CurrentPos l_cur = sci.LineFromPosition(pos) t_start = sci.PositionFromLine(l_cur - 1) t_end = pos - 1 txt = sci.GetText(t_start, t_end) kw = txt.split()[0].lower() if kw in i_kws: c_ind = sci.GetLineIndentation(l_cur) p_ind = sci.GetLineIndentation(l_cur-1) if c_ind == p_ind or c_ind == 0: c_ind += tab sci.IndentLine(l_cur, c_ind) elif kw in u_kws: c_ind = sci.GetLineIndentation(l_cur) p_ind = sci.GetLineIndentation(l_cur-2) if c_ind == p_ind: c_ind -= tab sci.IndentLine(l_cur, c_ind) sci.IndentLine(l_cur-1, c_ind) With the current lack of documentation and some bugs in PyPN, creating scripts is often about learning from errors. I used the Python indenter that comes with PyPN as a sort of starting point (I should note that there is a redundant piece of code in that one - there is no need to check whether the character is an EOL, as that's already done in ''pypn.glue''). The PN source code and the Scintilla documentation were also essential. I hope someone finds this useful.
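The keyword decision at the heart of the indenter is plain Python and can be exercised outside PN. Here is a hedged sketch of just that part (the Scintilla calls are replaced by a simple return value, and the function name is invented) so the logic can be unit-tested:

```python
# Keywords that cause indentation / unindentation, as in the tutorial.
i_kws = ['scriptname', 'scn', 'begin', 'if', 'elseif']
u_kws = ['end', 'endif']
tab = 2  # tabwidth used for indentation

def indent_delta(prev_line):
    """Indentation change implied by the first word of the previous line."""
    words = prev_line.split()
    if not words:
        return 0
    kw = words[0].lower()   # Oblivion scripts are not case-sensitive
    if kw in i_kws:
        return tab          # block-opening keyword: indent the next line
    if kw in u_kws:
        return -tab         # block-closing keyword: unindent
    return 0

print(indent_delta("If player.GetDistance target < 500"))
print(indent_delta("EndIf"))
```

In the real indenter this delta would feed sci.IndentLine(); here it just makes the keyword table easy to verify before wiring it into PN.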
Building on the answer given above with the single line Tree using defaultdict, you can make it a class. This will allow you to set up defaults in a constructor and build on it in other ways. class Tree(defaultdict): def __call__(self): return Tree(self) def __init__(self, parent): self.parent = parent self.default_factory = self This example allows you to make a back reference so that each node can refer to its parent in the tree. >>> t = Tree(None) >>> t[0][1][2] = 3 >>> t defaultdict(defaultdict(..., {...}), {0: defaultdict(defaultdict(..., {...}), {1: defaultdict(defaultdict(..., {...}), {2: 3})})}) >>> t[0][1].parent defaultdict(defaultdict(..., {...}), {1: defaultdict(defaultdict(..., {...}), {2: 3})}) >>> t2 = t[0][1] >>> t2 defaultdict(defaultdict(..., {...}), {2: 3}) >>> t2[2] 3 Next, you could even override __setattr__ on class Tree so that when reassigning the parent, it removes it as a child from that parent. Lots of cool stuff with this pattern.
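For context, the single-line Tree this class builds on ("the answer given above") is presumably the classic defaultdict autovivification one-liner; a quick sketch with a usage check:

```python
from collections import defaultdict

def tree():
    # Every missing key autovivifies a fresh subtree.
    return defaultdict(tree)

t = tree()
t['a']['b']['c'] = 3
print(t['a']['b']['c'])  # -> 3
```

The class version in the answer trades this brevity for a constructor hook, which is what makes the parent back-reference possible.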
fenix122 Re: apache2 error: "Unable to open logs" [SOLVED] No no, I do know how to set up servers without a GUI; I mostly did it this way to simplify things. And no, I didn't follow the tutorial to the letter, because the one I followed had errors for certain things, especially its sources, which weren't right; I had to go to the official site in between for those, as well as for the sources. I only followed that tutorial because it was clearer and in French. But I'd like to know how he was able to infect my server, because for a few months I had no problem at all. In any case, thanks everyone for your help. And how do I mark this as solved? Sorry, I can't find the option. Offline tiramiseb Re: apache2 error: "Unable to open logs" [SOLVED] "I'd like to know how he was able to infect my server, because for a few months I had no problem at all" Did you keep ispconfig properly up to date, installing every version as it came out? Did you keep the Ubuntu packages up to date as soon as an update was released? Did you make sure that no password related to this server ever traveled in the clear, even once (I already know the answer: no, since you run FTP, POP and IMAP)? Did you make sure such a password was never used on an unsecured network (public Wi-Fi, for example)? Did you make sure never to install an SSH key coming from potentially compromised accounts? All of that (and much more) can be a source of compromise. Let's pick one at random: you had an FTP server, so people (you?) connected over FTP, so passwords traveled in the clear. If just one of those passwords gets intercepted by someone (on a public Wi-Fi, for example, it's dead easy), then that someone has access to your server. And if that password happens to belong to an administrator account, bingo! You also had an MDA (Courier). Did its passwords travel in the clear? If so, same story. 
fenix122 Re: apache2 error: "Unable to open logs" [SOLVED]
Yes, I had all of that, but the passwords were all different, so it can't be due to an attack. And ispconfig was up to date, since I had only had the server for a few months. But I was looking at the server side: so isn't there any module to install that helps with protection? And I never go through wifi, I don't like it. And no, no cleartext passwords; I use software like FileZilla and so on. Thank you for your help. Can you tell me how to mark this as solved, please? Thanks.

Elder Re: apache2 error: "Unable to open logs" [SOLVED]
Since apache seems to have taken the brunt of it ("skynet" virtual host...), I would start with the apache and system logs... But those will probably be destroyed during the reinstallation. +1 tiramiseb: on this type of setup there are so many potential entry/listening points that it could have come from anywhere... Cheers.

Haleth Re: apache2 error: "Unable to open logs" [SOLVED]
Your skynet thing is just a login banner. Either you put it there, or a neighbor put it there, or you installed something sketchy that did. sudo /etc/init.d/apache2... as root is pointless. Your apache config is a mess; there is documentation online, and I suggest you read it before writing config you don't understand.
Ubuntu is an ancient African word which means "I can't configure Debian". Because accessors & mutators are against encapsulation (one of the OOP principles), good OOP programmers do not use them. Obviously, procedural devs don't either. In fact, only ugly devs still use them.

fenix122 Re: apache2 error: "Unable to open logs" [SOLVED]
No, nobody had access to the server, and then it started logging me straight in as root. On top of that, my hosting provider himself notified me:
Dear Sir! Please sort this problem within 24 hours to avoid server suspension.
So I'm sure it's an infection.
----------------------------
Hello abuse address for 46.17.97.111, The host at 46.17.97.111 seems to be infected with a worm. I believe that this is the case because over here at love.zweije.nl, I am receiving inexplicable TCP connect requests from that host. An extract from my system log (timezone +0100) is attached. Please ensure that the maintainer of 46.17.97.111 cleans up his host. Thank you. Vincent.
Jan 31 08:43:25 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:17203->80.101.26.192:22)
Jan 31 10:12:34 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:52283->80.101.26.192:22)
Jan 31 10:12:35 love sshd[29541]: Connection from 46.17.97.111 port 52283
Jan 31 10:12:37 love sshd[29541]: debug1: PAM: setting PAM_RHOST to "46.17.97.111"
Jan 31 10:12:37 love sshd[29541]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root
Jan 31 10:12:39 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:54605->80.101.26.192:22)
Jan 31 10:12:39 love sshd[29541]: Failed password for root from 46.17.97.111 port 52283 ssh2
Jan 31 10:12:39 love sshd[29541]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth]
Jan 31 10:12:39 love sshd[29543]: Connection from 46.17.97.111 port 54605
Jan 31 10:12:40 love sshd[29543]: debug1: PAM: setting PAM_RHOST to "46.17.97.111"
Jan 31 10:12:41 love sshd[29543]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root
Jan 31 10:12:42 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:56845->80.101.26.192:22)
Jan 31 10:12:42 love sshd[29543]: Failed password for root from 46.17.97.111 port 54605 ssh2
Jan 31 10:12:42 love sshd[29543]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth]
Jan 31 10:12:42 love sshd[29545]: Connection from 46.17.97.111 port 56845
Jan 31 10:12:43 love sshd[29545]: debug1: PAM: setting PAM_RHOST to
"46.17.97.111" Jan 31 10:12:43 love sshd[29545]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:12:45 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:58920->80.101.26.192:22) Jan 31 10:12:45 love sshd[29545]: Failed password for root from 46.17.97.111 port 56845 ssh2 Jan 31 10:12:45 love sshd[29545]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:12:45 love sshd[29547]: Connection from 46.17.97.111 port 58920 Jan 31 10:12:46 love sshd[29547]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" Jan 31 10:12:46 love sshd[29547]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:12:48 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:33714->80.101.26.192:22) Jan 31 10:12:48 love sshd[29547]: Failed password for root from 46.17.97.111 port 58920 ssh2 Jan 31 10:12:48 love sshd[29547]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:12:48 love sshd[29549]: Connection from 46.17.97.111 port 33714 Jan 31 10:12:49 love sshd[29549]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" Jan 31 10:12:49 love sshd[29549]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:12:51 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:36290->80.101.26.192:22) Jan 31 10:12:51 love sshd[29549]: Failed password for root from 46.17.97.111 port 33714 ssh2 Jan 31 10:12:51 love sshd[29549]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:12:51 love sshd[29551]: Connection from 46.17.97.111 port 36290 Jan 31 10:12:52 love sshd[29551]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" Jan 31 10:12:52 love sshd[29551]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:12:54 love ippl: ssh connection 
attempt from 46.17.97.111 (46.17.97.111:38511->80.101.26.192:22) Jan 31 10:12:54 love sshd[29551]: Failed password for root from 46.17.97.111 port 36290 ssh2 Jan 31 10:12:54 love sshd[29551]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:12:54 love sshd[29553]: Connection from 46.17.97.111 port 38511 Jan 31 10:12:55 love sshd[29553]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" Jan 31 10:12:55 love sshd[29553]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:12:57 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:41110->80.101.26.192:22) Jan 31 10:12:57 love sshd[29553]: Failed password for root from 46.17.97.111 port 38511 ssh2 Jan 31 10:12:57 love sshd[29553]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:12:57 love sshd[29555]: Connection from 46.17.97.111 port 41110 Jan 31 10:12:57 love sshd[29555]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" Jan 31 10:12:57 love sshd[29555]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:13:00 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:44092->80.101.26.192:22) Jan 31 10:13:00 love sshd[29555]: Failed password for root from 46.17.97.111 port 41110 ssh2 Jan 31 10:13:00 love sshd[29555]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:13:00 love sshd[29557]: Connection from 46.17.97.111 port 44092 Jan 31 10:13:01 love sshd[29557]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" Jan 31 10:13:01 love sshd[29557]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:13:02 love sshd[29557]: Failed password for root from 46.17.97.111 port 44092 ssh2 Jan 31 10:13:03 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:46399->80.101.26.192:22) Jan 31 10:13:03 love sshd[29557]: Received 
disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:13:03 love sshd[29559]: Connection from 46.17.97.111 port 46399 Jan 31 10:13:03 love sshd[29559]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" Jan 31 10:13:03 love sshd[29559]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:13:05 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:48460->80.101.26.192:22) Jan 31 10:13:05 love sshd[29559]: Failed password for root from 46.17.97.111 port 46399 ssh2 Jan 31 10:13:05 love sshd[29559]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:13:05 love sshd[29561]: Connection from 46.17.97.111 port 48460 Jan 31 10:13:06 love sshd[29561]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" Jan 31 10:13:06 love sshd[29561]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:13:07 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:50206->80.101.26.192:22) Jan 31 10:13:07 love sshd[29561]: Failed password for root from 46.17.97.111 port 48460 ssh2 Jan 31 10:13:07 love sshd[29561]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:13:07 love sshd[29563]: Connection from 46.17.97.111 port 50206 Jan 31 10:13:08 love sshd[29563]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" Jan 31 10:13:08 love sshd[29563]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root Jan 31 10:13:10 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:53045->80.101.26.192:22) Jan 31 10:13:10 love sshd[29563]: Failed password for root from 46.17.97.111 port 50206 ssh2 Jan 31 10:13:10 love sshd[29563]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth] Jan 31 10:13:10 love sshd[29565]: Connection from 46.17.97.111 port 53045 Jan 31 10:13:11 love sshd[29565]: debug1: PAM: setting PAM_RHOST to "46.17.97.111" 
Jan 31 10:13:11 love sshd[29565]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.17.97.111 user=root
Jan 31 10:13:13 love ippl: ssh connection attempt from 46.17.97.111 (46.17.97.111:55378->80.101.26.192:22)
Jan 31 10:13:13 love sshd[29565]: Failed password for root from 46.17.97.111 port 53045 ssh2
Jan 31 10:13:13 love sshd[29565]: Received disconnect from 46.17.97.111: 11: Bye Bye [preauth]
Jan 31 10:13:13 love sshd[29567]: Connection from 46.17.97.111 port 55378
Jan 31 10:13:14 love sshd[29567]: debug1: PAM: setting PAM_RHOST to "46.17.97.111"

Haleth Re: apache2 error: "Unable to open logs" [SOLVED]
Wipe everything, reinstall, and read the documentation before copy-pasting code carelessly.

fenix122 Re: apache2 error: "Unable to open logs" [SOLVED]
Uh, I never said I copy-pasted code carelessly, and this is the first time this problem has happened to me. I've been running fairly busy servers for two years, always with ispconfig, and this is the first time; and I always apply the updates.

Hoper Re: apache2 error: "Unable to open logs" [SOLVED]
Whoa... two pages, when the error message is explicit: "could not open error log file". That does mean what it says, doesn't it? apache cannot open (nor even create, I'd say) its log file. Which is not surprising, since:
-rw-r--r-- 1 root root 0 Jan 30 23:08 /var/log/apache2
So only root can write to it, while I assume it's the www-data user that must be trying to create the file.
In short, the first thing I would check are the permissions on /var/log and /var/log/apache2, because it looks like something shady has been done there.

Haleth Re: apache2 error: "Unable to open logs" [SOLVED]
-rw-r----- 1 root adm 152204 1 févr. 14:18 error.log
-rw-r----- 1 root adm 148947892 1 févr. 14:52 other_vhosts_access.log
That works.

Hoper Re: apache2 error: "Unable to open logs" [SOLVED]
Ah, right. The apache process that creates the logs is indeed root. The error message remains very explicit and is clearly the lead to follow. (FS mounted read-only, or something of that sort.)

Haleth Re: apache2 error: "Unable to open logs" [SOLVED]
Also check the free space and the inodes on the FS. Anyway, it changes nothing about the real problem: your machine has been compromised.

Hoper Re: apache2 error: "Unable to open logs" [SOLVED]
You're right: a compromised machine means a forced reinstall...

sorrodje Re: apache2 error: "Unable to open logs" [SOLVED]
Hoper and Haleth:

Haleth Re: apache2 error: "Unable to open logs" [SOLVED]
tiramiseb Re: apache2 error: "Unable to open logs" [SOLVED]
"The error message remains very explicit and is clearly the lead to follow. (FS mounted read-only, or something of that sort.)"
As you quoted yourself:
"That does mean what it says, doesn't it? apache cannot open (nor even create, I'd say) its log file. Which is not surprising, since: -rw-r--r-- 1 root root 0 Jan 30 23:08 /var/log/apache2"
And as I had him fix in message #8:
rm /var/log/apache2
mkdir /var/log/apache2
The directory in question had been "turned into" a regular file (no "d" at the start of that "ls" output line). Given the following message:
"I just found in /var/log/apache2/error.log: (20)Not a directory: apache2: could not open error log file /var/log/ispconfig/httpd/monsite.com/error.log. Unable to open logs" (the same line repeated four times)
it is quite likely the same thing happened to those files/directories. You are looking for a cause that has already been found.

Hoper Re: apache2 error: "Unable to open logs" [SOLVED]
Pfff... very clever. As long as people just call me old, I can live with it. The trouble is that the logical next step is bound to be "old git", and that, for sure, I will take badly.
"You are looking for a cause that has already been found"
Indeed. Which is proof that I really have already become old...
Damn.
Last edited by Hoper (01/02/2013 at 16:42)

sorrodje Re: apache2 error: "Unable to open logs" [SOLVED]
I could have put Heckle and Jeckle too... it wasn't so much the age that made me laugh as the double act, batting the ball back and forth.

Hoper Re: apache2 error: "Unable to open logs" [SOLVED]
"I could have put Heckle and Jeckle too..."
Do note that, given the references used (which make me laugh too, mind you), I'm not the only old one here.

sorrodje Re: apache2 error: "Unable to open logs" [SOLVED]
Oh no, not at all... you should see it as a form of complicity on my part, especially since I pretty much entirely agree with your comments on the subject of the topic we are presently discussing.

fenix122 Re: apache2 error: "Unable to open logs" [SOLVED]
Excuse me, how do I mark this as solved? Thanks.

tiramiseb Re: apache2 error: "Unable to open logs" [SOLVED]
Edit the very first message of the discussion and add [SOLVED] to the title.

fenix122 Re: apache2 error: "Unable to open logs" [SOLVED]
Thanks.
JavaScript twistidphreak — 2011-05-26T16:31:16-04:00 — #1
I want the images to change (rotate) in sequence when the page is refreshed, not at random like I have below. Can someone help me with this, please? I also need the text to change on refresh, because I am going to have a description of each image below it, so that the image and text stay in sync whenever someone refreshes the page or goes back to the home page. Thanks. I want the images to change like on this page: Loading A Specific Image Sequence On Page Refresh Via JavaScript / DOM

<script type="text/javascript" language="JavaScript">
var imgs = new Array(
  '<a href="VW_1.shtml"><img border=0 src="img/samples/VW/large_1.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="fortshelby1.shtml"><img border=0 src="img/samples/Fort Shelby/image1-large.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="jaguar1.shtml"><img border=0 src="img/samples/Jag_of_Novi/large_1.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="harley1.shtml"><img border=0 src="img/samples/wolverine_harley/large_1.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="bc1.shtml"><img border=0 src="img/samples/BC_Coney_Island/large_1.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="mama1.shtml"><img border=0 src="img/samples/Pozios_Retail_Mama_Vickys_Coney_Island/large_1.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="children1.shtml"><img border=0 src="img/samples/Detroit_Childrens_Museum/large_1.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="harper1.shtml"><img border=0 src="img/samples/Harper_Woods_Library/large_1.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="dps1.shtml"><img border=0 src="img/samples/DPS_Facilities_building/large_1.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="lakeland1.shtml"><img border=0 src="img/samples/Lakeland_School_Huron_Valley/large_1.jpg" width=165 height=109 class="thumbnail_img">',
  '<a href="auto1.shtml"><img border=0
src="img/samples/Manhattan_Auto_Group/large_1.jpg" width=165 height=109 class="thumbnail_img">', '<a href="maxey1.shtml"><img border=0 src="img/samples/Maxey_Ford/large_1.jpg" width=165 height=109 class="thumbnail_img">', '<a href="benz1.shtml"><img border=0 src="img/samples/Mercedes_Benz_of_St_Clair_Shores/large_1.jpg" width=165 height=109 class="thumbnail_img">', '<a href="metro_lofts1.shtml"><img border=0 src="img/samples/Metro_Lofts/large_1.jpg" width=165 height=109 class="thumbnail_img">', '<a href="mtclemens1.shtml"><img border=0 src="img/samples/Mt_Clemens_Library/large_1.jpg" width=165 height=109 class="thumbnail_img">', '<a href="st_gertrudes.shtml"><img border=0 src="img/samples/st_gertrudes/large_1.jpg" width=165 height=109 class="thumbnail_img">', '<a href="rayconnect1.shtml"><img border=0 src="img/samples/Rayconnect/large 1.jpg" width=165 height=109 class="thumbnail_img">', '<a href="faulhauber1.shtml"><img border=0 src="img/samples/Faulhauber/large1.jpg" width=165 height=109 class="thumbnail_img">'); var max = imgs.length; var num = Math.floor((Math.random() * max)); document.writeln(imgs[num]); </script> paul_wilkins — 2011-05-26T20:20:40-04:00 — #2 Is that when one person refreshes the same page? How about when you have more than one visitor to the page. Do you want them all to start with seeing image number 1, or do you want the image sequence to be spread across all visitors. paul_wilkins — 2011-05-26T21:18:42-04:00 — #3 It seems then that using cookies to remember which was the last image to be shown to the visitor, could be an appropriate way to go. Any client-side solution is guaranteed to fail when javascript is not available though, so using a server-side script such as PHP or ASP is a preferred solution for this type of thing. 
twistidphreak — 2011-05-26T21:02:50-04:00 — #4
Either for a new visitor or on refresh, because I am going to have a text description of each image that needs to stay in the same order as the image, so they are in sync.
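The server-side idea suggested above (a counter persisted between requests, so each page load shows the next image/description pair) could be sketched like this in Python. The file name and the descriptions below are made up for illustration; only the image paths come from the thread.

```python
# Hypothetical sketch of server-side sequential rotation: a counter saved
# between requests picks the next (image, description) pair, keeping the
# two in sync. COUNTER_FILE and the descriptions are invented for the demo.
import json, os

IMAGES = [
    ("img/samples/VW/large_1.jpg", "description for image 1"),
    ("img/samples/Jag_of_Novi/large_1.jpg", "description for image 2"),
    ("img/samples/Maxey_Ford/large_1.jpg", "description for image 3"),
]

COUNTER_FILE = "rotation_counter.json"  # hypothetical storage location

def next_image():
    # Load the last index, advance it modulo the list length, save it back.
    try:
        with open(COUNTER_FILE) as f:
            idx = json.load(f)["index"]
    except (OSError, KeyError, ValueError):
        idx = -1  # no counter yet: start at the first image
    idx = (idx + 1) % len(IMAGES)
    with open(COUNTER_FILE, "w") as f:
        json.dump({"index": idx}, f)
    return IMAGES[idx]  # image path and description stay paired
```

Because the index is stored server-side, the sequence is shared across all visitors; a per-visitor sequence would store the index in a cookie instead, as suggested in the thread.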
Let's say I want to set up a basic text encoding using a dictionary in Python. Two ways of doing this come to mind immediately: using zip, and using a generator expression.

    characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ .,!;"
    dict_a = dict((x, characters[x]) for x in xrange(0, 31))
    dict_b = dict(zip(xrange(0, 31), characters))

Which of these is more efficient? (The real encoding is longer than 31 characters; this is a toy example.) Is the difference significant? Alternatively, am I approaching this wrong, and should I be using something other than a dictionary? (I need to be able to go in both directions of the encoding.)
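One way to sketch the bidirectional mapping (written for Python 3, so range/enumerate rather than xrange) is to build the forward dict with enumerate and invert it once; both directions then become O(1) lookups:

```python
# Bidirectional encoding sketch: one dict per direction, built with
# enumerate so the code works for any alphabet length.
characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ .,!;"

encode = {i: ch for i, ch in enumerate(characters)}   # int  -> char
decode = {ch: i for i, ch in enumerate(characters)}   # char -> int

def encode_text(nums):
    # Turn a list of code numbers back into text.
    return "".join(encode[n] for n in nums)

def decode_text(text):
    # Turn text into its list of code numbers.
    return [decode[ch] for ch in text]
```

Round-tripping `encode_text(decode_text(msg))` returns the original message for any text drawn from the alphabet, which is the property the question needs.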
I am writing code that will search Twitter for keywords and store the results in a Python dictionary:

    base_url = 'http://search.twitter.com/search.json?rpp=100&q=4sq.com/'
    query = '7bOHRP'
    url_string = base_url + query
    logging.info("url string = " + url_string)
    json_text = fetch(url_string)
    json_response = simplejson.loads(json_text.content)
    result = json_response['results']
    print "Contents"
    print result

The resulting output is:

    Contents
    [{u'iso_language_code': u'en',
      u'text': u"I'm at Cafe en Seine (40 Dawson Street, Dublin) w/ 2 others. http://4sq.com/7bOHRP",
      u'created_at': u'Wed, 06 Oct 2010 23:37:02 +0000',
      u'profile_image_url': u'http://a1.twimg.com/profile_images/573130785/twitterProfilePhoto_normal.jpg',
      u'source': u'&lt;a href=&quot;http://foursquare.com&quot; rel=&quot;nofollow&quot;&gt;foursquare&lt;/a&gt;',
      u'place': {u'type': u'neighborhood', u'id': u'898cf727ca504e96', u'full_name': u'Mansion House B, Dublin'},
      u'from_user': u'pkerssemakers',
      u'from_user_id': 60241195,
      u'to_user_id': None,
      u'geo': None,
      u'id': 26597357992,
      u'metadata': {u'result_type': u'recent'}}]
    Status: 200 OK
    Content-Type: text/html; charset=utf-8
    Cache-Control: no-cache
    Expires: Fri, 01 Jan 1990 00:00:00 GMT
    Content-Length: 0

How can I access 'from_user', and what is the 'u' before the keys and values?
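For reference, `results` is a list of dicts, so the field is reached by indexing into an element first; the u prefix simply marks a Python 2 unicode string literal and is part of the repr, not of the data. A minimal sketch, using an abbreviated copy of the data from the question:

```python
# 'results' is a list of dicts: index an element, then look up the key.
# The u'' prefix marks Python 2 unicode literals; it is not in the data.
result = [{
    u"from_user": u"pkerssemakers",
    u"from_user_id": 60241195,
    u"text": u"I'm at Cafe en Seine (40 Dawson Street, Dublin) w/ 2 others.",
}]

first = result[0]                 # the first (and here only) tweet
user = first[u"from_user"]        # plain "from_user" works identically
```

Iterating `for tweet in result: tweet["from_user"]` generalizes this to every tweet in the response.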
I have a numpy master array. Given another array of search values, with repeating elements, I want to produce the indices of these search values in the master array. E.g.: the master array is [1,2,3,4,5], the search array is [4,2,2,3]. Solution: [3,1,1,2]. Is there a "native" numpy function that does this efficiently (meaning at C speed, rather than Python speed)? I'm aware of the following solution, but, first, it's a Python list comprehension, and second, it will search for the index of 2 twice.

    ma = np.array([1,2,3,4,5])
    sl = np.array([4,2,2,3])
    ans = [np.where(ma==i) for i in sl]

Also, if I have to resort to sorting and binary search, I will do it as a last resort (puns not intended at all sorts of levels). I am interested in finding out whether I'm missing something basic from the numpy library. These lists are very large, so performance is paramount. Thanks.

Edit: Before posting I had tried the following, with dismal results:

    [np.searchsorted(ma,x) for x in sl]

The solution posted by @pierre is much more performant and exactly what I was looking for.
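The sort-and-binary-search approach can be fully vectorized (no Python loop, unlike the per-element searchsorted tried in the edit): argsort the master once, binary-search all queries into the sorted copy, then map the sorted positions back through the sort order. This sketch assumes every search value is present in the master array:

```python
# Vectorized index lookup: one argsort of the master array, then a single
# searchsorted call for all queries at once (assumes all values exist in ma).
import numpy as np

ma = np.array([1, 2, 3, 4, 5])   # master array
sl = np.array([4, 2, 2, 3])      # search values, repeats allowed

order = np.argsort(ma)                    # positions that sort ma
pos = np.searchsorted(ma[order], sl)      # where each query lands, sorted
ans = order[pos]                          # back to original indices
```

Here `ans` is `[3, 1, 1, 2]`, matching the example, and duplicate queries cost nothing extra because searchsorted handles the whole query array in one C-level call.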
The following regular expression:

    ^\[([^\]]+)\]

will capture the date at the beginning of the string plus the square brackets, and will put the text between the square brackets into a group that can be extracted by itself. Note that your text editor may have a slightly different syntax. Here's how this breaks down:

    ^        = beginning of line/string
    \[, \]   = literal [ and ] characters
    ()       = signifies a group to capture
    [^\]]    = matches any character _except_ a close bracket (this keeps the match from being too greedy)
    +        = one or more of the previous

EDIT: This assumes your regex facility supports groups (which most do). The easiest way to explain groups is just to show you how they work with one such engine. In the Python interpreter:

    >>> import re
    >>> s = '[2010-01-15 06:18:10.203] [0x00001388] [SHDNT] ...'
    >>> r = re.compile(r'^\[([^\]]+)\]')
    >>> m = r.search(s)

This creates a regular expression object and searches the string for the first stretch of text that matches it. The result is returned in a match object:

    >>> m
    <_sre.SRE_Match object at 0x1004d9558>

To get the entire text that was matched, the Python convention is to invoke group() on the match object:

    >>> m.group()
    '[2010-01-15 06:18:10.203]'

and to get just the text inside the parentheses, I pass the number of the group I want (in this case there's just one set of parens, so just one group):

    >>> m.group(1)
    '2010-01-15 06:18:10.203'

If I perform a replace instead of a search, I use the sub function. sub takes the string I want to replace the full match with, followed by the input string, and returns the string with the replacement performed if a match was found:

    >>> r.sub('spam spam spam', s)
    'spam spam spam [0x00001388] [SHDNT] ...'

However, the replacement string supports escape sequences that refer to specific values of groups captured by the match. A group substitution is indicated by \N, where N is the number of the group. Hence:

    >>> r.sub(r' \1 ', s)
    ' 2010-01-15 06:18:10.203 [0x00001388] [SHDNT] ...'
which is what you want.
You can script something like this:

    import trac.admin.console
    import trac.config
    import os

    def _get_project_options(envdir):
        confpath = os.path.join(envdir, 'conf/trac.ini')
        config = trac.config.Configuration(confpath)
        return dict([x for x in config.options(u'project')])

    def _get_project_name(envdir):
        admin = trac.admin.console.TracAdmin(envdir)
        if admin.env_check():
            options = _get_project_options(envdir)
            return options[u'name']
        else:
            return None

    def iter_trac_projects_from_dir(dirname):
        for which in os.listdir(dirname):
            envdirname = os.path.join(dirname, which)
            # check the entry itself, not the parent directory
            if which not in ('.', '..') and os.path.isdir(envdirname):
                project_name = _get_project_name(envdirname)
                if project_name:
                    yield (project_name, envdirname)

    def get_trac_projects_from_dir(dirname):
        return [pr for pr in iter_trac_projects_from_dir(dirname)]

Then you can use either iter_trac_projects_from_dir or get_trac_projects_from_dir, whichever you think is best for you. Alternatively you could use the function get_environments from the module trac.web.main, but only as an alternative to os.listdir -- you would still have to check whether or not each alleged env is really a Trac environment. See why:

    >>> import trac.web.main
    >>> env = {'trac.env_parent_dir':
    ...        '/home/manu/tmp'}
    >>> trac.web.main.get_environments(env)
    {'test': '/home/manu/tmp/test', 'no-a-real-trac-project': '/home/manu/tmp/no-a-real-trac-project', 'test2': '/home/manu/tmp/test2'}
I'm an admittedly pretty basic Python programmer, trying to learn as I encounter problems implementing various research problems. And I've hit one of those problems: specifically, how to handle loops where I'm returning a bunch of data, rather than the usual "out comes a single number" examples where you just add the result of each iteration to everything previous. Here's a Gist of the unlooped script I'm trying to run: https://gist.github.com/1390355 The really salient point is the end of the model_solve function:

    def model_solve(t):
        # lots of variables set
        params = np.zeros((n_steps, n_params))
        params[:,0] = beta
        params[:,1] = gamma
        timer = np.arange(n_steps).reshape(n_steps, 1)
        SIR = spi.odeint(eq_system, startPop, t_interval)
        output = np.hstack((timer, SIR, params))
        return output

That returns the results of the ODE integration bit (spi.odeint), along with a simple "what time step are we on?" timer, and essentially two columns holding the values of two random variables repeated many, many times, in the form of a 4950-row, 7-column NumPy array. The goal, however, is to run a Monte Carlo analysis over the two parameters (beta and gamma), which have random values. Essentially, I want to make a function that loops somewhat like so:

    def loop_function(runs):
        for i in range(runs):
            model_solve(100)
            # output of those model_solves collected here
        # return collected output

That collected output would then be written to a file. Normally, I'd just have each model_solve call write its results to a file, but this code is going to be run on PiCloud or another platform where I don't necessarily have the ability to write a file until the results are returned to the local machine. Instead, I'm trying to get back a huge NumPy array of 7 columns and runs*4950 rows, which can then be written to a file on my local machine. Any clues as to how to approach this?
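One common pattern for this kind of loop is to collect each run's array in a Python list and stack them into one big array at the end. A sketch with a tiny stand-in for model_solve (the real one lives in the gist; the shape here is shrunk for illustration):

```python
# Collect-then-stack pattern: each run's 2-D output goes into a list,
# and one np.vstack at the end builds the single array to return.
# model_solve below is a stand-in, not the ODE model from the gist.
import numpy as np

N_STEPS = 4   # the real script uses 4950 rows per run

def model_solve(t):
    # Stand-in: returns an (N_STEPS, 7) array like the real function.
    return np.arange(N_STEPS * 7, dtype=float).reshape(N_STEPS, 7)

def loop_function(runs):
    # One list append per run is cheap; the single vstack at the end
    # avoids repeatedly reallocating a growing array inside the loop.
    results = [model_solve(100) for _ in range(runs)]
    return np.vstack(results)

out = loop_function(3)   # shape (3 * N_STEPS, 7)
```

The returned array can then be shipped back from PiCloud in one piece and written to disk locally with something like np.savetxt.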
cledesol Re: [Conky] Alternative to weather.com (conkyforecast icons)
Thanks for the clarification. It's also true that it's generally in the evening that I log on to my PC... so the day's weather... I've already seen it for real.
CM: Asus M4A88TD-M / CPU: AMD Phenom II X6 1055T / RAM 4 GB / Video: EVGA GeForce GTS450 1 GB - Ubuntu 64-bit 14.04 with Unity / Notebook Asus A2500D dual-boot Ubuntu and Xubuntu 14.04

Didier-T Re: [Conky] Alternative to weather.com (conkyforecast icons)
"Thanks for the clarification. It's also true that it's generally in the evening that I log on to my PC... so the day's weather... I've already seen it for real."
For me it's the opposite, but I think Météo France will serve more as a test bed than anything else (it wears me out, the way it changes the number of info fields throughout the day). I'll see what I can do with a site that is more consistent in its number of daily entries; it will probably be previmeteo.com because, after all, they offer plenty of different ways to retrieve weather data.

cledesol Re: [Conky] Alternative to weather.com (conkyforecast icons)
You're right; it's painful to program when the input data isn't constant. I didn't know the "previmeteo" site, but it looks pretty good. Good luck with the rest. If you want me to test, no problem.

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
Here I am, back on the air. First of all, a big well done to Didier-T and Levi59 for the work done on the scripts. Not being in a position to do much on the scripts myself, I settle for testing and reporting my remarks so as to perfect or modify them.
First, a small screenshot to illustrate the 2 bugs I'm running into:

1/ Wind icon display. As I already mentioned, when I launch the conky from a terminal, I regularly get an error message of the kind:
cp: target '/home/gilles/conky/meteo/images/conky/v1.png' is not a directory
cp: target '/home/gilles/conky/meteo/images/conky/v4.png' is not a directory
The result is that the wind icons that trigger this error message are not refreshed. So we end up with icons that do not match the wind direction. It happens on some icons, and randomly; it is not always the same ones that cause trouble. Here you can see east-wind icons with a description of a SE wind. Could the problem come from the icon-formatting script? Or am I the only one with this bug? While we're on the subject of wind icons, would it be possible to have, as conkyForecast did, icons in different colors depending on the wind strength? Currently that's not the case; they are always green, even for winds above 50 km/h.

2/ Rain probability percentage display: As visible in the screenshot, I get a graphical glitch, a small red rectangle, on every rain-percentage digit for the evenings, which corresponds to the digits on the even-numbered lines of the 'precipitation' file. I've tried everything to get rid of this glitch; nothing works. Even if I display only a single digit from an even line, the glitch is still there.
Can this problem be fixed?
3/ Possible improvements to the script:
Since the locality code is defined in the shell script, I think it would be worth extracting 2 pieces of data from the ICS file: the locality name and its coordinates:
BEGIN:VEVENT
SEQUENCE:1
CLASS:PUBLIC
CREATED:20111028T000000Z
LAST-MODIFIED:20111028T000000Z
GEO:43.45000076;5.23000002
UID:1319803200-0-0
DTSTAMP:20111028T120000Z
DTSTART;VALUE=DATE:20111028
ORGANIZER;CN=Weather Underground:MAILTO:support@wunderground.com
DESCRIPTION:Friday - Partly Cloudy. High 21 C Wind SE 32 km/h. \nFriday Night - Chance of Rain. Low 15 C Wind East 32 km/h.Chance of precipitation 20%.
LOCATION:Marseille\, France
Namely the 'GEO' and 'LOCATION' lines (the mention of the country is superfluous). They could then be fetched and displayed in the conky.
About the temperatures: in the messages.wun file the temperatures are preceded by the labels 'Maxi' or 'Mini'. Those labels can always be typed 'by hand' into the conky, so it seems more useful to extract only the raw numeric values, and therefore to drop the maxi/mini labels from messages.wun.
Translations in the script: add 'Rain' 'Pluie'; change 'Clear' 'Ensoleillé' to 'Clear' 'Dégagé', because at night "sunny" means nothing.
4/ Evolution of the script
Whether it's the wunderground script or the Météo France one Didier-T was working on, in both cases we only get forecasts displayed, never live weather data. So I think it would be worth adding that data (it could replace the first part of messages.wun, which holds the forecast for the current day). To that end, I found a Python script that fetches the current temperature and weather conditions from Wunderground.
Here it is:

#!/usr/bin/python
#
# Fetches Weather info from Weather Underground
#
# Usage: ./wunderground.py zipcode
#
# International:
#  * Go to http://www.wunderground.com/
#  * Find your city
#  * Click the RSS icon
#  * Station ID is the number that follows /stations/ in the url
#
# Values are either True or False
metric=True
international=True

import sys
import feedparser

def usage():
    print("Usage:")
    if international:
        print(" ./wunderground.py StationID")
    else:
        print(" ./wunderground.py zipcode")
    sys.exit(1)

if not len(sys.argv) == 2:
    usage()

location=sys.argv[1]

if international:
    url="http://rss.wunderground.com/auto/rss_full/global/stations/"
else:
    url="http://rss.wunderground.com/auto/rss_full/"

feed=feedparser.parse(url+location)

if not feed.feed:
    # Assume Error
    print("Error")
    sys.exit(1)

current=feed['items'][0].title

if metric:
    temp=current.split(",")[0].split(":")[1].split("/")[1].strip()
else:
    temp=current.split(",")[0].split(":")[1].split("/")[0].strip()

condition=current.split(",")[1].split("-")[0].strip()

print(temp, "-", condition)

I think the same approach could also be used to fetch the wind data... The problems still to solve with this script are the following: how do we display this data in the conky? A line such as this one in the conky, ${execi 600 /home/path-to-the-conky/wunderground 'zip code'}, produces a display line with Temperature - weather condition. Not great. And how do we associate the weather conditions with the matching weather icon? I'll leave these questions to all the script-writing knowledge of the specialists. I'm ready to test any proposal that comes up!
The ship is sinking normally...
Offline

Didier-T Re: [Conky] Alternative to weather.com (conkyforecast icons)
Hello Phyllinux, for your issue no. 2 you can try changing the font; that should fix the problem.
For the rest, you'd need to compress your working directory and put it online, and give us your conkyrc. Of course I'm taking good note of your remarks for what comes next. I didn't know the wind icons changed colour with the wind speed (could you tell me the thresholds?).
Offline

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
I tried changing the font, but no improvement. I still get the same glitch, whatever font I use. As for the colours by wind speed, I don't know what thresholds were defined in conkyforecast, and I couldn't find which file they were defined in. I think we can start from the following basis, though:
From 0 to 25: green
From 26 to 50: orange
> 50 km/h: red
For the working directory, I'll make an archive and post it later.
The ship is sinking normally...
Offline

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
Here is the compressed directory: it's the Conky directory, which sits directly in my Home.
Conky test directory
The last line of the conky adds the link to the satellite-map display script, which I haven't included in this archive.
The ship is sinking normally...
Offline

Didier-T Re: [Conky] Alternative to weather.com (conkyforecast icons)
@Phyllinux: for the wind-icon problem, you have a trailing / at the end of your DirShell in the file formatage-icones-meteo.sh (remove it and that should fix the problem):
DirShell="$HOME/conky/meteo"
Offline

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
No, I made the change, and it doesn't fix anything at all.
Today only one icon is misbehaving, the wind icon for this evening (v2.png):
gilles@ubuntu:~$ conky -c /home/gilles/conky/meteo/conkyrc1
Conky: forked to background, pid is 27310
gilles@ubuntu:~$ Conky: desktop window (1400045) is subwindow of root window (102)
Conky: window type - override
Conky: drawing to created window (0x6600001)
Conky: drawing to double buffer
cp: target '/home/gilles/conky/meteo/images/conky/v2.png' is not a directory
But as I said, it's random; I can just as easily get this error for 5 or 6 of the wind icons.
The ship is sinking normally...
Offline

Didier-T Re: [Conky] Alternative to weather.com (conkyforecast icons)
@Phyllinux, try this. Still the same script (it's the one causing the problem); you need to change the following line (line 27):
cp ${DirShell}/images/vent/$fichier.png ${DirShell}/images/conky/v$n.png
to
cp "${DirShell}/images/vent/$fichier.png" ${DirShell}/images/conky/v$n.png
That should work; on my machine it fixes the problem for good.
Last edited by Didier-T (05/11/2011 at 17:04)
Offline

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
One problem solved! No more error messages for the wind icons! Bravo. Don't forget to update the files at the start of the topic.
The ship is sinking normally...
Offline

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
Otherwise, there is another alternative. Instead of Weather Underground, Météo France or Weather.com, there is also ACCUWEATHER. And with TeoBigusGeekus's latest script, you can get this:
Incidentally, you can see the week is off to a rotten start on the Canebière!
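The quoting fix above can be reproduced in isolation. This is a hedged sketch with made-up file names (not the thread's actual wind icons): when an unquoted variable expands to more than one word, cp suddenly sees three operands and then requires the last one to be a directory, which is exactly the error reported in this thread.

```shell
#!/bin/sh
# Hypothetical demo files, not the icon files from the thread.
mkdir -p /tmp/quote-demo
cd /tmp/quote-demo
touch "n 1.png"          # a source file whose name contains a space
fichier="n 1"

# Unquoted: word splitting turns this into  cp n 1.png v2.png
# (three operands), so cp demands that v2.png be a directory and fails.
cp $fichier.png v2.png 2>err.txt || true
grep "is not a directory" err.txt

# Quoted: one source operand, one target -- the copy succeeds.
cp "$fichier.png" v2.png
ls v2.png
```

The same failure can also be triggered by an unquoted glob that matches several files; quoting the source path, as in the fix above, removes both cases.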
For the how-to, see here: conky Accuweather post #2. If someone knows how to rework the script so as to translate the 'Weather Conditions' (Rain, Sun, Overcast...) as well as the day names, I'm interested, and so, for that matter, is TeoBigusGeekus. I'm also looking into how to extract the icons, the weather conditions and the temperatures so they can be inserted into another conky (the clock with integrated weather that currently runs on conkyforecast). I know it's always possible to display the items I care about by running the script and going to fetch them, but it's just as simple if they're in a generated file (like messages.wun). Thanks for your suggestions.
The ship is sinking normally...
Offline

Didier-T Re: [Conky] Alternative to weather.com (conkyforecast icons)
@Phyllinux, you can get the info directly in French without having to translate it; just select the language you want in the "Select Your Language" menu. For you, the web address is
Offline

Didier-T Re: [Conky] Alternative to weather.com (conkyforecast icons)
Hello everyone, I've had a look at the script Phyllinux pointed to in his last post.
The "acc_int_images" file, after a few modifications so that it works correctly on the French weather pages of the accuweather.com site:

acc_int_images

#!/bin/bash
# Modified by Didier-T (forum.ubuntu-fr.org) for use on the French Accuweather pages

DirShell="$HOME/Accuweather_Conky_Int_Images"
# copy your Accuweather address here
address="http://www.accuweather.com/fr/fr/%c3%aele-de-france/argenteuil/quick-look.aspx"

# function: test_image
test_image () {
case $1 in
 1) echo 1su ;;     2) echo 2msu ;;    3) echo 3psu ;;    4) echo 4ic ;;     5) echo 5h ;;
 6) echo 6mc ;;     7) echo 7c ;;      8) echo 8d ;;     11) echo 9f ;;     12) echo 10s ;;
13) echo 11mcs ;;  14) echo 12psus ;; 15) echo 13t ;;    16) echo 14mct ;;  17) echo 15psut ;;
18) echo 16r ;;    19) echo 17fl ;;   20) echo 18mcfl ;; 21) echo 19psfl ;; 22) echo 20sn ;;
23) echo 21mcsn ;; 24) echo 22i ;;    25) echo 23sl ;;   26) echo 24fr ;;   29) echo 25rsn ;;
30) echo 27ho ;;   31) echo 28co ;;   32) echo 26w ;;    33) echo 29cl ;;   34) echo 31mcl ;;
35) echo 32pc ;;   36) echo 33ic ;;   37) echo 34h ;;    38) echo 35mc ;;   39) echo 36pcs ;;
40) echo 37mcs ;;  41) echo 38pct ;;  42) echo 39mct ;;  43) echo 40mcfl ;; 44) echo 41mcsn ;;
esac
}

kill -STOP $(pidof conky)
killall wget
rm ${DirShell}/*.png
rm ${DirShell}/messages_raw
wget --save-cookies ${DirShell}/cookie -O ${DirShell}/curr_cond_raw $address
addr_week=$(echo $address|sed 's/quick.*$/forecast.aspx/')
wget --load-cookies ${DirShell}/cookie -O ${DirShell}/week_raw $addr_week

#Current Conditions - curr_cond file
egrep -i 'CurrentTemp|CurrentText|RealFeelValue|WindsValue|HumidityValue|DewPointValue|PressureValue|PressureTenValue|VisibilityValue|SunriseValue|SunsetValue|imgCurConCondition' ${DirShell}/curr_cond_raw > ${DirShell}/curr_cond
sed -i 's/\(^.*blue\/\|_int.*$\|^.*">\|<\/span>.*$\|&deg;C\)//g' ${DirShell}/curr_cond
curr_cond_raw_image=$(sed -n 1p ${DirShell}/curr_cond)
sed -i 1s/$curr_cond_raw_image/$(test_image $curr_cond_raw_image)/ ${DirShell}/curr_cond
cp ${DirShell}/Forecast_Images/$(sed -n 1p ${DirShell}/curr_cond).png ${DirShell}/cc.png
sed -i 's/Unavailable/N\/A/g' ${DirShell}/curr_cond

#Forecast of the week - week file
egrep -i 'lundi|mardi|mercredi|jeudi|vendredi|samedi|dimanche|&deg|lblDesc|imgIcon' ${DirShell}/week_raw > ${DirShell}/week
sed -i '1d' ${DirShell}/week
sed -i 's/\(^.*lblDate">\|^.*lblDesc">\|^.*Label1">\|^.*lblRealFeel">\|^.*lblHigh">\|^.*lblRealFeelValue">\|^.*blue\/\|_int.jpg.*$\|<\/span>.*$\|&deg;C\)//g' ${DirShell}/week
sed -i -e 's/[lL]undi/LUNDI/' -e 's/[Mm]ardi/MARDI/' -e 's/[Mm]ercredi/MERCREDI/' -e 's/[Jj]eudi/JEUDI/' -e 's/[Vv]endredi/VENDREDI/' -e 's/[Ss]amedi/SAMEDI/' -e 's/[Dd]imanche/DIMANCHE/' ${DirShell}/week
for (( i=2; i<=67; i+=5 ))
do
    sed -i "${i}s/ .*$//" ${DirShell}/week
done
for (( i=1; i<=66; i+=5 ))
do
    image_raw=$(sed -n "${i}"p ${DirShell}/week)
    sed -i ${i}s/$image_raw/$(test_image $image_raw)/ ${DirShell}/week
    cp ${DirShell}/Forecast_Images/$(sed -n ${i}p ${DirShell}/week).png ${DirShell}/$i.png
done

#messages and messages_curr files
for (( i=3; i<=68; i+=5 ))
do
    sed -n ${i}p ${DirShell}/week >> ${DirShell}/messages_raw
done
sed -n 1p ${DirShell}/messages_raw | cut -c -60 > ${DirShell}/messages_curr
sed -n 8p ${DirShell}/messages_raw | cut -c -60 >> ${DirShell}/messages_curr
for (( i=1; i<=4; i++))
do
    no=$(sed -n ${i}p ${DirShell}/messages_curr|wc -m)
    if (( no<=31 )); then
        sed -i $i"s/$/\n/" ${DirShell}/messages_curr
        i=$((i+1))
    elif (( no>31 )); then
        sed -i $i"s/^\(.\{31\}\)/\1\n/" ${DirShell}/messages_curr
        i=$((i+1))
    fi
done
cat ${DirShell}/messages_raw | cut -c -40 > ${DirShell}/messages
for (( i=1; i<=28; i++))
do
    no=$(sed -n ${i}p ${DirShell}/messages|wc -m)
    if (( no<=21 )); then
        sed -i $i"s/$/\n/" ${DirShell}/messages
        i=$((i+1))
    elif (( no>21 )); then
        nbesp=$(awk '{ x=0; x+=gsub("\\ ",""); print x }' ${DirShell}/messages | sed -n "$(($i))p")
        pos=$(($(($nbesp/2))+1))
        sed -i $i"s/ /\n/$pos" ${DirShell}/messages
        i=$((i+1))
    fi
done
kill -CONT $(pidof conky)

A small screenshot to show the result.
Last edited by Didier-T (06/11/2011 at 12:10)
Offline

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
@Didier-T We're almost there! The only piece of information left untranslated is the current weather conditions, as you can see on this screenshot: PARTLY SUNNY doesn't sound very Marseillais as a language.
The ship is sinking normally...
Offline

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
On refreshing, the translation happened: PARTLY SUNNY became Partiellement Ensoleillé (even the case changed: from all caps to first letter capitalised, the rest lower-case). All that's left now is to see how to extract the data I'm interested in, to move it over to my other conky as well. I'll be ready before access to the weather.com data closes down.
The ship is sinking normally...
Offline

olitask Re: [Conky] Alternative to weather.com (conkyforecast icons)
Hello. Moving to Ocelot, I told myself I'd improve my conky, rather basic until now. I'm quite disappointed to see the posts for the different weather services (Météo France, Accuweather, Wunderground...) all mixed together. To be honest, and without wanting to offend you because you really are moving things forward, this thread is a real shambles. I don't know whether things could be improved by grouping the info for each provider into a single post; anyway, let's move on. Back to conky: I'm trying to install Didier-T's conky from post 21 and I have a few problems:
- Some icons seem to be missing (the ones in the /conky directory; the others being scattered across various posts... not very practical)
- It doesn't seem to find the info on the site; for Calais the code is 621930, right?
- I get this error message:
sed: can't read /home/olivier/.conky//tt: No such file or directory
Conky: Unable to load image '/home/olivier/.conky/conky/j1.gif'
PS: I changed the test directory to .conky. That's it. Have a good Sunday. Olivier
Last edited by olitask (06/11/2011 at 17:33)
Offline

olitask Re: [Conky] Alternative to weather.com (conkyforecast icons)
Hello again. Just to add a site that also provides weather forecasts ('weather predictions' being a term reserved for Météo France, IMHO). It's www.windguru.cz. It's widely used for water sports (sailing, kitesurfing...). A user has already tried to do something with it: see here.
Offline

Didier-T Re: [Conky] Alternative to weather.com (conkyforecast icons)
@olitask, hello. To start with, the Météo France conky is a dead end, because the number of data fields changes as the day goes on. But here are most likely the reasons for your troubles:
The path "/home/olivier/.conky/conky/", which is used to load the Météo France icons, must be missing.
The path given in your DirShell is "$HOME/.conky/"; remove the trailing "/" and things will go better.
Yes, 621930 is indeed the code for your town.
Last edited by Didier-T (06/11/2011 at 17:53)
Offline

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
@Olitask It's normal that you find this topic a bit 'messy'. We are indeed looking for an alternative to conkyForecast, given that the data used by conkyForecast comes from the Weather.com site, and access to that data will be shut off very shortly. So we're feeling our way towards a solution that suits us. We started with the possibilities offered by Weather Underground (aka Wunderground). From there, Didier-T tried to fetch the data from the Météo France site, along the lines of what the script downloading the data from Wunderground was doing.
For my part, while continuing to test the various scripts, I found a possibility with another weather site: Accuweather. This topic is therefore something of a 'laboratory' where everyone contributes ideas and shares what they've managed to find. As things stand, the most polished conky is the one using the script that fetches Accuweather's data. You'll find the link in my post #38. Didier-T modified the script so as to have all the data in French (post #40) instead of the original English-language version. So if you want a working conky weather display in French, you can start from that. For my part, I run that one, on top of which I add a layer of the script written by Didier-T and Levi59 so as to also get the rain probability percentage (the script using the Wunderground data), since Accuweather doesn't provide those percentages in the files generated by the script. Happy conky-ing.
The ship is sinking normally...
Offline

olitask Re: [Conky] Alternative to weather.com (conkyforecast icons)
Good evening, and thanks for your reply. In fact, it was enough to create the conky directory next to /vent and /meteo. Small problem: the result gives me this: the icons are offset (image, click). What parameter do I need to change? I'm going to look into Accuweather, but you have to register on the English Ubuntu forum...
Thanks again, Olivier
Offline

Phyllinux Re: [Conky] Alternative to weather.com (conkyforecast icons)
@Didier-T and Levi59 I'm happy with the result obtained with the Accuweather scripts. Thanks to Teo for his original script, and thanks to Didier-T for the modification with the translation. I've managed to integrate the extracted data into my other conky, which gives simplified weather info. (I launch my various conkys via Nautilus scripts so as to display what I want when I want it, with a simple right-click.)
On this last one, I've added a layer of the wunderground script (thanks to Didier-T and Levi59 for their work) to get the precipitation probability percentages. Now, to be fully happy with the result, all that's left is to fix the graphical glitch I've already described (the little square that gets appended after the figure). This glitch only affects the even-numbered lines of the 'precipitation' file generated by the wunderground script. I've tried changing the font, changing the display order, nothing works. As soon as I ask it to display a percentage for the night, I get that little square. Could one of you find the cause of this bug? If I manage to remove it, I'll be 100% happy with the final result! For information, if anyone is interested, here's what it gives (with only the daytime rain probability percentages displayed, for the reason explained above). I can post the various files needed.
@Olitask Try putting {voffset -X} at the start of the line that displays the wind icons. By repeated trials, changing the value of the X in your voffset, you should manage to find the right position.
voffset (pixels) Change vertical offset by N pixels. Negative values will cause text to overlap. See also $offset.
Source: Variables for conky
The ship is sinking normally...
Offline

Didier-T Re: [Conky] Alternative to weather.com (conkyforecast icons)
Good evening, and thanks for your reply. In fact, it was enough to create the conky directory next to /vent and /meteo. Small problem: the result gives me this: the icons are offset (image, click). What parameter do I need to change? I'm going to look into Accuweather, but you have to register on the English Ubuntu forum... Thanks again, Olivier
In general, your display line looks like this:
${image "path to the image/name of the image" -p x,y -s 90x90}
So x is the left-right axis (0 being leftmost) and y is the top-bottom axis (0 being topmost). Good luck.
Offline

nc2011 Re: [Conky] Alternative to weather.com (conkyforecast icons)
Good evening Phillinux. I've just installed Ubuntu, so I don't know much about Linux. I was looking for a way to put the weather on the desktop and came across your research. For my part, I find your solution from post #41 great. My question is this: what is the procedure for installing it on the desktop? What should I do with the script? One particular point: I'm in New Caledonia, so is Nouméa covered by the weather site? Also, sorry about the times I post at, but I'm offset from you by 10 hours. Thanks.
Offline
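The placement rule above can be sketched as a conkyrc fragment; the path, coordinates and sizes here are made-up examples for illustration, not values from the thread:

```
${image ~/conky/meteo/images/conky/v1.png -p 10,120 -s 90x90}
${image ~/conky/meteo/images/conky/v2.png -p 110,120 -s 90x90}
```

Moving an icon 20 pixels up means lowering its y value by 20 (e.g. -p 10,100); moving it right means raising x.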
I have a simple question in Python. If I have a Facebook page ID, say '6127898346' for example, how can I retrieve that page's information, such as its like count, and store the result in a file?

Use a Facebook API package, like https://github.com/pythonforfacebook/facebook-sdk, available with import facebook:

import facebook

graph = facebook.GraphAPI()
page = graph.get_object('6127898346')
print '{} has {} likes.'.format(page['name'], page['likes'])

Easy way to save everything:

import json

with open('outf.json', 'w') as f:
    json.dump(page, f)
You are provided with a function Rand5(). This function returns perfectly random (equally distributed) integers between 1 and 5. Provide the function Rand7(), which uses Rand5() to produce perfectly random integers between 1 and 7.

Java - 61 chars

int rand7(){int s=0,c=7;while(c-->0)s+=rand5();return s%7+1;}

Test driver for validation:

class Rand {
    public static void main(String[] args) {
        int[] nums = new int[7];
        // get a lot of numbers
        for(int i = 0; i < 10000000; i++)
            nums[rand7()-1]++;
        // print the results
        for(int i = 0; i < 7; i++)
            System.out.println((i+1) + ": " + nums[i]);
    }

    // just for rand5()
    static java.util.Random r = new java.util.Random();
    static int rand5() {
        return r.nextInt(5)+1; // Random.nextInt(n) returns 0..n-1, so add 1
    }

    static int rand7(){int s=0,c=7;while(c-->0)s+=rand5();return s%7+1;}
}

Results

Perl - 47 (was 52) chars

sub rand7{($x=5*&rand5+&rand5-3)<24?int($x/3):&rand7}

Plus I get to use the ternary operator AND recursion. Best... day... ever!

OK, 47 chars if you use mod instead of div:

sub rand7{($x=5*&rand5+&rand5)<30?$x%7+1:&rand7}

Ruby - 54 chars (based on Dan McGrath's solution, using a loop)

def rand7;x=8;while x>7 do x=rand5+5*rand5-5 end;x;end

Ruby - 45 chars (same solution, using recursion)

def rand7;x=rand5+5*rand5-5;x>7 ?rand7: x;end

In Python:

def Rand7():
    while True:
        x = (Rand5() - 1) * 5 + (Rand5() - 1)
        if x < 21:
            return x/3 + 1

In Common Lisp, 70 characters:

(defun rand7()(let((n(-(+(rand5)(* 5(rand5)))5)))(if(> n 7)(rand7)n)))

The parentheses take up more space than I would like.

In C/C++, using rejection sampling:

int rand7(){int x=8;while(x>7)x=rand5()+5*rand5()-5;return x;}

62 characters.

Translation to PHP, from the answer posted by Dan McGrath:

function Rand7(){$x=8;while($x>7)$x=rand5()+5*rand5()-5;return $x;}

67 characters.

Translation to Javascript, from the answer posted by Dan McGrath:
function Rand7(){x=8;while(x>7)x=rand5()+5*rand5()-5;return x}

C++

unsigned int Rand4() {
    unsigned int r = Rand5();
    while(r == 5)
        r = Rand5();
    return(r);
}

unsigned int myRand2() {
    return((Rand4() - 1) & 1);
}

unsigned int Rand7() {
    return(((myRand2() << 2) | (myRand2() << 1) | myRand2()) + 1);
}

C++ (106) Golfed

int Rand7(){int n=0;for(int i=0;i<3;++i){int r=Rand5();while(r==5) r=Rand5();--r;r&=1;n|=r<<i;} return n;}

(Going for speed.)

R, 34 characters

In R (a language built for statistical computation), a deliberately cheaterish solution:

# Construct a Rand5 function
Rand5 <- function() sample(seq(5),1)

# And the golf
Rand7=function(r=Rand5())sample(1:(r/r+6),1)
# Or (same character count)
Rand7=function(r=Rand5())sample.int(r/r+6,1)
# Or even shorter (thanks to @Spacedman)
Rand7=function()sample(7)[Rand5()]

Thanks to lazy evaluation of arguments, I eliminated the semicolon and braces. Output over 10^6 replicates:

> test <- replicate(10^6,Rand7())
> table(test)
test
     1      2      3      4      5      6      7
142987 142547 143133 142719 142897 142869 142848

library(ggplot2)
qplot(test)

Clojure - 58 chars

(defn rand7[](#(if(<% 8)%(rand7))(+(rand5)(*(rand5)5)-5)))

JavaScript, 85

function Rand7(){for(x=0,i=1;i<8;x^=i*((k=Rand5())%2),i*=1+(k<5));return x?x:Rand7()}

I know there's a shorter answer, but I wanted to show the test of this puzzle. It turns out that only Clyde Lobo's answer using Dan McGrath's rejection sampling is correct (among the JS answers).

C++

int Rand7() {
    int r = Rand5();
    int n = 5;
    do {
        r = (r - 1) * 5 + Rand5();
        int m = n * 5 / 7 * 7;
        if (r <= m) {
            return r % 7 + 1;
        }
        r -= m;
        n = n * 5 - m;
    } while (1);
}

Numbers distribution (1000000 integers): the average number of calls to Rand5() per generated integer is about 2.2 (2 to 10+).

Python, 70 chars

def rand7():
    while True:
        n=5*(rand5()-1)+(rand5()-1)
        if n<21:return n%7+1

but completely correct based on the reasoning here.
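Several of the answers above golf the same rejection-sampling idea credited to Dan McGrath. An un-golfed Python sketch (rand5 is simulated here with the standard library, since the real generator is only given in the problem statement):

```python
import random

def rand5():
    # Stand-in for the provided generator: uniform integers 1..5.
    return random.randint(1, 5)

def rand7():
    # rand5() + 5*rand5() - 5 is uniform over 1..25 (a bijection with
    # the 25 equally likely (rand5, rand5) pairs). Keeping only 1..7
    # and rejecting 8..25 leaves the accepted values uniform over 1..7.
    x = 8
    while x > 7:
        x = rand5() + 5 * rand5() - 5
    return x

# Rough distribution check, mirroring the Java test driver above.
counts = [0] * 7
for _ in range(70000):
    counts[rand7() - 1] += 1
```

The expected count per bucket is 10000. The acceptance probability is 7/25, so each rand7() consumes on average 2 × 25/7 ≈ 7.1 calls to rand5(); the golfed versions trade this inefficiency for exact uniformity.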
Java, 65 chars:

int rand7(){int r;do{r=rand5()+5*rand5()-5;}while(r>7);return r;}

int result = 0;
for (int i = 0; i < 7; i++)
    if ((rand(5) + rand(5)) % 2) // check if odd
        result += 1;
return result + 1;

R (30 characters)

Define rand7:

rand7=function(n)sample(7,n,T)

Because R was written with statistical analysis in mind, this task is trivial, and I use the built-in function. Sample output:

> rand7(20)
 [1] 4 3 6 1 2 4 3 2 3 2 5 1 4 6 4 2 4 6 6 1
> rand7(20)
 [1] 1 2 5 2 6 4 6 1 7 1 1 3 7 6 4 7 4 2 1 2
> rand7(20)
 [1] 6 7 1 3 3 1 5 4 3 4 2 1 5 4 4 4 7 7 1 5

Perl, 43 chars, iterative rejection sampling

sub rand7{1while($_=5*&rand5-rand5)>6;$_+1}

This gives a warning. The following 46-char version is about three times faster:

sub rand7{1while($_=5*&rand5-rand5)>20;$_%7+1}

Groovy

rand7={if(b==null)b=rand5();(b=(rand5()+b)%7+1)}

example distribution over 35,000 iterations:

Is it bad that it's stateful?

Mathematica, 30

Rand7=Rand5[]~Sum~{7}~Mod~7+1&

Java - 66 chars

int rand7(){int s;while((s=rand5()*5+rand5())<10);return(s%7+1);}

Longer than the previous routine, but I think this one returns uniformly distributed numbers in less time.

This uses binary token encoding; therefore, here is a hexdump:

00000000 2f 72 61 6e 64 37 7b 38 7b 92 38 37 92 61 7b 92 |/rand7{8{.87.a{.|
00000010 40 7d 69 66 92 75 32 7b 72 61 6e 64 35 7d 92 83 |@}if.u2{rand5}..|
00000020 35 92 6c 92 01 35 92 a9 7d 92 65 7d 92 33 |5.l..5..}.e}.3|
0000002e

To try it out, you can also download it. Here is the ungolfed and commented code, together with testing code.

% This is the actual rand7 procedure.
/rand7{
  8{                % potentialResult
    % only if the random number is less than or equal to 7, we're done
    dup 7 le{       % result
      exit          % result
    }if             % potentialResult
    pop             % -/-
    2{rand5}repeat  % randomNumber1 randomNumber2
    5 mul add 5 sub % randomNumber1 + 5*randomNumber2 - 5 = potentialResult
  }loop
}def

%Now, some testing code.
% For testing, we use the built-in rand operator;
% Doesn't really give a 100% even distribution as it returns numbers
% from 0 to 2^31-1, which is of course not divisible by 5.
/rand5 {
  rand 5 mod 1 add
}def

% For testing, we initialize a dict that counts the number of times any number
% has been returned. Of course, we start the count at 0 for every number.
<<1 1 7{0}for>>begin

% Now we're calling the function quite a number of times
% and increment the counters accordingly.
1000000 {
  rand7 dup load 1 add def
}repeat

% Print the results
currentdict{
  2 array astore ==
}forall

Java - 54

int m=0;
int rand7(){return(m=m*5&-1>>>1|rand5())%7+1;}

Distribution test:

Algorithm:

The numbers are not mutually uncorrelated anymore, but individually perfectly random.

How about this?

int Rand7()
{
    return Rand5() + Rand5()/2;
}

static unsigned int gi = 0;

int rand7()
{
    return (((rand() % 5 + 1) + (gi++ % 7)) % 7) + 1;
}

//call this seed before rand7
//maybe it's not the best seed; if you have any good ideas, please tell me
//and thanks JiminP again, he reminded me to do this
void srand7()
{
    int i, n = time(0);
    for (i = 0; i < n % 7; i++)
        rand7();
}

srand7() is the seed for rand7; it must be called before rand7, just like calling srand before rand in C. This is a very good one, because it calls rand() only once, has no loop, and uses no extra memory. Let me explain it: consider an integer array of size 5. We get a table in which each of 1-7 appears 5 times among 35 numbers in all, so the probability of each number is 5/35 = 1/7. After enough calls, we get a uniform distribution over 1-7. So we can keep an array holding five of the elements of 1-7, rotated left each time, and take one number from the array on each call using rand5. Alternatively, we can generate all seven arrays in advance and use them cyclically. The code for that is simple too; many short programs can do it.
But we can use the properties of the % operation, so rows 1-7 of the table are equivalent to (rand5 + i) % 7, that is:
a = rand() % 5 + 1 is rand5 in the C language,
b = gi++ % 7 generates all the permutations in the table above, with 0-6 standing in for 1-7,
c = (a + b) % 7 + 1 generates 1-7 uniformly.
Finally, we get this code: (((rand() % 5 + 1) + (gi++ % 7)) % 7) + 1
But we cannot get 6 or 7 on the first call, so we need a seed, something like srand for rand in C/C++, to shuffle the permutation before the first real call. Here is the full testing code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static unsigned int gi = 0;

//a = rand() % 5 + 1 is rand5 in C language,
//b = gi++ % 7 generates all permutations,
//c = (a + b) % 7 + 1, generates 1 - 7 uniformly.
//Don't forget to call srand7 before rand7
int rand7()
{
    return (((rand() % 5 + 1) + (gi++ % 7)) % 7) + 1;
}

//call this seed before rand7
//maybe it's not the best seed; if you have any good ideas, please tell me
//and thanks JiminP again, he reminded me to do this
void srand7()
{
    int i, n = time(0);
    for (i = 0; i < n % 7; i++)
        rand7();
}

void main(void)
{
    unsigned int result[10] = {0};
    int k;

    srand((unsigned int)time(0)); //initialize the seed for rand
    srand7(); //initialize rand7

    for (k = 0; k < 100000; k++)
        result[rand7() - 1]++;

    for (k = 0; k < 7; k++)
        printf("%d : %.05f\n", k + 1, (float)result[k]/100000);
}

Do not do combinations of rand5 operations: 1. Any single expression with one rand5 will give 5 results, not 7; we 2.
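The table argument in the answer above can be sketched in Python. The offset counter cycles through all 7 shifts, so across a full cycle each residue is reachable equally often; the price, as the answer itself notes, is that successive outputs are correlated through the hidden state. rand5 is again simulated with the standard library:

```python
import random

gi = 0  # hidden offset, cycles through 0..6

def rand5():
    # Stand-in for the provided generator: uniform integers 1..5.
    return random.randint(1, 5)

def rand7():
    # rand5()-1 is uniform over 5 consecutive residues mod 7; adding an
    # offset that cycles through all 7 values makes each residue appear
    # 5 times in every block of 35 (offset, value) states, i.e. 1/7 each.
    global gi
    r = (rand5() - 1 + gi) % 7 + 1
    gi = (gi + 1) % 7
    return r

counts = [0] * 7
for _ in range(70000):      # a multiple of the 35-state cycle
    counts[rand7() - 1] += 1
```

Each bucket has expectation 10000 here, so the marginal distribution is uniform even though consecutive draws are not independent.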
274 digits

4444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444000000000000000000000000000000000000000000000000000000000000000000000000000000001111111111111111111111111111111

This took about 20 hours of CPU time to find, and about 2 minutes per prime to prove. In contrast, the 84 digit solution can be found in around 3 minutes.

84 digits

444444444444444444444444444444444444444444444444441111111113333333333333333333333333

77777777999999999999999777777777 (32 digits)
66666666666666622222222222222333 (32 digits)
647777777777777777777777777 (27 digits)
44444441333333333333 (20 digits)
999996677777777777 (18 digits)
167777777777777 (15 digits)

I recommend this tool if you want to confirm primality: D. Alpern's ECM Applet

Also using a repdigit approach, which seems to be the approach most likely to find large values. The following script algorithmically skips over most numbers or truncations which will result in multiples of 2, 3, 5 and now 11 c/o PeterTaylor (his contribution increased the efficiency by approximately 50%).
from my_math import is_prime

sets = [
    (set('147'), set('0147369'), set('1379')),
    (set('369'), set('147'), set('1379')),
    (set('369'), set('0369'), set('17')),
    (set('258'), set('0258369'), set('39')),
    (set('369'), set('258'), set('39'))]

div2or5 = set('024568')

for n in range(3, 100):
    for sa, sb, sc in sets:
        for a in sa:
            for b in sb-set([a]):
                bm1 = int(b in div2or5)
                for c in sc-set([b]):
                    if int(a+b+c)%11 == 0: continue
                    for na in xrange(1, n-1, 1+(n&1)):
                        eb = n - na
                        for nb in xrange(1, eb-bm1, 1+(~eb&1)):
                            nc = eb - nb
                            if not is_prime(long(a*(na-1) + b*nb + c*nc)): continue
                            if not is_prime(long(a*na + b*(nb-1) + c*nc)): continue
                            if not is_prime(long(a*na + b*nb + c*(nc-1))): continue
                            if not is_prime(long(a*na + b*nb + c*nc)): continue
                            print a*na + b*nb + c*nc

my_math.py can be found here: http://codepad.org/KtXsydxK

Alternatively, you could also use the gmpy.is_prime function: GMPY Project

Some small speed improvements as a result of profiling: the primality check for the longest of the four candidates has been moved to the end, xrange replaces range, and long replaces int type casts. int seems to have unnecessary overhead if the evaluated expression results in a long.

Divisibility Rules

Let N be a positive integer of the form a...ab...bc...c, where a, b and c are repeated digits.

By 2 and 5 - To avoid divisibility by 2 and 5, c may not be in the set [0, 2, 4, 5, 6, 8]. Additionally, if b is a member of this set, the length of c may be no less than 2.

By 3 - If N = 1 (mod 3), then N may not contain any of [1, 4, 7], as removing any of these would trivially result in a multiple of 3. Likewise for N = 2 (mod 3) and [2, 5, 8]. This implementation uses a slightly weakened form of this: if N contains one of [1, 4, 7], it may not contain any of [2, 5, 8], and vice versa. Additionally, N may not consist solely of [0, 3, 6, 9]. This is largely an equivalent statement, but it does allow for some trivial cases, for example a, b, and c each being repeated a multiple of 3 times.
By 11 - As PeterTaylor notes, if N is of the form aabbcc...xxyyzz, that is, it consists only of digits repeated an even number of times, it is trivially divisible by 11: a0b0c...x0y0z. This observation eliminates half of the search space: if N is of odd length, then the runs of a, b and c must all be of odd length as well (a 75% search-space reduction), and if N is of even length, then only one of the runs of a, b or c may be of even length (a 25% search-space reduction).

- Conjecture: if abc is a multiple of 11, for example 407, then all odd repetitions of a, b and c will also be multiples of 11. This falls outside the scope of the divisibility-by-11 rule above; in fact, odd repetitions are exactly the cases that rule explicitly allows. I don't have a proof for this, but systematic testing was unable to find a counter-example. Compare: 444077777, 44444000777, 44444440000077777777777, etc. Anyone may feel free to prove or disprove this conjecture. Update: aditsu has since demonstrated it to be correct.

Other Forms

2 sets of repeated digits

Numbers of the form that randomra was pursuing, a...ab...b, seem to be much more rare. There are only 7 solutions less than 10^1700, the largest of which is 12 digits in length.

4 sets of repeated digits

Numbers of this form, a...ab...bc...cd...d, appear to be more densely distributed than those I was searching for. There are 69 solutions less than 10^100, compared to the 32 using 3 sets of repeated digits.
Those between 10^11 and 10^100 are as follows:

190000007777
700000011119
955666663333
47444444441111
66666622222399
280000000033333
1111333333334999
1111333333377779
1199999999900111
3355555666999999
2222233333000099
55555922222222233333
444444440004449999999
3366666633333333377777
3333333333999888883333
4441111113333333333311111
2222222293333333333333999999
999999999339999999977777777777
22222226666666222222222299999999
333333333333333333339944444444444999999999
559999999999933333333333339999999999999999
3333333333333333333111111111111666666666611111
11111111333330000000000000111111111111111111111
777777777770000000000000000000033333339999999999999999999999999
3333333333333333333333333333333333333333333333336666666977777777777777
666666666666666666611111113333337777777777777777777777777777777777777777
3333333333333333333888889999999999999999999999999999999999999999999999999933333333

There's a simple heuristic argument as to why this should be the case. For each digit length, there is a number of repeated sets (i.e. 3 repeated sets, or 4 repeated sets, etc.) for which the expected number of solutions is highest. The transition occurs when the gain in additional possible solutions, taken as a ratio, outweighs the drop in the probability that the additional number to be checked is prime. Given the exponential growth in the number of possibilities to check, and the logarithmic nature of the prime number distribution, this happens relatively quickly. If, for example, we wanted to find a 300 digit solution, checking 4 sets of repeated digits would be far more likely to produce a solution than 3 sets, and 5 sets would be more likely still. However, with the computing power I have at my disposal, finding a solution much larger than 100 digits with 4 sets would be outside of my capacity, let alone 5 or 6 sets.
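The divisibility observations earlier in this answer are easy to spot-check numerically. A minimal sketch (Python 3 syntax, independent of the search script above) verifying the mod-3 rule and the repetition conjecture for 407:

```python
# Mod-3 rule: if a digit d of N satisfies d = N (mod 3) and N is not
# itself a multiple of 3, deleting that digit leaves a multiple of 3,
# because the digit sum drops by d and N's digit sum is congruent to N.
for n in range(10, 1000):
    if n % 3 == 0:
        continue
    s = str(n)
    for i, d in enumerate(s):
        if int(d) % 3 == n % 3:
            assert int(s[:i] + s[i+1:]) % 3 == 0

# Conjecture for 11 (since proved by aditsu): 407 is a multiple of 11,
# and every combination of odd repetition counts of its digits is too,
# e.g. 444077777 and 44444000777.
for i in range(1, 12, 2):
    for j in range(1, 12, 2):
        for k in range(1, 12, 2):
            assert int('4' * i + '0' * j + '7' * k) % 11 == 0
```

The second check works because 10 = -1 (mod 11), so an odd-length repunit is congruent to 1 and the whole number reduces to a - b + c, the same residue as the three-digit number abc.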
I am using a set operation in Python to perform a symmetric difference between two numpy arrays. The result, however, is a set, and I need to convert it back to a numpy array to move forward. Is there a way to do this?

Here's what I tried:

a = numpy.array([1,2,3,4,5,6])
b = numpy.array([2,3,5])
c = set(a) ^ set(b)

The result is a set:

In [27]: c
Out[27]: set([1, 4, 6])

If I convert it to a numpy array, it places the entire set in the first array element:

In [28]: numpy.array(c)
Out[28]: array(set([1, 4, 6]), dtype=object)

What I need, however, is this:

array([1, 4, 6], dtype=int)

I could loop over the elements to convert them one by one, but I will have 100,000 elements and hoped for a built-in function to avoid the loop. Thanks!
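A sketch of two approaches that avoid a Python-level loop, assuming integer arrays like those in the question: numpy.fromiter builds an array directly from any iterable (including a set), while numpy.setxor1d computes the symmetric difference of two arrays without going through sets at all:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6])
b = np.array([2, 3, 5])

# Option 1: build an array straight from the set; count preallocates.
c = set(a) ^ set(b)
arr = np.fromiter(c, dtype=int, count=len(c))

# Option 2: skip sets entirely; setxor1d returns the sorted unique
# values that appear in exactly one of the two input arrays.
arr2 = np.setxor1d(a, b)
```

Note that setxor1d always returns a sorted result, whereas fromiter preserves the set's (arbitrary) iteration order, so sort afterwards if ordering matters.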