<p>I just started playing with Python and was hoping to get some feedback regarding the quality of the following snippet. Does this look like proper Python? What would you change? Is this small script well structured?</p> <p>Quick description of functional goal: reorder list such that each element is followed by the value closest to it, starting from 20 (AKA shortest-seek-first algorithm).</p> <p>example: <code>[15, 24, 12, 13, 48, 56, 2]</code><br> becomes: <code>[24, 15, 13, 12, 2, 48, 56]</code></p> <p>better example: <code>[22, 19, 23, 18, 17, 16]</code><br> becomes: <code>[19, 18, 17, 16, 22, 23]</code></p> <pre><code>fcfs_working = [15, 24, 12, 13, 48, 56, 2] fcfs_track = [] track_number = 20 while len(fcfs_working) &gt; 0: track_number = min(fcfs_working, key=lambda x:abs(x-track_number)) fcfs_working.remove(track_number) fcfs_track.append(track_number) </code></pre>
5548
GOOD_ANSWER
{ "AcceptedAnswerId": "5554", "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2011-10-18T05:02:50.907", "Id": "5548", "Score": "10", "Tags": [ "python" ], "Title": "Reorder list such that each element is followed by the value closest to it" }
{ "body": "<pre><code>while len(fcfs_working) &gt; 0:\n</code></pre>\n\n<p>is the same as</p>\n\n<pre><code>while fcfs_working:\n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2012-11-20T14:58:03.417", "Id": "30048", "Score": "0", "body": "@GarethRees, you're right. I don't recall what I was thinking at the time." } ], "meta_data": { "CommentCount": "1", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2011-10-24T03:39:44.840", "Id": "5554", "ParentId": "5548", "Score": "4" } }
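For concreteness, here is the original snippet with this suggestion applied (a tidy-up sketch of my own, not part of the answer):

```python
# The OP's loop with the truthiness test applied: a non-empty list is
# truthy, so `while fcfs_working:` replaces `while len(fcfs_working) > 0:`.
fcfs_working = [15, 24, 12, 13, 48, 56, 2]
fcfs_track = []
track_number = 20
while fcfs_working:
    # The lambda sees the track_number chosen on the previous pass.
    track_number = min(fcfs_working, key=lambda x: abs(x - track_number))
    fcfs_working.remove(track_number)
    fcfs_track.append(track_number)
print(fcfs_track)  # [24, 15, 13, 12, 2, 48, 56]
```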
<p>I just started playing with Python and was hoping to get some feedback regarding the quality of the following snippet. Does this look like proper Python? What would you change? Is this small script well structured?</p> <p>Quick description of functional goal: reorder list such that each element is followed by the value closest to it, starting from 20 (AKA shortest-seek-first algorithm).</p> <p>example: <code>[15, 24, 12, 13, 48, 56, 2]</code><br> becomes: <code>[24, 15, 13, 12, 2, 48, 56]</code></p> <p>better example: <code>[22, 19, 23, 18, 17, 16]</code><br> becomes: <code>[19, 18, 17, 16, 22, 23]</code></p> <pre><code>fcfs_working = [15, 24, 12, 13, 48, 56, 2] fcfs_track = [] track_number = 20 while len(fcfs_working) &gt; 0: track_number = min(fcfs_working, key=lambda x:abs(x-track_number)) fcfs_working.remove(track_number) fcfs_track.append(track_number) </code></pre>
5548
WEIRD
{ "AcceptedAnswerId": "5554", "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2011-10-18T05:02:50.907", "Id": "5548", "Score": "10", "Tags": [ "python" ], "Title": "Reorder list such that each element is followed by the value closest to it" }
{ "body": "<p>I'm not familiar with the shortest-seek-first algorithm. However, it may work to calculate the distances of each number in the list from the current position, sort that and then output the results. Consequently:</p>\n\n<pre><code>working = [15, 24, 12, 13, 48, 56, 2]\nposition = 20\ndistances = [(abs(position-x), x) for x in working]\n&gt;&gt; [(5, 15), (4, 24), (8, 12), (7, 13), (28, 48), (36, 56), (18, 2)]\ndistances.sort()\n&gt;&gt; [(4, 24), (5, 15), (7, 13), (8, 12), (18, 2), (28, 48), (36, 56)]\n[x[1] for x in distances]\n&gt;&gt; [24, 15, 13, 12, 2, 48, 56]\n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2012-11-24T16:42:53.053", "Id": "30255", "Score": "0", "body": "not working as expected, for example: `working = [15, 24, 12, 13, 25, 56, 2]` results with `[24, 15, 25, 13, 12, 2, 56]`" } ], "meta_data": { "CommentCount": "1", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2012-11-20T21:22:58.190", "Id": "18860", "ParentId": "5548", "Score": "0" } }
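A small sketch (mine, not from the answer) makes the comment's counterexample concrete: sorting once measures every distance from the fixed starting position, while shortest-seek-first re-measures from each newly chosen position, so the two diverge as soon as a later pick lies closer to the previous pick than to 20.

```python
def sort_once(working, position):
    # The answer's approach: one sort by distance from the start position.
    return [x for _, x in sorted((abs(position - x), x) for x in working)]

def iterative(working, position):
    # Shortest-seek-first: distance is re-measured from each new position.
    working, result = list(working), []
    while working:
        position = min(working, key=lambda x: abs(x - position))
        working.remove(position)
        result.append(position)
    return result

working = [15, 24, 12, 13, 25, 56, 2]
print(sort_once(working, 20))  # [24, 15, 25, 13, 12, 2, 56]
print(iterative(working, 20))  # [24, 25, 15, 13, 12, 2, 56]
```

After picking 24, the value 25 is only 1 away from the new position, which is why the iterative order places it second.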
<p>I just started playing with Python and was hoping to get some feedback regarding the quality of the following snippet. Does this look like proper Python? What would you change? Is this small script well structured?</p> <p>Quick description of functional goal: reorder list such that each element is followed by the value closest to it, starting from 20 (AKA shortest-seek-first algorithm).</p> <p>example: <code>[15, 24, 12, 13, 48, 56, 2]</code><br> becomes: <code>[24, 15, 13, 12, 2, 48, 56]</code></p> <p>better example: <code>[22, 19, 23, 18, 17, 16]</code><br> becomes: <code>[19, 18, 17, 16, 22, 23]</code></p> <pre><code>fcfs_working = [15, 24, 12, 13, 48, 56, 2] fcfs_track = [] track_number = 20 while len(fcfs_working) &gt; 0: track_number = min(fcfs_working, key=lambda x:abs(x-track_number)) fcfs_working.remove(track_number) fcfs_track.append(track_number) </code></pre>
5548
GOOD_ANSWER
{ "AcceptedAnswerId": "5554", "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2011-10-18T05:02:50.907", "Id": "5548", "Score": "10", "Tags": [ "python" ], "Title": "Reorder list such that each element is followed by the value closest to it" }
{ "body": "<p>From the algorithmic perspective, I'd like to suggest another solution (regardless to Python as a language).</p>\n\n<p>Here goes:</p>\n\n<p>let's assume that all items in the given array are different. the other case has a similar solution, so let's focus on the algorithm itself.</p>\n\n<ol>\n<li>Find the nearest number, in terms of distance, from the array to the given external number (20).</li>\n<li>sort the given array.</li>\n<li>find the value in (1) in the array. it can be done using a binary search.</li>\n<li>now, to the main point: the next value would be either to the left of the current value or to the right of the current value, since the distance is now between the next value to the current one, and the array is sorted (!).</li>\n<li>so all you have to do is to hold two additional indexes: inner-left and inner-right. these indexes represent the inner boundaries of the \"hole\" that is being created while constructing the new list.</li>\n</ol>\n\n<p>Demonstration:</p>\n\n<p>original array:</p>\n\n<pre>\n[22, 19, 23, 18, 17, 8]\n</pre>\n\n<p>sorted:</p>\n\n<pre>\n[8, 17, 18, 19, 22, 23]\n</pre>\n\n<p>first element is 19, since abs(20-19) = 1 is the nearest distance to 20.<br>\nlet's find 19 in the sorted array:</p>\n\n<pre>\n[8, 17, 18, 19, 22, 23]\n ^\n</pre>\n\n<p>now the next element would be either 18 or 22, since the array is already sorted:</p>\n\n<pre>\n[8, 17, 18, 19, 22, 23]\n ? 
^ ?\n</pre>\n\n<p>so now we have those inner-left and inner-right indexes, let's label them as \"il\" and \"ir\".</p>\n\n<pre>\n[8, 17, 18, 19, 22, 23]\n il ^ ir\n\nresult list: [19]\n</pre>\n\n<p>since 18 is closer to 19 than 22, we take it, and shift il to the left:</p>\n\n<pre>\n[8, 17, 18, 19, 22, 23]\n il ^ ir\n\nresult list: [19, 18]\n</pre>\n\n<p>again, now 17 is closer to 18 than 22</p>\n\n<pre>\n[8, 17, 18, 19, 22, 23]\n il ^ ir\n\nresult list: [19, 18, 17]\n</pre>\n\n<p>now 22 is closer to 17 than 8, so we'd shift the right-index:</p>\n\n<pre>\n[8, 17, 18, 19, 22, 23]\n il ^ ir\n\nresult list: [19, 18, 17, 22]\n</pre>\n\n<p>and the rest is obvious.</p>\n\n<p>Again, this has nothing to do with Python in particular. It's purely an algorithm to be implemented in any language.</p>\n\n<p>This algorithm takes O(n log(n)) in time, while the one suggested by the original poster takes O(n^2) in time.</p>\n", "comments": [], "meta_data": { "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2012-11-23T09:51:18.787", "Id": "18926", "ParentId": "5548", "Score": "3" } }
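The two-pointer algorithm above can be sketched in Python as follows (my own translation, assuming distinct track numbers as the answer does): sort once, locate the starting boundary with a binary search, then repeatedly take the closer of the two boundary elements.

```python
import bisect

def ssf_order(tracks, start):
    """Shortest-seek-first via sort + two pointers: O(n log n)."""
    s = sorted(tracks)
    right = bisect.bisect_left(s, start)  # first element >= start ("ir")
    left = right - 1                      # last element < start ("il")
    result, pos = [], start
    # While both boundaries are live, take whichever side is closer.
    while left >= 0 and right < len(s):
        if abs(s[left] - pos) <= abs(s[right] - pos):
            pos = s[left]
            left -= 1
        else:
            pos = s[right]
            right += 1
        result.append(pos)
    # One side is exhausted; drain the other in seek order.
    result.extend(s[left::-1] if left >= 0 else s[right:])
    return result

print(ssf_order([22, 19, 23, 18, 17, 8], 20))  # [19, 18, 17, 22, 23, 8]
```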
<p>I'm working on a feature for the <a href="http://github.com/jessemiller/HamlPy" rel="nofollow">HamlPy (Haml for Django)</a> project:</p> <h2>About Haml</h2> <p>For those who don't know, Haml is an indentation-based markup language which compiles to HTML:</p> <pre><code>%ul#atheletes - for athelete in athelete_list %li.athelete{'id': 'athelete_{{ athelete.pk }}'}= athelete.name </code></pre> <p>compiles to</p> <pre><code>&lt;ul id='atheletes'&gt; {% for athelete in athelete_list %} &lt;li class='athelete' id='athelete_{{ athelete.pk }}'&gt;{{ athelete.name }}&lt;/li&gt; {% endfor %} &lt;/ul&gt; </code></pre> <h2>The code</h2> <p><code>{'id': 'athelete_{{ athelete.pk }}'}</code> is referred to as the 'attribute dictionary'. It is an (almost) valid Python dictionary and is currently parsed with some very ugly regular expressions and an <code>eval()</code>. However, I would like to add some features to it that would no longer make it a valid Python dictionary, e.g. using Haml within the attributes:</p> <pre><code>%a.link{ 'class': - if forloop.first link-first - else - if forloop.last link-last 'href': - url some_view } </code></pre> <p>among other things.</p> <p>I began by writing a class which I could swap out for the eval and would pass all of the current tests: </p> <pre><code>import re # Valid characters for dictionary key re_key = re.compile(r'[a-zA-Z0-9-_]+') re_nums = re.compile(r'[0-9\.]+') class AttributeParser: """Parses comma-separated HamlPy attribute values""" def __init__(self, data, terminator): self.terminator=terminator self.s = data.lstrip() # Index of current character being read self.ptr=1 def consume_whitespace(self, include_newlines=False): """Moves the pointer to the next non-whitespace character""" whitespace = (' ', '\t', '\r', '\n') if include_newlines else (' ', '\t') while self.ptr&lt;len(self.s) and self.s[self.ptr] in whitespace: self.ptr+=1 return self.ptr def consume_end_of_value(self): # End of value comma or end of string 
self.ptr=self.consume_whitespace() if self.s[self.ptr] != self.terminator: if self.s[self.ptr] == ',': self.ptr+=1 else: raise Exception("Expected comma for end of value (after ...%s), but got '%s' instead" % (self.s[max(self.ptr-10,0):self.ptr], self.s[self.ptr])) def read_until_unescaped_character(self, closing, pos=0): """ Moves the dictionary string starting from position *pos* until a *closing* character not preceded by a backslash is found. Returns a tuple containing the string which was read (without any preceding backslashes) and the number of characters which were read. """ initial_pos=pos while pos&lt;len(self.s): if self.s[pos]==closing and (pos==initial_pos or self.s[pos-1]!='\\'): break pos+=1 return (self.s[initial_pos:pos].replace('\\'+closing,closing), pos-initial_pos+1) def parse_value(self): self.ptr=self.consume_whitespace() # Invalid initial value val=False if self.s[self.ptr]==self.terminator: return val # String if self.s[self.ptr] in ("'",'"'): quote=self.s[self.ptr] self.ptr += 1 val,characters_read = self.read_until_unescaped_character(quote, pos=self.ptr) self.ptr += characters_read # Django variable elif self.s[self.ptr:self.ptr+2] == '={': self.ptr+=2 val,characters_read = self.read_until_unescaped_character('}', pos=self.ptr) self.ptr += characters_read val="{{ %s }}" % val # Django tag elif self.s[self.ptr:self.ptr+2] in ['-{', '#{']: self.ptr+=2 val,characters_read = self.read_until_unescaped_character('}', pos=self.ptr) self.ptr += characters_read val=r"{%% %s %%}" % val # Boolean Attributes elif self.s[self.ptr:self.ptr+4] in ['none','None']: val = None self.ptr+=4 # Integers and floats else: match=re_nums.match(self.s[self.ptr:]) if match: val = match.group(0) self.ptr += len(val) if val is False: raise Exception("Failed to parse dictionary value beginning at: %s" % self.s[self.ptr:]) self.consume_end_of_value() return val class AttributeDictParser(AttributeParser): """ Parses a Haml element's attribute dictionary string and 
provides a Python dictionary of the element attributes """ def __init__(self, s): AttributeParser.__init__(self, s, '}') self.dict={} def parse(self): while self.ptr&lt;len(self.s)-1: key = self.__parse_key() # Tuple/List parsing self.ptr=self.consume_whitespace() if self.s[self.ptr] in ('(', '['): tl_parser = AttributeTupleAndListParser(self.s[self.ptr:]) val = tl_parser.parse() self.ptr += tl_parser.ptr self.consume_end_of_value() else: val = self.parse_value() self.dict[key]=val return self.dict def __parse_key(self): '''Parse key variable and consume up to the colon''' self.ptr=self.consume_whitespace(include_newlines=True) # Consume opening quote quote=None if self.s[self.ptr] in ("'",'"'): quote = self.s[self.ptr] self.ptr += 1 # Extract key if quote: key,characters_read = self.read_until_unescaped_character(quote, pos=self.ptr) self.ptr+=characters_read else: key_match = re_key.match(self.s[self.ptr:]) if key_match is None: raise Exception("Invalid key beginning at: %s" % self.s[self.ptr:]) key = key_match.group(0) self.ptr += len(key) # Consume colon ptr=self.consume_whitespace() if self.s[self.ptr]==':': self.ptr+=1 else: raise Exception("Expected colon for end of key (after ...%s), but got '%s' instead" % (self.s[max(self.ptr-10,0):self.ptr], self.s[self.ptr])) return key def render_attributes(self): attributes=[] for k, v in self.dict.items(): if k != 'id' and k != 'class': # Boolean attributes if v==None: attributes.append( "%s" % (k,)) else: attributes.append( "%s='%s'" % (k,v)) return ' '.join(attributes) class AttributeTupleAndListParser(AttributeParser): def __init__(self, s): if s[0]=='(': terminator = ')' elif s[0]=='[': terminator = ']' AttributeParser.__init__(self, s, terminator) def parse(self): lst=[] # Todo: Must be easier way... 
val=True while val != False: val = self.parse_value() if val != False: lst.append(val) self.ptr +=1 if self.terminator==')': return tuple(lst) else: return lst </code></pre> <p>The class can be used stand-alone as follows:</p> <pre><code>&gt;&gt;&gt; from attribute_dict_parser import AttributeDictParser &gt;&gt;&gt; a=AttributeDictParser("{'id': 'a', 'class': 'b'}") &gt;&gt;&gt; d=a.parse() &gt;&gt;&gt; d {'id': 'a', 'class': 'b'} &gt;&gt;&gt; type(d) &lt;type 'dict'&gt; </code></pre> <p><code>AttributeDictParser</code> iterates through characters in <code>s</code> (the attribute dictionary) and uses the variable <code>ptr</code> to track its location (to prevent unnecessary string slicing). The function <code>parse_key</code> parses the keys (<code>'id':</code> and <code>'class':</code>), and the function <code>parse_value</code> parses the values (<code>'a'</code> and <code>'b'</code>). <code>parse_value</code> works with data types other than strings. It returns <code>False</code> if it reaches the end of the attribute dictionary, because <code>None</code> is a valid value to return.</p> <p><code>AttributeTupleAndListParser</code> parses list and tuple values, as these are valid values (e.g. <code>{'id': ['a','b','c']}</code>). </p> <p>Both of these classes inherit from <code>AttributeParser</code> because they share the same way of parsing values.</p> <h2>Questions:</h2> <ol> <li><p>Is this a sensible approach? Am I insane to think that I can move from <code>eval()</code>ing the code as a Python dictionary to a custom parser without causing issues for users, just because it passes the tests?</p></li> <li><p>I'm worried that the performance hit of writing a parser in an interpreted language will be too much compared to doing the <code>eval()</code>. I've written similar things before for parsing JSON expressions, and was dismayed that for all my optimisations, the two-line regular expression won on the benchmarks. 
I will do some profiling on it once I've tidied up a few things. Are there any notable inefficiencies in my approach?</p></li> <li><p>There are some things in the old parser that have not been ported to the new one (e.g. supporting the Ruby Haml <code>=&gt;</code> syntax). This feature has never been documented, however, and I doubt anybody knows it's there. What is a good rule of thumb for breaking an undocumented feature in open source projects?</p></li> <li><p>I would welcome any feedback on my coding style, as I don't get to be around other developers much.</p></li> </ol>
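The lexer/parser split that might replace the `eval()` can be sketched compactly with a single token regex (my own illustration; the token set and names are invented, not HamlPy's API, and backslash-unescaping is omitted for brevity):

```python
import re

# One regex matches the next token: a structural delimiter or a
# quoted string (with backslash escapes allowed inside the quotes).
TOKEN_RE = re.compile(r"""
    \s*(?:
      (?P<delim>[{}\[\]:,])          # structural delimiter
    | '(?P<sq>(?:\\.|[^\\'])*)'      # single-quoted string
    | "(?P<dq>(?:\\.|[^\\"])*)"      # double-quoted string
    )""", re.VERBOSE)

def tokenize(s):
    """Yield ('delim', c) and ('str', text) tokens from s."""
    pos = 0
    while pos < len(s):
        m = TOKEN_RE.match(s, pos)
        if not m:
            raise SyntaxError('unexpected character at %d' % pos)
        pos = m.end()
        kind = m.lastgroup  # name of the alternative that matched
        yield ('delim' if kind == 'delim' else 'str', m.group(kind))

print(list(tokenize("{'id': 'a'}")))
# [('delim', '{'), ('str', 'id'), ('delim', ':'), ('str', 'a'), ('delim', '}')]
```

A recursive-descent parser consuming this token stream replaces the hand-rolled pointer bookkeeping of `consume_whitespace` and friends.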
15395
GOOD_ANSWER
{ "AcceptedAnswerId": "15450", "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2012-09-06T22:45:31.233", "Id": "15395", "Score": "5", "Tags": [ "python", "parsing", "django", "haml" ], "Title": "Python parser for attributes in a HAML template" }
{ "body": "<h3>1. Answers to your questions</h3>\n<ol>\n<li><p>If the goal of the project is to be able to include Haml in attribute values, then you've got no choice but to switch to your own parser. I haven't looked at the set of test cases, but it does seem plausible that you are going to introduce incompatibilities because of the complexity of Python's own parser. You are going to find that you have users who used the oddities of Python's string syntax (<code>r</code>-strings, <code>\\u</code>-escapes and all).</p>\n<p>The way to manage the transition from the old parser to the new is to start out by shipping both, with the old parser selected by default, but the new parser selectable with an option. This gives your users time to discover the incompatibilities and fix them (or submit bug reports). Then in a later release make the new parser the default, but with the old parser available but deprecated. Finally, remove the old parser.</p>\n</li>\n<li><p>Correctness and simplicity first, speed later. You can always port the parser to C if nothing else will do.</p>\n</li>\n<li><p>My answer to question 1 applies here too.</p>\n</li>\n<li><p>See below.</p>\n</li>\n</ol>\n<h3>2. Designing a parser</h3>\n<p>Now, let's look at the code. I thought about making a series of comments on the various misfeatures, but that seems less than helpful, given that the whole design of the parser isn't quite right:</p>\n<ol>\n<li><p>There's no separation between the lexer and the parser.</p>\n</li>\n<li><p>You have different classes for different productions in your syntax, so that each time you need to parse a tuple/list, you construct a new <code>AttributeTupleAndListParser</code> object, construct a string for it to parse (by copying the tail of the original string), and then throw away the parser object when done.</p>\n</li>\n<li><p>Some of your parsing methods don't seem well-matched to the syntax of the language, making it difficult to understand what they do. 
<code>consume_end_of_value</code> is a good example: it doesn't seem to correspond to anything natural in the syntax.</p>\n</li>\n</ol>\n<p>Computer science is by no means a discipline with all the answers, but one thing that we know how to do is write a parser! You don't have to have read <a href=\"http://en.wikipedia.org/wiki/Compilers:_Principles,_Techniques,_and_Tools\" rel=\"noreferrer\">the dragon book</a> from cover to cover to know that it's conventional to develop a <a href=\"http://en.wikipedia.org/wiki/Formal_grammar\" rel=\"noreferrer\"><em>formal grammar</em></a> for your language (often written down in a variation on <a href=\"http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form\" rel=\"noreferrer\">Backus–Naur form</a>), and then split your code into a <a href=\"http://en.wikipedia.org/wiki/Lexical_analysis\" rel=\"noreferrer\"><em>lexical analyzer</em></a> (which transforms source code into <em>tokens</em> using a finite state machine or something similar) and a <a href=\"http://en.wikipedia.org/wiki/Parsing\" rel=\"noreferrer\"><em>parser</em></a> which takes a stream of tokens and constructs a <em>syntax tree</em> or some other form of output based on the syntax of the input.</p>\n<p>Sticking to this convention has a bunch of advantages: the existence of a formal grammar makes it easier to build compatible implementations; you can modify and test the lexical analyzer independently from the parser and vice versa; and other programmers will find it easier to understand and modify your code.</p>\n<h3>3. Rewriting your parser conventionally</h3>\n<p>Here's how I might start rewriting your parser to use the conventional approach. This implements a deliberately incomplete subset of the HamlPy attribute language, in order to keep the code short and to get it finished in a reasonable amount of time.</p>\n<p>First, a class whose instances represent source tokens. 
The original string and position of each token is recorded in that token so that we can easily produce an error message related to that token. I've used the built-in exception <code>SyntaxError</code> here so that the error messages match those from other Python libraries. (You might later want to extend this so that the class can represent tokens from files as well as tokens from strings.)</p>\n<pre><code>class Token(object):\n &quot;&quot;&quot;\n An object representing a token in a HamlPy document. Construct it\n using `Token(type, value, source, start, end)` where:\n\n `type` is the token type (`Token.DELIMITER`, `Token.STRING`, etc);\n `value` is the token value;\n `source` is the string from which the token was taken;\n `start` is the character position in `source` where the token starts;\n `ends` is the character position in `source` where the token finishes.\n &quot;&quot;&quot;\n\n # Enumeration of token types.\n DELIMITER = 1\n STRING = 2\n END = 3\n ERROR = 4\n\n def __init__(self, type, value, source, start, end):\n self.type = type\n self.value = value\n self.source = source\n self.start = start\n self.end = end\n\n def __repr__(self):\n type_name = 'UNKNOWN'\n for attr in dir(self):\n if getattr(self, attr) == self.type:\n type_name = attr\n break\n return ('Token(Token.{0}, {1}, {2}, {3}, {4})'\n .format(type_name, repr(self.value), repr(self.source),\n self.start, self.end))\n\n def matches(self, type, value):\n &quot;&quot;&quot;\n Return True iff this token matches the given `type` and `value`.\n &quot;&quot;&quot;\n return self.type == type and self.value == value\n\n def error(self, msg):\n &quot;&quot;&quot;\n Return a `SyntaxError` object describing a problem with this\n token. 
The argument `msg` is the error message; the token's\n line number and position are also reported.\n &quot;&quot;&quot;\n line_start = 1 + self.source.rfind('\\n', 0, self.start)\n line_end = self.source.find('\\n', self.end)\n if line_end == -1: line_end = len(self.source)\n e = SyntaxError(msg)\n e.lineno = 1 + self.source.count('\\n', 0, self.start)\n e.text = self.source[line_start: line_end]\n e.offset = self.start - line_start + 1\n return e\n</code></pre>\n<p>Second, the lexical analyzer, using Python's <a href=\"http://docs.python.org/library/stdtypes.html#iterator-types\" rel=\"noreferrer\">iterator protocol</a>.</p>\n<pre><code>class Tokenizer(object):\n &quot;&quot;&quot;\n Tokenizer for a subset of HamlPy. Instances of this class support\n the iterator protocol, and yield tokens from the string `s` as\n Token object. When the string `s` runs out, yield an END token.\n\n &gt;&gt;&gt; from pprint import pprint\n &gt;&gt;&gt; pprint(list(Tokenizer('{&quot;a&quot;:&quot;b&quot;}')))\n [Token(Token.DELIMITER, '{', '{&quot;a&quot;:&quot;b&quot;}', 0, 1),\n Token(Token.STRING, 'a', '{&quot;a&quot;:&quot;b&quot;}', 2, 3),\n Token(Token.DELIMITER, ':', '{&quot;a&quot;:&quot;b&quot;}', 4, 5),\n Token(Token.STRING, 'b', '{&quot;a&quot;:&quot;b&quot;}', 6, 7),\n Token(Token.DELIMITER, '}', '{&quot;a&quot;:&quot;b&quot;}', 8, 9),\n Token(Token.END, '', '{&quot;a&quot;:&quot;b&quot;}', 9, 9)]\n &quot;&quot;&quot;\n def __init__(self, s):\n self.iter = self.tokenize(s)\n\n def __iter__(self):\n return self\n\n def next(self):\n return next(self.iter)\n\n # Regular expression matching a source token.\n token_re = re.compile(r'''\n \\s* # Ignore initial whitespace\n (?:([][{},:]) # 1. Delimiter\n |'([^\\\\']*(?:\\\\.[^\\\\']*)*)' # 2. Single-quoted string\n |&quot;([^\\\\&quot;]*(?:\\\\.[^\\\\&quot;]*)*)&quot; # 3. Double-quoted string\n |(\\S) # 4. 
Something else\n )''', re.X)\n\n # Regular expression matching a backslash and following character.\n backslash_re = re.compile(r'\\\\(.)')\n\n def tokenize(self, s):\n for m in self.token_re.finditer(s):\n if m.group(1):\n yield Token(Token.DELIMITER, m.group(1),\n s, m.start(1), m.end(1))\n elif m.group(2):\n yield Token(Token.STRING,\n self.backslash_re.sub(r'\\1', m.group(2)),\n s, m.start(2), m.end(2))\n elif m.group(3):\n yield Token(Token.STRING,\n self.backslash_re.sub(r'\\1', m.group(3)),\n s, m.start(3), m.end(3))\n else:\n t = Token(Token.ERROR, m.group(4), s, m.start(4), m.end(4))\n raise t.error('Unexpected character')\n yield Token(Token.END, '', s, len(s), len(s))\n</code></pre>\n<p>And third, the <a href=\"http://en.wikipedia.org/wiki/Compilers:_Principles,_Techniques,_and_Tools\" rel=\"noreferrer\">recursive descent</a> parser, with the formal grammar given in the docstring for the class. The parser needs one token of <a href=\"http://en.wikipedia.org/wiki/Parsing#Lookahead\" rel=\"noreferrer\">lookahead</a>.</p>\n<pre><code>class Parser(object):\n &quot;&quot;&quot;\n Parser for the subset of HamlPy with the following grammar:\n\n attribute-dict ::= '{' [attribute-list] '}'\n attribute-list ::= attribute (',' attribute)*\n attribute ::= string ':' value\n value ::= string | '[' [value-list] ']'\n value-list ::= value (',' value)*\n &quot;&quot;&quot;\n\n def __init__(self, s):\n self.tokenizer = Tokenizer(s)\n self.lookahead = None # The lookahead token.\n self.next_token() # Lookahead one token.\n\n def next_token(self):\n &quot;&quot;&quot;\n Return the next token from the lexer and update the lookahead\n token.\n &quot;&quot;&quot;\n t = self.lookahead\n self.lookahead = next(self.tokenizer)\n return t\n\n # Regular expression matching an allowable key.\n key_re = re.compile(r'[a-zA-Z_0-9-]+$')\n\n def parse_value(self):\n t = self.next_token()\n if t.type == Token.STRING:\n return t.value\n elif t.matches(Token.DELIMITER, '['):\n return 
list(self.parse_value_list())\n else:\n raise t.error('Expected a value')\n\n def parse_value_list(self):\n if self.lookahead.matches(Token.DELIMITER, ']'):\n self.next_token()\n return\n while True:\n yield self.parse_value()\n t = self.next_token()\n if t.matches(Token.DELIMITER, ']'):\n return\n elif not t.matches(Token.DELIMITER, ','):\n raise t.error('Expected &quot;,&quot; or &quot;]&quot;')\n\n def parse_attribute(self):\n t = self.next_token()\n if t.type != Token.STRING:\n raise t.error('Expected a string')\n key = t.value\n if not self.key_re.match(key):\n raise t.error('Invalid key')\n t = self.next_token()\n if not t.matches(Token.DELIMITER, ':'):\n raise t.error('Expected &quot;:&quot;')\n value = self.parse_value()\n return key, value\n\n def parse_attribute_list(self):\n if self.lookahead.matches(Token.DELIMITER, '}'):\n self.next_token()\n return\n while True:\n yield self.parse_attribute()\n t = self.next_token()\n if t.matches(Token.DELIMITER, '}'):\n return\n elif not t.matches(Token.DELIMITER, ','):\n raise t.error('Expected &quot;,&quot; or &quot;}&quot;')\n\n def parse_attribute_dict(self):\n t = self.next_token()\n if not t.matches(Token.DELIMITER, '{'):\n raise t.error('Expected &quot;{&quot;')\n return dict(self.parse_attribute_list())\n</code></pre>\n<p>You'll probably want to know how to handle Haml's significant whitespace. The way to do this is to modify the tokenizer to emit <code>NEWLINE</code>, <code>INDENT</code> and <code>DEDENT</code> tokens, and then modify <code>next_token</code> to take an <code>include_newlines</code> optional parameter, and discard or return these extra tokens as appropriate.</p>\n", "comments": [], "meta_data": { "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2012-09-09T17:16:51.363", "Id": "15450", "ParentId": "15395", "Score": "7" } }
<p>I'm trying to learn how to write functional code with Python and have found some tutorials online. Please note that I know Python does not promote functional programming. I just want to try it out. <a href="http://anandology.com/python-practice-book/functional-programming.html">One tutorial</a> in particular gives this as an exercise:</p> <blockquote> <p>Write a function flatten_dict to flatten a nested dictionary by joining the keys with . character.</p> </blockquote> <p>So I decided to give it a try. Here is what I have and it works fine:</p> <pre><code>def flatten_dict(d, result={}, prv_keys=[]): for k, v in d.iteritems(): if isinstance(v, dict): flatten_dict(v, result, prv_keys + [k]) else: result['.'.join(prv_keys + [k])] = v return result </code></pre> <p>I'd like to know whether this is the best way to solve the problem in Python. In particular, I really don't like to pass a list of previous keys to the recursive call. </p>
21033
null
{ "AcceptedAnswerId": "21035", "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T16:08:41.920", "Id": "21033", "Score": "14", "Tags": [ "python", "functional-programming" ], "Title": "Flatten dictionary in Python (functional style)" }
{ "body": "<p>Your solution really isn't at all functional. You should return a flattened dict and then merge that into your current dictionary. You should also not modify the dictionary, instead create it with all the values it should have. Here is my approach:</p>\n\n<pre><code>def flatten_dict(d):\n def items():\n for key, value in d.items():\n if isinstance(value, dict):\n for subkey, subvalue in flatten_dict(value).items():\n yield key + \".\" + subkey, subvalue\n else:\n yield key, value\n\n return dict(items())\n</code></pre>\n\n<p>Alternative which avoids yield</p>\n\n<pre><code>def flatten_dict(d):\n def expand(key, value):\n if isinstance(value, dict):\n return [ (key + '.' + k, v) for k, v in flatten_dict(value).items() ]\n else:\n return [ (key, value) ]\n\n items = [ item for k, v in d.items() for item in expand(k, v) ]\n\n return dict(items)\n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T16:39:53.393", "Id": "33764", "Score": "0", "body": "Thanks. I was trying to iterate through the result during each recursive step but it returns an error stating the size of the dictionary changed. I'm new to python and I barely understand yield. Is it because yield creates the value on the fly without storing them that the code is not blocked anymore?" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T16:45:27.547", "Id": "33765", "Score": "0", "body": "One thing though. I did return a flattened dict and merged it into a current dictionary, which is the flattened result of the original dictionary. I'd like to know why it was not functional at all..." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T17:05:55.500", "Id": "33767", "Score": "0", "body": "@LimH., if you got that error you were modifying the dictionary you were iterating over. If you are trying to be functional, you shouldn't be modifying dictionaries at all." 
}, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T17:07:34.050", "Id": "33769", "Score": "0", "body": "No, you did not return a flattened dict and merge it. You ignore the return value of your recursive function call. Your function modifies what is passed to it which is exactly that which you aren't supposed to do in functional style." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T17:08:37.700", "Id": "33771", "Score": "0", "body": "yield does create values on the fly, but that has nothing to do with why this works. It works because it creates new objects and never attempts to modify the existing ones." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T17:09:11.227", "Id": "33772", "Score": "0", "body": "I see my mistakes now. Thank you very much :) I thought I was passing copies of result to the function. In fact, I think the author of the tutorial I'm following makes the same mistake, at least with the flatten_list function: http://anandology.com/python-practice-book/functional-programming.html#example-flatten-a-list" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T20:49:06.400", "Id": "33785", "Score": "0", "body": "Unfortunately, both versions are bugged for 3+ levels of recursion." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T21:36:07.440", "Id": "33793", "Score": "2", "body": "@JohnOptionalSmith, I see the problem in the second version, but the first seems to work for me... test case?" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-30T10:17:53.050", "Id": "33807", "Score": "0", "body": "@WinstonEwert Try it with `a = { 'a' : 1, 'b' : { 'leprous' : 'bilateral' }, 'c' : { 'sigh' : 'somniphobia'} }` (actually triggers the bug with 2-level recursion)" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-30T10:33:31.610", "Id": "33810", "Score": "0", "body": "My bad ! CopyPaste error ! Your first version is fine." 
}, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2016-10-24T21:01:49.873", "Id": "272490", "Score": "0", "body": "Thanks for this nice code snippet @WinstonEwert! Could you explain why you defined a function in a function here? Is there a benefit to doing this instead of defining the same functions outside of one another?" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2016-10-24T21:20:31.280", "Id": "272493", "Score": "2", "body": "@zelusp, the benefit is organizational, the function inside the function is part of of the implementation of the outer function, and putting the function inside makes it clear and prevents other functions from using it." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2016-10-24T21:33:43.197", "Id": "272496", "Score": "0", "body": "[This link](https://realpython.com/blog/python/inner-functions-what-are-they-good-for/) helped me understand better" } ], "meta_data": { "CommentCount": "13", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T16:20:49.623", "Id": "21035", "ParentId": "21033", "Score": "19" } }
<p>I'm trying to learn how to write functional code with Python and have found some tutorials online. Please note that I know Python is not a promoter for functional programming. I just want to try it out. <a href="http://anandology.com/python-practice-book/functional-programming.html">One tutorial</a> in particular gives this as an exercise:</p> <blockquote> <p>Write a function flatten_dict to flatten a nested dictionary by joining the keys with . character.</p> </blockquote> <p>So I decided to give it a try. Here is what I have and it works fine:</p> <pre><code>def flatten_dict(d, result={}, prv_keys=[]):
    for k, v in d.iteritems():
        if isinstance(v, dict):
            flatten_dict(v, result, prv_keys + [k])
        else:
            result['.'.join(prv_keys + [k])] = v
    return result
</code></pre> <p>I'd like to know whether this is the best way to solve the problem in python. In particular, I really don't like to pass a list of previous keys to the recursive call.</p>
21033
GOOD_ANSWER
{ "AcceptedAnswerId": "21035", "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T16:08:41.920", "Id": "21033", "Score": "14", "Tags": [ "python", "functional-programming" ], "Title": "Flatten dictionary in Python (functional style)" }
{ "body": "<p>Beside avoiding mutations, functional mindset demands to split into elementary functions, along two axes:</p>\n\n<ol>\n<li>Decouple responsibilities.</li>\n<li>By case analysis (eg pattern matching). Here scalar vs dict. </li>\n</ol>\n\n<p>Regarding 1, nested dict traversal has nothing to do with the requirement to create dot separated keys. We've better return a list a keys, and concatenate them afterward. Thus, if you change your mind (using another separator, making abbreviations...), you don't have to dive in the iterator code -and worse, modify it.</p>\n\n<pre><code>def iteritems_nested(d):\n def fetch (suffixes, v0) :\n if isinstance(v0, dict):\n for k, v in v0.items() :\n for i in fetch(suffixes + [k], v): # \"yield from\" in python3.3\n yield i\n else:\n yield (suffixes, v0)\n\n return fetch([], d)\n\ndef flatten_dict(d) :\n return dict( ('.'.join(ks), v) for ks, v in iteritems_nested(d))\n #return { '.'.join(ks) : v for ks,v in iteritems_nested(d) }\n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-30T09:27:12.087", "Id": "33806", "Score": "0", "body": "Thank you for your response. Let me see if I get this right. Basically there are two strategies here. One is to recursively construct the dictionary result and one is to construct the suffixes. I used the second approach but my mistake was that I passed a reference of the result down the recursive chains but the key part is correct. Is that right? Winston Etwert used the first approach right? What's wrong with his code?" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-30T10:25:24.513", "Id": "33808", "Score": "0", "body": "The point is to (a) collect keys until deepest level and (b) concatenate them. You indeed did separate the two (although packed in the same function). Winston concatenate on the fly without modifying (mutating) anything, but an issue lies in recursion." 
}, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-30T10:34:48.367", "Id": "33811", "Score": "0", "body": "My bad : Winston's implementation with yield is ok !" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-30T11:38:34.250", "Id": "33818", "Score": "0", "body": "Is packing multiple objectives in the same function bad? I.e., is it bad style or is it conceptually incorrect?" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-30T21:01:43.017", "Id": "33859", "Score": "0", "body": "@LimH. [Both !](http://www.cs.utexas.edu/~shmat/courses/cs345/whyfp.pdf). It can be necessary for performance reason, since Python won't neither inline nor defer computation (lazy programming). To alleviate this point, you can follow the opposite approach : provide to the iterator the way to collect keys -via a folding function- (strategy pattern, kind of)." } ], "meta_data": { "CommentCount": "5", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-01-29T20:47:57.963", "Id": "21045", "ParentId": "21033", "Score": "9" } }
<p>I have a grid as </p> <pre><code>&gt;&gt;&gt; data = np.zeros((3, 5))
&gt;&gt;&gt; data
array([[ 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0.]]
</code></pre> <p>I wrote a function in order to get the ID of each tile.</p> <pre><code>array([[(0,0), (0,1), (0,2), (0,3), (0,4)],
       [(1,0), (1,1), (1,2), (1,3), (1,4)],
       [(2,0), (2,1), (2,2), (2,3), (2,4)]]

def get_IDgrid(nx,ny):
    lstx = list()
    lsty = list()
    lstgrid = list()
    for p in xrange(ny):
        lstx.append([p]*nx)
    for p in xrange(ny):
        lsty.append(range(0,nx))
    for p in xrange(ny):
        lstgrid.extend(zip(lstx[p],lsty[p]))
    return lstgrid
</code></pre> <p>where nx is the number of columns and ny the number of rows.</p> <pre><code>test = get_IDgrid(5,3)
print test
[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (2, 0), (2, 1), (2, 2), (2, 3), (2, 4)]
</code></pre> <p>This function will be embedded inside a class:</p> <pre><code>class Grid(object):
    __slots__ = ("xMin","yMax","nx","ny","xDist","yDist","ncell")

    def __init__(self,xMin,yMax,nx,ny,xDist,yDist):
        self.xMin = xMin
        self.yMax = yMax
        self.nx = nx
        self.ny = ny
        self.xDist = xDist
        self.yDist = yDist
        self.ncell = nx*ny
</code></pre>
22968
GOOD_ANSWER
{ "AcceptedAnswerId": "22973", "CommentCount": "1", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-02-21T16:33:57.390", "Id": "22968", "Score": "1", "Tags": [ "python", "optimization", "performance" ], "Title": "python Improve a function in a elegant way" }
{ "body": "<p>Numpy has built in functions for most simple tasks like this one. In your case, <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndindex.html\" rel=\"nofollow\"><code>numpy.ndindex</code></a> should do the trick:</p>\n\n<pre><code>&gt;&gt;&gt; import numpy as np\n&gt;&gt;&gt; [j for j in np.ndindex(3, 5)]\n[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4),\n (2, 0), (2, 1), (2, 2), (2, 3), (2, 4)]\n</code></pre>\n\n<p>You can get the same result in a similarly compact way using <a href=\"http://docs.python.org/2/library/itertools.html#itertools.product\" rel=\"nofollow\"><code>itertools.product</code></a> :</p>\n\n<pre><code>&gt;&gt;&gt; import itertools\n&gt;&gt;&gt; [j for j in itertools.product(xrange(3), xrange(5))]\n[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4),\n (2, 0), (2, 1), (2, 2), (2, 3), (2, 4)]\n</code></pre>\n\n<p><strong>EDIT</strong> Note that the (now corrected) order of the parameters is reversed with respect to the OP's <code>get_IDgrid</code>.</p>\n\n<p>Both expressions above require a list comprehension, because what gets returned is a generator. You may want to consider whether you really need the whole list of index pairs, or if you could consume them one by one.</p>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-02-21T17:28:55.673", "Id": "35374", "Score": "0", "body": "Thanks @jaime. Do you think insert the def of [j for j in np.ndindex(5, 3)] in the class can be correct?" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-02-21T17:33:07.450", "Id": "35376", "Score": "0", "body": "@Gianni It is kind of hard to tell without knowing what exactly you are trying to accomplish. I can't think of any situation in which I would want to store a list of all possible indices to a numpy array, but if you really do need it, then I guess it is OK." 
}, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-02-21T17:34:34.857", "Id": "35379", "Score": "0", "body": "@jamie with [j for j in np.ndindex(5, 3)] return a different result respect my function and the Grid (see the example). Just invert the example :)" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-02-21T17:47:24.460", "Id": "35381", "Score": "1", "body": "@Gianni I did notice that, it is now corrected. What `np.ndindex` does is what is known as [row major order](http://en.wikipedia.org/wiki/Row-major_order) arrays, which is the standard in C language, and the default in numpy. What your function is doing is column major order, which is the Fortran and Matlab way. If you are using numpy, it will probably make your life easier if you stick to row major." } ], "meta_data": { "CommentCount": "4", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-02-21T17:27:13.847", "Id": "22973", "ParentId": "22968", "Score": "4" } }
<p>I have been working on a project where I needed to analyze multiple, large datasets contained inside many CSV files at the same time. I am not a programmer but an engineer, so I did a lot of searching and reading. Python's stock CSV module provides the basic functionality, but I had a lot of trouble getting the methods to run quickly on 50k-500k rows since many strategies were simply appending. I had lots of problems getting what I wanted and I saw the same questions asked over and over again. I decided to spend some time and write a class that performed these functions and would be portable. If nothing else, myself and other people I work with could use it.</p> <p>I would like some input on the class and any suggestions you may have. I am not a programmer and don't have any formal background so this has been a good OOP intro for me. The end result is in two lines you can read all CSV files in a folder into memory as either pure Python lists or, as lists of NumPy arrays. I have tested it in many scenarios and hopefully found most of the bugs. I'd like to think this is good enough that other people can just copy and paste into their code and move on to the more important stuff. I am open to all critiques and suggestions. Is this something you could use? If not, why?</p> <p>You can try it with generic CSV data. The standard Python lists are flexible in size and data type. NumPy will only work with numeric (float specifically) data that is rectangular in format:</p> <pre><code>x, y, z,
1, 2, 3,
4, 5, 6,
...

import numpy as np
import csv
import os
import sys

class EasyCSV(object):
    """Easily open from and save CSV files using lists or numpy arrays.

    Initiating and using the class is as easy as CSV = EasyCSV('location').
    The class takes the following arguments:

    EasyCSV(location, width=None, np_array='false', skip_rows=0)

    location is the only mandatory field and is a string of the folder
    location containing .CSV file(s).
    width is optional and specifies a constant width. The default value
    None will return a list of lists with variable width. When used with
    numpy the array will have the dimensions of the first valid numeric
    row of data.
    np_array will create a fixed-width numpy array of only float values.
    skip_rows will skip the specified rows at the top of the file.
    """

    def __init__(self, location, width=None, np_array='false', skip_rows=0):
        # Initialize default variables
        self.np_array = np_array
        self.skip_rows = skip_rows
        self.loc = str(location)
        os.chdir(self.loc)
        self.dataFiles = []
        self.width = width
        self.i = 0
        #Find all CSV files in chosen directory.
        for files in os.listdir(loc):
            if files.endswith('CSV') or files.endswith('csv'):
                self.dataFiles.append(files)
        #Preallocate array to hold csv data later
        self.allData = [0] * len(self.dataFiles)

    def read(self,):
        '''Reads all files contained in the folder into memory.
        '''
        self.Dict = {}  #Stores names of files for later lookup
        #Main processing loop
        for files in self.dataFiles:
            self.trim = 0
            self.j = 0
            with open(files,'rb') as self.rawFile:
                print files
                #Read in CSV object
                self.newData = csv.reader(self.rawFile)
                self.dataList = []
                #Extend iterates through CSV object and passes to datalist
                self.dataList.extend(self.newData)
            #Trims off pre specified lines at the top
            if self.skip_rows != 0:
                self.dataList = self.dataList[self.skip_rows:]
            #Numpy route, requires all numeric input
            if self.np_array == 'true':
                #Finds width if not specified
                if self.width is None:
                    self.width = len(self.dataList[self.skip_rows])
                self.CSVdata = np.zeros((len(self.dataList),self.width))
                #Iterate through data and adds it to numpy array
                self.k = 0
                for data in self.dataList:
                    try:
                        self.CSVdata[self.j,:] = data
                        self.j+=1
                    except ValueError:  #non numeric data
                        if self.width &lt; len(data):
                            sys.exit('Numpy array too narrow. Choose another width')
                        self.trim+=1
                        pass
                    self.k+=1
                #trims off excess
                if not self.trim == 0:
                    self.CSVdata = self.CSVdata[:-self.trim]
            #Python nested lists route; tolerates multiple data types
            else:
                #Declare required empty str arrays
                self.CSVdata = [0]*len(self.dataList)
                for rows in self.dataList:
                    self.k = 0
                    self.rows = rows
                    #Handle no width input, flexible width
                    if self.width is None:
                        self.numrow = [0]*len(self.rows)
                    else:
                        self.numrow = [0]*self.width
                    #Try to convert to float, fall back on string.
                    for data in self.rows:
                        try:
                            self.numrow[self.k] = float(data)
                        except ValueError:
                            try:
                                self.numrow[self.k] = data
                            except IndexError:
                                pass
                        except IndexError:
                            pass
                        self.k+=1
                    self.CSVdata[self.j] = self.numrow
                    self.j+=1
            #append file to allData which contains all files
            self.allData[self.i] = self.CSVdata
            #trim CSV off filename and store in Dict for indexing of allData
            self.dataFiles[self.i] = self.dataFiles[self.i][:-4]
            self.Dict[self.dataFiles[self.i]] = self.i
            self.i+=1

    def write(self, array, name, destination=None):
        '''Writes array in memory to file.

        EasyCSV.write(array, name, destination=None)

        array is a pointer to the array you want written to CSV
        name will be the name of said file
        destination is optional and will change the directory to the
        location specified. Leaving it at the default value None will
        overwrite any CSVs that may have been read in by the class earlier.
        '''
        self.array = array
        self.name = name
        self.dest = destination
        #Optional change directory
        if self.dest is not None:
            os.chdir(self.dest)
        #Dict does not hold CSV, check to see if present and trim
        if not self.name[-4:] == '.CSV' or self.name[-4:] == '.csv':
            self.name = name + '.CSV'
        #Create files and write data, 'wb' binary req'd for Win compatibility
        with open(self.name,'wb') as self.newCSV:
            self.CSVwrite = csv.writer(self.newCSV,dialect='excel')
            for data in self.array:
                self.CSVwrite.writerow(data)
        os.chdir(self.loc)  #Change back to original __init__.loc

    def lookup(self, key=None):
        '''Prints a preview of data to the console window with just a key input
        '''
        self.key = key
        #Dict does not hold CSV, check to see if present and trim
        if self.key[-4:] == '.CSV' or self.key[-4:] == '.csv':
            self.key = key[:-4]
        #Print None case
        elif self.key is None:
            print self.allData[0]
            print self.allData[0]
            print '... ' * len(self.allData[0][-2])
            print self.allData[0][-2]
            print self.allData[0]
        #Print everything else
        else:
            self.index = self.Dict[self.key]
            print self.allData[self.index][0]
            print self.allData[self.index][1]
            print '... ' * len(self.allData[self.index][-2])
            print self.allData[self.index][-2]
            print self.allData[self.index][-1]

    def output(self, key=None):
        '''Returns the array for assignment to a var with just a key input
        '''
        self.key = key
        #Dict does not hold CSV, check to see if present and trim
        if self.key is None:
            return self.allData[0]
        elif self.key[-4:] == '.CSV' or self.key[-4:] == '.csv':
            self.key = key[:-4]
        #Return file requested
        self.index = self.Dict[self.key]
        return self.allData[self.Dict[self.key]]

################################################

loc = 'C:\Users\Me\Desktop'
CSV = EasyCSV(loc, np_array='false', width=None, skip_rows=0)
CSV.read()

target = 'somecsv'  #with or without .csv/.CSV
CSV.lookup(target)
A = CSV.output(target)

loc2 = 'C:\Users\Me\Desktop\New folder'
for keys in CSV.Dict:
    print keys
    CSV.write(CSV.output(keys),keys,destination=loc2)
</code></pre>
24836
GOOD_ANSWER
{ "AcceptedAnswerId": null, "CommentCount": "4", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-04-07T20:15:44.593", "Id": "24836", "Score": "5", "Tags": [ "python", "parsing", "csv", "numpy", "portability" ], "Title": "Portable Python CSV class" }
{ "body": "<p>Some observations:</p>\n\n<ul>\n<li>You expect <code>read</code> to be called exactly once (otherwise it reads the same files again, right?). You might as well call it from <code>__init__</code> directly. Alternatively, <code>read</code> could take <code>location</code> as parameter, so one could read multiple directories into the object.</li>\n<li>You use strings <code>'true', 'false'</code> where you should use actual <code>bool</code> values <code>True, False</code></li>\n<li>You set instance variables such as <code>self.key = key</code> that you use only locally inside the function, where you could simply use the local variable <code>key</code>.</li>\n<li>The <code>read</code> method is very long. Divide the work into smaller functions and call them from <code>read</code>.</li>\n<li>You have docstrings and a fair amount of comments, good. But then you have really cryptic statements such as <code>self.i = 0</code>.</li>\n<li>Some variable names are misleading, such as <code>files</code> which is actually a single filename.</li>\n<li>Don't change the working directory (<code>os.chdir</code>). Use <code>os.path.join(loc, filename)</code> to construct paths. (If you think it's OK to change it, think what happens if you combine this module with some other module that <em>also</em> thinks it's OK)</li>\n</ul>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-04-08T15:05:13.380", "Id": "38399", "Score": "0", "body": "This was exactly the sort of stuff I was looking for. Thanks for taking the time to look through it. You hit on a lot of the issues that came up during. Allowing for separate read paths would be really helpful. Also I need to research more on the local vars for a function. I was having problems getting them to work and found it easier to declare a self.xyz. 
Tan" } ], "meta_data": { "CommentCount": "1", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-04-08T11:31:51.730", "Id": "24852", "ParentId": "24836", "Score": "6" } }
<p>I have been working on a project where I needed to analyze multiple, large datasets contained inside many CSV files at the same time. I am not a programmer but an engineer, so I did a lot of searching and reading. Python's stock CSV module provides the basic functionality, but I had a lot of trouble getting the methods to run quickly on 50k-500k rows since many strategies were simply appending. I had lots of problems getting what I wanted and I saw the same questions asked over and over again. I decided to spend some time and write a class that performed these functions and would be portable. If nothing else, myself and other people I work with could use it.</p> <p>I would like some input on the class and any suggestions you may have. I am not a programmer and don't have any formal background so this has been a good OOP intro for me. The end result is in two lines you can read all CSV files in a folder into memory as either pure Python lists or, as lists of NumPy arrays. I have tested it in many scenarios and hopefully found most of the bugs. I'd like to think this is good enough that other people can just copy and paste into their code and move on to the more important stuff. I am open to all critiques and suggestions. Is this something you could use? If not, why?</p> <p>You can try it with generic CSV data. The standard Python lists are flexible in size and data type. NumPy will only work with numeric (float specifically) data that is rectangular in format:</p> <pre><code>x, y, z,
1, 2, 3,
4, 5, 6,
...

import numpy as np
import csv
import os
import sys

class EasyCSV(object):
    """Easily open from and save CSV files using lists or numpy arrays.

    Initiating and using the class is as easy as CSV = EasyCSV('location').
    The class takes the following arguments:

    EasyCSV(location, width=None, np_array='false', skip_rows=0)

    location is the only mandatory field and is a string of the folder
    location containing .CSV file(s).
    width is optional and specifies a constant width. The default value
    None will return a list of lists with variable width. When used with
    numpy the array will have the dimensions of the first valid numeric
    row of data.
    np_array will create a fixed-width numpy array of only float values.
    skip_rows will skip the specified rows at the top of the file.
    """

    def __init__(self, location, width=None, np_array='false', skip_rows=0):
        # Initialize default variables
        self.np_array = np_array
        self.skip_rows = skip_rows
        self.loc = str(location)
        os.chdir(self.loc)
        self.dataFiles = []
        self.width = width
        self.i = 0
        #Find all CSV files in chosen directory.
        for files in os.listdir(loc):
            if files.endswith('CSV') or files.endswith('csv'):
                self.dataFiles.append(files)
        #Preallocate array to hold csv data later
        self.allData = [0] * len(self.dataFiles)

    def read(self,):
        '''Reads all files contained in the folder into memory.
        '''
        self.Dict = {}  #Stores names of files for later lookup
        #Main processing loop
        for files in self.dataFiles:
            self.trim = 0
            self.j = 0
            with open(files,'rb') as self.rawFile:
                print files
                #Read in CSV object
                self.newData = csv.reader(self.rawFile)
                self.dataList = []
                #Extend iterates through CSV object and passes to datalist
                self.dataList.extend(self.newData)
            #Trims off pre specified lines at the top
            if self.skip_rows != 0:
                self.dataList = self.dataList[self.skip_rows:]
            #Numpy route, requires all numeric input
            if self.np_array == 'true':
                #Finds width if not specified
                if self.width is None:
                    self.width = len(self.dataList[self.skip_rows])
                self.CSVdata = np.zeros((len(self.dataList),self.width))
                #Iterate through data and adds it to numpy array
                self.k = 0
                for data in self.dataList:
                    try:
                        self.CSVdata[self.j,:] = data
                        self.j+=1
                    except ValueError:  #non numeric data
                        if self.width &lt; len(data):
                            sys.exit('Numpy array too narrow. Choose another width')
                        self.trim+=1
                        pass
                    self.k+=1
                #trims off excess
                if not self.trim == 0:
                    self.CSVdata = self.CSVdata[:-self.trim]
            #Python nested lists route; tolerates multiple data types
            else:
                #Declare required empty str arrays
                self.CSVdata = [0]*len(self.dataList)
                for rows in self.dataList:
                    self.k = 0
                    self.rows = rows
                    #Handle no width input, flexible width
                    if self.width is None:
                        self.numrow = [0]*len(self.rows)
                    else:
                        self.numrow = [0]*self.width
                    #Try to convert to float, fall back on string.
                    for data in self.rows:
                        try:
                            self.numrow[self.k] = float(data)
                        except ValueError:
                            try:
                                self.numrow[self.k] = data
                            except IndexError:
                                pass
                        except IndexError:
                            pass
                        self.k+=1
                    self.CSVdata[self.j] = self.numrow
                    self.j+=1
            #append file to allData which contains all files
            self.allData[self.i] = self.CSVdata
            #trim CSV off filename and store in Dict for indexing of allData
            self.dataFiles[self.i] = self.dataFiles[self.i][:-4]
            self.Dict[self.dataFiles[self.i]] = self.i
            self.i+=1

    def write(self, array, name, destination=None):
        '''Writes array in memory to file.

        EasyCSV.write(array, name, destination=None)

        array is a pointer to the array you want written to CSV
        name will be the name of said file
        destination is optional and will change the directory to the
        location specified. Leaving it at the default value None will
        overwrite any CSVs that may have been read in by the class earlier.
        '''
        self.array = array
        self.name = name
        self.dest = destination
        #Optional change directory
        if self.dest is not None:
            os.chdir(self.dest)
        #Dict does not hold CSV, check to see if present and trim
        if not self.name[-4:] == '.CSV' or self.name[-4:] == '.csv':
            self.name = name + '.CSV'
        #Create files and write data, 'wb' binary req'd for Win compatibility
        with open(self.name,'wb') as self.newCSV:
            self.CSVwrite = csv.writer(self.newCSV,dialect='excel')
            for data in self.array:
                self.CSVwrite.writerow(data)
        os.chdir(self.loc)  #Change back to original __init__.loc

    def lookup(self, key=None):
        '''Prints a preview of data to the console window with just a key input
        '''
        self.key = key
        #Dict does not hold CSV, check to see if present and trim
        if self.key[-4:] == '.CSV' or self.key[-4:] == '.csv':
            self.key = key[:-4]
        #Print None case
        elif self.key is None:
            print self.allData[0]
            print self.allData[0]
            print '... ' * len(self.allData[0][-2])
            print self.allData[0][-2]
            print self.allData[0]
        #Print everything else
        else:
            self.index = self.Dict[self.key]
            print self.allData[self.index][0]
            print self.allData[self.index][1]
            print '... ' * len(self.allData[self.index][-2])
            print self.allData[self.index][-2]
            print self.allData[self.index][-1]

    def output(self, key=None):
        '''Returns the array for assignment to a var with just a key input
        '''
        self.key = key
        #Dict does not hold CSV, check to see if present and trim
        if self.key is None:
            return self.allData[0]
        elif self.key[-4:] == '.CSV' or self.key[-4:] == '.csv':
            self.key = key[:-4]
        #Return file requested
        self.index = self.Dict[self.key]
        return self.allData[self.Dict[self.key]]

################################################

loc = 'C:\Users\Me\Desktop'
CSV = EasyCSV(loc, np_array='false', width=None, skip_rows=0)
CSV.read()

target = 'somecsv'  #with or without .csv/.CSV
CSV.lookup(target)
A = CSV.output(target)

loc2 = 'C:\Users\Me\Desktop\New folder'
for keys in CSV.Dict:
    print keys
    CSV.write(CSV.output(keys),keys,destination=loc2)
</code></pre>
24836
GOOD_ANSWER
{ "AcceptedAnswerId": null, "CommentCount": "4", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-04-07T20:15:44.593", "Id": "24836", "Score": "5", "Tags": [ "python", "parsing", "csv", "numpy", "portability" ], "Title": "Portable Python CSV class" }
{ "body": "<p><a href=\"https://codereview.stackexchange.com/a/24852/11728\">Janne's points</a> are good. In addition:</p>\n\n<ol>\n<li><p>When I try running this code, it fails:</p>\n\n<pre><code>&gt;&gt;&gt; e = EasyCSV('.')\nTraceback (most recent call last):\n File \"&lt;stdin&gt;\", line 1, in &lt;module&gt;\n File \"cr24836.py\", line 37, in __init__\n for files in os.listdir(loc):\nNameError: global name 'loc' is not defined\n</code></pre>\n\n<p>I presume that <code>loc</code> is a typo for <code>self.loc</code>. This makes me suspicious. Have you actually used or tested this code?</p></li>\n<li><p>The <code>width</code> and <code>skip_rows</code> arguments to the constructor apply to <em>all</em> CSV files in the directory. But isn't it likely that different CSV files will have different widths and need different numbers of rows to be skipped?</p></li>\n<li><p>Your class requires NumPy to be installed (otherwise the line <code>import numpy as np</code> will fail). But since it has a mode of operation that doesn't require NumPy (return lists instead), it would be nice if it worked even if NumPy is not installed. Wait until you're just about to call <code>np.zeros</code> before importing NumPy.</p></li>\n<li><p><code>location</code> is supposed to be the name of a directory, so name it <code>directory</code>.</p></li>\n<li><p>You write <code>self.key[-4:] == '.CSV'</code> but why not use <code>.endswith</code> like you did earlier in the program? Or better still, since you are testing this twice, write a function:</p>\n\n<pre><code>def filename_is_csv(filename):\n \"\"\"Return True if filename has the .csv extension.\"\"\"\n _, ext = os.path.splitext(filename)\n return ext.lower() == '.csv'\n</code></pre>\n\n<p>But having said that, do you really want to insist that this can only read CSV files whose names end with <code>.csv</code>? What if someone has CSV stored in a file named <code>foo.data</code>? 
They'd never be able to read it with your class.</p></li>\n<li><p>There's nothing in the documentation for the class that explains that I am supposed to call the <code>read()</code> method. (If I don't, nothing happens.)</p></li>\n<li><p>There's nothing in the documentation for the class that explains how I am supposed to access the data that has been loaded into memory.</p></li>\n<li><p>If I want to access the data for a filename, I have look up the filename in the <code>Dict</code> attribute to get the index, and then I could look up the index in the <code>allData</code> attribute to get the data. Why this double lookup? Why not have a dictionary that maps filename to data instead of going via an index?</p></li>\n<li><p>There is no need to preallocate arrays in Python. Wait to create the array until you have some data to put in it, and then <code>append</code> each entry to it. Python is not Fortran!</p></li>\n<li><p>In your <code>read()</code> method, you read all the CSV files into memory. This seems wasteful. What if I had hundreds of files but only wanted to read one of them? Why not wait to read a file until the caller needs it?</p></li>\n<li><p>You convert numeric elements to floating-point numbers. This might not be what I want. 
For example, if I have a file containing:</p>\n\n<pre><code>Apollo,Launch\n7,19681011\n8,19681221\n9,19690303\n10,19690518\n11,19690716\n12,19691114\n13,19700411\n14,19710131\n15,19710726\n16,19720416\n17,19721207\n</code></pre>\n\n<p>and then I try to read it, all the data has been wrongly converted to floating-point:</p>\n\n<pre><code>&gt;&gt;&gt; e = EasyCSV('.')\n&gt;&gt;&gt; e.read()\napollo.csv\n&gt;&gt;&gt; from pprint import pprint\n&gt;&gt;&gt; pprint(e.allData[e.Dict['apollo']])\n[['Apollo', 'Launch'],\n [7.0, 19681011.0],\n [8.0, 19681221.0],\n [9.0, 19690303.0],\n [10.0, 19690518.0],\n [11.0, 19690716.0],\n [12.0, 19691114.0],\n [13.0, 19700411.0],\n [14.0, 19710131.0],\n [15.0, 19710726.0],\n [16.0, 19720416.0],\n [17.0, 19721207.0]]\n</code></pre>\n\n<p>This can go wrong in other ways. For example, suppose I have a CSV file like this:</p>\n\n<pre><code>product code,inventory\n1a0,81\n7b4,61\n9c2,32\n8d3,90\n1e9,95\n2f4,71\n</code></pre>\n\n<p>When I read it with your class, look at what happens to the sixth row:</p>\n\n<pre><code>&gt;&gt;&gt; e = EasyCSV('.')\n&gt;&gt;&gt; e.read()\ninventory.csv\n&gt;&gt;&gt; pprint(e.allData[e.Dict['inventory']])\n[['product code', 'inventory'],\n ['1a0', 81.0],\n ['7b4', 61.0],\n ['9c2', 32.0],\n ['8d3', 90.0],\n [1000000000.0, 95.0],\n ['2f4', 71.0]]\n</code></pre></li>\n<li><p>You suggest that \"other people can just copy and paste into their code\" but this is never a good idea. How would you distribute bug fixes and other improvements? 
If you plan for other people to use your code, you should aim to make a package that can be distributed through the <a href=\"https://pypi.python.org/pypi\" rel=\"nofollow noreferrer\">Python Package Index</a>.</p></li>\n</ol>\n\n<p>In summary, your class is misnamed: it does not seem to me as if it would be easy to use in practice.</p>\n", "comments": [], "meta_data": { "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-04-08T15:56:02.063", "Id": "24860", "ParentId": "24836", "Score": "5" } }
<p>As a follow-up to my previous <a href="https://codereview.stackexchange.com/questions/26071/computation-of-prefix-free-codes-with-many-repeated-weight-values-in-reduced-spa">question</a> about prefix free code, I learned about the module unittest and wrote the following set of functions, to be used in order to semi-automatically check the optimality of the output of any new algorithm to compute prefix free code.</p> <p>The code works but I would like it to be as elegant (and compact) as possible in order to potentially include it in a research article, in a more formal and reproducible manner than the traditional "algorithm". I am proud of how it looks, but I do expect you to still criticize it!!!</p> <pre><code>import unittest, doctest, math

def codeIsPrefixFreeCodeMinimal(L,W):
    """Checks if the prefix free code described by an array $L$ of pairs
    $(codeLength_i,nbWeights_i)$ is minimal for weights $W$, by
    1) checking if the code respects Kraft's inequality and
    2) comparing the length of a code encoded with $L$ with the entropy of $W$.
    """
    assert respectsKraftInequality(L)
    return compressedTextLength(L,W) &lt;= NTimesEntropy(W)+len(W)

def respectsKraftInequality(L):
    """Checks if the given array $L$ of pairs $(codeLength_i,nbWeights_i)$
    corresponds to a prefix free code by checking Kraft's inequality,
    i.e. $\sum_i nbWeights_i 2^{-codeLength_i} \leq 1$.
    """
    return KraftSum(L) &lt;= 1

def KraftSum(L):
    """Computes the Kraft sum of the prefix free code described by an array
    $L$ of pairs $(codeLength_i,nbWeights_i)$,
    i.e. $\sum_i nbWeights_i 2^{-codeLength_i}$.
    """
    if len(L)==0:
        return 0
    terms = map( lambda x: x[1] * math.pow(2,-x[0]), L)
    return sum(terms)

class TestKraftSum(unittest.TestCase):
    def test_empty(self):
        """Empty input."""
        self.assertEqual(KraftSum([]),0)
    def test_singleton(self):
        """Singleton with one single symbol."""
        self.assertEqual(KraftSum([(0,1)]),1)
    def test_simpleCode(self):
        """Simple code with code lengths [1,2,2]."""
        self.assertEqual(KraftSum([(1,1),(2,2)]),1)
    def test_fourEqual(self):
        """Four equal weights"""
        self.assertEqual(KraftSum([(2,4)]),1)
    def test_HuffmanExample(self):
        """Example from Huffman's article"""
        self.assertEqual(KraftSum([(5,6),(4,3),(3,3),(2,1)]),1)
    def test_MoffatTurpinExample(self):
        """Example from Moffat and Turpin's article"""
        self.assertEqual(KraftSum([(5,4),(4,4),(3,3),(2,1)]),1)

def NTimesEntropy(W):
    """Returns N times the entropy, rounded to the next integer, as computed
    by $\lceil \sum_{i=1}^N W[i]/\sum(W) \log (sum(W) / W[i]) \rceil$.
    """
    if len(W)==0:
        return 0
    assert min(W)&gt;0
    sumWeights = sum(W)
    terms = map( lambda x: x * math.log(x,2), W )
    return math.ceil(sumWeights * math.log(sumWeights,2) - sum(terms))

class TestNTimesEntropy(unittest.TestCase):
    def test_empty(self):
        """Empty input"""
        self.assertEqual(NTimesEntropy([]),0)
    def test_singleton(self):
        """Singleton"""
        self.assertEqual(NTimesEntropy([1]),0)
    def test_pair(self):
        """Pair"""
        self.assertEqual(NTimesEntropy([1,1]),2)
    def test_fourEqual(self):
        """Four equal weights"""
        self.assertEqual(NTimesEntropy([1,1,1,1]),8)
    def test_HuffmanExample(self):
        """Example from Huffman's article"""
        self.assertEqual(NTimesEntropy([1,3,4,4,4,4,6,6,10,10,10,18,20]),336)
    def test_MoffatTurpinExample(self):
        """Example from Moffat and Turpin's article"""
        self.assertEqual(NTimesEntropy([1,1,1,1,1,2,2,2,2,3,3,6]),84)

def compressedTextLength(L,W):
    """Computes the length of a text whose frequencies are given by an array
    $W$, when it is compressed by a prefix free code described by an array
    $L$ of pairs $(codeLength_i,nbWeights_i)$.
    """
    compressedTextLength = 0
    Ls = sorted(L, reverse=True)
    Ws = sorted(W)
    for (l,n) in Ls:
        compressedTextLength += l*sum(Ws[0:n])
        Ws = Ws[n:]
    return compressedTextLength

class TestcompressedTextLength(unittest.TestCase):
    def test_empty(self):
        """Empty input"""
        self.assertEqual(compressedTextLength([],[]),0)
    def test_pair(self):
        """Pair of symbols, arbitrary text"""
        self.assertEqual(compressedTextLength([(1,2)],[1,1]),2)
    def test_fourEqual(self):
        """Four equal weights"""
        self.assertEqual(compressedTextLength([(2,4)],[1,1,1,1]),8)
    def test_HuffmanExample(self):
        """Example from Huffman's article (compares with value computed by hand)"""
        self.assertEqual(compressedTextLength([(5,6),(4,3),(3,3),(2,1)],[1,3,4,4,4,4,6,6,10,10,10,18,20]),342)
    def test_MoffatTurpinExample(self):
        """Example from Moffat and Turpin's article (compares with entropy value)"""
        self.assertEqual(compressedTextLength([(5,4),(4,4),(3,3),(2,1)],[1,1,1,1,1,2,2,2,2,3,3,6]),84)

def main():
    unittest.main()

if __name__ == '__main__':
    doctest.testmod()
    main()
</code></pre>
27821
GOOD_ANSWER
{ "AcceptedAnswerId": null, "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-06-26T21:45:48.927", "Id": "27821", "Score": "1", "Tags": [ "python" ], "Title": "Code to check for the optimality of a minimal prefix free code" }
{ "body": "<p>Most Python I read these days prefers list comprehensions over <code>map</code> or <code>filter</code>. For example, I'd change</p>\n\n<pre><code>terms = map( lambda x: x[1] * math.pow(2,-x[0]), L)\nreturn sum(terms)\n</code></pre>\n\n<p>to</p>\n\n<pre><code>return sum(x[1] * math.pow(2, -x[0]) for x in L)\n</code></pre>\n\n<hr>\n\n<p>You consistently misspell \"length\" as \"lenght\".</p>\n\n<hr>\n\n<p>Your formatting is odd. Sometimes you have 3 blank lines between functions, sometimes 0. Likewise, sometimes you write <code>foo &lt;= bar</code> and sometimes <code>foo=bar+baz</code>. Sometimes your functions begin with a lowercase letter (<code>compressedTextLength</code>, <code>respectsKraftInequality</code>) and sometimes with an uppercase (<code>KraftSum</code>). Look over <a href=\"http://www.python.org/dev/peps/pep-0008/\" rel=\"nofollow\">PEP8</a> for formatting recommendations.</p>\n\n<hr>\n\n<p>In <code>compressedTextLength</code>, you can rewrite</p>\n\n<pre><code>for (l,n) in Ls:\n compressedTextLength += l*sum(Ws[0:n])\n Ws = Ws[n:]\n</code></pre>\n\n<p>as</p>\n\n<pre><code>for i, (l, n) in enumerate(Ls):\n compressedTextLength += l * sum(Ws[i:i+n])\n</code></pre>\n\n<hr>\n\n<p>You call <code>doctest.testmod()</code> but it doesn't look like you have any doctests.</p>\n\n<hr>\n\n<p>As a generalized note, I would find it difficult to learn anything about this algorithm from reading your code. I also have no idea how to use this module. I would add a docstring to the beginning of the module telling the reader about the functions they should care about, and in each function's docstring I would document what the arguments should be and what the return values are (arrays of ints? floating point numbers? etc).</p>\n\n<p>It looks like this may be a test harness for code that actually computes minimal prefix free codes. 
If that's the case, document it in the module's docstring.</p>\n", "comments": [], "meta_data": { "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-06-27T05:05:39.987", "Id": "27827", "ParentId": "27821", "Score": "1" } }
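A note on the `enumerate` rewrite suggested in the review above: `i` advances by one per loop iteration while the original code consumes `n` weights per iteration, so the slice `Ws[i:i+n]` only lines up when every `n` equals 1. A running offset keeps the slicing correct without rebuilding the list on each pass; this is a sketch using the names and data from the question:

```python
def compressed_text_length(L, W):
    """Length of a text with symbol weights W compressed by a code
    described by pairs (code_length, n_codewords) in L; the longest
    codewords are assigned to the smallest weights."""
    Ws = sorted(W)
    total = offset = 0
    for length, n in sorted(L, reverse=True):
        total += length * sum(Ws[offset:offset + n])
        offset += n  # consume n weights, unlike enumerate's step of 1
    return total
```

With the Huffman example from the question's own tests this returns 342, matching the original implementation.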
<p>I have a file with just 3500 lines like these:</p> <pre><code>filecontent= "13P397;Fotostuff;t;IBM;IBM lalala 123|IBM lalala 1234;28.000 things;;IBMlalala123|IBMlalala1234" </code></pre> <p>Then I want to grab every line from the <code>filecontent</code> that matches a certain string (with python 2.7):</p> <pre><code>this_item= "IBMlalala123" matchingitems = re.findall(".*?;.*?;.*?;.*?;.*?;.*?;.*?"+this_item,filecontent) </code></pre> <p>It needs 17 seconds for each <code>findall</code>. I need to search 4000 times in these 3500 lines. It takes forever. Any idea how to speed it up?</p>
32449
GOOD_ANSWER
{ "AcceptedAnswerId": "32450", "CommentCount": "5", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T07:47:42.643", "Id": "32449", "Score": "22", "Tags": [ "python", "performance", "regex", "csv", "python-2.x" ], "Title": "Regex to parse semicolon-delimited fields is too slow" }
{ "body": "<p><code>.*?;.*?</code> will cause <a href=\"http://www.regular-expressions.info/catastrophic.html\" rel=\"nofollow noreferrer\">catastrophic backtracking</a>.</p>\n<p>To resolve the performance issues, remove <code>.*?;</code> and replace it with <code>[^;]*;</code>, that should be much faster.</p>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T08:35:18.953", "Id": "51840", "Score": "2", "body": "Thanks a lot. I didn't get the fact at first, that I have to replace each old element with your new element. Worked, when I did it like that :D" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T14:11:40.400", "Id": "51860", "Score": "7", "body": "Also, one other thing you should do here, is rather than repeating it, as `[^;]*;[^;]*;[^;]*;[^;]*;[^;]*;[^;]*;[^;]*`, you should shrink it down to something more concise, for instance `[^;]*(?:;[^;]*){6}`." }, { "ContentLicense": "CC BY-SA 4.0", "CreationDate": "2021-10-12T21:38:07.453", "Id": "530450", "Score": "0", "body": "Another option (a fairly big shift) would be to use a regex engine other than the standard library one, which doesn't backtrack." } ], "meta_data": { "CommentCount": "3", "ContentLicense": "CC BY-SA 4.0", "CreationDate": "2013-10-09T07:53:31.477", "Id": "32450", "ParentId": "32449", "Score": "35" } }
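A sketch of the fix on the question's own sample line. Each `[^;]*` can never cross a semicolon, so the engine has no ambiguous positions to backtrack over, and the repetition condenses as the comments suggest; the `{7}` count is my assumption that the item sits in the eighth field, as in the sample:

```python
import re

line = ("13P397;Fotostuff;t;IBM;IBM lalala 123|IBM lalala 1234;"
        "28.000 things;;IBMlalala123|IBMlalala1234")
this_item = "IBMlalala123"

# Skip exactly seven semicolon-delimited fields, then look for the
# item inside the eighth; re.escape guards any metacharacters in it.
pattern = r"(?:[^;]*;){7}[^;]*" + re.escape(this_item)
match = re.search(pattern, line)
```

Unlike the lazy-dot version, this fails fast on lines that do not match instead of exploring many ways to split them across semicolons.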
<p>I have a file with just 3500 lines like these:</p> <pre><code>filecontent= "13P397;Fotostuff;t;IBM;IBM lalala 123|IBM lalala 1234;28.000 things;;IBMlalala123|IBMlalala1234" </code></pre> <p>Then I want to grab every line from the <code>filecontent</code> that matches a certain string (with python 2.7):</p> <pre><code>this_item= "IBMlalala123" matchingitems = re.findall(".*?;.*?;.*?;.*?;.*?;.*?;.*?"+this_item,filecontent) </code></pre> <p>It needs 17 seconds for each <code>findall</code>. I need to search 4000 times in these 3500 lines. It takes forever. Any idea how to speed it up?</p>
32449
BAD_ANSWER
{ "AcceptedAnswerId": "32450", "CommentCount": "5", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T07:47:42.643", "Id": "32449", "Score": "22", "Tags": [ "python", "performance", "regex", "csv", "python-2.x" ], "Title": "Regex to parse semicolon-delimited fields is too slow" }
{ "body": "<blockquote>\n <p>Some people, when confronted with a problem, think \"I know, I'll use regular expressions.\" Now they have two problems. -- Jamie Zawinski</p>\n</blockquote>\n\n<p>A few things to be commented :</p>\n\n<ol>\n<li><p>Regular expressions might not be the right tool for this.</p></li>\n<li><p><code>.*?;.*?;.*?;.*?;.*?;.*?;.*?\"</code> is potentially very slow and might not do what you want it to do (it could match many more <code>;</code> than what you want). <code>[^;]*;</code> would most probably do what you want.</p></li>\n</ol>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T08:14:44.210", "Id": "51838", "Score": "0", "body": "The actual expression would be:\n .*?;.*?;.*?;.*?;.*?;.*?;.*?IBMlalala123\n\n Mind being a bit more explicit? I tried some variations to replace my version with yours, but failed... (should return the whole line, [^;]*;IBMlalala123 just returns the id string)" } ], "meta_data": { "CommentCount": "1", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T07:56:29.723", "Id": "32451", "ParentId": "32449", "Score": "17" } }
<p>I have a file with just 3500 lines like these:</p> <pre><code>filecontent= "13P397;Fotostuff;t;IBM;IBM lalala 123|IBM lalala 1234;28.000 things;;IBMlalala123|IBMlalala1234" </code></pre> <p>Then I want to grab every line from the <code>filecontent</code> that matches a certain string (with python 2.7):</p> <pre><code>this_item= "IBMlalala123" matchingitems = re.findall(".*?;.*?;.*?;.*?;.*?;.*?;.*?"+this_item,filecontent) </code></pre> <p>It needs 17 seconds for each <code>findall</code>. I need to search 4000 times in these 3500 lines. It takes forever. Any idea how to speed it up?</p>
32449
GOOD_ANSWER
{ "AcceptedAnswerId": "32450", "CommentCount": "5", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T07:47:42.643", "Id": "32449", "Score": "22", "Tags": [ "python", "performance", "regex", "csv", "python-2.x" ], "Title": "Regex to parse semicolon-delimited fields is too slow" }
{ "body": "<p>Some thoughts:</p>\n\n<p>Do you need a regex? You want a line that contains the string so why not use 'in'?</p>\n\n<p>If you are using the regex to validate the line format, you can do that after the less expensive 'in' finds a candidate line reducing the number of times the regex is used.</p>\n\n<p>If you do need a regex then what about replacing <code>.*?;</code> with <code>[^;]*;</code>?</p>\n", "comments": [], "meta_data": { "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T13:43:20.077", "Id": "32472", "ParentId": "32449", "Score": "2" } }
<p>I have a file with just 3500 lines like these:</p> <pre><code>filecontent= "13P397;Fotostuff;t;IBM;IBM lalala 123|IBM lalala 1234;28.000 things;;IBMlalala123|IBMlalala1234" </code></pre> <p>Then I want to grab every line from the <code>filecontent</code> that matches a certain string (with python 2.7):</p> <pre><code>this_item= "IBMlalala123" matchingitems = re.findall(".*?;.*?;.*?;.*?;.*?;.*?;.*?"+this_item,filecontent) </code></pre> <p>It needs 17 seconds for each <code>findall</code>. I need to search 4000 times in these 3500 lines. It takes forever. Any idea how to speed it up?</p>
32449
GOOD_ANSWER
{ "AcceptedAnswerId": "32450", "CommentCount": "5", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T07:47:42.643", "Id": "32449", "Score": "22", "Tags": [ "python", "performance", "regex", "csv", "python-2.x" ], "Title": "Regex to parse semicolon-delimited fields is too slow" }
{ "body": "<p>Use split, like so:</p>\n\n<pre><code>&gt;&gt;&gt; filecontent = \"13P397;Fotostuff;t;IBM;IBM lalala 123|IBM lalala 1234;28.000 things;;IBMlalala123|IBMlalala1234\";\n&gt;&gt;&gt; items = filecontent.split(\";\");\n&gt;&gt;&gt; items;\n['13P397', 'Fotostuff', 't', 'IBM', 'IBM lalala 123|IBM lalala 1234', '28.000 things', '', 'IBMlalala123|IBMlalala1234']\n&gt;&gt;&gt; \n</code></pre>\n\n<p>I'm a bit unsure as what you wanted to do in the last step, but perhaps something like this?</p>\n\n<pre><code>&gt;&gt;&gt; [(i, e) for i,e in enumerate(items) if 'IBMlalala123' in e]\n[(7, 'IBMlalala123|IBMlalala1234')]\n&gt;&gt;&gt; \n</code></pre>\n\n<p>UPDATE: \nIf I get your requirements right on the second attempt: To find all lines in file having 'IBMlalala123' as any one of the semicolon-separated fields, do the following:</p>\n\n<pre><code>&gt;&gt;&gt; with open('big.file', 'r') as f:\n&gt;&gt;&gt; matching_lines = [line for line in f.readlines() if 'IBMlalala123' in line.split(\";\")]\n&gt;&gt;&gt; \n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T16:17:42.527", "Id": "51880", "Score": "1", "body": "+1: split is usually much faster than regex for these kind of cases. Especially if you need to use the captured field values aftwerward." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T18:36:54.137", "Id": "51889", "Score": "0", "body": "Yup, +1 for split. Regex doesn't appear to be the best tool for this job." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-11-13T13:44:37.137", "Id": "57136", "Score": "0", "body": "In my case it was about getting every full line (from some thousand lines) that has a certain string in it. Your solution gives back a part of the line, and would need some enhancements to work with some thousand files. I imagine you would suggest splitting by '\\n' and then checking each line with 'if string in line' and putting that into a list then? 
Don't know if this would be faster." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-11-13T14:46:42.827", "Id": "57145", "Score": "0", "body": "@Mike: Ok, but in your example I don't see any reference to newlines, did you mean that semicolon should signify newlines? Anyway, there is no \"fast\" way of splitting lines. The OS does not keep track of where newlines are stored, so scanning for newline chars is the way any row-reading lib works AFAIK. But you can of course save a lot of mem by reading line by line." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-11-14T15:17:33.993", "Id": "57288", "Score": "0", "body": "It wasn't in the example, but in the text before and after it ;) ...file with just 3500 lines... ...want to grab every line from the filecontent that matches a certain string..." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-11-14T15:30:47.630", "Id": "57293", "Score": "0", "body": "@Mike: So \"filecontent\" actually signifies the content of one line in the file?" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-11-14T15:54:58.790", "Id": "57301", "Score": "0", "body": "Hallo Alexander, \n...file with just 3500 lines... ...want to grab every line from the filecontent that matches a certain string...\nThis means: the file has 3500 lines in it. I refer to these 3500 lines as filecontent, as it is the content of the file...\nThe phrase 'every line' wouldn't make sense, if we were talking about one-line files." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-11-14T16:02:03.143", "Id": "57303", "Score": "0", "body": "Your solution is close, but if you look closer at my first attempt (that awful slow one), you will see, I tried to match the entry in the last field. Your last solution would have to look only for the last filed, e.g. line.split(\";\")[5] (if I counted right). It's a nice solution, thanks!" 
} ], "meta_data": { "CommentCount": "8", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2013-10-09T14:56:41.627", "Id": "32477", "ParentId": "32449", "Score": "16" } }
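Following the comment thread above: the original regex anchored the item after six semicolons, i.e. in the line's last field. If that is the real goal, an exact-value test on the final field's `|`-separated entries avoids the accidental substring hits a plain `in line` check allows; a sketch:

```python
def lines_with_item(text, item):
    # Keep whole lines whose final semicolon-separated field lists
    # `item` as one of its |-separated values (exact match, not substring).
    matches = []
    for line in text.splitlines():
        last_field = line.split(";")[-1]
        if item in last_field.split("|"):
            matches.append(line)
    return matches
```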
<p>This is a similar question to <a href="https://stackoverflow.com/questions/492716/reversing-a-regular-expression-in-python">this</a>, but I am looking for the set of <strong>all possible values</strong> that will match a regular expression pattern.</p> <p>To avoid an infinite set of possible values, I am willing to restrict the regular expression pattern to a subset of the regular expression language.</p> <p>Here's the approach I took (Python code):</p> <pre><code>def generate_possible_strings(pattern): ''' input: 'K0[2468]' output: ['K02', 'K04', 'K06', 'K08'] generates a list of possible strings that would match pattern ie, any value X such that re.search(pattern, X) is a match ''' query = re.compile(pattern, re.IGNORECASE) fill_in = string.uppercase + string.digits + '_' # Build a re for a language subset that is supported by reverse_search bracket = r'\[[^\]]*\]' #finds [A-Z], [0-5], [02468] symbol = r'\\.' #finds \w, \d expression = '|'.join((bracket,symbol)) #search query tokens = re.split(expression, pattern) for c in product(fill_in, repeat=len(tokens)-1): candidate = ''.join(roundrobin(tokens, c)) #roundrobin recipe from itertools documentation if query.match(candidate): yield candidate </code></pre> <p>Supported subset of regular expressions language</p> <ul> <li>Supports <code>[]</code> set of characters (<code>[A-Z]</code>, <code>[0-5]</code>, etc)</li> <li>Supports escaped special characters (<code>\w</code>, <code>\d</code>, <code>\D</code>, etc)</li> </ul> <p>Basically what this does is locate all parts of a regular expression that could match a single character (<code>[A-Z]</code> or <code>[0-5]</code> or <code>[02468]</code> or <code>\w</code> or <code>\d</code>), then for all of the valid replacement characters <code>A-Z0-9_</code> test to see if the replacement matches the regular expression.</p> <p>This algorithm is slow for regular expressions with many fields or if <code>fill_in</code> is not restricted to just <code>A-Z0-9_</code>, but at 
least it guarantees finding every possible string that will match a regular expression in finite time (if the solution set is finite).</p> <p>Is there a faster approach to solving this problem, or an approach that supports a larger percentage of the standard regular expression language?</p>
43981
GOOD_ANSWER
{ "AcceptedAnswerId": null, "CommentCount": "8", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-03-10T17:32:16.360", "Id": "43981", "Score": "3", "Tags": [ "python", "performance", "regex" ], "Title": "All possible values that will match a regular expression" }
{ "body": "<p>A major inefficiency in your solution is that you try every <code>fill_in</code> character as a replacement for any character class in the pattern. Instead, you could use the character class to select matching characters from <code>fill_in</code> and only loop over those. </p>\n\n<pre><code>&gt;&gt;&gt; pattern = 'K0[2468]'\n&gt;&gt;&gt; re.findall(expression, pattern)\n['[2468]']\n&gt;&gt;&gt; re.findall('[2468]', fill_in)\n['2', '4', '6', '8']\n</code></pre>\n\n<p>For a more complete existing solution, you may want to look into these:</p>\n\n<ul>\n<li><a href=\"http://qntm.org/lego\" rel=\"nofollow\">Regular expression parsing in Python</a></li>\n<li>invRegex.py in <a href=\"http://pyparsing.wikispaces.com/Examples\" rel=\"nofollow\">Pyparsing examples</a></li>\n</ul>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-03-12T16:05:50.130", "Id": "76503", "Score": "0", "body": "Thank you! This helps a lot, particularly for cases like '\\d\\d\\d', where only 1000 of 50653 candidates are matches (given fill_in is [A-Z0-9_])" } ], "meta_data": { "CommentCount": "1", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-03-12T07:33:20.617", "Id": "44129", "ParentId": "43981", "Score": "4" } }
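Putting the pruning idea above together end-to-end (the helper below is my sketch, not code from the post, and it assumes every bracket set or escape matches exactly one character independently of the others, so no final `query.match` filter is needed):

```python
import re
from itertools import product

FILL_IN = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_"

def possible_strings(pattern):
    # Split the pattern into literal chunks and one-character tokens
    # ([...] sets and \-escapes), then expand each token to only the
    # FILL_IN characters it actually accepts before recombining.
    token = re.compile(r"\[[^\]]*\]|\\.")
    literals = token.split(pattern)
    choices = [re.findall(t, FILL_IN) for t in token.findall(pattern)]
    for combo in product(*choices):
        out = [literals[0]]
        for ch, lit in zip(combo, literals[1:]):
            out += [ch, lit]
        yield "".join(out)
```

For `'K0[2468]'` only four candidates are built instead of 37, and for `r'\d\d\d'` only 1000 instead of 37³.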
<p>I am trying create an algorithm for finding the zero crossing (check that the signs of all the entries around the entry of interest are not the same) in a two dimensional matrix, as part of implementing the Laplacian of Gaussian edge detection filter for a class, but I feel like I'm fighting against Numpy instead of working with it.</p> <pre><code>import numpy as np range_inc = lambda start, end: range(start, end+1) # Find the zero crossing in the l_o_g image # Done in the most naive way possible def z_c_test(l_o_g_image): print(l_o_g_image) z_c_image = np.zeros(l_o_g_image.shape) for i in range(1, l_o_g_image.shape[0] - 1): for j in range(1, l_o_g_image.shape[1] - 1): neg_count = 0 pos_count = 0 for a in range_inc(-1, 1): for b in range_inc(-1, 1): if a != 0 and b != 0: print("a ", a, " b ", b) if l_o_g_image[i + a, j + b] &lt; 0: neg_count += 1 print("neg") elif l_o_g_image[i + a, j + b] &gt; 0: pos_count += 1 print("pos") else: print("zero") # If all the signs around the pixel are the same # and they're not all zero # then it's not a zero crossing and an edge. # Otherwise, copy it to the edge map. 
z_c = ((neg_count &gt; 0) and (pos_count &gt; 0)) if z_c: print("True for", i, ",", j) print("pos ", pos_count, " neg ", neg_count) z_c_image[i, j] = 1 return z_c_image </code></pre> <p>Here is the test cases it should pass:</p> <pre><code>test1 = np.array([[0,0,1], [0,0,0], [0,0,0]]) test2 = np.array([[0,0,1], [0,0,0], [0,0,-1]]) test3 = np.array([[0,0,0], [0,0,-1], [0,0,0]]) test4 = np.array([[0,0,0], [0,0,0], [0,0,0]]) true_result = np.array([[0,0,0], [0,1,0], [0,0,0]]) false_result = np.array([[0,0,0], [0,0,0], [0,0,0]]) real_result1 = z_c_test(test1) real_result2 = z_c_test(test2) real_result3 = z_c_test(test3) real_result4 = z_c_test(test4) assert(np.array_equal(real_result1, false_result)) assert(np.array_equal(real_result2, true_result)) assert(np.array_equal(real_result3, false_result)) assert(np.array_equal(real_result4, false_result)) </code></pre> <p>How do I vectorize checking a property in a matrix range? Is there a quick way of accessing all of the entries adjacent to an entry in a matrix?</p>
45458
GOOD_ANSWER
{ "AcceptedAnswerId": "67662", "CommentCount": "7", "ContentLicense": "CC BY-SA 4.0", "CreationDate": "2014-03-27T02:00:06.057", "Id": "45458", "Score": "19", "Tags": [ "python", "matrix", "numpy" ], "Title": "Finding a zero crossing in a matrix" }
{ "body": "<p>One way to get the neighbor coordinates without checking for (a != 0) or (b != 0) on every iteration would be to use a generator. Something like this:</p>\n\n<pre><code>def nborz():\n l = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1),(1,0),(1,1)]\n try:\n while True:\n yield l.pop(0)\n except StopIteration:\n return None\n....\n\nfor i in range(1,l_o_g_image.shape[0]-1):\n for j in range(1,l_o_g_image.shape[1]-1):\n neg_count = 0\n pos_count = 0\n nbrgen = nborz()\n for (a,b) in nbrgen:\n print \"a \" + str(a) + \" b \" + str(b)\n if(l_o_g_image[i+a,j+b] &lt; 0):\n neg_count += 1\n print \"neg\"\n elif(l_o_g_image[i+a,j+b] &gt; 0):\n pos_count += 1\n print \"pos\"\n else:\n print \"zero\"\n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-04-20T06:02:03.390", "Id": "83594", "Score": "0", "body": "Would that actually be faster?" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-04-20T10:43:59.747", "Id": "83616", "Score": "0", "body": "I would think it might, 1) because it avoids a comparison on every iteration of the inner loops, and 2) because it avoids computation of the index values for the inner loops (counting -1, 0, 1 twice in a nested fashion). However, I have not actually tried it so I don't know for sure." } ], "meta_data": { "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-04-20T03:39:41.580", "Id": "47693", "ParentId": "45458", "Score": "1" } }
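Two small notes on the generator above: `list.pop` raises `IndexError` on an empty list, not `StopIteration`, so the `try/except` never actually fires (a generator function signals exhaustion simply by returning), and the offset list can be generated rather than spelled out. A sketch:

```python
from itertools import product

def neighbor_offsets():
    # All eight (a, b) offsets around a cell; (0, 0) is the cell itself.
    for offset in product((-1, 0, 1), repeat=2):
        if offset != (0, 0):
            yield offset
```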
<p>I am trying create an algorithm for finding the zero crossing (check that the signs of all the entries around the entry of interest are not the same) in a two dimensional matrix, as part of implementing the Laplacian of Gaussian edge detection filter for a class, but I feel like I'm fighting against Numpy instead of working with it.</p> <pre><code>import numpy as np range_inc = lambda start, end: range(start, end+1) # Find the zero crossing in the l_o_g image # Done in the most naive way possible def z_c_test(l_o_g_image): print(l_o_g_image) z_c_image = np.zeros(l_o_g_image.shape) for i in range(1, l_o_g_image.shape[0] - 1): for j in range(1, l_o_g_image.shape[1] - 1): neg_count = 0 pos_count = 0 for a in range_inc(-1, 1): for b in range_inc(-1, 1): if a != 0 and b != 0: print("a ", a, " b ", b) if l_o_g_image[i + a, j + b] &lt; 0: neg_count += 1 print("neg") elif l_o_g_image[i + a, j + b] &gt; 0: pos_count += 1 print("pos") else: print("zero") # If all the signs around the pixel are the same # and they're not all zero # then it's not a zero crossing and an edge. # Otherwise, copy it to the edge map. 
z_c = ((neg_count &gt; 0) and (pos_count &gt; 0)) if z_c: print("True for", i, ",", j) print("pos ", pos_count, " neg ", neg_count) z_c_image[i, j] = 1 return z_c_image </code></pre> <p>Here is the test cases it should pass:</p> <pre><code>test1 = np.array([[0,0,1], [0,0,0], [0,0,0]]) test2 = np.array([[0,0,1], [0,0,0], [0,0,-1]]) test3 = np.array([[0,0,0], [0,0,-1], [0,0,0]]) test4 = np.array([[0,0,0], [0,0,0], [0,0,0]]) true_result = np.array([[0,0,0], [0,1,0], [0,0,0]]) false_result = np.array([[0,0,0], [0,0,0], [0,0,0]]) real_result1 = z_c_test(test1) real_result2 = z_c_test(test2) real_result3 = z_c_test(test3) real_result4 = z_c_test(test4) assert(np.array_equal(real_result1, false_result)) assert(np.array_equal(real_result2, true_result)) assert(np.array_equal(real_result3, false_result)) assert(np.array_equal(real_result4, false_result)) </code></pre> <p>How do I vectorize checking a property in a matrix range? Is there a quick way of accessing all of the entries adjacent to an entry in a matrix?</p>
45458
GOOD_ANSWER
{ "AcceptedAnswerId": "67662", "CommentCount": "7", "ContentLicense": "CC BY-SA 4.0", "CreationDate": "2014-03-27T02:00:06.057", "Id": "45458", "Score": "19", "Tags": [ "python", "matrix", "numpy" ], "Title": "Finding a zero crossing in a matrix" }
{ "body": "<p>Here's a concise method to get the coordinates of the zero-crossings that seems to work according to my tests:</p>\n\n<pre><code>def zcr(x, y):\n    return x[numpy.diff(numpy.sign(y)) != 0]\n</code></pre>\n\n<p>A simple test case:</p>\n\n<pre><code>&gt;&gt;&gt; zcr(numpy.array([0, 1, 2, 3, 4, 5, 6, 7]), [1, 2, 3, -1, -2, 3, 4, -4])\narray([2, 4, 6])\n</code></pre>\n\n<p>This is 2d only, but I believe it is easy to adapt to more dimensions.</p>\n", "comments": [], "meta_data": { "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-10-23T08:35:01.540", "Id": "67662", "ParentId": "45458", "Score": "7" } }
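For the original two-dimensional question, the whole neighbourhood test also vectorizes with shifted array views, which addresses the "fighting against Numpy" concern directly. This sketch checks all eight neighbours; note the question's `a != 0 and b != 0` guard actually restricts its loop to the four diagonal neighbours, which may not be intended (both variants pass the question's four test cases):

```python
import numpy as np

def z_c_test(img):
    pos = np.zeros(img.shape, dtype=bool)
    neg = np.zeros(img.shape, dtype=bool)
    inner = (slice(1, -1), slice(1, -1))
    rows, cols = img.shape
    for da, db in ((a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)
                   if (a, b) != (0, 0)):
        # View of the neighbour at offset (da, db) for every inner cell.
        shifted = img[1 + da:rows - 1 + da, 1 + db:cols - 1 + db]
        pos[inner] |= shifted > 0
        neg[inner] |= shifted < 0
    # Zero crossing: both signs occur somewhere in the neighbourhood.
    return (pos & neg).astype(img.dtype)
```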
<p>I wrote a program that reads a pcap file and parses the HTTP traffic in the pcap to generate a dictionary that contains HTTP headers for each request and response in this pcap.</p> <p>My code does the following:</p> <ol> <li>Uses tcpflow to reassemble the tcp segments</li> <li>Read the files generated by tcpflow and check if it related to HTTP</li> <li>If the file contains HTTP traffic, my code will read the file and generate a corresponding dictionary that contains the HTTP header fields.</li> </ol> <p>I test my code with multiple test cases, but honestly I don't have a good experience in Python, so could anyone check it for me please?</p> <pre><code>import os from os import listdir from os.path import isfile, join from StringIO import StringIO import mimetools def getFields(headers): fields={} i=1 for header in headers: if len(header)==0: continue # if this line is complement for the previous line if header.find(" ")==0 or header.find("\t")==0: continue if len(header.split(":"))&gt;=2: key = header.split(":")[0].strip() # if the key has multiple values such as cookie if fields.has_key(key): fields[key]=fields[key]+" "+header[header.find(":")+1:].strip() else: fields[key]=header[header.find(":")+1:].strip() while headers[i].find(" ")==0 or headers[i].find("\t")==0 : fields[key]=fields[key]+" "+headers[i].strip() i=i+1 # end of the while loop # end of the else else: # else for [if len(header.split(":"))&gt;=2: ] print "ERROR: RFC VIOLATION" # end of the for loop return fields def main(): # you have to write it in the terminal "cd /home/user/Desktop/empty-dir" os.system("tcpflow -r /home/user/Desktop/12.pcap -v") for f in listdir("/home/user/Desktop/empty-dir"): if f.find("80")==19 or f.find("80")==41: with open("/home/user/Desktop/empty-dir"+f) as fh: fields={} content=fh.read() #to test you could replace it with content="any custom http header" if content.find("\r\n\r\n")==-1: print "ERROR: RFC VIOLATION" return headerSection=content.split("\r\n\r\n")[0] 
headerLines=headerSection.split("\r\n") firstLine=headerLines[0] firstLineFields=firstLine.split(" ") if len(headerLines)&gt;1: fields=getFields(headerLines[1:]) if len(firstLineFields)&gt;=3: if firstLine.find("HTTP")==0: fields["Version"]=firstLineFields[0] fields["Status-code"]=firstLineFields[1] fields["Status-desc"]=" ".join(firstLineFields[2:]) else: fields["Method"]=firstLineFields[0] fields["URL"]=firstLineFields[1] fields["Version"]=firstLineFields[2] else: print "ERROR: RFC VIOLATION" continue print fields print "__________________" return 0 if __name__ == '__main__': main() </code></pre>
57715
GOOD_ANSWER
{ "AcceptedAnswerId": null, "CommentCount": "4", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T18:26:02.543", "Id": "57715", "Score": "10", "Tags": [ "python", "http" ], "Title": "Parse HTTP header using Python and tcpflow" }
{ "body": "<p>New Lines and indentations help the interpreter know where the code terminates and blocks end, you have to be super careful with them </p>\n\n<p>Like in your if condition, you can't have a newline in between the conditions.</p>\n\n<pre><code>if header.find(\" \")==0 or \n header.find(\"\\t\")==0:\n continue\n</code></pre>\n\n<p>This code will error out because you can't have a new line in your condition statement.</p>\n\n<p>Python is New Line Terminated. It should read like this</p>\n\n<pre><code>if header.find(\" \")==0 or header.find(\"\\t\")==0:\n continue\n</code></pre>\n\n<p>Same with this piece of code</p>\n\n<pre><code>while headers[i].find(\" \")==0 or \n headers[i].find(\"\\t\")==0 :\n fields[key]=fields[key]+\" \"+headers[i].strip()\n i=i+1\n</code></pre>\n\n<p>It should read:</p>\n\n<pre><code>while headers[i].find(\" \")==0 or headers[i].find(\"\\t\")==0 :\n fields[key]=fields[key]+\" \"+headers[i].strip()\n i=i+1\n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T19:12:24.773", "Id": "103315", "Score": "0", "body": "when I wrote my code in gedit the indentation was correct, but when I copy/paste the code here the indentation is changed."
}, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T19:14:13.387", "Id": "103316", "Score": "0", "body": "@RaghdaHraiz Edit your question code, but I think you still had issues with Scope of variables" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T19:15:38.797", "Id": "103317", "Score": "0", "body": "regarding the while statement, it should be inside the loop and in the else statement.\n\nthe else which print the error message is related to this if statement \nif len(header.split(\":\"))>=2: @Malachi" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T19:30:40.147", "Id": "103321", "Score": "1", "body": "I edited the code indentation and added comments ..could you check it now @Malachi" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T19:34:47.800", "Id": "103325", "Score": "0", "body": "I saw, and I have changed my review to reflect your edit, thank you." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T19:36:59.400", "Id": "103326", "Score": "1", "body": "@RaghdaHraiz: Please be aware that code edits based on answers are normally disallowed, but this is a somewhat different case. If someone mentions *other* changes that you weren't aware of, then the original code must stay intact." } ], "meta_data": { "CommentCount": "6", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T18:55:35.333", "Id": "57719", "ParentId": "57715", "Score": "5" } }
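The hand-rolled parsing in the question condenses considerably with `str.partition`. This is my restructuring sketch of the same header logic (continuation lines fold into the previous field, duplicate keys concatenate), not tcpflow-specific code:

```python
def parse_headers(raw):
    # Header section is everything before the first blank line.
    head = raw.partition("\r\n\r\n")[0]
    lines = head.split("\r\n")
    first = lines[0].split(" ", 2)
    names = (("Version", "Status-code", "Status-desc")
             if first[0].startswith("HTTP") else ("Method", "URL", "Version"))
    fields = dict(zip(names, first))
    key = None
    for line in lines[1:]:
        if line[:1] in (" ", "\t") and key:
            fields[key] += " " + line.strip()   # folded continuation line
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            fields[key] = fields[key] + " " + value if key in fields else value
    return fields
```

Reading each tcpflow output file and passing its contents through this function reproduces the dictionaries the question's `main` prints.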
<p>I wrote a program that reads a pcap file and parses the HTTP traffic in the pcap to generate a dictionary that contains HTTP headers for each request and response in this pcap.</p> <p>My code does the following:</p> <ol> <li>Uses tcpflow to reassemble the tcp segments</li> <li>Reads the files generated by tcpflow and checks if they are related to HTTP</li> <li>If the file contains HTTP traffic, my code will read the file and generate a corresponding dictionary that contains the HTTP header fields.</li> </ol> <p>I tested my code with multiple test cases, but honestly I don't have much experience with Python, so could anyone check it for me, please?</p> <pre><code>import os from os import listdir from os.path import isfile, join from StringIO import StringIO import mimetools def getFields(headers): fields={} i=1 for header in headers: if len(header)==0: continue # if this line is complement for the previous line if header.find(" ")==0 or header.find("\t")==0: continue if len(header.split(":"))&gt;=2: key = header.split(":")[0].strip() # if the key has multiple values such as cookie if fields.has_key(key): fields[key]=fields[key]+" "+header[header.find(":")+1:].strip() else: fields[key]=header[header.find(":")+1:].strip() while headers[i].find(" ")==0 or headers[i].find("\t")==0 : fields[key]=fields[key]+" "+headers[i].strip() i=i+1 # end of the while loop # end of the else else: # else for [if len(header.split(":"))&gt;=2: ] print "ERROR: RFC VIOLATION" # end of the for loop return fields def main(): # you have to write it in the terminal "cd /home/user/Desktop/empty-dir" os.system("tcpflow -r /home/user/Desktop/12.pcap -v") for f in listdir("/home/user/Desktop/empty-dir"): if f.find("80")==19 or f.find("80")==41: with open("/home/user/Desktop/empty-dir"+f) as fh: fields={} content=fh.read() #to test you could replace it with content="any custom http header" if content.find("\r\n\r\n")==-1: print "ERROR: RFC VIOLATION" return headerSection=content.split("\r\n\r\n")[0]
headerLines=headerSection.split("\r\n") firstLine=headerLines[0] firstLineFields=firstLine.split(" ") if len(headerLines)&gt;1: fields=getFields(headerLines[1:]) if len(firstLineFields)&gt;=3: if firstLine.find("HTTP")==0: fields["Version"]=firstLineFields[0] fields["Status-code"]=firstLineFields[1] fields["Status-desc"]=" ".join(firstLineFields[2:]) else: fields["Method"]=firstLineFields[0] fields["URL"]=firstLineFields[1] fields["Version"]=firstLineFields[2] else: print "ERROR: RFC VIOLATION" continue print fields print "__________________" return 0 if __name__ == '__main__': main() </code></pre>
57715
GOOD_ANSWER
{ "AcceptedAnswerId": null, "CommentCount": "4", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T18:26:02.543", "Id": "57715", "Score": "10", "Tags": [ "python", "http" ], "Title": "Parse HTTP header using Python and tcpflow" }
{ "body": "<p>A few brief comments:</p>\n\n<ul>\n<li>Use four spaces for each indentation level</li>\n<li>Use a space around each operator (<code>==</code>, <code>&gt;=</code>, ...)</li>\n<li>Use the <code>in</code> operator instead of the <code>has_key</code> method</li>\n<li>Use <code>subprocess.Popen</code> instead of <code>os.system</code></li>\n<li>Use <code>x.startswith(y)</code> (returns a boolean directly) instead of <code>x.find(y) == 0</code></li>\n</ul>\n\n<p>A few longer comments:</p>\n\n<ul>\n<li>I'm not sure what is the logic regarding filenames that you need to implement, but I recommend to have a look at the <code>fnmatch</code> module.</li>\n<li>For the parsing of the HTTP fields, you might want to use a regular expression.</li>\n<li>Also, a comment to make clear when a request or a response is being parsed would make the code more readable.</li>\n<li>Rename the <code>i</code> variable to make clear what is being used for (is <code>headers[i]</code> supposed to be the same as <code>header</code>?).</li>\n<li>Do not reinvent the wheel unless you need to. Check if there's an HTTP parsing library around already.</li>\n</ul>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-23T09:22:25.267", "Id": "103431", "Score": "0", "body": "why the in operator is better than has_key and subprocess.Popen is better than os.system @jcollado" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-23T10:38:44.627", "Id": "103440", "Score": "0", "body": "According to the documentation [has_key](https://docs.python.org/2/library/stdtypes.html#dict.has_key) has been deprecated (`in` is more generic and can be used with user defined classes that implement the `__contains__` method) and [os.system](https://docs.python.org/2/library/os.html#os.system) is not as powerful as `subprocess.Popen`. 
There's a section about how to use `subprocess.Popen` instead of `os.system` [here](https://docs.python.org/2/library/subprocess.html#replacing-os-system)." } ], "meta_data": { "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-07-22T22:39:01.000", "Id": "57736", "ParentId": "57715", "Score": "3" } }
<p>The profile tells me it took ~15s to run, but without telling me more.</p> <blockquote> <pre class="lang-none prettyprint-override"><code>Tue Aug 19 20:55:38 2014 Profile.prof 3 function calls in 15.623 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 1 15.623 15.623 15.623 15.623 {singleLoan.genLoan} 1 0.000 0.000 15.623 15.623 &lt;string&gt;:1(&lt;module&gt;) 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} </code></pre> </blockquote> <pre class="lang-python prettyprint-override"><code>import numpy as np cimport numpy as np from libc.stdlib cimport malloc, free from libc.stdlib cimport rand, srand, RAND_MAX import cython cimport cython import StringIO cdef extern from "math.h": int floor(double x) double pow(double x, double y) double exp(double x) cdef double[:] zeros = np.zeros(360) cdef double[:] stepCoupons = np.array([2.0,60,3.0,12.0,4.0,12.0,5.0]) cdef double[:,:] zeros2 = np.empty(shape=(999,2)) paraC2P = StringIO.StringIO('''1 10 0 0 (Intercept) 0 -4.981792 1 10 0 0 lv 50 0.55139 1 10 0 0 lv 51 0.53667 1 10 0 0 lv 52 0.52194 1 10 0 0 lv 53 0.50722 1 10 0 0 lv 54 0.49249 1 10 0 0 lv 55 0.47776 1 10 0 0 lv 56 0.46301 1 10 0 0 lv 57 0.44825 1 10 0 0 lv 58 0.43347 1 10 0 0 lv 59 0.41867 1 10 0 0 lv 60 0.40384 1 10 0 0 lv 61 0.38897 1 10 0 0 lv 62 0.37405 1 10 0 0 lv 63 0.35908 1 10 0 0 lv 64 0.34406 1 10 0 0 lv 65 0.32897 1 10 0 0 lv 66 0.31381 1 10 0 0 lv 67 0.29856 1 10 0 0 lv 68 0.28322 1 10 0 0 lv 69 0.26778 1 10 0 0 lv 70 0.25224 1 10 0 0 lv 71 0.23657 1 10 0 0 lv 72 0.22078 1 10 0 0 lv 73 0.20486 1 10 0 0 lv 74 0.18879 1 10 0 0 lv 75 0.17258 1 10 0 0 lv 76 0.1562 1 10 0 0 lv 77 0.13966 1 10 0 0 lv 78 0.12294 1 10 0 0 lv 79 0.10604 1 10 0 0 lv 80 0.08896 1 10 0 0 lv 81 0.07167 1 10 0 0 lv 82 0.05419 1 10 0 0 lv 83 0.0365 1 10 0 0 lv 84 0.0186 1 10 0 0 lv 85 0.00048 1 10 0 0 lv 86 -0.01785 1 10 0 0 lv 87 -0.03641 1 10 0 0 lv 88 -0.0552 1 10 0 0 lv 89 -0.07422 1 10 0 0 lv 90 
-0.09347 1 10 0 0 lv 91 -0.11295 1 10 0 0 lv 92 -0.13267 1 10 0 0 lv 93 -0.15263 1 10 0 0 lv 94 -0.17282 1 10 0 0 lv 95 -0.19325 1 10 0 0 lv 96 -0.21392 1 10 0 0 lv 97 -0.23482 1 10 0 0 lv 98 -0.25596 1 10 0 0 lv 99 -0.27734 1 10 0 0 lv 100 -0.29895 1 10 0 0 lv 101 -0.32078 1 10 0 0 lv 102 -0.34285 1 10 0 0 lv 103 -0.36513 1 10 0 0 lv 104 -0.38764 1 10 0 0 lv 105 -0.41037 1 10 0 0 lv 106 -0.43331 1 10 0 0 lv 107 -0.45646 1 10 0 0 lv 108 -0.47982 1 10 0 0 lv 109 -0.50338 1 10 0 0 lv 110 -0.52713 1 10 0 0 lv 111 -0.55107 1 10 0 0 lv 112 -0.5752 1 10 0 0 lv 113 -0.59952 1 10 0 0 lv 114 -0.624 1 10 0 0 lv 115 -0.64866 1 10 0 0 lv 116 -0.67348 1 10 0 0 lv 117 -0.69846 1 10 0 0 lv 118 -0.72359 1 10 0 0 lv 119 -0.74887 1 10 0 0 lv 120 -0.77429 1 10 0 0 lv 121 -0.79985 1 10 0 0 lv 122 -0.82554 1 10 0 0 lv 123 -0.85135 1 10 0 0 lv 124 -0.87728 1 10 0 0 lv 125 -0.90332 1 10 0 0 lv 126 -0.92947 1 10 0 0 lv 127 -0.95572 1 10 0 0 lv 128 -0.98206 1 10 0 0 lv 129 -1.0085 1 10 0 0 lv 130 -1.03502 1 10 0 0 lv 131 -1.06162 1 10 0 0 lv 132 -1.0883 1 10 0 0 lv 133 -1.11504 1 10 0 0 lv 134 -1.14185 1 10 0 0 lv 135 -1.16872 1 10 0 0 lv 136 -1.19565 1 10 0 0 lv 137 -1.22263 1 10 0 0 lv 138 -1.24965 1 10 0 0 lv 139 -1.27672 1 10 0 0 lv 140 -1.30382 1 10 0 0 lv 141 -1.33097 1 10 0 0 lv 142 -1.35814 1 10 0 0 lv 143 -1.38534 1 10 0 0 lv 144 -1.41257 1 10 0 0 lv 145 -1.43982 1 10 0 0 lv 146 -1.46709 1 10 0 0 lv 147 -1.49437 1 10 0 0 lv 148 -1.52167 1 10 0 0 lv 149 -1.54898 1 10 0 0 lv 150 -1.57629 1 10 0 0 dollarSaving -50 -1.10708 1 10 0 0 dollarSaving -48 -1.08948 1 10 0 0 dollarSaving -46 -1.07188 1 10 0 0 dollarSaving -44 -1.05429 1 10 0 0 dollarSaving -42 -1.03669 1 10 0 0 dollarSaving -40 -1.0191 1 10 0 0 dollarSaving -38 -1.0015 1 10 0 0 dollarSaving -36 -0.98391 1 10 0 0 dollarSaving -34 -0.96632 1 10 0 0 dollarSaving -32 -0.94873 1 10 0 0 dollarSaving -30 -0.93115 1 10 0 0 dollarSaving -28 -0.91357 1 10 0 0 dollarSaving -26 -0.896 1 10 0 0 dollarSaving -24 -0.87843 1 10 0 0 
dollarSaving -22 -0.86087 1 10 0 0 dollarSaving -20 -0.84331 1 10 0 0 dollarSaving -18 -0.82577 1 10 0 0 dollarSaving -16 -0.80823 1 10 0 0 dollarSaving -14 -0.79071 1 10 0 0 dollarSaving -12 -0.7732 1 10 0 0 dollarSaving -10 -0.7557 1 10 0 0 dollarSaving -8 -0.73821 1 10 0 0 dollarSaving -6 -0.72074 1 10 0 0 dollarSaving -4 -0.70329 1 10 0 0 dollarSaving -2 -0.68586 1 10 0 0 dollarSaving 0 -0.66844 1 10 0 0 dollarSaving 2 -0.65105 1 10 0 0 dollarSaving 4 -0.63368 1 10 0 0 dollarSaving 6 -0.61634 1 10 0 0 dollarSaving 8 -0.59901 1 10 0 0 dollarSaving 10 -0.58172 1 10 0 0 dollarSaving 12 -0.56446 1 10 0 0 dollarSaving 14 -0.54722 1 10 0 0 dollarSaving 16 -0.53002 1 10 0 0 dollarSaving 18 -0.51285 1 10 0 0 dollarSaving 20 -0.49572 1 10 0 0 dollarSaving 22 -0.47862 1 10 0 0 dollarSaving 24 -0.46157 1 10 0 0 dollarSaving 26 -0.44455 1 10 0 0 dollarSaving 28 -0.42757 1 10 0 0 dollarSaving 30 -0.41064 1 10 0 0 dollarSaving 32 -0.39376 1 10 0 0 dollarSaving 34 -0.37692 1 10 0 0 dollarSaving 36 -0.36014 1 10 0 0 dollarSaving 38 -0.3434 1 10 0 0 dollarSaving 40 -0.32672 1 10 0 0 dollarSaving 42 -0.31009 1 10 0 0 dollarSaving 44 -0.29352 1 10 0 0 dollarSaving 46 -0.27701 1 10 0 0 dollarSaving 48 -0.26056 1 10 0 0 dollarSaving 50 -0.24417 1 10 0 0 dollarSaving 52 -0.22785 1 10 0 0 dollarSaving 54 -0.21159 1 10 0 0 dollarSaving 56 -0.1954 1 10 0 0 dollarSaving 58 -0.17928 1 10 0 0 dollarSaving 60 -0.16323 1 10 0 0 dollarSaving 62 -0.14725 1 10 0 0 dollarSaving 64 -0.13135 1 10 0 0 dollarSaving 66 -0.11553 1 10 0 0 dollarSaving 68 -0.09978 1 10 0 0 dollarSaving 70 -0.08411 1 10 0 0 dollarSaving 72 -0.06853 1 10 0 0 dollarSaving 74 -0.05303 1 10 0 0 dollarSaving 76 -0.03761 1 10 0 0 dollarSaving 78 -0.02229 1 10 0 0 dollarSaving 80 -0.00704 1 10 0 0 dollarSaving 82 0.00811 1 10 0 0 dollarSaving 84 0.02317 1 10 0 0 dollarSaving 86 0.03813 1 10 0 0 dollarSaving 88 0.05301 1 10 0 0 dollarSaving 90 0.06778 1 10 0 0 dollarSaving 92 0.08246 1 10 0 0 dollarSaving 94 0.09704 1 10 0 0 
dollarSaving 96 0.11152 1 10 0 0 dollarSaving 98 0.1259 1 10 0 0 dollarSaving 100 0.14018 1 10 0 0 dollarSaving 102 0.15435 1 10 0 0 dollarSaving 104 0.16841 1 10 0 0 dollarSaving 106 0.18237 1 10 0 0 dollarSaving 108 0.19623 1 10 0 0 dollarSaving 110 0.20997 1 10 0 0 dollarSaving 112 0.2236 1 10 0 0 dollarSaving 114 0.23712 1 10 0 0 dollarSaving 116 0.25053 1 10 0 0 dollarSaving 118 0.26383 1 10 0 0 dollarSaving 120 0.27701 1 10 0 0 dollarSaving 122 0.29007 1 10 0 0 dollarSaving 124 0.30302 1 10 0 0 dollarSaving 126 0.31585 1 10 0 0 dollarSaving 128 0.32857 1 10 0 0 dollarSaving 130 0.34116 1 10 0 0 dollarSaving 132 0.35364 1 10 0 0 dollarSaving 134 0.36599 1 10 0 0 dollarSaving 136 0.37822 1 10 0 0 dollarSaving 138 0.39034 1 10 0 0 dollarSaving 140 0.40233 1 10 0 0 dollarSaving 142 0.4142 1 10 0 0 dollarSaving 144 0.42594 1 10 0 0 dollarSaving 146 0.43756 1 10 0 0 dollarSaving 148 0.44906 1 10 0 0 dollarSaving 150 0.46043 1 10 0 0 dollarSaving 152 0.47168 1 10 0 0 dollarSaving 154 0.4828 1 10 0 0 dollarSaving 156 0.49379 1 10 0 0 dollarSaving 158 0.50466 1 10 0 0 dollarSaving 160 0.51541 1 10 0 0 dollarSaving 162 0.52602 1 10 0 0 dollarSaving 164 0.53651 1 10 0 0 dollarSaving 166 0.54687 1 10 0 0 dollarSaving 168 0.55711 1 10 0 0 dollarSaving 170 0.56722 1 10 0 0 dollarSaving 172 0.5772 1 10 0 0 dollarSaving 174 0.58705 1 10 0 0 dollarSaving 176 0.59678 1 10 0 0 dollarSaving 178 0.60637 1 10 0 0 dollarSaving 180 0.61584 1 10 0 0 dollarSaving 182 0.62519 1 10 0 0 dollarSaving 184 0.6344 1 10 0 0 dollarSaving 186 0.64349 1 10 0 0 dollarSaving 188 0.65245 1 10 0 0 dollarSaving 190 0.66129 1 10 0 0 dollarSaving 192 0.67 1 10 0 0 dollarSaving 194 0.67858 1 10 0 0 dollarSaving 196 0.68704 1 10 0 0 dollarSaving 198 0.69537 1 10 0 0 dollarSaving 200 0.70358 1 10 0 0 dollarSaving 202 0.71167 1 10 0 0 dollarSaving 204 0.71963 1 10 0 0 dollarSaving 206 0.72746 1 10 0 0 dollarSaving 208 0.73517 1 10 0 0 dollarSaving 210 0.74276 1 10 0 0 dollarSaving 212 0.75023 1 10 0 0 
dollarSaving 214 0.75758 1 10 0 0 dollarSaving 216 0.7648 1 10 0 0 dollarSaving 218 0.77191 1 10 0 0 dollarSaving 220 0.77889 1 10 0 0 dollarSaving 222 0.78576 1 10 0 0 dollarSaving 224 0.79251 1 10 0 0 dollarSaving 226 0.79914 1 10 0 0 dollarSaving 228 0.80565 1 10 0 0 dollarSaving 230 0.81205 1 10 0 0 dollarSaving 232 0.81833 1 10 0 0 dollarSaving 234 0.8245 1 10 0 0 dollarSaving 236 0.83055 1 10 0 0 dollarSaving 238 0.8365 1 10 0 0 dollarSaving 240 0.84233 1 10 0 0 dollarSaving 242 0.84805 1 10 0 0 dollarSaving 244 0.85366 1 10 0 0 dollarSaving 246 0.85916 1 10 0 0 dollarSaving 248 0.86455 1 10 0 0 dollarSaving 250 0.86984 1 10 0 0 dollarSaving 252 0.87502 1 10 0 0 dollarSaving 254 0.8801 1 10 0 0 dollarSaving 256 0.88507 1 10 0 0 dollarSaving 258 0.88994 1 10 0 0 dollarSaving 260 0.89471 1 10 0 0 dollarSaving 262 0.89938 1 10 0 0 dollarSaving 264 0.90395 1 10 0 0 dollarSaving 266 0.90843 1 10 0 0 dollarSaving 268 0.91281 1 10 0 0 dollarSaving 270 0.91709 1 10 0 0 dollarSaving 272 0.92128 1 10 0 0 dollarSaving 274 0.92539 1 10 0 0 dollarSaving 276 0.9294 1 10 0 0 dollarSaving 278 0.93332 1 10 0 0 dollarSaving 280 0.93715 1 10 0 0 dollarSaving 282 0.9409 1 10 0 0 dollarSaving 284 0.94457 1 10 0 0 dollarSaving 286 0.94815 1 10 0 0 dollarSaving 288 0.95165 1 10 0 0 dollarSaving 290 0.95507 1 10 0 0 dollarSaving 292 0.95842 1 10 0 0 dollarSaving 294 0.96169 1 10 0 0 dollarSaving 296 0.96488 1 10 0 0 dollarSaving 298 0.96801 1 10 0 0 dollarSaving 300 0.97106 1 10 0 0 dollarSaving 302 0.97404 1 10 0 0 dollarSaving 304 0.97696 1 10 0 0 dollarSaving 306 0.97981 1 10 0 0 dollarSaving 308 0.9826 1 10 0 0 dollarSaving 310 0.98532 1 10 0 0 dollarSaving 312 0.98799 1 10 0 0 dollarSaving 314 0.9906 1 10 0 0 dollarSaving 316 0.99315 1 10 0 0 dollarSaving 318 0.99565 1 10 0 0 dollarSaving 320 0.99809 1 10 0 0 dollarSaving 322 1.00049 1 10 0 0 dollarSaving 324 1.00283 1 10 0 0 dollarSaving 326 1.00513 1 10 0 0 dollarSaving 328 1.00738 1 10 0 0 dollarSaving 330 1.00959 1 10 0 0 
dollarSaving 332 1.01176 1 10 0 0 dollarSaving 334 1.01389 1 10 0 0 dollarSaving 336 1.01598 1 10 0 0 dollarSaving 338 1.01803 1 10 0 0 dollarSaving 340 1.02005 1 10 0 0 dollarSaving 342 1.02204 1 10 0 0 dollarSaving 344 1.02399 1 10 0 0 dollarSaving 346 1.02592 1 10 0 0 dollarSaving 348 1.02782 1 10 0 0 dollarSaving 350 1.02969 1 10 0 0 dollarSaving 352 1.03154 1 10 0 0 dollarSaving 354 1.03337 1 10 0 0 dollarSaving 356 1.03518 1 10 0 0 dollarSaving 358 1.03696 1 10 0 0 dollarSaving 360 1.03873 1 10 0 0 dollarSaving 362 1.04049 1 10 0 0 dollarSaving 364 1.04222 1 10 0 0 dollarSaving 366 1.04395 1 10 0 0 dollarSaving 368 1.04566 1 10 0 0 dollarSaving 370 1.04736 1 10 0 0 dollarSaving 372 1.04906 1 10 0 0 dollarSaving 374 1.05074 1 10 0 0 dollarSaving 376 1.05242 1 10 0 0 dollarSaving 378 1.05409 1 10 0 0 dollarSaving 380 1.05576 1 10 0 0 dollarSaving 382 1.05742 1 10 0 0 dollarSaving 384 1.05908 1 10 0 0 dollarSaving 386 1.06073 1 10 0 0 dollarSaving 388 1.06238 1 10 0 0 dollarSaving 390 1.06403 1 10 0 0 dollarSaving 392 1.06568 1 10 0 0 dollarSaving 394 1.06733 1 10 0 0 dollarSaving 396 1.06898 1 10 0 0 dollarSaving 398 1.07063 1 10 0 0 dollarSaving 400 1.07228 1 10 0 0 fo 500 0.58651 1 10 0 0 fo 501 0.58111 1 10 0 0 fo 502 0.57571 1 10 0 0 fo 503 0.57031 1 10 0 0 fo 504 0.56492 1 10 0 0 fo 505 0.55952 1 10 0 0 fo 506 0.55412 1 10 0 0 fo 507 0.54872 1 10 0 0 fo 508 0.54332 1 10 0 0 fo 509 0.53793 1 10 0 0 fo 510 0.53253 1 10 0 0 fo 511 0.52713 1 10 0 0 fo 512 0.52173 1 10 0 0 fo 513 0.51633 1 10 0 0 fo 514 0.51093 1 10 0 0 fo 515 0.50554 1 10 0 0 fo 516 0.50014 1 10 0 0 fo 517 0.49474 1 10 0 0 fo 518 0.48934 1 10 0 0 fo 519 0.48394 1 10 0 0 fo 520 0.47855 1 10 0 0 fo 521 0.47315 1 10 0 0 fo 522 0.46775 1 10 0 0 fo 523 0.46235 1 10 0 0 fo 524 0.45695 1 10 0 0 fo 525 0.45156 1 10 0 0 fo 526 0.44616 1 10 0 0 fo 527 0.44076 1 10 0 0 fo 528 0.43536 1 10 0 0 fo 529 0.42996 1 10 0 0 fo 530 0.42457 1 10 0 0 fo 531 0.41917 1 10 0 0 fo 532 0.41377 1 10 0 0 fo 533 0.40837 1 
10 0 0 fo 534 0.40297 1 10 0 0 fo 535 0.39758 1 10 0 0 fo 536 0.39218 1 10 0 0 fo 537 0.38678 1 10 0 0 fo 538 0.38138 1 10 0 0 fo 539 0.37598 1 10 0 0 fo 540 0.37059 1 10 0 0 fo 541 0.36519 1 10 0 0 fo 542 0.35979 1 10 0 0 fo 543 0.35439 1 10 0 0 fo 544 0.34899 1 10 0 0 fo 545 0.34359 1 10 0 0 fo 546 0.3382 1 10 0 0 fo 547 0.3328 1 10 0 0 fo 548 0.3274 1 10 0 0 fo 549 0.322 1 10 0 0 fo 550 0.3166 1 10 0 0 fo 551 0.31121 1 10 0 0 fo 552 0.30581 1 10 0 0 fo 553 0.30041 1 10 0 0 fo 554 0.29501 1 10 0 0 fo 555 0.28961 1 10 0 0 fo 556 0.28422 1 10 0 0 fo 557 0.27882 1 10 0 0 fo 558 0.27342 1 10 0 0 fo 559 0.26802 1 10 0 0 fo 560 0.26262 1 10 0 0 fo 561 0.25723 1 10 0 0 fo 562 0.25183 1 10 0 0 fo 563 0.24643 1 10 0 0 fo 564 0.24103 1 10 0 0 fo 565 0.23563 1 10 0 0 fo 566 0.23024 1 10 0 0 fo 567 0.22484 1 10 0 0 fo 568 0.21944 1 10 0 0 fo 569 0.21404 1 10 0 0 fo 570 0.20864 1 10 0 0 fo 571 0.20325 1 10 0 0 fo 572 0.19785 1 10 0 0 fo 573 0.19245 1 10 0 0 fo 574 0.18705 1 10 0 0 fo 575 0.18165 1 10 0 0 fo 576 0.17625 1 10 0 0 fo 577 0.17086 1 10 0 0 fo 578 0.16546 1 10 0 0 fo 579 0.16006 1 10 0 0 fo 580 0.15466 1 10 0 0 fo 581 0.14926 1 10 0 0 fo 582 0.14387 1 10 0 0 fo 583 0.13847 1 10 0 0 fo 584 0.13307 1 10 0 0 fo 585 0.12767 1 10 0 0 fo 586 0.12227 1 10 0 0 fo 587 0.11688 1 10 0 0 fo 588 0.11148 1 10 0 0 fo 589 0.10608 1 10 0 0 fo 590 0.10068 1 10 0 0 fo 591 0.09528 1 10 0 0 fo 592 0.08989 1 10 0 0 fo 593 0.08449 1 10 0 0 fo 594 0.07909 1 10 0 0 fo 595 0.07369 1 10 0 0 fo 596 0.06829 1 10 0 0 fo 597 0.0629 1 10 0 0 fo 598 0.0575 1 10 0 0 fo 599 0.0521 1 10 0 0 fo 600 0.0467 1 10 0 0 fo 601 0.0413 1 10 0 0 fo 602 0.03591 1 10 0 0 fo 603 0.03051 1 10 0 0 fo 604 0.02511 1 10 0 0 fo 605 0.01972 1 10 0 0 fo 606 0.01433 1 10 0 0 fo 607 0.00895 1 10 0 0 fo 608 0.00357 1 10 0 0 fo 609 -0.0018 1 10 0 0 fo 610 -0.00716 1 10 0 0 fo 611 -0.01251 1 10 0 0 fo 612 -0.01784 1 10 0 0 fo 613 -0.02316 1 10 0 0 fo 614 -0.02846 1 10 0 0 fo 615 -0.03374 1 10 0 0 fo 616 -0.03899 1 10 0 0 fo 
617 -0.04422 1 10 0 0 fo 618 -0.04942 1 10 0 0 fo 619 -0.05459 1 10 0 0 fo 620 -0.05972 1 10 0 0 fo 621 -0.06482 1 10 0 0 fo 622 -0.06988 1 10 0 0 fo 623 -0.07489 1 10 0 0 fo 624 -0.07986 1 10 0 0 fo 625 -0.08478 1 10 0 0 fo 626 -0.08964 1 10 0 0 fo 627 -0.09445 1 10 0 0 fo 628 -0.09921 1 10 0 0 fo 629 -0.1039 1 10 0 0 fo 630 -0.10853 1 10 0 0 fo 631 -0.11309 1 10 0 0 fo 632 -0.11758 1 10 0 0 fo 633 -0.122 1 10 0 0 fo 634 -0.12634 1 10 0 0 fo 635 -0.1306 1 10 0 0 fo 636 -0.13478 1 10 0 0 fo 637 -0.13888 1 10 0 0 fo 638 -0.14288 1 10 0 0 fo 639 -0.1468 1 10 0 0 fo 640 -0.15063 1 10 0 0 fo 641 -0.15436 1 10 0 0 fo 642 -0.15799 1 10 0 0 fo 643 -0.16152 1 10 0 0 fo 644 -0.16494 1 10 0 0 fo 645 -0.16826 1 10 0 0 fo 646 -0.17147 1 10 0 0 fo 647 -0.17457 1 10 0 0 fo 648 -0.17756 1 10 0 0 fo 649 -0.18043 1 10 0 0 fo 650 -0.18319 1 10 0 0 fo 651 -0.18582 1 10 0 0 fo 652 -0.18834 1 10 0 0 fo 653 -0.19073 1 10 0 0 fo 654 -0.19299 1 10 0 0 fo 655 -0.19513 1 10 0 0 fo 656 -0.19714 1 10 0 0 fo 657 -0.19901 1 10 0 0 fo 658 -0.20076 1 10 0 0 fo 659 -0.20237 1 10 0 0 fo 660 -0.20385 1 10 0 0 fo 661 -0.20519 1 10 0 0 fo 662 -0.20639 1 10 0 0 fo 663 -0.20746 1 10 0 0 fo 664 -0.20838 1 10 0 0 fo 665 -0.20917 1 10 0 0 fo 666 -0.20981 1 10 0 0 fo 667 -0.21031 1 10 0 0 fo 668 -0.21067 1 10 0 0 fo 669 -0.21088 1 10 0 0 fo 670 -0.21095 1 10 0 0 fo 671 -0.21087 1 10 0 0 fo 672 -0.21065 1 10 0 0 fo 673 -0.21028 1 10 0 0 fo 674 -0.20977 1 10 0 0 fo 675 -0.20911 1 10 0 0 fo 676 -0.20831 1 10 0 0 fo 677 -0.20736 1 10 0 0 fo 678 -0.20626 1 10 0 0 fo 679 -0.20502 1 10 0 0 fo 680 -0.20363 1 10 0 0 fo 681 -0.2021 1 10 0 0 fo 682 -0.20042 1 10 0 0 fo 683 -0.1986 1 10 0 0 fo 684 -0.19664 1 10 0 0 fo 685 -0.19453 1 10 0 0 fo 686 -0.19228 1 10 0 0 fo 687 -0.1899 1 10 0 0 fo 688 -0.18737 1 10 0 0 fo 689 -0.1847 1 10 0 0 fo 690 -0.1819 1 10 0 0 fo 691 -0.17896 1 10 0 0 fo 692 -0.17588 1 10 0 0 fo 693 -0.17267 1 10 0 0 fo 694 -0.16933 1 10 0 0 fo 695 -0.16586 1 10 0 0 fo 696 -0.16226 1 10 0 0 fo 697 
-0.15853 1 10 0 0 fo 698 -0.15468 1 10 0 0 fo 699 -0.1507 1 10 0 0 fo 700 -0.1466 1 10 0 0 fo 701 -0.14238 1 10 0 0 fo 702 -0.13805 1 10 0 0 fo 703 -0.1336 1 10 0 0 fo 704 -0.12903 1 10 0 0 fo 705 -0.12436 1 10 0 0 fo 706 -0.11957 1 10 0 0 fo 707 -0.11468 1 10 0 0 fo 708 -0.10969 1 10 0 0 fo 709 -0.1046 1 10 0 0 fo 710 -0.0994 1 10 0 0 fo 711 -0.09412 1 10 0 0 fo 712 -0.08874 1 10 0 0 fo 713 -0.08326 1 10 0 0 fo 714 -0.0777 1 10 0 0 fo 715 -0.07206 1 10 0 0 fo 716 -0.06633 1 10 0 0 fo 717 -0.06053 1 10 0 0 fo 718 -0.05465 1 10 0 0 fo 719 -0.04869 1 10 0 0 fo 720 -0.04267 1 10 0 0 fo 721 -0.03658 1 10 0 0 fo 722 -0.03042 1 10 0 0 fo 723 -0.02421 1 10 0 0 fo 724 -0.01793 1 10 0 0 fo 725 -0.0116 1 10 0 0 fo 726 -0.00522 1 10 0 0 fo 727 0.00121 1 10 0 0 fo 728 0.00769 1 10 0 0 fo 729 0.01421 1 10 0 0 fo 730 0.02077 1 10 0 0 fo 731 0.02737 1 10 0 0 fo 732 0.034 1 10 0 0 fo 733 0.04066 1 10 0 0 fo 734 0.04735 1 10 0 0 fo 735 0.05407 1 10 0 0 fo 736 0.06081 1 10 0 0 fo 737 0.06758 1 10 0 0 fo 738 0.07436 1 10 0 0 fo 739 0.08115 1 10 0 0 fo 740 0.08797 1 10 0 0 fo 741 0.09479 1 10 0 0 fo 742 0.10162 1 10 0 0 fo 743 0.10846 1 10 0 0 fo 744 0.11531 1 10 0 0 fo 745 0.12216 1 10 0 0 fo 746 0.12902 1 10 0 0 fo 747 0.13588 1 10 0 0 fo 748 0.14274 1 10 0 0 fo 749 0.1496 1 10 0 0 fo 750 0.15646 1 10 0 0 xcumC 0 -0.64115 1 10 0 0 xcumC 1 -0.622 1 10 0 0 xcumC 2 -0.60285 1 10 0 0 xcumC 3 -0.5837 1 10 0 0 xcumC 4 -0.56455 1 10 0 0 xcumC 5 -0.54541 1 10 0 0 xcumC 6 -0.52628 1 10 0 0 xcumC 7 -0.50716 1 10 0 0 xcumC 8 -0.48805 1 10 0 0 xcumC 9 -0.46896 1 10 0 0 xcumC 10 -0.4499 1 10 0 0 xcumC 11 -0.43086 1 10 0 0 xcumC 12 -0.41186 1 10 0 0 xcumC 13 -0.39291 1 10 0 0 xcumC 14 -0.374 1 10 0 0 xcumC 15 -0.35515 1 10 0 0 xcumC 16 -0.33636 1 10 0 0 xcumC 17 -0.31764 1 10 0 0 xcumC 18 -0.29899 1 10 0 0 xcumC 19 -0.28044 1 10 0 0 xcumC 20 -0.26197 1 10 0 0 xcumC 21 -0.24361 1 10 0 0 xcumC 22 -0.22536 1 10 0 0 xcumC 23 -0.20722 1 10 0 0 xcumC 24 -0.18921 1 10 0 0 xcumC 25 -0.17133 1 10 0 0 
xcumC 26 -0.1536 1 10 0 0 xcumC 27 -0.13602 1 10 0 0 xcumC 28 -0.11859 1 10 0 0 xcumC 29 -0.10134 1 10 0 0 xcumC 30 -0.08426 1 10 0 0 xcumC 31 -0.06736 1 10 0 0 xcumC 32 -0.05066 1 10 0 0 xcumC 33 -0.03416 1 10 0 0 xcumC 34 -0.01786 1 10 0 0 xcumC 35 -0.00179 ''') matParaC2P = np.genfromtxt(paraC2P, names= ['pt1','ct','mods','mbas','para','x','y'], dtype=['&lt;f8','&lt;f8','&lt;f8','&lt;f8','|S30','&lt;f8','&lt;f8'], delimiter='\t') cdef packed struct paraArray: double pt1 double ct double mods double mbas char[30] para double x double y @cython.boundscheck(False) @cython.wraparound(False) @cython.nonecheck(False) @cython.cdivision(True) cpdef genc2p(paraArray [:] mat_p = matParaC2P, double pt_p= 1.0, double ct_p=10.0, double mbas_p =0.0, double mods_p=0.0, double count_c_p=24.0, double lv_p=60.0, double fo_p=740.0, double incentive_p = 200.0): cdef double[:,:] lvsubmat = getSubParaMat(input_s = mat_p, para_s='lv', ct_s = ct_p, pt1_s=pt_p, mbas_s=mbas_p,mods_s=0.0) cdef double lv0 = interp2d(lvsubmat[:,0], lvsubmat[:,1], lv_p) cdef double[:,:] fosubmat = getSubParaMat(input_s = mat_p, para_s='fo', ct_s = ct_p, pt1_s=pt_p, mbas_s=mbas_p,mods_s=0.0) cdef double fo0 = interp2d(fosubmat[:,0], fosubmat[:,1], fo_p) cdef double[:,:] cumCsubmat = getSubParaMat(input_s = mat_p, para_s='xcumC', ct_s = ct_p, pt1_s=pt_p, mbas_s=mbas_p,mods_s=0.0) cdef double cumC0 = interp2d(cumCsubmat[:,0], cumCsubmat[:,1], count_c_p) cdef double[:,:] incsubmat = getSubParaMat(input_s= mat_p, para_s='dollarSaving', ct_s = ct_p, pt1_s=pt_p, mbas_s=mbas_p,mods_s=mods_p) cdef double inc0 = interp2d(incsubmat[:,0], incsubmat[:,1], incentive_p) cdef double[:,:] intercept = getSubParaMat(input_s= mat_p, para_s='(Intercept)', ct_s = ct_p, pt1_s=pt_p, mbas_s=mbas_p,mods_s=0.0) cdef double intercept0 = intercept[0,1] return genLogit(lv0+fo0+cumC0+inc0+intercept0) #return intercept0 cpdef double[:,:] getSubParaMat(paraArray [:] input_s = matParaC2P, double pt1_s =1.0, double ct_s=10.0, double 
mods_s=0.0, double mbas_s=0.0, char[30] para_s = 'lv'): cdef int start = 0, end =0, k=0 cdef double[:,:] output = zeros2 for i from 0&lt;=i&lt;input_s.size: if input_s[i].pt1 == pt1_s and input_s[i].ct == ct_s and input_s[i].mods==mods_s and input_s[i].mbas==mbas_s and input_s[i].para[:]==para_s[:] and start==0: start = i end = i while end&lt;input_s.size and input_s[end].pt1 == pt1_s and input_s[end].ct == ct_s and input_s[end].mods==mods_s and input_s[end].mbas==mbas_s and input_s[end].para[:]==para_s[:]: (output[k,0], output[k, 1]) = (input_s[end].x, input_s[end].y) end += 1 k+=1 break return output[:k, :] cpdef double interp2d(double[:] x, double[:] y, double new_x, int ex = 0): cdef int nx = x.shape[0]-1 cdef int ny = y.shape[0]-1 cdef double new_y cdef int steps=0 if ex==0 and new_x&lt;x[0]: new_x = x[0] elif ex==0 and new_x&gt;x[nx]: new_x = x[nx] if new_x&lt;=x[0]: new_y = (new_x - x[0])*(y[0]-y[1])/(x[0] - x[1]) + y[0] elif new_x&gt;=x[nx]: new_y = (new_x - x[nx])*(y[ny] - y[ny-1])/(x[nx] - x[nx-1]) + y[ny] else: while new_x&gt;x[steps]: steps +=1 new_y = (new_x - x[steps-1])*(y[steps] - y[steps-1])/(x[steps] - x[steps-1]) + y[steps-1] return new_y cpdef double genLogit(double total): return 1.0/(1.0+exp(-1.0*(total))) cpdef genLoan(): for j from 0&lt;=j&lt;100: for i from 1&lt;=i&lt;240: prob_p = genc2p(mat_p = matParaC2P) </code></pre>
60540
GOOD_ANSWER
{ "AcceptedAnswerId": null, "CommentCount": "3", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-08-20T01:02:35.297", "Id": "60540", "Score": "3", "Tags": [ "python", "performance", "matrix", "numpy", "cython" ], "Title": "Where is the bottleneck in my Cython code?" }
{ "body": "<p>There's a lot of code here, so I'm just going to review <code>interp2d</code>.</p>\n\n<ol>\n<li><p>There's no docstring. What does this function do? How am I supposed to call it? Are there any constraints on the parameters (for example, does the <code>x</code> array need to be sorted)?.</p></li>\n<li><p>The function seems to be misnamed: it interpolates the function that takes <code>x</code> to <code>y</code>, but this is a function of one argument, so surely <code>interp1d</code> or just <code>interp</code> would be better names. (Compare <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html\" rel=\"nofollow\"><code>numpy.interp</code></a>, which does something very similar to your function, but is documentated as \"one-dimensional linear interpolation\".)</p></li>\n<li><p>The parameter <code>ex</code> is opaquely named. What does it mean? Reading the code, it seems that it controls whether or not to <em>extrapolate</em> for values of <em>x</em> outside the given range. So it should be named <code>extrapolate</code> and it should be a Boolean (<code>True</code> or <code>False</code>) not a number.</p></li>\n<li><p>Python allows indexing from the end of an array, so you can write <code>x[-1]</code> for the last element of an array, instead of the clusmy <code>x[x.shape[0] - 1]</code>.</p></li>\n<li><p>You find the interval in which to do the interpolation by a linear search over the array <code>x</code>, which takes time proportional to the size of <code>x</code> and so will be slow when <code>x</code> is large. Since <code>x</code> needs to be sorted in order for this algorithm to make sense, you should use binary search (for example, <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html\" rel=\"nofollow\"><code>numpy.searchsorted</code></a>) so that the time taken is logarithmic in the size of <code>x</code>.</p></li>\n<li><p>You have failed to vectorize this function. 
The whole point of NumPy is that it provides fast operations on arrays of fixed-size numbers. If you find yourself writing a function that operates on just one value at a time (here, the single value <code>new_x</code>) then you probably aren't getting much, if any, benefit from NumPy.</p></li>\n<li><p>Putting all this together, I'd write something like this:</p>\n\n<pre><code>def interp1d(x, y, a, extrapolate=False):\n \"\"\"Interpolate a 1-D function.\n\n x is a 1-dimensional array sorted into ascending order.\n y is an array whose first axis has the same length as x.\n a is an array of interpolants.\n If extrapolate is False, clamp all interpolants to the range of x.\n\n Let f be the piecewise-linear function such that f(x) = y.\n Then return f(a).\n\n &gt;&gt;&gt; x = np.arange(0, 10, 0.5)\n &gt;&gt;&gt; y = np.sin(x)\n &gt;&gt;&gt; interp1d(x, y, np.pi)\n 0.0018202391415163\n\n \"\"\"\n if not extrapolate:\n a = np.clip(a, x[0], x[-1])\n i = np.clip(np.searchsorted(x, a), 1, len(x) - 1)\n j = i - 1\n xj, yj = x[j], y[j]\n return yj + (a - xj) * (y[i] - yj) / (x[i] - xj)\n</code></pre>\n\n<p>(The rest of the program may need to be reorganized to pass arrays of values instead of one value at a time.)</p></li>\n<li><p>Finally, what was wrong with <a href=\"http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.interp1d.html\" rel=\"nofollow\"><code>scipy.interpolate.interp1d</code></a>? It doesn't provide quite the same handling of points outside the range, but you could call <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.clip.html\" rel=\"nofollow\"><code>numpy.clip</code></a> yourself before calling it.</p></li>\n</ol>\n", "comments": [], "meta_data": { "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-08-26T13:05:22.670", "Id": "61075", "ParentId": "60540", "Score": "4" } }
<p>I'm new to Python and I just wrote my first program with OOP. The program works just fine and gives me what I want. Is the code clear? Can you review the style or anything else?</p> <pre><code>import numpy as np import time, os, sys import datetime as dt class TemplateGenerator(object): """This is a general class to generate all of the templates""" def excelWrtGE(self,*typos): for typo in typos: self.excelWriteGE.write(typo) return def create_KeywordsGE(self,list): list1=list.split("/") for x in list1: list1=[list1.strip()for list1 in list1] print list1 keyword1="Patrone, Toner aufladen "+str(list1[0]) keyword2="Patronen, CMYK, Set of, Ein" keyword3="Set, of, Schwarz, Cyan, Magenta, Gelb, Photoblack" keyword4="Remanufactured ink, Druckerpatronen Ersatz " keyword5=str(list1[0]) print keyword5 i=1 while len(keyword5)&lt;50: keyword5_old=keyword5 if i&gt;=len(list1): break if len(list1)==1: break keyword5=keyword5+", "+str(list1[i]) if len(keyword5)&gt;50: keyword5=keyword5_old break i=i+1 keywords=keyword1+"\t"+keyword2+"\t"+keyword3+"\t"+keyword4+"\t"+keyword5 + "\t" return keywords def keyFeaturesGE(self,feat,inkNumber): list1=feat.split(",") for x in list1: list1=[list1.strip()for list1 in list1] if inkNumber==5: feat1="Lieferumfang: : 1 Schwarz, 1 Cyan, 1 Magenta, 1 Gelb , 1 Photoblack" if inkNumber==4: feat1="Lieferumfang: : 1 Schwarz, 1 Cyan, 1 Magenta, 1 Gelb " if inkNumber==1: feat1="Lieferumfang: : 1 Cyan " if inkNumber==2: feat1="Lieferumfang: : 1 Magenta " if inkNumber==3: feat1="Lieferumfang: : 1 Gelb " if inkNumber==0: feat1="Lieferumfang: : 1 Schwarz " if inkNumber==242: feat1="Lieferumfang: : 4 Schwarz, 2 Cyan, 2 Magenta, 2 Gelb " if inkNumber==54: feat1="Lieferumfang: : 5 Schwarz, 5 Cyan, 5 Magenta, 5 Gelb " if inkNumber==11: feat1="Lieferumfang: : 1 Photoblack" if inkNumber==25: feat1="Lieferumfang: : 2 Schwarz, 2 Cyan, 2 Magenta, 2 Gelb, 2 Photoblack " if inkNumber==45: feat1="Lieferumfang: : 4 Schwarz, 4 Cyan, 4 Magenta, 4 Gelb, 4 Photoblack " 
if inkNumber==24: feat1="Lieferumfang: : 2 Schwarz, 2 Cyan, 2 Magenta, 2 Gelb " feat2="100% Kompatibel" feat3="Bis zu 80% Druckkosten sparen - Premium Qualit\xE4t" feat4="Geeignet f\xFCr Folgende Drucker: "+str(list1[0]) feat5="Inhalt der Druckerpatronen ca. bei 5% Deckung (DIN A4)" print feat1 i=1 while len(feat4)&lt;500: feat4_old=feat4 if i&gt;=len(list1): break if len(list1)==1: break feat4=feat4+", "+str(list1[i]) if len(feat4)&gt;500: feat4=feat4_old break i=i+1 ListOfFeatures=feat1+"\t"+ feat2+"\t"+feat3+"\t"+feat4 + "\t" +feat5+"\t" return ListOfFeatures def ManProNum(self,num): list1=num.split("/") for x in list1: list1=[list1.strip()for list1 in list1] ProdNum=str(list1[0]) i=1 while len(ProdNum)&lt;40: ProdNum_old=ProdNum if i&gt;=len(list1): break if len(list1)==1: break ProdNum=ProdNum+", "+str(list1[i]) if len(ProdNum)&gt;40: ProdNum=ProdNum_old break i=i+1 ProdNum=ProdNum+"\t" return ProdNum def pictureLink(self,amount): if amount==1: linkW="dfghj" elif amount==2: linkW="fgh" elif amount==4: linkW="hj" elif amount==5: linkW="lk" elif amount==6: linkW="erty" return str(linkW) # GERMANY STARTS HERE----------------- def GenerateGETemplatesFor6(self,ElementsOfPrinter): FourSetSalePrice=round((float(ElementsOfPrinter[5])+2.5)*1.26*1.2*1.4*1.2,2) descr="Industriell wiederaufgearbeitete und gepr\xFCfte Druckerpatronen in Premium qualit\xE4t 100% Kompatible, keine Original Geeignet f\xFCr folgende Ger\xE4te: "+ElementsOfPrinter[0]+ ". 
Hinweis: Die auf unseren Shop aufgef\xFChrten Markennamen und Original Zubeh\xF6rbezeichnungen sind eingetragene Warenzeichen Ihrer jeweiligen Hersteller und dienen nur als Anwenderhilfe f\xFCr die Zubeh\xF6ridentifikation.\t" title0="4 Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[1]+" - "+ElementsOfPrinter[2]+" - " +ElementsOfPrinter[3]+" - "+ElementsOfPrinter[4]+" Set mit Chip\t" +str(FourSetSalePrice)+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],4) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" self.excelWrtGE(title0) def GenerateGETemplatesFor7(self,ElementsOfPrinter): FiveSetSalePrice=round((float(ElementsOfPrinter[6])+2.5)*1.26*1.2*1.4*1.2,2) descr="Industriell wiederaufgearbeitete und gepr\xFCfte Druckerpatrone in Premium qualit\xE4t 100% Kompatible, keine Original Geeignet f\xFCr folgende Ger\xE4te: "+ElementsOfPrinter[0]+ ". Hinweis: Die auf unseren Shop aufgef\xFChrten Markennamen und Original Zubeh\xF6rbezeichnungen sind eingetragene Warenzeichen Ihrer jeweiligen Hersteller und dienen nur als Anwenderhilfe f\xFCr die Zubeh\xF6ridentifikation.\t" title0="5 Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[1]+" - "+ElementsOfPrinter[2]+" - " +ElementsOfPrinter[3]+" - "+ElementsOfPrinter[4]+" - "+ElementsOfPrinter[5]+" Set mit Chip\t" +str(round(FiveSetSalePrice,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],5) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" self.excelWrtGE(title0) def GenerateGETemplatesFor9(self,ElementsOfPrinter): FourSETBuyingPrice=sum(map(float,ElementsOfPrinter[5::])) FourSetSalePrice=round((FourSETBuyingPrice+2.5)*1.26*1.2*1.4*1.2,2) descr="Industriell wiederaufgearbeitete und gepr\xFCfte Druckerpatrone in Premium qualit\xE4t 100% Kompatible, keine Original Geeignet f\xFCr folgende Ger\xE4te: "+ElementsOfPrinter[0]+ ". 
Hinweis: Die auf unseren Shop aufgef\xFChrten Markennamen und Original Zubeh\xF6rbezeichnungen sind eingetragene Warenzeichen Ihrer jeweiligen Hersteller und dienen nur als Anwenderhilfe f\xFCr die Zubeh\xF6ridentifikation.\t" title0="4 Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[2]+" - "+ElementsOfPrinter[3]+" - " +ElementsOfPrinter[4]+" - "+ElementsOfPrinter[5]+" Set mit Chip\t" +str(round(FourSetSalePrice-1,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],4) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title1="Druckerpatrone kompatibel f\xFCr "+ElementsOfPrinter[2]+" Schwarz mit Chip\t" +str(round((float(ElementsOfPrinter[5])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],0) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title2="Druckerpatrone kompatibel f\xFCr "+ElementsOfPrinter[3]+" Cyan mit Chip\t" +str(round((float(ElementsOfPrinter[6])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],1) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title3="Druckerpatrone kompatibel f\xFCr "+ElementsOfPrinter[4]+" Magenta mit Chip\t" +str(round((float(ElementsOfPrinter[7])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],2) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title4="Druckerpatrone kompatibel f\xFCr "+ElementsOfPrinter[5]+" Gelb mit Chip\t" +str(round((float(ElementsOfPrinter[8])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],3) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title5="4 Druckerpatronen mit CHIP kompatibel f\xFCr "+ElementsOfPrinter[1]+" - "+ElementsOfPrinter[2]+" - " +ElementsOfPrinter[3]+" - "+ElementsOfPrinter[4] 
+" Multipack SET f\xFCr" +ElementsOfPrinter[0] +" mit Chip\t" +str(round(FourSetSalePrice,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],4) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title6="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[1]+", "+ElementsOfPrinter[2]+", "+ElementsOfPrinter[3]+", "+ElementsOfPrinter[4]+" SET "+ElementsOfPrinter[0]+"\t" +str(round(FourSetSalePrice,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],4) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title7="20 Patronen MIT CHIP kompatibel zu "+ ElementsOfPrinter[1]+" - "+ElementsOfPrinter[2]+" - " +ElementsOfPrinter[3]+" - "+ElementsOfPrinter[4]+"\t" +str(round((FourSETBuyingPrice*5+2.5)*1.26*1.2*1.4*1.2-5,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],54) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title8="10 Premium Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[1]+" - "+ElementsOfPrinter[2]+" - " +ElementsOfPrinter[3]+" - "+ElementsOfPrinter[4]+"f\xFCr"+ElementsOfPrinter[0] +"2 SET + 2 BLACK\t" +str(round(((FourSETBuyingPrice*2+2*float(ElementsOfPrinter[5]))+2.5)*1.26*1.2*1.4*1.2-2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],242) +self.create_KeywordsGE(ElementsOfPrinter[0]) +self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title9="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[1]+ "f\xFCr "+ElementsOfPrinter[2]+" Schwarz\t" +str(round((float(ElementsOfPrinter[5])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],0) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title10="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[2]+ "f\xFCr "+ElementsOfPrinter[3]+" Cyan\t" 
+str(round((float(ElementsOfPrinter[6])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],1) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title11="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[3]+ "f\xFCr "+ElementsOfPrinter[4]+" Magenta\t" +str(round((float(ElementsOfPrinter[7])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],2) +self.create_KeywordsGE(ElementsOfPrinter[0]) +self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title12="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[4]+ "f\xFCr "+ElementsOfPrinter[5]+" Gelb\t"+str(round((float(ElementsOfPrinter[8])+2.5)*1.26*1.2*1.4*1.2,2))+"\t" +self.keyFeaturesGE(ElementsOfPrinter[0],3) +self.create_KeywordsGE(ElementsOfPrinter[0]) +self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" self.excelWrtGE(title0,title1,title2,title3,title4,title5,title6,title7,title8,title9,title10,title11,title12) def GenerateGETemplatesFor11(self,ElementsOfPrinter): # print ElementsOfPrinter[5], ElementsOfPrinter[10] FiveSETBuyingPrice=sum(map(float,ElementsOfPrinter[6::])) FourSETBuyingPrice=sum(map(float,ElementsOfPrinter[7::])) FourSetSalePrice=(FourSETBuyingPrice+2.5)*1.26*1.2*1.4*1.2 FiveSetSalePrice=round((FiveSETBuyingPrice+2.5)*1.26*1.2*1.4*1.2) descr="Industriell wiederaufgearbeitete und gepr\xFCfte Druckerpatrone in Premium qualit\xE4t 100% Kompatible, keine Original Geeignet f\xFCr folgende Ger\xE4te: "+ElementsOfPrinter[0]+ ". 
Hinweis: Die auf unseren Shop aufgef\xFChrten Markennamen und Original Zubeh\xF6rbezeichnungen sind eingetragene Warenzeichen Ihrer jeweiligen Hersteller und dienen nur als Anwenderhilfe f\xFCr die Zubeh\xF6ridentifikation.\t" title0="4 Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[1]+" - "+ElementsOfPrinter[2]+" - " +ElementsOfPrinter[3]+" - "+ElementsOfPrinter[4]+" Set mit Chip\t" +str(round(FourSetSalePrice,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],4) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title1="Druckerpatrone kompatibel f\xFCr "+ElementsOfPrinter[1]+" Photoblack mit Chip\t" +str(round((float(ElementsOfPrinter[6])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],11) +self.create_KeywordsGE(ElementsOfPrinter[0]) +self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title2="Druckerpatrone kompatibel f\xFCr "+ElementsOfPrinter[2]+" Schwarz mit Chip\t" +str(round((float(ElementsOfPrinter[7])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],1) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title3="Druckerpatrone kompatibel f\xFCr "+ElementsOfPrinter[3]+" Cyan mit Chip\t" +str(round((float(ElementsOfPrinter[8])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],1) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title4="Druckerpatrone kompatibel f\xFCr "+ElementsOfPrinter[4]+" Magenta mit Chip\t" +str(round((float(ElementsOfPrinter[9])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[2],2) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title5="Druckerpatrone kompatibel f\xFCr "+ElementsOfPrinter[5]+" Gelb mit Chip\t" 
+str(round((float(ElementsOfPrinter[10])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],3) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title6="5 Druckerpatronen mit CHIP kompatibel f\xFCr "+ElementsOfPrinter[1]+" - "+ElementsOfPrinter[2]+" - " +ElementsOfPrinter[3]+" - "+ElementsOfPrinter[4] +" - "+ElementsOfPrinter[5]+" Multipack SET f\xFCr " +ElementsOfPrinter[0] +" mit Chip\t" +str(FiveSetSalePrice)+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],5) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title7="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[1]+", "+ElementsOfPrinter[2]+", "+ElementsOfPrinter[3]+", "+ElementsOfPrinter[4]+","+ElementsOfPrinter[5]+" SET "+ElementsOfPrinter[0]+"\t" +str(round(FiveSetSalePrice,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],5) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title8="20 Patronen MIT CHIP kompatibel zu "+ ElementsOfPrinter[1]+" - "+ElementsOfPrinter[2]+" - " +ElementsOfPrinter[3]+" - "+ElementsOfPrinter[4]+" - "+ElementsOfPrinter[5]+"\t" +str(round((FiveSETBuyingPrice*4+2.5)*1.26*1.2*1.4*1.2-4,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],45) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title9="10 Premium Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[1]+" - "+ElementsOfPrinter[2]+" - " +ElementsOfPrinter[3]+" - "+ElementsOfPrinter[4]+" - "+ElementsOfPrinter[5]+" f\xFCr"+ElementsOfPrinter[0] +"2 SET\t" +str(round((FiveSETBuyingPrice*2+2.5)*1.26*1.2*1.4*1.2-2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],25) +self.create_KeywordsGE(ElementsOfPrinter[0]) +self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title10="XXL PREMIUM Druckerpatronen kompatibel f\xFCr 
"+ElementsOfPrinter[1]+ "f\xFCr "+ElementsOfPrinter[0]+" Photoblack\t"+str(round((float(ElementsOfPrinter[6])+2.5)*1.26*1.2*1.4*1.2,2))+"\t" +self.keyFeaturesGE(ElementsOfPrinter[0],11) +self.create_KeywordsGE(ElementsOfPrinter[0]) +self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title11="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[2]+ "f\xFCr "+ElementsOfPrinter[0]+" Schwarz\t" +str(round((float(ElementsOfPrinter[7])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],0) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title12="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[3]+ "f\xFCr "+ElementsOfPrinter[0]+" Cyan\t" +str(round((float(ElementsOfPrinter[8])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],1) +self.create_KeywordsGE(ElementsOfPrinter[0])+self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title13="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[4]+ "f\xFCr "+ElementsOfPrinter[0]+" Magenta\t" +str(round((float(ElementsOfPrinter[9])+2.5)*1.26*1.2*1.4*1.2,2))+"\t"+self.keyFeaturesGE(ElementsOfPrinter[0],2) +self.create_KeywordsGE(ElementsOfPrinter[0]) +self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" title14="XXL PREMIUM Druckerpatronen kompatibel f\xFCr "+ElementsOfPrinter[5]+ "f\xFCr "+ElementsOfPrinter[0]+" Gelb\t"+str(round((float(ElementsOfPrinter[10])+2.5)*1.26*1.2*1.4*1.2,2))+"\t" +self.keyFeaturesGE(ElementsOfPrinter[0],3) +self.create_KeywordsGE(ElementsOfPrinter[0]) +self.ManProNum(ElementsOfPrinter[0])+descr+self.pictureLink(1)+"\n" self.excelWrtGE(title0,title1,title2,title3,title4,title5,title6,title7,title8,title9,title10,title11,title12,title13,title14) def __init__(self): super(TemplateGenerator, self).__init__() # self.arg = arg now=dt.datetime.now() cur_time=time.mktime(now.timetuple()) 
path='/Users/dfghj/Desktop/listings/THM/'+str(cur_time) print str(cur_time) os.mkdir(path) excelRead=open('/Users/dfghj/Desktop/listings/dataGA.txt','r') self.excelWriteGE=open(path +'/Them.GE.Sets.txt','w') header="Titles\t"+"Price equal to title:\t"+"Key Features1\t"+"Key Features2\t"+"Key Features3\t"+"Key Features4\t"+"Key Feature5\t"+"keyword1\t"+"keyword2\t"+"keyword3\t"+"keyword4\t"+"keyword5\t"+"Manufacturer Part Number\t"+"Description\t"+"Picture Link\n" self.excelWrtGE(header) data=np.genfromtxt(excelRead, delimiter="\n", dtype=None) for line in data: titles=line.split("+") # print title, len(title) if len(titles)==11: self.GenerateGETemplatesFor11(titles) elif len(titles)==9: self.GenerateGETemplatesFor9(titles) elif len(titles)==7: self.GenerateGETemplatesFor7(titles) elif len(titles)==6: self.GenerateGETemplatesFor6(titles) else: print "Check your names in file once again!!!" excelRead.close() self.excelWriteGE.close() self.excelWriteSP.close() if __name__=='__main__': TemplateGenerator() </code></pre>
70989
GOOD_ANSWER
{ "AcceptedAnswerId": "70994", "CommentCount": "3", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-11-27T11:04:44.870", "Id": "70989", "Score": "4", "Tags": [ "python", "beginner", "parsing", "numpy" ], "Title": "Printer Color Templates" }
{ "body": "<p>I could make out some parts that could be improved.</p>\n\n<ol>\n<li>Good that you have used docstring for the class. You could do the same for the methods. In general, the code could have more comments where things are not obvious. Right now there are hardly any comments.</li>\n<li>Python programmers prefer <code>functions_named_like_this()</code> rather than <code>functionsNamedLikeThis()</code>.</li>\n<li><code>__init__()</code> should preferably be at the start of the class. Those who read the code will want to know how it is initialized before looking at other things.</li>\n<li>Is this really necessary: <code>super(TemplateGenerator, self).__init__()</code> ? I don't know the answer but I believe the base class <code>__init__</code> method either does nothing useful or is invoked by default.</li>\n<li>A better way to open/close files is using the <code>with</code> statement. That way, you never need to worry about files you have forgotten to close.</li>\n<li><code>header=\"Titles\\t\"+\"Price equal to title:\\t\"+\"Key Features1\\t\"+\"Key Features2\\t\"+\"Key Features3\\t\"+\"Key Features4\\t\"+\"Key Feature5\\t\"+\"keyword1\\t\"+\"keyword2\\t\"+\"keyword3\\t\"+\"keyword4\\t\"+\"keyword5\\t\"+\"Manufacturer Part Number\\t\"+\"Description\\t\"+\"Picture Link\\n\"</code>: this is definitely not Pythonic. Use instead a list and then call <code>'\\t'.join()</code> on that list.</li>\n<li><code>(x+2.5)*1.26*1.2*1.4*1.2</code>: since this is used in many places, put it into a method.</li>\n<li><code>ElementsOfPrinter[]</code>: this is really cryptic since it's hard to remember the semantics of each position. It would be better to extract them into named variables. 
Example: <code>x,y,z = position</code>, assuming position is a list of 3 items.</li>\n<li><code>list1=[list1.strip()for list1 in list1]</code>: nothing wrong here but it is better for readability to use different variable names.</li>\n<li>title0, title1, ...: can be converted to a list instead. Also, Python allows you to have multiline strings. No need to type them out in long lines.</li>\n<li><code>if inkNumber==4:</code> : convert to <code>elif</code>.</li>\n</ol>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-11-27T15:24:26.833", "Id": "129792", "Score": "0", "body": "Thanks a lot, I really appreciate that, will try to bring some parts of code according to some rules, thanks again" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-11-27T15:41:13.727", "Id": "129799", "Score": "0", "body": "ElementsOfPrinter[] I do remember the order of all variables, but since data which is read from file can be different, I wanted to send all variables to their methods[GenerateGETemplatesFor6] (according to element amount) and then work with elements inside, isn't that bette?" } ], "meta_data": { "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2014-11-27T11:55:48.850", "Id": "70994", "ParentId": "70989", "Score": "2" } }
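A few of the review's suggestions — building the header with `'\t'.join()`, extracting the repeated `(x + 2.5) * 1.26 * 1.2 * 1.4 * 1.2` pricing formula into one function, and unpacking `ElementsOfPrinter` into named variables — could look roughly like this. The names and sample values are illustrative, not taken from the question's data file:

```python
def sale_price(buying_price):
    """The (x + 2.5) * 1.26 * 1.2 * 1.4 * 1.2 formula that the
    question repeats dozens of times, extracted into one place."""
    return round((buying_price + 2.5) * 1.26 * 1.2 * 1.4 * 1.2, 2)

HEADER_FIELDS = [
    "Titles", "Price equal to title:",
    "Key Features1", "Key Features2", "Key Features3",
    "Key Features4", "Key Feature5",
    "keyword1", "keyword2", "keyword3", "keyword4", "keyword5",
    "Manufacturer Part Number", "Description", "Picture Link",
]
header = "\t".join(HEADER_FIELDS) + "\n"

# Named unpacking instead of opaque ElementsOfPrinter[0..5] indexing
# (sample values made up for illustration):
printers, black, cyan, magenta, yellow, price = (
    "HP Deskjet 123", "K-01", "C-01", "M-01", "Y-01", "3.10",
)

print(sale_price(float(price)))   # → 14.22
print(header.count("\t"))         # → 14 (tab-separated columns)
```

With the formula in one function, a pricing change is a one-line edit instead of a search-and-replace across every `title` string.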
<p>In honor of Star Wars day, I've put together this small Python program I'm calling JediScript. JediScript is essentially a scrapped-down version of BrainFuck without input or looping. Here are the commands in JediScript.</p> <ul> <li><code>SlashWithSaber</code>: Move forward on the tape.</li> <li><code>ParryBladeWithSaber</code>: Move backward on the tape.</li> <li><code>StabWithSaber</code>: Increment a cell.</li> <li><code>BlockBladeWithSaber</code>: Decrement cell.</li> <li><code>UseForceWithHands</code>: Output the current cell.</li> </ul> <p>Each command is semicolon <code>;</code> separated, like so: <code>StabWithSaber;UseForceWithHands</code>. Here's an example input. This will output the character <code>p</code>.</p> <blockquote> <pre><code>StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabW
ithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;StabWithSaber;UseForceWithHands; </code></pre> </blockquote> <p>This is something that I threw together in about 20 minutes, so it's not the greatest, but I'd still appreciate a review.</p> <pre><code>#!/usr/bin/env python """ JediScript, BrainFuck for Star Wars. May the 4th be with you. """ DEFAULT_SPLIT = ";" """ Variables below this are environment variables for the user to modify. """ data_tape = [0 for _ in range(256)] tape_pos = 0 def increment_cell(): global data_tape global tape_pos data_tape[tape_pos] += 1 if data_tape[tape_pos] &lt;= 127 else 0 def decrement_cell(): global data_tape global tape_pos data_tape[tape_pos] -= 1 if data_tape[tape_pos] &gt;= 0 else 0 def move_forward(): global tape_pos tape_pos += 1 if tape_pos &lt;= len(data_tape) - 1 else 0 def move_backward(): global tape_pos tape_pos -= 1 if tape_pos &gt;= 0 else 0 def output_cell(): print chr(data_tape[tape_pos]) """ Dictionary contains tokens that reference their relevant functions. """ TOKENS = { "SlashWithSaber": move_forward, "ParryBladeWithSaber": move_backward, "StabWithSaber": increment_cell, "BlockBladeWithSaber": decrement_cell, "UseForceWithHands": output_cell, } def execute_commands(tokenized_string): """ Executes commands from the tokenized string. tokenized_string - The tokenized string """ for token in tokenized_string: if token in TOKENS: TOKENS[token]() def tokenize_input(string): """ Tokenize a string into it's the form [ token, ... ] string - The string to tokenize. """ string = string.replace(" ", "") string = string.split(DEFAULT_SPLIT) return string def get_user_input(prompt): """ Get input from the user. prompt - The prompt to be used. 
""" while True: string = raw_input(prompt) string = tokenize_input(string) execute_commands(string) if __name__ == "__main__": get_user_input("JediScript $ ") </code></pre>
88783
WEIRD
{ "AcceptedAnswerId": "88797", "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-05-04T14:07:32.930", "Id": "88783", "Score": "16", "Tags": [ "python", "interpreter", "language-design" ], "Title": "JediScript - May the 4th be with you" }
{ "body": "<ul>\n<li>Why do you allow each cell of the tape to hold numbers from -1 to 128? seems like an odd range.</li>\n<li>in <code>move_backward()</code> why do you allow the tape to reach position -1?</li>\n<li>in <code>move_forward()</code> why do you allow the tape's position to be beyond the end of the tape?</li>\n<li>In general you should be using exclusive comparisons (without the =) as you'll make fewer mistakes.</li>\n</ul>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-05-04T17:16:54.343", "Id": "161788", "Score": "4", "body": "I don't understand why using exclusive comparisons would make fewer mistakes ?" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-05-04T17:33:36.117", "Id": "161790", "Score": "4", "body": "I don't either, but there's actually research on it. Coders who use an exclusive comparison for the top-end of the loop or the range make fewer off-by-one errors than coders who use the \"-or-equal-to\" comparators." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-05-04T17:36:02.913", "Id": "161792", "Score": "6", "body": "I would be interested in the link to that research!" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-05-04T19:14:45.967", "Id": "161821", "Score": "1", "body": "Searched a while and all I came up with was https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html I'll keep looking" } ], "meta_data": { "CommentCount": "4", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-05-04T17:02:51.233", "Id": "88797", "ParentId": "88783", "Score": "11" } }
<h2><strong><a href="https://codegolf.stackexchange.com/questions/50472/check-if-words-are-isomorphs">Are the words isomorphs? (Code-Golf)</a></strong></h2> <p>This is my non-golfed, readable and linear (quasi-linear?) in complexity take of the above problem. For completeness I include the description:</p> <blockquote> <p>Two words are isomorphs if they have the same pattern of letter repetitions. For example, both ESTATE and DUELED have pattern abcdca</p> <pre><code>ESTATE DUELED abcdca </code></pre> <p>because letters 1 and 6 are the same, letters 3 and 5 are the same, and nothing further. This also means the words are related by a substitution cipher, here with the matching <code>E &lt;-&gt; D, S &lt;-&gt; U, T &lt;-&gt; E, A &lt;-&gt; L</code>.</p> <p>Write code that takes two words and checks whether they are isomorphs.</p> </blockquote> <p>As always tests are included for easier understanding and modification.</p> <pre><code>def repetition_pattern(text): """ Same letters get same numbers, small numbers are used first. Note: two-digits or higher numbers may be used if the the text is too long. &gt;&gt;&gt; repetition_pattern('estate') '012320' &gt;&gt;&gt; repetition_pattern('dueled') '012320' &gt;&gt;&gt; repetition_pattern('longer example') '012345647891004' # ^ ^ ^ 4 stands for 'e' because 'e' is at 4-th position. # ^^ Note the use of 10 after 9. """ for index, unique_letter in enumerate(sorted(set(text), key=text.index)): text = text.replace(unique_letter, str(index)) return text def are_isomorph(word_1, word_2): """ Have the words (or string of arbitrary characters) the same the same `repetition_pattern` of letter repetitions? All the words with all different letters are trivially isomorphs to each other. &gt;&gt;&gt; are_isomorph('estate', 'dueled') True &gt;&gt;&gt; are_isomorph('estate'*10**4, 'dueled'*10**4) True &gt;&gt;&gt; are_isomorph('foo', 'bar') False """ return repetition_pattern(word_1) == repetition_pattern(word_2) </code></pre>
94776
GOOD_ANSWER
{ "AcceptedAnswerId": "94810", "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-26T13:36:14.983", "Id": "94776", "Score": "3", "Tags": [ "python", "strings", "unit-testing", "rags-to-riches" ], "Title": "Are the words isomorph?" }
{ "body": "<pre><code>for index, unique_letter in enumerate(sorted(set(text), key=text.index)):\n text = text.replace(unique_letter, str(index))\nreturn text\n</code></pre>\n\n<p>You don't need to go through the set and sort, you can do:</p>\n\n<pre><code>for index, letter in enumerate(text):\n text = text.replace(letter, str(index))\nreturn text\n</code></pre>\n\n<p>By the time it gets to the last \"e\" in \"estate\" and tries to replace it with \"5\", it will already have replaced it with \"0\" earlier on, and there won't be any \"e\"s left to replace. And it's not changing something while iterating over it, because strings are immutable.</p>\n\n<p>More golf-ish and less readable is:</p>\n\n<pre><code>return ''.join(str(text.index(letter)) for letter in text)\n</code></pre>\n\n<p><strong>Edit:</strong> Note for posterity: the above two methods have the same bug Gareth Rees identified, comparing 'decarbonist' and 'decarbonized', where one ends by adding number 10 and the other is a character longer and ends by adding numbers 1, 0 and they compare equally.<strong>End Edit</strong></p>\n\n<p>But @jonsharpe's comment asks if you need it to return a string; if you don't, then it could be a list of numbers, which is still very readable:</p>\n\n<pre><code>return [text.index(letter) for letter in text]\n</code></pre>\n\n<p>Minor things:</p>\n\n<ul>\n<li><code>are_isomorph</code> is a mix of plural/singular naming. Maybe <code>isomorphic(a, b)</code>?</li>\n<li>Your code doesn't take uppercase/lowercase into account - should it?</li>\n<li>This approach will break without warning for an input with numbers in it, e.g.\n\n<ul>\n<li><code>repetition_pattern('abcdefghi1')</code> is <code>0923456789</code>. \"b\" changes to 1, then changes to 9 because the \"1\" at the end catches it, then it appears the two were originally the same character. 
(It might be outside the scope of the question and not a problem).</li>\n</ul></li>\n</ul>\n", "comments": [], "meta_data": { "CommentCount": "0", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-26T19:34:24.933", "Id": "94810", "ParentId": "94776", "Score": "2" } }
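The list-of-indices variant suggested at the end of the answer also avoids the concatenation ambiguity flagged in the edit, because the index lists `[..., 1, 0]` and `[..., 10]` stay distinct where the concatenated strings do not:

```python
def repetition_pattern(text):
    """First-occurrence index of each letter, kept as a list so that
    e.g. [..., 1, 0] and [..., 10] remain distinguishable."""
    return [text.index(letter) for letter in text]

def are_isomorph(word_1, word_2):
    return repetition_pattern(word_1) == repetition_pattern(word_2)

print(repetition_pattern('estate'))        # → [0, 1, 2, 3, 2, 0]
print(are_isomorph('estate', 'dueled'))    # → True
print(are_isomorph('foo', 'bar'))          # → False
# The pair from the bug note: equal as concatenated strings,
# but correctly unequal as index lists.
print(are_isomorph('decarbonist', 'decarbonized'))   # → False
```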
<h2><strong><a href="https://codegolf.stackexchange.com/questions/50472/check-if-words-are-isomorphs">Are the words isomorphs? (Code-Golf)</a></strong></h2> <p>This is my non-golfed, readable and linear (quasi-linear?) in complexity take of the above problem. For completeness I include the description:</p> <blockquote> <p>Two words are isomorphs if they have the same pattern of letter repetitions. For example, both ESTATE and DUELED have pattern abcdca</p> <pre><code>ESTATE DUELED abcdca </code></pre> <p>because letters 1 and 6 are the same, letters 3 and 5 are the same, and nothing further. This also means the words are related by a substitution cipher, here with the matching <code>E &lt;-&gt; D, S &lt;-&gt; U, T &lt;-&gt; E, A &lt;-&gt; L</code>.</p> <p>Write code that takes two words and checks whether they are isomorphs.</p> </blockquote> <p>As always tests are included for easier understanding and modification.</p> <pre><code>def repetition_pattern(text): """ Same letters get same numbers, small numbers are used first. Note: two-digits or higher numbers may be used if the the text is too long. &gt;&gt;&gt; repetition_pattern('estate') '012320' &gt;&gt;&gt; repetition_pattern('dueled') '012320' &gt;&gt;&gt; repetition_pattern('longer example') '012345647891004' # ^ ^ ^ 4 stands for 'e' because 'e' is at 4-th position. # ^^ Note the use of 10 after 9. """ for index, unique_letter in enumerate(sorted(set(text), key=text.index)): text = text.replace(unique_letter, str(index)) return text def are_isomorph(word_1, word_2): """ Have the words (or string of arbitrary characters) the same the same `repetition_pattern` of letter repetitions? All the words with all different letters are trivially isomorphs to each other. &gt;&gt;&gt; are_isomorph('estate', 'dueled') True &gt;&gt;&gt; are_isomorph('estate'*10**4, 'dueled'*10**4) True &gt;&gt;&gt; are_isomorph('foo', 'bar') False """ return repetition_pattern(word_1) == repetition_pattern(word_2) </code></pre>
94776
WEIRD
{ "AcceptedAnswerId": "94810", "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-26T13:36:14.983", "Id": "94776", "Score": "3", "Tags": [ "python", "strings", "unit-testing", "rags-to-riches" ], "Title": "Are the words isomorph?" }
{ "body": "<p>There is a Python feature that almost directly solves this problem, leading to a simple and fast solution.</p>\n\n<pre><code>from string import maketrans # &lt;-- Python 2, or\nfrom str import maketrans # &lt;-- Python 3\n\ndef are_isomorph(string1, string2):\n return len(string1) == len(string2) and \\\n string1.translate(maketrans(string1, string2)) == string2\n string2.translate(maketrans(string2, string1)) == string1\n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-26T23:09:47.493", "Id": "172956", "Score": "0", "body": "`are_isomorph('abcdef', 'estate')` returns `true`. This is because the order in which the translation table is made. Maybe instead you could use `are_isomorph(a, b) and are_isomorph(b, a)`." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-29T14:43:39.380", "Id": "173407", "Score": "0", "body": "I get `ImportError: No module named 'str'` when I run this code." } ], "meta_data": { "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-26T20:55:17.720", "Id": "94817", "ParentId": "94776", "Score": "2" } }
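As the comments on this answer point out, a one-way `translate` check accepts false positives such as `('abcdef', 'estate')`. A direction-independent variant of my own (not using `maketrans`) counts distinct letters and distinct letter pairs instead:

```python
def isomorphic(a, b):
    """True when a and b are related by a substitution cipher: the
    letter pairs (a[i], b[i]) must form a bijection, i.e. the number
    of distinct pairs equals the number of distinct letters on each side."""
    return len(a) == len(b) and \
        len(set(a)) == len(set(b)) == len(set(zip(a, b)))

print(isomorphic('estate', 'dueled'))   # → True
print(isomorphic('abcdef', 'estate'))   # → False (one-way translate said True)
print(isomorphic('foo', 'bar'))         # → False
```

Checking both distinct-letter counts against the distinct-pair count is equivalent to running the translation in both directions, as the first comment suggests.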
<h2><strong><a href="https://codegolf.stackexchange.com/questions/50472/check-if-words-are-isomorphs">Are the words isomorphs? (Code-Golf)</a></strong></h2> <p>This is my non-golfed, readable and linear (quasi-linear?) in complexity take on the above problem. For completeness I include the description:</p> <blockquote> <p>Two words are isomorphs if they have the same pattern of letter repetitions. For example, both ESTATE and DUELED have pattern abcdca</p> <pre><code>ESTATE DUELED abcdca </code></pre> <p>because letters 1 and 6 are the same, letters 3 and 5 are the same, and nothing further. This also means the words are related by a substitution cipher, here with the matching <code>E &lt;-&gt; D, S &lt;-&gt; U, T &lt;-&gt; E, A &lt;-&gt; L</code>.</p> <p>Write code that takes two words and checks whether they are isomorphs.</p> </blockquote> <p>As always, tests are included for easier understanding and modification.</p> <pre><code>def repetition_pattern(text): """ Same letters get same numbers, small numbers are used first. Note: two-digit or higher numbers may be used if the text is too long. &gt;&gt;&gt; repetition_pattern('estate') '012320' &gt;&gt;&gt; repetition_pattern('dueled') '012320' &gt;&gt;&gt; repetition_pattern('longer example') '012345647891004' # ^ ^ ^ 4 stands for 'e' because 'e' is at 4-th position. # ^^ Note the use of 10 after 9. """ for index, unique_letter in enumerate(sorted(set(text), key=text.index)): text = text.replace(unique_letter, str(index)) return text def are_isomorph(word_1, word_2): """ Do the words (or strings of arbitrary characters) have the same `repetition_pattern` of letter repetitions? All the words with all different letters are trivially isomorphs to each other. &gt;&gt;&gt; are_isomorph('estate', 'dueled') True &gt;&gt;&gt; are_isomorph('estate'*10**4, 'dueled'*10**4) True &gt;&gt;&gt; are_isomorph('foo', 'bar') False """ return repetition_pattern(word_1) == repetition_pattern(word_2) </code></pre>
94776
WEIRD
{ "AcceptedAnswerId": "94810", "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-26T13:36:14.983", "Id": "94776", "Score": "3", "Tags": [ "python", "strings", "unit-testing", "rags-to-riches" ], "Title": "Are the words isomorph?" }
{ "body": "<p><code>text.replace()</code> is \\$\\mathcal{O}(n)\\$. Assuming we have a string of \\$n\\$ distinct characters your algorithm could devolve in to \\$\\mathcal{O}(n^2)\\$. Of course your strings are likely never long enough for big O to matter.</p>\n\n<p>In problems like this no matter what language you use it is often useful to build tables of characters.</p>\n\n<pre><code>def base_pattern(text):\n lst=[chr(i) for i in range(256)]\n\n for index, c in enumerate(text):\n lst[ord(c)] = index\n\n return [lst[ord(c)] for c in text]\n\ndef are_isomorph(word1, word2):\n return base_pattern(word1) == base_pattern(word2)\n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-29T15:06:56.350", "Id": "173414", "Score": "0", "body": "This is brittle: what if `ord(c)` is greater than 255? `ord('\\N{HIRAGANA LETTER KA}')` → 12363." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-30T19:14:09.133", "Id": "173866", "Score": "0", "body": "@GarethRees This code was never meant to support anything other than ASCII. The codegolf even defines the range as `A..Z`." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-30T20:00:01.200", "Id": "173872", "Score": "0", "body": "Sure, but a robust implementation would also be shorter: `d={c: i for i, c in enumerate(text)}; return [d[c] for c in text]`" }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-30T20:34:57.877", "Id": "173891", "Score": "0", "body": "And then I would have a hash table which provides an \\$\\mathrm{O}(n^2)\\$ worst case to the entire algorithm and goes against the point of my post." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-30T21:02:56.740", "Id": "173904", "Score": "0", "body": "All characters have different hashes, so the worst case is \\$O(n)\\$." 
} ], "meta_data": { "CommentCount": "5", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-26T23:30:04.890", "Id": "94836", "ParentId": "94776", "Score": "0" } }
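As the comments on the answer above note, the fixed 256-entry table breaks for characters with `ord(c) > 255` (e.g. HIRAGANA LETTER KA). A dictionary-based sketch of the same idea, adapted from the comment thread (function names kept from the answer, but this is not the answer's own code), handles arbitrary Unicode:

```python
# A dict-based variant of the character-table idea above, sketched from
# the comment thread: each character maps to the index of its last
# occurrence, so it works for any Unicode text, not just ord(c) < 256.
def base_pattern(text):
    last_seen = {c: i for i, c in enumerate(text)}
    return [last_seen[c] for c in text]

def are_isomorph(word1, word2):
    return base_pattern(word1) == base_pattern(word2)
```

The last-occurrence mapping is equivalent to the answer's table, since its loop also lets later indices overwrite earlier ones.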
<h2><strong><a href="https://codegolf.stackexchange.com/questions/50472/check-if-words-are-isomorphs">Are the words isomorphs? (Code-Golf)</a></strong></h2> <p>This is my non-golfed, readable and linear (quasi-linear?) in complexity take on the above problem. For completeness I include the description:</p> <blockquote> <p>Two words are isomorphs if they have the same pattern of letter repetitions. For example, both ESTATE and DUELED have pattern abcdca</p> <pre><code>ESTATE DUELED abcdca </code></pre> <p>because letters 1 and 6 are the same, letters 3 and 5 are the same, and nothing further. This also means the words are related by a substitution cipher, here with the matching <code>E &lt;-&gt; D, S &lt;-&gt; U, T &lt;-&gt; E, A &lt;-&gt; L</code>.</p> <p>Write code that takes two words and checks whether they are isomorphs.</p> </blockquote> <p>As always, tests are included for easier understanding and modification.</p> <pre><code>def repetition_pattern(text): """ Same letters get same numbers, small numbers are used first. Note: two-digit or higher numbers may be used if the text is too long. &gt;&gt;&gt; repetition_pattern('estate') '012320' &gt;&gt;&gt; repetition_pattern('dueled') '012320' &gt;&gt;&gt; repetition_pattern('longer example') '012345647891004' # ^ ^ ^ 4 stands for 'e' because 'e' is at 4-th position. # ^^ Note the use of 10 after 9. """ for index, unique_letter in enumerate(sorted(set(text), key=text.index)): text = text.replace(unique_letter, str(index)) return text def are_isomorph(word_1, word_2): """ Do the words (or strings of arbitrary characters) have the same `repetition_pattern` of letter repetitions? All the words with all different letters are trivially isomorphs to each other. &gt;&gt;&gt; are_isomorph('estate', 'dueled') True &gt;&gt;&gt; are_isomorph('estate'*10**4, 'dueled'*10**4) True &gt;&gt;&gt; are_isomorph('foo', 'bar') False """ return repetition_pattern(word_1) == repetition_pattern(word_2) </code></pre>
94776
GOOD_ANSWER
{ "AcceptedAnswerId": "94810", "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-26T13:36:14.983", "Id": "94776", "Score": "3", "Tags": [ "python", "strings", "unit-testing", "rags-to-riches" ], "Title": "Are the words isomorph?" }
{ "body": "<p>The code in the post does not work! Here's an example where it fails:</p>\n\n<pre><code>&gt;&gt;&gt; are_isomorph('decarbonist', 'decarbonized')\nTrue\n</code></pre>\n\n<p>The problem is that the eleventh distinct character gets represented by <code>10</code>, but an occurrence of the second distinct character followed by the first distinct character is also represented by <code>10</code>. You could avoid this confusion by representing the pattern as a list instead of a string:</p>\n\n<pre><code>def repetition_pattern(s):\n \"\"\"Return the repetition pattern of the string s as a list of numbers,\n with same/different characters getting same/different numbers.\n\n \"\"\"\n return list(map({c: i for i, c in enumerate(s)}.get, s))\n</code></pre>\n\n<p>or, if you don't care about algorithmic efficiency:</p>\n\n<pre><code> return list(map(s.find, s))\n</code></pre>\n\n<p>but an alternative approach avoids generating a repetition pattern at all, but instead considers the pairs of corresponding characters in the two strings. If we have:</p>\n\n<pre><code>ESTATE\nDUELED\n</code></pre>\n\n<p>then the pairs are E↔D, S↔U, T↔E and A↔L. 
The two strings are isomorphic if and only if these pairs constitute a bijection: that is, iff the first characters of the pairs are all distinct, and so are the second characters.</p>\n\n<pre><code>def isomorphic(s, t):\n \"\"\"Return True if strings s and t are isomorphic: that is, if they\n have the same pattern of characters.\n\n &gt;&gt;&gt; isomorphic('estate', 'dueled')\n True\n &gt;&gt;&gt; isomorphic('estate'*10**4, 'dueled'*10**4)\n True\n &gt;&gt;&gt; isomorphic('foo', 'bar')\n False\n\n \"\"\"\n if len(s) != len(t):\n return False\n pairs = set(zip(s, t))\n return all(len(pairs) == len(set(c)) for c in zip(*pairs))\n</code></pre>\n\n<p>From a purely code golf point of view, however, the following seems hard to beat (especially in Python 2, where you can drop the <code>list</code> calls):</p>\n\n<pre><code>list(map(s.find, s)) == list(map(t.find, t))\n</code></pre>\n", "comments": [ { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-29T16:22:51.073", "Id": "173431", "Score": "0", "body": "Ooh, I was wondering about where it switched to double digits, but decided that as long as both strings were the same it would work. I should have thought of different length strings. (Sorry, I was trying to edit my post with your correct name, but you got to it first)." }, { "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-29T16:28:29.787", "Id": "173432", "Score": "1", "body": "@TessellatingHeckler: Even strings of the same length can go wrong. Consider `abcdefgihjkba` versus `abcdefgihjbak`." } ], "meta_data": { "CommentCount": "2", "ContentLicense": "CC BY-SA 3.0", "CreationDate": "2015-06-29T15:18:18.900", "Id": "95090", "ParentId": "94776", "Score": "2" } }
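The `s.find`-based one-liner from the answer above, run against the `'decarbonist'`/`'decarbonized'` counterexample it gives, can be sketched as (Python 3, hence the `list` calls):

```python
# Runnable sketch of the map(s.find, s) check from the answer above.
# Each character maps to the index of its first occurrence, so equal
# lists mean equal repetition patterns.
def isomorphic(s, t):
    return list(map(s.find, s)) == list(map(t.find, t))

print(isomorphic('estate', 'dueled'))             # True
print(isomorphic('decarbonist', 'decarbonized'))  # False (the question's
                                                  # version wrongly says True)
```

The different lengths of the two word lists alone settle the counterexample here, which is exactly the case the question's string-based pattern conflates via the digit sequence `10`.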

Dataset Card for "my_section_5"

More Information needed
