How (in what form) to share (deliver) a Python function?
The final outcome of my work should be a Python function that takes a JSON object as its only input and returns another JSON object as output. To be more specific, I am a data scientist, and the function I am speaking about is derived from data and delivers predictions (in other words, it is a machine learning model). So my question is how to deliver this function to the "tech team" that is going to incorporate it into a web service.

At the moment I face a few problems. First, the tech team does not necessarily work in a Python environment, so they cannot just copy and paste my function into their code. Second, I want to make sure that my function runs in the same environment as mine. For example, I can imagine that I use some library that the tech team does not have, or that they have a version that differs from the one I use.

ADDED: As a possible solution I am considering the following. I start a Python process that listens on a socket, accepts incoming strings, transforms them into JSON, passes the JSON to the "published" function and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening on a socket?
You have the right idea with using a socket, but there are tons of frameworks doing exactly what you want. Like hleggs, I suggest you check out Flask to build a microservice. This will let the other team post JSON objects in an HTTP request to your Flask application and receive JSON objects back. No knowledge of the underlying system or additional requirements required!

Here's a template for a Flask app that receives and responds with JSON:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/', methods=['POST'])
def index():
    json = request.json
    return jsonify(your_function(json))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

Edit: embedded my code directly as per Peter Britain's advice
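For the consuming side, a minimal client sketch could look like the following (the payload keys are made up for illustration; the URL assumes the app above running locally on port 5000):

```python
import requests  # any HTTP client in any language works; the service is language-agnostic

payload = {"feature_a": 1.5, "feature_b": 7}  # hypothetical input fields for the model
response = requests.post("http://localhost:5000/", json=payload)
response.raise_for_status()

prediction = response.json()  # the JSON object produced by your_function
print(prediction)
```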
How do I turn a dataframe into a series of lists?
I have had to do this several times and I'm always frustrated. I have a dataframe:

```python
df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b'], ['A', 'B', 'C', 'D'])
print df

   A  B  C  D
a  1  2  3  4
b  5  6  7  8
```

I want to turn df into:

```python
pd.Series([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b'])

a    [1, 2, 3, 4]
b    [5, 6, 7, 8]
dtype: object
```

I've tried df.apply(list, axis=1), which just gets me back the same df. What is a convenient/effective way to do this?
If you need a faster solution, you can first convert the DataFrame to a NumPy array with values, then convert that to a list, and finally create a new Series with the index taken from df:

```python
print (pd.Series(df.values.tolist(), index=df.index))

a    [1, 2, 3, 4]
b    [5, 6, 7, 8]
dtype: object
```

Timings with a small DataFrame:

```
In [76]: %timeit (pd.Series(df.values.tolist(), index=df.index))
1000 loops, best of 3: 295 µs per loop

In [77]: %timeit pd.Series(df.T.to_dict('list'))
1000 loops, best of 3: 685 µs per loop

In [78]: %timeit df.T.apply(tuple).apply(list)
1000 loops, best of 3: 958 µs per loop
```

and with a large one:

```python
from string import ascii_letters

letters = list(ascii_letters)
df = pd.DataFrame(np.random.choice(range(10), (52 ** 2, 52)),
                  pd.MultiIndex.from_product([letters, letters]), letters)
```

```
In [71]: %timeit (pd.Series(df.values.tolist(), index=df.index))
100 loops, best of 3: 2.06 ms per loop

In [72]: %timeit pd.Series(df.T.to_dict('list'))
1 loop, best of 3: 203 ms per loop

In [73]: %timeit df.T.apply(tuple).apply(list)
1 loop, best of 3: 506 ms per loop
```
Understanding Keras LSTMs
I am trying to reconcile my understanding of LSTMs, as laid out here: http://colah.github.io/posts/2015-08-Understanding-LSTMs/, with the LSTM implemented in Keras. I am following the blog at http://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/ for the Keras tutorial. What I am mainly confused about is:

the reshaping of the data series into [samples, time steps, features], and
the stateful LSTMs.

Let's concentrate on the above two questions with reference to the code pasted below:

```python
# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 1))
testX = numpy.reshape(testX, (testX.shape[0], look_back, 1))

########################
# The IMPORTANT BIT
########################
# create and fit the LSTM network
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(100):
    model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
```

Note: create_dataset takes a sequence of length N and returns an N-look_back array of which each element is a look_back-length sequence.

What are Time Steps and Features? As can be seen, trainX is a 3-D array with time_steps and features being the last two dimensions respectively (3 and 1 in this particular code). With respect to the image below, does this mean that we are considering the many-to-one case, where the number of pink boxes is 3? Or does it literally mean the chain length is 3 (i.e. only 3 green boxes considered)? Does the features argument become relevant when we consider multivariate series, e.g. modelling two financial stocks simultaneously?

Stateful LSTMs: Does a stateful LSTM mean that we save the cell memory values between runs of batches? If this is the case, batch_size is one, and the memory is reset between the training runs, so what was the point of saying that it was stateful? I'm guessing this is related to the fact that the training data is not shuffled, but I'm not sure how. Any thoughts?

Image reference: http://karpathy.github.io/2015/05/21/rnn-effectiveness/

Edit 1: I'm a bit confused about @van's comment about the red and green boxes being equal. So just to confirm, do the following API calls correspond to the unrolled diagrams? Especially noting the second diagram (batch_size was arbitrarily chosen):

Edit 2: For people who have done Udacity's deep learning course and are still confused about the time_step argument, look at the following discussion: https://discussions.udacity.com/t/rnn-lstm-use-implementation/163169
First of all, you chose great tutorials (1, 2) to start with.

What time-step means: time_steps==3 in X.shape (describing the data shape) means there are three pink boxes. Since in Keras each step requires an input, the number of green boxes should usually equal the number of red boxes, unless you hack the structure.

Many to many vs. many to one: In Keras, there is a return_sequences parameter when you initialize LSTM, GRU or SimpleRNN. When return_sequences is False (the default), it is many to one, as shown in the picture. Its return shape is (batch_size, hidden_unit_length), which represents the last state. When return_sequences is True, it is many to many. Its return shape is (batch_size, time_step, hidden_unit_length).

Does the features argument become relevant: The features argument means "how big is your red box", or what the input dimension is at each step. If you want to predict from, say, 8 kinds of market information, then you can generate your data with features==8.

Stateful: You can look up the source code. When initializing the state, if stateful==True, the state from the last training run will be used as the initial state; otherwise it will generate a new state. I haven't turned stateful on yet. However, I disagree that batch_size can only be 1 when stateful==True. Currently, you generate your data from collected data. Imagine your stock information is coming in as a stream: rather than waiting a day to collect everything sequentially, you would like to generate input data online while training/predicting with the network. If you have 400 stocks sharing the same network, then you can set batch_size==400.
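To make the return_sequences shape difference concrete, here is a minimal sketch (assuming the same Keras 1.x-style API as the question; the layer sizes are chosen only for illustration):

```python
from keras.models import Sequential
from keras.layers import LSTM

time_steps, features, hidden = 3, 1, 4  # illustrative sizes, matching look_back=3 above

# many to one: return_sequences=False (the default), only the last state comes out
m1 = Sequential()
m1.add(LSTM(hidden, input_shape=(time_steps, features)))
print(m1.output_shape)  # (None, 4)

# many to many: return_sequences=True, one output per time step
m2 = Sequential()
m2.add(LSTM(hidden, input_shape=(time_steps, features), return_sequences=True))
print(m2.output_shape)  # (None, 3, 4)
```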
Returning string matches between two lists for a given number of elements in a third list
I've got a feeling that I will be told to go to the 'beginner's guide' or what have you, but I have this code here that goes:

```python
does = ['my','mother','told','me','to','choose','the']
it = ['my','mother','told','me','to','choose','the']
work = []

while 5 > len(work):
    for nope in it:
        if nope in does:
            work.append(nope)

print (work)
```

And I get ['my', 'mother', 'told', 'me', 'to', 'choose', 'the']. Why is this? And how do I convince it to return ['my', 'mother', 'told', 'me']?
You could try something like this:

```python
for nope in it:
    if len(work) < 5 and nope in does:
        work.append(nope)
    else:
        break
```

The problem with your code is that it checks work's length only after having looped through all the items of it and having added all of them that are in does.
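Put together with the data from the question, a runnable sketch looks like this (note that to get exactly ['my', 'mother', 'told', 'me'] the cap has to be 4 rather than 5):

```python
does = ['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
it = ['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
limit = 4  # number of matches to keep

work = []
for nope in it:
    if len(work) < limit and nope in does:
        work.append(nope)
    else:
        break

print(work)  # ['my', 'mother', 'told', 'me']
```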
How to make this Block of python code short and efficient
I am a total newbie to programming and Python. I was solving a problem; I found a solution but it seems too slow.

```python
if n % 2 == 0 and n % 3 == 0 and\
   n % 4 == 0 and n % 5 == 0 and\
   n % 6 == 0 and n % 7 == 0 and\
   n % 8 == 0 and n % 9 == 0 and\
   n % 10 == 0 and n % 11 == 0 and\
   n % 12 == 0 and n % 13 == 0 and\
   n % 14 == 0 and n % 15 == 0 and\
   n % 16 == 0 and n % 17 == 0 and\
   n % 18 == 0 and n % 19 == 0 and\
   n % 20 == 0:
```

This is the piece of code that checks whether n is divisible by all numbers from 2 to 20 or not. How can I make it short and efficient?
There's a trade-off between short and efficient. The short way is:

```python
if all(n % i == 0 for i in range(2, 21)):
```

The efficient way is to notice that things like n % 20 == 0 also mean that n % f == 0 where f is any factor of 20. For example, you can drop n % 2 == 0. So you'll end up with fewer comparisons, which will run faster. In doing this you'll notice a pattern, and you'll notice that the entire statement reduces to if n % 232792560 == 0! But that has now deeply embedded the 20 within it, so it will be difficult to unpick if you need a different upper limit. So you see that the efficient way is not so easy to read and maintain. Pick the one best suited to your requirements.
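If you want to see where that single modulus comes from, here is a sketch (assuming Python 3.5+ for math.gcd) that folds a least common multiple over the range, which also keeps the upper limit easy to change:

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    """Least common multiple of two positive integers."""
    return a * b // gcd(a, b)

divisor = reduce(lcm, range(2, 21))
print(divisor)  # 232792560

def divisible_by_all_up_to(n, limit=20):
    return n % reduce(lcm, range(2, limit + 1)) == 0

print(divisible_by_all_up_to(232792560))      # True
print(divisible_by_all_up_to(232792560 - 1))  # False
```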
Slow equality evaluation for identical objects (x == x)
Is there any reason x == x is not evaluated quickly? I was hoping __eq__ would check if its two arguments are identical, and if so return True instantly. But it doesn't do it: s = set(range(100000000)) s == s # this doesn't short-circuit, so takes ~1 sec For built-ins, x == x always returns True I think? For user-defined classes, I guess someone could define __eq__ that doesn't satisfy this property, but is there any reasonable use case for that? The reason I want x == x to be evaluated quickly is because it's a huge performance hit when memoizing functions with very large arguments: from functools import lru_cache @lru_cache() def f(s): return sum(s) large_obj = frozenset(range(50000000)) f(large_obj) # this takes >1 sec every time Note that the reason @lru_cache is repeatedly slow for large objects is not because it needs to calculate __hash__ (this is only done once and is then hard-cached as pointed out by @jsbueno), but because the dictionary's hash table needs to execute __eq__ every time to make sure it found the right object in the bucket (equality of hashes is obviously insufficient). UPDATE: It seems it's worth considering this question separately for three situations. 1) User-defined types (i.e., not built-in / standard library). As @donkopotamus pointed out, there are cases where x == x should not evaluate to True. For example, for numpy.array and pandas.Series types, the result is intentionally not convertible to boolean because it's unclear what the natural semantics should be (does False mean the container is empty, or does it mean all items in it are False?). But here, there's no need for python to do anything, since the users can always short-circuit x == x comparison themselves if it's appropriate: def __eq__(self, other): if self is other: return True # continue normal evaluation 2) Python built-in / standard library types. a) Non-containers. For all I know the short-circuit may already be implemented for this case - I can't tell since either way it's super fast. b) Containers (including str). As @Karl Knechtel commented, adding short-circuit may hurt total performance if the savings from short-circuit are outweighed by the extra overhead in cases where self is not other. While theoretically possible, even in that case the overhead is a small in relative terms (container comparison is never super-fast). And of course, in cases where short-circuit helps, the savings can be dramatic. BTW, it turns out that str does short-circuit: comparing huge identical strings is instant.
As you say, someone could quite easily define an __eq__ that you personally don't happen to approve of ... for example, the Institute of Electrical and Electronics Engineers might be so foolish as to do that:

```python
>>> float("NaN") == float("NaN")
False
```

Another "unreasonable use case":

```python
>>> bool(numpy.ma.masked == numpy.ma.masked)
False
```

Or even:

```python
>>> numpy.arange(10) == numpy.arange(10)
array([ True,  True,  True,  True,  True,  True,  True,  True,  True,  True], dtype=bool)
```

which has the audacity to not even be convertible to bool! So there is certainly practical scope for x == x to not automagically be short-circuited to be true.

Going Off Course

However, the following is perhaps a good question: why doesn't set.__eq__ check for instance identity? Well, one might think ... because a set S might contain NaN, and since NaN cannot equal itself, then surely such a set S cannot equal itself? Investigating:

```python
>>> s = set([float("NaN")])
>>> s == s
True
```

Hmm, that's interesting, especially since:

```python
>>> {float("NaN")} == {float("NaN")}
False
```

This behaviour is due to Python's desire for sequences to be reflexive.
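A small demonstration of that reflexivity point: the built-in containers check element identity before falling back to __eq__, which is exactly why the set above compares equal to itself:

```python
nan = float("nan")

print(nan == nan)               # False -- IEEE 754 says NaN != NaN
print([nan] == [nan])           # True  -- same object, identity short-circuits the element check
print([nan] == [float("nan")])  # False -- distinct NaN objects, so __eq__ is actually consulted
print({nan} == {nan})           # True  -- same reasoning as the set example above
```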
Passing a "pointer to a virtual function" as argument in Python
Compare the following code in C++:

```cpp
#include <iostream>
#include <vector>

struct A {
    virtual void bar(void) { std::cout << "one" << std::endl; }
};

struct B : public A {
    virtual void bar(void) { std::cout << "two" << std::endl; }
};

void test(std::vector<A*> objs, void (A::*fun)()) {
    for (auto o = objs.begin(); o != objs.end(); ++o) {
        A* obj = (*o);
        (obj->*fun)();
    }
}

int main() {
    std::vector<A*> objs = {new A(), new B()};
    test(objs, &A::bar);
}
```

and in Python:

```python
class A:
    def bar(self):
        print("one")

class B(A):
    def bar(self):
        print("two")

def test(objs, fun):
    for o in objs:
        fun(o)

objs = [A(), B()]
test(objs, A.bar)
```

The C++ code will print:

```
one
two
```

while the Python code will print:

```
one
one
```

How can I pass "a pointer to a method" and resolve it to the overridden one, achieving the same behavior in Python as in C++?

To add some context and explain why I initially thought about this pattern: I have a tree consisting of nodes that can be subclassed. I would like to create a generic graph traversal function which takes a node of the graph as well as a function which might be overridden in subclasses of graph nodes. The function calculates some value for a node, given values calculated for adjacent nodes. The goal is to return a value calculated for the given node (which requires traversing the whole graph).
Regarding your edit, one thing you could do is use a little wrapper lambda that calls the method you want to reference. This way the method call looks like "regular python code" instead of being something complicated based on string-based access. In your example, the only part that would need to change is the call to the test function: test(objs, (lambda x: x.bar()))
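An alternative to the lambda, if you would rather look the method up by name, is operator.methodcaller from the standard library; like the lambda, it resolves the method on each object at call time, so the override in B is respected. A self-contained sketch that re-declares the classes from the question:

```python
from operator import methodcaller

class A:
    def bar(self):
        print("one")

class B(A):
    def bar(self):
        print("two")

def test(objs, fun):
    for o in objs:
        fun(o)

test([A(), B()], methodcaller("bar"))  # prints "one" then "two"
```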
Convert float to string without scientific notation and false precision
I want to print some floating point numbers so that they're always written in decimal form (e.g. 12345000000000000000000.0 or 0.000000000000012345, not in scientific notation, yet I'd want to keep the 15.7 decimal digits of precision and no more. It is well-known that the repr of a float is written in scientific notation if the exponent is greater than 15, or less than -4: >>> n = 0.000000054321654321 >>> n 5.4321654321e-08 # scientific notation If str is used, the resulting string again is in scientific notation: >>> str(n) '5.4321654321e-08' It has been suggested that I can use format with f flag and sufficient precision to get rid of the scientific notation: >>> format(0.00000005, '.20f') '0.00000005000000000000' It works for that number, though it has some extra trailing zeroes. But then the same format fails for .1, which gives decimal digits beyond the actual machine precision of float: >>> format(0.1, '.20f') '0.10000000000000000555' And if my number is 4.5678e-20, using .20f would still lose relative precision: >>> format(4.5678e-20, '.20f') '0.00000000000000000005' Thus these approaches do not match my requirements. This leads to the question: what is the easiest and also well-performing way to print arbitrary floating point number in decimal format, having the same digits as in repr(n) (or str(n) on Python 3), but always using the decimal format, not the scientific notation. That is, a function or operation that for example converts the float value 0.00000005 to string '0.00000005'; 0.1 to '0.1'; 420000000000000000.0 to '420000000000000000.0' or 420000000000000000 and formats the float value -4.5678e-5 as '-0.000045678'. After the bounty period: It seems that there are at least 2 viable approaches, as Karin demonstrated that using string manipulation one can achieve significant speed boost compared to my initial algorithm on Python 2. Thus, If performance is important and Python 2 compatibility is required; or if the decimal module cannot be used for some reason, then Karin's approach using string manipulation is the way to do it. On Python 3, my somewhat shorter code will also be faster. Since I am primarily developing on Python 3, I will accept my own answer, and shall award Karin the bounty.
Unfortunately it seems that not even the new-style formatting with float.__format__ supports this. The default formatting of floats is the same as with repr; and with the f flag there are 6 fractional digits by default:

```python
>>> format(0.0000000005, 'f')
'0.000000'
```

However there is a hack to get the desired result - not the fastest one, but relatively simple: first the float is converted to a string using str() or repr(), then a new Decimal instance is created from that string. Decimal.__format__ supports the f flag, which gives the desired result, and, unlike floats, it prints the actual precision instead of the default precision. Thus we can make a simple utility function float_to_str:

```python
import decimal

# create a new context for this task
ctx = decimal.Context()

# 20 digits should be enough for everyone :D
ctx.prec = 20

def float_to_str(f):
    """
    Convert the given float to a string,
    without resorting to scientific notation
    """
    d1 = ctx.create_decimal(repr(f))
    return format(d1, 'f')
```

Care must be taken to not use the global decimal context, so a new context is constructed for this function. This is the fastest way; another way would be to use decimal.local_context, but it would be slower, creating a new thread-local context and a context manager for each conversion.

This function now returns the string with all possible digits from the mantissa, rounded to the shortest equivalent representation:

```python
>>> float_to_str(0.1)
'0.1'
>>> float_to_str(0.00000005)
'0.00000005'
>>> float_to_str(420000000000000000.0)
'420000000000000000'
>>> float_to_str(0.000000000123123123123123123123)
'0.00000000012312312312312313'
```

The last result is rounded at the last digit. As @Karin noted, float_to_str(420000000000000000.0) does not strictly match the format expected; it returns 420000000000000000 without a trailing .0.
How to Bind and Send from Google Cloud Forwarding Rule IP Address?
I've followed the instructions for Using Protocol Forwarding on the Google Cloud Platform. So I now have something like this: $ gcloud compute forwarding-rules list NAME REGION IP_ADDRESS IP_PROTOCOL TARGET x-fr-1 us-west1 104.198.?.?? TCP us-west1-a/targetInstances/x-target-instance x-fr-2 us-west1 104.198.?.?? TCP us-west1-a/targetInstances/x-target-instance x-fr-3 us-west1 104.198.??.??? TCP us-west1-a/targetInstances/x-target-instance x-fr-4 us-west1 104.198.??.??? TCP us-west1-a/targetInstances/x-target-instance x-fr-5 us-west1 104.198.?.??? TCP us-west1-a/targetInstances/x-target-instance (Note: Names have been changed and question-marks have been substituted. I'm not sure it matters to keep these private but better safe than sorry.) My instance "x" is in the "x-target-instance" and has five forwarding rules "x-fr-1" through "x-fr-5". I'm running nginx on "x" and I can access it from any of its 6 external IP addresses (1 for the instance + 5 forwarding rules). So far, so good. I am interested now in binding a server to these external IP addresses. To explore, I tried using Python: import socket import time def serve(ip_address, port=80): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.bind((ip_address, port)) try: sock.listen(5) while True: con, _ = sock.accept() print con.getpeername(), con.getsockname() con.send(time.ctime()) con.close() finally: sock.close() Now I can bind "0.0.0.0" and I get some interesting results: >>> serve("0.0.0.0") ('173.228.???.??', 57288) ('10.240.?.?', 80) ('173.228.???.??', 57286) ('104.198.?.??', 80) When I communicate with the server on its external IP address, the "getsockname" method returns the instance's internal IP address. But when I communicate with the server on an external IP address as used by a forwarding rule, then the "getsockname" methods returns the external IP address. Ok, now I bind the instance's internal IP address: >>> serve("10.240.?.?") ('173.228.???.??', 57295) ('10.240.?.?', 80) Again I can communicate with the server on its external IP address, and the "getsockname" method returns the instance's internal IP address. That seems a bit odd. Also, if I try to bind the instance's external IP address: >>> serve("104.198.?.??") error: [Errno 99] Cannot assign requested address Then I get an error. But, if I try to bind the external IP addresses used by the forwarding rules and then make a request: >>> serve("104.198.??.???") ('173.228.???.??', 57313) ('104.198.??.???', 80) It works. Finally I look at "ifconfig": ens4 Link encap:Ethernet HWaddr 42:01:0a:??:??:?? inet addr:10.240.?.? Bcast:10.240.?.? Mask:255.255.255.255 inet6 addr: fe80::4001:???:????:2/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1460 Metric:1 RX packets:37554 errors:0 dropped:0 overruns:0 frame:0 TX packets:32286 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:41201244 (41.2 MB) TX bytes:3339072 (3.3 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:9403 errors:0 dropped:0 overruns:0 frame:0 TX packets:9403 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:3155046 (3.1 MB) TX bytes:3155046 (3.1 MB) And I see only two interfaces. Clearly, the abilities of Google Cloud Platform Networking has exceeded what I can remember from my Computer Networking class in college. To summarize my observations: If I want to bind on the instance's external IP address, then I bind its internal IP address. 
A process bound to the instance's internal IP address cannot differentiate whether the destination IP was the instance's internal or external IP address. The single networking adapter, "ens4", is receiving packets bound for any of the instance's 6 external IP addresses.

And here are my questions:

1. Why can I not bind the instance's external IP address?
2. How is it that I can bind the external IP addresses used by forwarding rules when I have no associated network adapters?
3. If I want to restrict SSH access to the instance's external IP address, should I configure SSH to bind the internal IP address?
4. If I set up an HTTP proxy on one of the external IP addresses used by a forwarding rule, what will be the source IP of the proxied request?
5. Lastly, and this may be a bug, why is the forwarding rules list empty in the web interface at https://console.cloud.google.com/networking/loadbalancing/advanced/forwardingRules/list?project=xxx when I can see them with "gcloud compute forwarding-rules list"?
1. It's not in the local routing table ('ip route show table local'). [You could of course add it (e.g. 'ip address add x.x.x.x/32 dev ens4'), but doing so wouldn't do you much good, since no packets will be delivered to your VM using that as the destination address - see below...]
2. Because the forwarded addresses have been added to your local routing table ('ip route show table local').
3. You could [though note that this would restrict SSH access to either external clients targeting the external IP address, or to clients within your virtual network targeting either the external or internal IP address]. However, as already noted, it might be more important to restrict the allowed client addresses (not the server address), and for that the firewall would be more effective.
4. It depends on where the destination of the proxied request goes. If it's internal to your virtual network, then it will be the VM's internal IP address; otherwise it's NAT-ed (outside of your VM) to be the VM's external IP address.
5. There are multiple tabs on that page - two of which list different classes of forwarding rule ("global forwarding rules" vs "forwarding rules"). Admittedly somewhat confusing :P

One other thing that's slightly confusing: when sending packets to your VM using its external IP as the destination address, an entity outside the VM (think of it as a virtual switch / router / NAT device) automatically modifies the destination to be the internal IP before the packet arrives at the virtio driver for the virtual NIC - so there's nothing you can do to modify that behavior. Packets addressed to the IP of a forwarding rule, however, are not NAT-ed (as you've seen). Hope that helps!
Splitting a list into uneven groups?
I know how to split a list into even groups, but I'm having trouble splitting it into uneven groups. Essentially here is what I have: some list, let's call it mylist, that contains x elements. I also have another file, let's call it second_list, that looks something like this: {2, 4, 5, 9, etc.}

Now what I want to do is divide mylist into uneven groups by the spacing in second_list. So I want my first group to be the first 2 elements of mylist, the second group to be the next 4 elements of mylist, the third group to be the next 5 elements of mylist, the fourth group to be the next 9 elements of mylist, and so on. Is there some easy way to do this? I tried doing something similar to what you would do to split it into even groups:

```python
for j in range(0, len(second_list)):
    for i in range(0, len(mylist), second_list[j]):
        chunk_mylist = mylist[i:i+second_list[j]]
```

However this doesn't split it like I want it to. I want to end up with my number of sublists being len(second_list), and also split correctly, and this gives a lot more than that (and also splits incorrectly).
You can create an iterator and itertools.islice: mylist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] seclist = [2,4,6] from itertools import islice it = iter(mylist) sliced =[list(islice(it, 0, i)) for i in seclist] Which would give you: [[1, 2], [3, 4, 5, 6], [7, 8, 9, 10, 11, 12]] Once i elements are consumed they are gone so we keep getting the next i elements. Not sure what should happen with any remaining elements, if you want them added, you could add something like: mylist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ,14] seclist = [2, 4, 6] from itertools import islice it = iter(mylist) slices = [sli for sli in (list(islice(it, 0, i)) for i in seclist)] remaining = list(it) if remaining: slices.append(remaining) print(slices) Which would give you: [[1, 2], [3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14]] Or in contrast if there were not enough, you could use a couple of approaches to remove empty lists, one an inner generator expression: from itertools import islice it = iter(mylist) slices = [sli for sli in (list(islice(it, 0, i)) for i in seclist) if sli] Or combine with itertools.takewhile: from itertools import islice, takewhile it = iter(mylist) slices = list(takewhile(bool, (list(islice(it, 0, i)) for i in seclist))) Which for: mylist = [1, 2, 3, 4, 5, 6] seclist = [2, 4, 6,8] would give you: [[1, 2], [3, 4, 5, 6]] As opposed to: [[1, 2], [3, 4, 5, 6], [], []] What you use completely depends on your possible inouts and how you would like to handle the various possibilities.
How to optimize multiprocessing in Python
EDIT: I've had questions about what the video stream is, so I will offer more clarity. The stream is a live video feed from my webcam, accessed via OpenCV. I get each frame as the camera reads it, and send it to a separate process for processing. The process returns text based on computations done on the image. The text is then displayed onto the image. I need to display the stream in realtime, and it is ok if there is a lag between the text and the video being shown (i.e. if the text was applicable to a previous frame, that's ok). Perhaps an easier way to think of this is that I'm doing image recognition on what the webcam sees. I send one frame at a time to a separate process to do recognition analysis on the frame, and send the text back to be put as a caption on the live feed. Obviously the processing takes more time than simply grabbing frames from the webcam and showing them, so if there is a delay in what the caption is and what the webcam feed shows, that's acceptable and expected. What's happening now is that the live video I'm displaying is lagging due to the other processes (when I don't send frames to the process for computing, there is no lag). I've also ensured only one frame is enqueued at a time so avoid overloading the queue and causing lag. I've updated the code below to reflect this detail. I'm using the multiprocessing module in python to help speed up my main program. However I believe I might be doing something incorrectly, as I don't think the computations are happening quite in parallel. I want my program to read in images from a video stream in the main process, and pass on the frames to two child processes that do computations on them and send text back (containing the results of the computations) to the main process. However, the main process seems to lag when I use multiprocessing, running about half as fast as without it, leading me to believe that the processes aren't running completely in parallel. After doing some research, I surmised that the lag may have been due to communicating between the processes using a queue (passing an image from the main to the child, and passing back text from child to main). However I commented out the computational step and just had the main process pass an image and the child return blank text, and in this case, the main process did not slow down at all. It ran at full speed. Thus I believe that either 1) I am not optimally using multiprocessing OR 2) These processes cannot truly be run in parallel (I would understand a little lag, but it's slowing the main process down in half). Here's a outline of my code. There is only one consumer instead of 2, but both consumers are nearly identical. If anyone could offer guidance, I would appreciate it. 
```python
class Consumer(multiprocessing.Process):

    def __init__(self, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue
        #other initialization stuff

    def run(self):
        while True:
            image = self.task_queue.get()
            #Do computations on image
            self.result_queue.put("text")
        return

import cv2

tasks = multiprocessing.Queue()
results = multiprocessing.Queue()
consumer = Consumer(tasks,results)
consumer.start()

#Creating window and starting video capturer from camera
cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)

#Try to get the first frame
if vc.isOpened():
    rval, frame = vc.read()
else:
    rval = False

while rval:
    if tasks.empty():
        tasks.put(image)
    else:
        text = tasks.get()
        #Add text to frame
        cv2.putText(frame,text)

    #Showing the frame with all the applied modifications
    cv2.imshow("preview", frame)

    #Getting next frame from camera
    rval, frame = vc.read()
```
I want my program to read in images from a video stream in the main process In producer/consumer implementations, which is what you have above, the producer, what puts tasks into the queue to be executed by the consumers, needs to be separate from the main/controlling process so that it can add tasks in parallel with the main process reading output from results queue. Try the following. Have added a sleep in the consumer processes to simulate processing and added a second consumer to show they are being run in parallel. It would also be a good idea to limit the size of the task queue to avoid having it run away with memory usage if processing cannot keep up with input stream. Can specify a size when calling Queue(<size>). If the queue is at that size, calls to .put will block until the queue is not full. import time import multiprocessing import cv2 class ImageProcessor(multiprocessing.Process): def __init__(self, tasks_q, results_q): multiprocessing.Process.__init__(self) self.tasks_q = tasks_q self.results_q = results_q def run(self): while True: image = self.tasks_q.get() # Do computations on image time.sleep(1) # Display the result on stream self.results_q.put("text") # Tasks queue with size 1 - only want one image queued # for processing. # Queue size should therefore match number of processes tasks_q, results_q = multiprocessing.Queue(1), multiprocessing.Queue() processor = ImageProcessor(tasks_q, results_q) processor.start() def capture_display_video(vc): rval, frame = vc.read() while rval: image = frame.get_image() if not tasks_q.full(): tasks_q.put(image) if not results_q.empty(): text = results_q.get() cv2.putText(frame, text) cv2.imshow("preview", frame) rval, frame = vc.read() cv2.namedWindow("preview") vc = cv2.VideoCapture(0) if not vc.isOpened(): raise Exception("Cannot capture video") capture_display_video(vc) processor.terminate()
Opposite of any() function
The Python built-in function any(iterable) can help to quickly check if any bool(element) is True in an iterable type.

```python
>>> l = [None, False, 0]
>>> any(l)
False
>>> l = [None, 1, 0]
>>> any(l)
True
```

But is there an elegant way or function in Python that could achieve the opposite effect of any(iterable)? That is, if any bool(element) is False then return True, like the following example:

```python
>>> l = [True, False, True]
>>> any_false(l)
>>> True
```
There is also the all function which does the opposite of what you want, it returns True if all are True and False if any are False. Therefore you can just do: not all(l)
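A quick sketch showing that suggestion next to the explicit generator form it is equivalent to:

```python
l = [True, False, True]
print(not all(l))             # True  -- at least one element is falsy
print(any(not x for x in l))  # True  -- the same check spelled out element by element

l = [True, True]
print(not all(l))             # False -- nothing falsy here
```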
Debugging Python and C++ exposed by boost together
I can debug Python code using ddd -pydb prog.py. All the python command line arguments can be passed too after prog.py. In my case, many classes have been implemented in C++ that are exposed to python using boost-python. I wish I could debug python code and C++ together. For example I want to set break points like this : break my_python.py:123 break my_cpp.cpp:456 cont Of course I am trying it after compiling c++ codes with debug option but the debugger does not cross boost boundary. Is there any way? EDIT: I saw http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/faq/how_do_i_debug_my_python_extensi.html. I followed it and I can do debugging both for python and C++. But I preferably want to do visual debugging with DDD but I don't know how to give 'target exec python' command inside DDD. If not (just using gdb as in the link) I should be able to debug for a Python script not interactively giving python commands as in the link.
I found out how to debug the C++ part while running python. (realized it while reading about process ID detection in Python book..). First you run the python program which includes C++ programs. At the start of the python program, use raw_input() to make the program wait for you input. But just before that do print os.getpid() (of course you should have imported os package). When you run the python program, it will have printed the pid of the python program you are running and will be waiting for your keyboard input. python stop code : import os def w1(str): print (str) wait = raw_input() return print os.getpid() w1('starting main..press a key') result : 27352 starting main..press a key Or, you can use import pdb, pdb.set_trace() as comment below.(thanks @AndyG) and see EDIT* to get pid using ps -aux. Now, suppose the C++ shared library is _caffe.so (which is my case. This _caffe.so library has all the C++ codes and boost python wrapper functions). 27352 is the pid. Then in another shell start gdb like gdb caffe-fast-rcnn/python/caffe/_caffe.so 27352 or if you want to use graphical debugging using like DDD, do ddd caffe-fast-rcnn/python/caffe/_caffe.so 27352 Then you'll see gdb starts and wait with prompt. The python program is interrupted by gdb and waits in stopped mode (it was waiting for your key input but now it's really in stopeed mode, and it needs gdb continue command from the second debugger to proceed with the key waiting). Now you can give break point command in gdb like br solver.cpp:225 and you can see message like Breakpoint 1 at 0x7f2cccf70397: file src/caffe/solver.cpp, line 226. (2 locations) When you give continue command in the second gdb window(that was holding the program), the python code runs again. Of course you should give a key input in the first gdb window to make it proceed. Now at least you can debug the C++ code while running python program(that's what I wanted to do)! I later checked if I can do python and C++ debugging at the same time and it works. You start the debugger(DDD) like ddd -pydb prog1.py options.. and attach another DDD using method explained above. Now you can set breakpoints for python and C++ and using other debug functions in each window(I wish I had known this a couple of months earlier.. I should have helped tons.). EDIT : to get the pid, you can do ps -aux | grep python instead. This pid is the next of ddd's pid.
TensorFlow REST Frontend but not TensorFlow Serving
I want to deploy a simple TensorFlow model and run it in a REST service like Flask. So far I have not found a good example on GitHub or here. I am not ready to use TF Serving as suggested in other posts; it is a perfect solution for Google, but it is overkill for my tasks with gRPC, Bazel, C++ coding, protobuf...
There are different ways to do this. Purely, using tensorflow is not very flexible, however relatively straightforward. The downside of this approach is that you have to rebuild the graph and initialize variables in the code where you restore the model. There is a way shown in tensorflow skflow/contrib learn which is more elegant, however this doesn't seem to be functional at the moment and the documentation is out of date. I put a short example together on github here that shows how you would named GET or POST parameters to a flask REST-deployed tensorflow model. The main code is then in a function that takes a dictionary based on the POST/GET data: @app.route('/model', methods=['GET', 'POST']) @parse_postget def apply_model(d): tf.reset_default_graph() with tf.Session() as session: n = 1 x = tf.placeholder(tf.float32, [n], name='x') y = tf.placeholder(tf.float32, [n], name='y') m = tf.Variable([1.0], name='m') b = tf.Variable([1.0], name='b') y = tf.add(tf.mul(m, x), b) # fit y_i = m * x_i + b y_act = tf.placeholder(tf.float32, [n], name='y_') error = tf.sqrt((y - y_act) * (y - y_act)) train_step = tf.train.AdamOptimizer(0.05).minimize(error) feed_dict = {x: np.array([float(d['x_in'])]), y_act: np.array([float(d['y_star'])])} saver = tf.train.Saver() saver.restore(session, 'linear.chk') y_i, _, _ = session.run([y, m, b], feed_dict) return jsonify(output=float(y_i))
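To exercise that endpoint, a client sketch might look like the following (the x_in and y_star parameter names come from the dictionary keys used above; the host, port, and the exact way the parse_postget decorator consumes form data are assumptions on my side, since they are not shown in the answer):

```python
import requests

params = {"x_in": 2.0, "y_star": 5.0}  # values chosen arbitrarily for illustration

# the route accepts both GET and POST; here we POST form-encoded data
r = requests.post("http://localhost:5000/model", data=params)
r.raise_for_status()
print(r.json())  # e.g. {"output": ...}
```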
Compute first order derivative with MongoDB aggregation framework
Is it possible to calculate a first-order derivative using the aggregation framework? For example, I have the data:

```
{time_series : [10,20,40,70,110]}
```

I'm trying to obtain an output like:

```
{derivative : [10,20,30,40]}
```
We can do this using the aggregation framework in MongoDB 3.2 or newer because what we really need is a way to keep tracking of the index of the current and previous element in our array and fortunately starting from MongoDB 3.2 we can use the $unwind operator to deconstruct our array and include the index of each element in the array by specifying a document as operand instead of the traditional "path" prefixed by $. From there we have two options. The first is in MongoDB 3.2 and the second in the upcoming release of MongoDB (as of this writing). Next in the pipeline, we need to $group our documents and use the $push accumulator operator to return an array of sub-documents that look like this: { "_id" : ObjectId("57c11ddbe860bd0b5df6bc64"), "time_series" : [ { "value" : 10, "index" : NumberLong(0) }, { "value" : 20, "index" : NumberLong(1) }, { "value" : 40, "index" : NumberLong(2) }, { "value" : 70, "index" : NumberLong(3) }, { "value" : 110, "index" : NumberLong(4) } ] } Finally comes the $project stage. In this stage, we need to use the $map operator to apply a series of expression to each element in the the newly computed array in the $group stage. Here is what is going on inside the $map (see $map as a for loop) in expression: For each subdocument, we assign the value field to a variable using the $let variable operator. We then subtract it value from the value of the "value" field of the next element in the array. Since the next element in the array is the element at the current index plus one, all we need is the help of the $arrayElemAt operator and a simple $addition of the current element's index and 1. The $subtract expression return a negative value so we need to multiply the value by -1 using the $multiply operator. We also need to $filter the resulted array because it the last element is None or null. The reason is that when the current element is the last element, $subtract return None because the index of the next element equal the size of the array. db.collection.aggregate( [ { "$unwind": { "path": "$time_series", "includeArrayIndex": "index" }}, { "$group": { "_id": "$_id", "time_series": { "$push": { "value": "$time_series", "index": "$index" } } }}, { "$project": { "time_series": { "$filter": { "input": { "$map": { "input": "$time_series", "as": "el", "in": { "$multiply": [ { "$subtract": [ "$$el.value", { "$let": { "vars": { "nextElement": { "$arrayElemAt": [ "$time_series", { "$add": [ "$$el.index", 1 ]} ]} }, "in": "$$nextElement.value" } } ]}, -1 ] } } }, "as": "item", "cond": { "$gte": [ "$$item", 0 ] } } } }} ] ) In the upcoming version will provide another alternative. First in the $group stage we return two different arrays. One for the elements and the other one for their indexes then $zip the two arrays as shown here. From their, we simply access each element using integer indexing instead of assigning their value to a variable with $let. 
db.collection.aggregate( [ { "$unwind": { "path": "$time_series", "includeArrayIndex": "index" }}, { "$group": { "_id": "$_id", "values": { "$push": "$time_series" }, "indexes": { "$push": "$index" } }}, { "$project": { "time_series": { "$filter": { "input": { "$map": { "input": { "$zip": { "inputs": [ "$values", "$indexes" ] } }, "as": "el", "in": { "$multiply": [ { "$subtract": [ { "$arrayElemAt": [ "$$el", 0 ]}, { "$arrayElemAt": [ "$values", { "$add": [ { "$arrayElemAt": [ "$$el", 1 ]}, 1 ]} ]} ]}, -1 ] } } }, "as": "item", "cond": { "$gte": [ "$$item", 0 ] } } } }} ] ) Note that we could also reverse the array early in the $project stage using $reverse as shown here to avoid using $multiply. Both queries* yield something like: { "_id" : ObjectId("57c11ddbe860bd0b5df6bc64"), "time_series" : [ 10, 20, 30, 40 ] } Another option which I think is less efficient is perform a map/reduce operation on our collection using the map_reduce method. >>> import pymongo >>> from bson.code import Code >>> client = pymongo.MongoClient() >>> db = client.test >>> collection = db.collection >>> mapper = Code(""" ... function() { ... var derivatives = []; ... for (var index=1; index<this.time_series.length; index++) { ... derivatives.push(this.time_series[index] - this.time_series[index-1]); ... } ... emit(this._id, derivatives); ... } ... """) >>> reducer = Code(""" ... function(key, value) {} ... """) >>> for res in collection.map_reduce(mapper, reducer, out={'inline': 1})['results']: ... print(res) # or do something with the document. ... {'value': [10.0, 20.0, 30.0, 40.0], '_id': ObjectId('57c11ddbe860bd0b5df6bc64')} You can also retrieve all the document and use the numpy.diff to return the derivative like this: import numpy as np for document in collection.find({}, {'time_series': 1}): result = np.diff(document['time_series']) Now how about a little benchmarking: Machine: OS: Ubuntu 16.04 Memory: 15.6 GiB Processor: Intel® Xeon(R) CPU E3-1231 v3 @ 3.40GHz × 8 The three queries run in this order on my machine give respectively the following result: Benchmark test result with 500 documents: MongoDB 3.2 100 loops, best of 3: 2.32 ms per loop MongoDB 3.3.11 1000 loops, best of 3: 1.72 ms per loop MapReduce 100 loops, best of 3: 15.7 ms per loop Numpy using numpy.diff 100 loops, best of 3: 3.61 ms per loop Conclusion Using the aggregation is the best option here as expected even if the solution is not obvious. The mapReduce solution is trivial but very inefficient because of the JavaScript evaluation. * You can test the second query by installing a current development version of MongoDB (as the time of this writing).
How to keep track of players' rankings?
I have a Player class with a score attribute: class Player(game_engine.Player): def __init__(self, id): super().__init__(id) self.score = 0 This score increases/decreases as the player succeeds/fails to do objectives. Now I need to tell the player his rank out of the total amount of players with something like print('Your rank is {0} out of {1}') First I thought of having a list of all the players, and whenever anything happens to a player: I check if his score increased or decreased find him in the list move him until his score is in the correct place But this would be extremely slow. There can be hundreds of thousands of players, and a player can reset his own score to 0 which would mean that I'd have to move everyone after him in the stack. Even finding the player would be O(n). What I'm looking for is a high performance solution. RAM usage isn't quite as important, although common sense should be used. How could I improve the system to be a lot faster? Updated info: I'm storing a player's data into a MySQL database with SQLAlchemy everytime he leaves the gameserver, and I load it everytime he joins the server. These are handled through 'player_join' and 'player_leave' events: @Event('player_join') def load_player(id): """Load player into the global players dict.""" session = Session() query = session.query(Player).filter_by(id=id) players[id] = query.one_or_none() or Player(id=id) @Event('player_leave') def save_player(id): """Save player into the database.""" session = Session() session.add(players[id]) session.commit() Also, the player's score is updated upon 'player_kill' event: @Event('player_kill') def update_score(id, target_id): """Update players' scores upon a kill.""" players[id].score += 2 players[target_id].score -= 2
Redis sorted sets help with this exact situation (the documentation uses leader boards as the example usage) http://redis.io/topics/data-types-intro#redis-sorted-sets The key commands you care about are ZADD (update player rank) and ZRANK (get rank for specific player). Both operations are O(log(N)) complexity. Redis can be used as a cache of player ranking. When your application starts, populate redis from the SQL data. When updating player scores in mysql also update redis. If you have multiple server processes/threads and they could trigger player score updates concurrently then you should also account for the mysql/redis update race condition, eg: only update redis from a DB trigger; or serialise player score updates; or let data get temporarily out of sync and do another cache update after a delay; or let data get temporarily out of sync and do a full cache rebuild at fixed intervals
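As a concrete illustration, here is a minimal redis-py sketch of that leaderboard pattern (the key name is made up, and the call signatures assume redis-py 3.x; treat it as a sketch alongside the SQLAlchemy model rather than a drop-in replacement):

```python
import redis

r = redis.Redis()        # assumes a local Redis instance
BOARD = "player_scores"  # hypothetical sorted-set key

def update_score(player_id, delta):
    # O(log N); the member is created on first use
    r.zincrby(BOARD, delta, player_id)

def get_rank(player_id):
    # zrevrank is 0-based with the highest score first
    rank = r.zrevrank(BOARD, player_id)
    total = r.zcard(BOARD)
    return rank + 1, total

update_score("player:42", 2)
rank, total = get_rank("player:42")
print('Your rank is {0} out of {1}'.format(rank, total))
```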
Active tasks is a negative number in Spark UI
When using spark-1.6.2 and pyspark, I saw this: where you see that the active tasks are a negative number (the difference of the total tasks from the completed tasks). What is the source of this error?

Note that I have many executors. However, it seems like there is a task that appears to have been idle (I don't see any progress), while another identical task completed normally.

Also this is related: that mail. I can confirm that many tasks are being created, since I am using 1k or 2k executors. The error I am getting is a bit different:

```
16/08/15 20:03:38 ERROR LiveListenerBus: Dropping SparkListenerEvent because no remaining room in event queue. This likely means one of the SparkListeners is too slow and cannot keep up with the rate at which tasks are being started by the scheduler.
16/08/15 20:07:18 WARN TaskSetManager: Lost task 20652.0 in stage 4.0 (TID 116652, myfoo.com): FetchFailed(BlockManagerId(61, mybar.com, 7337), shuffleId=0, mapId=328, reduceId=20652, message=
org.apache.spark.shuffle.FetchFailedException: java.util.concurrent.TimeoutException: Timeout waiting for task.
```
It is a Spark issue. It occurs when executors restart after failures. A JIRA issue for it has already been created; you can get more details from https://issues.apache.org/jira/browse/SPARK-10141.
Remove first encountered elements from a list
I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7'] list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4'] I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements and the first element of the duplicates. With the above example, the correct result should be >>>list1 ['e3', 'e5', 'e6'] >>>list2 ['h1', 'h1', 'h2'] That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time. What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible.
Just use a set object to lookup if the current value is already seen, like this >>> list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7'] >>> list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4'] >>> >>> def filterer(l1, l2): ... r1 = [] ... r2 = [] ... seen = set() ... for e1, e2 in zip(l1, l2): ... if e2 not in seen: ... seen.add(e2) ... else: ... r1.append(e1) ... r2.append(e2) ... return r1, r2 ... >>> list1, list2 = filterer(list1, list2) >>> list1 ['e3', 'e5', 'e6'] >>> list2 ['h1', 'h1', 'h2'] If you are going to consume the elements one-by-one and if the input lists are pretty big, then I would recommend making a generator, like this >>> def filterer(l1, l2): ... seen = set() ... for e1, e2 in zip(l1, l2): ... if e2 not in seen: ... seen.add(e2) ... else: ... yield e1, e2 ... >>> list(filterer(list1, list2)) [('e3', 'h1'), ('e5', 'h1'), ('e6', 'h2')] >>> >>> zip(*filterer(list1, list2)) [('e3', 'e5', 'e6'), ('h1', 'h1', 'h2')]
Text based data format which supports multiline strings
I am searching for a text-based data format which supports multiline strings. JSON does not allow multiline strings:

```python
>>> import json
>>> json.dumps(dict(text='first line\nsecond line'))
'{"text": "first line\\nsecond line"}'
```

My desired output:

```
{"text": "first line
second line"}
```

This question is about input and output. The data format should be editable with an editor like vi, emacs or notepad. I don't care if simple quotes " or triple quotes (like in Python) """ get used.

Is there an easy, human-readable textual data interchange format which supports this?

Use case: I want to edit data with multiline strings with vi. This is not fun if the data is in JSON format.
I think you should consider the YAML format. It supports block notation which is able to preserve newlines, like this:

```yaml
data: |
    There once was a short man from Ealing
    Who got on a bus to Darjeeling
    It said on the door
    "Please don't spit on the floor"
    So he carefully spat on the ceiling
```

Also there are a lot of parsers for any kind of programming language, including Python (e.g. PyYAML). Another huge advantage is that any valid JSON is YAML.
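A quick PyYAML sketch showing that such a block scalar round-trips to an ordinary Python string with the newlines preserved:

```python
import yaml  # provided by the PyYAML package

doc = """\
data: |
  first line
  second line
"""

loaded = yaml.safe_load(doc)
print(repr(loaded["data"]))  # 'first line\nsecond line\n'
```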
Function chaining in Python
On codewars.com I encountered the following task: Create a function add that adds numbers together when called in succession. So add(1) should return 1, add(1)(2) should return 1+2, ... While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function f(x) that can be called as f(x)(y)(z).... Thus far, I'm not even sure how to interpret this notation. As a mathematician, I'd suspect that f(x)(y) is a function that assigns to every x a function g_{x} and then returns g_{x}(y) and likewise for f(x)(y)(z). Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising. How do you call this concept and where can I read more about it?
I don't know whether this is function chaining as much as it's callable chaining, but, since functions are callables I guess there's no harm done. Either way, there's two ways I can think of doing this: Sub-classing int and defining __call__: The first way would be with a custom int subclass that defines __call__ which returns a new instance of itself with the updated value: class CustomInt(int): def __call__(self, v): return CustomInt(self + v) Function add can now be defined to return a CustomInt instance, which, as a callable that returns an updated value of itself, can be called in succession: >>> def add(v): ... return CustomInt(v) >>> add(1) 1 >>> add(1)(2) 3 >>> add(1)(2)(3)(44) # and so on.. 50 In addition, as an int subclass, the returned value retains the __repr__ and __str__ behavior of ints. For more complex operations though, you should define other dunders appropriately. As @Caridorc noted in a comment, add could also be simply written as: add = CustomInt Renaming the class to add instead of CustomInt also works similarly. Define a closure, requires extra call to yield value: The only other way I can think of involves a nested function that requires an extra empty argument call in order to return the result. I'm not using nonlocal and opt for attaching attributes to the function objects to make it portable between Pythons: def add(v): def _inner_adder(val=None): """ if val is None we return _inner_adder.v else we increment and return ourselves """ if val is None: return _inner_adder.v _inner_adder.v += val return _inner_adder _inner_adder.v = v # save value return _inner_adder This continuously returns itself (_inner_adder) which, if a val is supplied, increments it (_inner_adder += val) and if not, returns the value as it is. Like I mentioned, it requires an extra () call in order to return the incremented value: >>> add(1)(2)() 3 >>> add(1)(2)(3)() # and so on.. 6
How can I slice each element of a numpy array of strings?
NumPy has some very useful string operations, which vectorize the usual Python string operations. Compared to these operations and to pandas.str, the NumPy strings module seems to be missing a very important one: the ability to slice each string in the array. For example:

```python
a = numpy.array(['hello', 'how', 'are', 'you'])
numpy.char.sliceStr(a, slice(1, 3))
>>> numpy.array(['el', 'ow', 're', 'ou'])
```

Am I missing some obvious method in the module with this functionality? Otherwise, is there a fast vectorized way to achieve this?
Here's a vectorized approach - def slicer_vectorized(a,start,end): b = a.view('S1').reshape(len(a),-1)[:,start:end] return np.fromstring(b.tostring(),dtype='S'+str(end-start)) Sample run - In [68]: a = np.array(['hello', 'how', 'are', 'you']) In [69]: slicer_vectorized(a,1,3) Out[69]: array(['el', 'ow', 're', 'ou'], dtype='|S2') In [70]: slicer_vectorized(a,0,3) Out[70]: array(['hel', 'how', 'are', 'you'], dtype='|S3') Runtime test - Testing out all the approaches posted by other authors that I could run at my end and also including the vectorized approach from earlier in this post. Here's the timings - In [53]: # Setup input array ...: a = np.array(['hello', 'how', 'are', 'you']) ...: a = np.repeat(a,10000) ...: # @Alberto Garcia-Raboso's answer In [54]: %timeit slicer(1, 3)(a) 10 loops, best of 3: 23.5 ms per loop # @hapaulj's answer In [55]: %timeit np.frompyfunc(lambda x:x[1:3],1,1)(a) 100 loops, best of 3: 11.6 ms per loop # Using loop-comprehension In [56]: %timeit np.array([i[1:3] for i in a]) 100 loops, best of 3: 12.1 ms per loop # From this post In [57]: %timeit slicer_vectorized(a,1,3) 1000 loops, best of 3: 787 µs per loop
Coding style (PEP8) - Module level "dunders"
Definition of "Dunder" (Double underscore): http://www.urbandictionary.com/define.php?term=Dunder I have a question according the placement of module level "dunders" (like __all__, __version__, __author__ etc.) in Python code. The question came up to me while reading through PEP8 and seeing this Stack Overflow question. The accepted answer says: __author__ is a global "variable" and should therefore appear below the imports. But in the PEP8 section Module level dunder names I read the following: Module level "dunders" (i.e. names with two leading and two trailing underscores) such as __all__ , __author__ , __version__ , etc. should be placed after the module docstring but before any import statements except from __future__ imports. Python mandates that future-imports must appear in the module before any other code except docstrings. The authors also give a code example: """This is the example module. This module does stuff. """ from __future__ import barry_as_FLUFL __all__ = ['a', 'b', 'c'] __version__ = '0.1' __author__ = 'Cardinal Biggles' import os import sys But when I put the above into PyCharm, I see this warning (also see the screenshot): PEP8: module level import not at top of file Question: What is the correct way/place to store these variables with double underscores?
PEP 8 was recently updated to put the location before the imports. See revision cf8e888b9555, committed on June 7th, 2016: Relax __all__ location. Put all module level dunders together in the same location, and remove the redundant version bookkeeping information. Closes #27187. Patch by Ian Lee. The text was further updated the next day to address the from __future__ import ... caveat. The patch links to issue #27187, which in turn references this pycodestyle issue, where it was discovered PEP 8 was unclear. Before this change there was no clear guideline on module-level dunder globals, so PyCharm and the other answer were correct at the time. I'm not sure how PyCharm implements their PEP 8 checks; if they use the pycodestyle project (the de facto Python style checker), then I'm sure it'll be fixed automatically. Otherwise, perhaps file a bug with them to see this fixed.
NumPy performance: uint8 vs. float and multiplication vs. division?
I have just noticed that the execution time of a script of mine nearly halves by only changing a multiplication to a division. To investigate this, I have written a small example: import numpy as np import timeit # uint8 array arr1 = np.random.randint(0, high=256, size=(100, 100), dtype=np.uint8) # float32 array arr2 = np.random.rand(100, 100).astype(np.float32) arr2 *= 255.0 def arrmult(a): """ mult, read-write iterator """ b = a.copy() for item in np.nditer(b, op_flags=["readwrite"]): item[...] = (item + 5) * 0.5 def arrmult2(a): """ mult, index iterator """ b = a.copy() for i, j in np.ndindex(b.shape): b[i, j] = (b[i, j] + 5) * 0.5 def arrmult3(a): """ mult, vectorized """ b = a.copy() b = (b + 5) * 0.5 def arrdiv(a): """ div, read-write iterator """ b = a.copy() for item in np.nditer(b, op_flags=["readwrite"]): item[...] = (item + 5) / 2 def arrdiv2(a): """ div, index iterator """ b = a.copy() for i, j in np.ndindex(b.shape): b[i, j] = (b[i, j] + 5) / 2 def arrdiv3(a): """ div, vectorized """ b = a.copy() b = (b + 5) / 2 def print_time(name, t): print("{: <10}: {: >6.4f}s".format(name, t)) timeit_iterations = 100 print("uint8 arrays") print_time("arrmult", timeit.timeit("arrmult(arr1)", "from __main__ import arrmult, arr1", number=timeit_iterations)) print_time("arrmult2", timeit.timeit("arrmult2(arr1)", "from __main__ import arrmult2, arr1", number=timeit_iterations)) print_time("arrmult3", timeit.timeit("arrmult3(arr1)", "from __main__ import arrmult3, arr1", number=timeit_iterations)) print_time("arrdiv", timeit.timeit("arrdiv(arr1)", "from __main__ import arrdiv, arr1", number=timeit_iterations)) print_time("arrdiv2", timeit.timeit("arrdiv2(arr1)", "from __main__ import arrdiv2, arr1", number=timeit_iterations)) print_time("arrdiv3", timeit.timeit("arrdiv3(arr1)", "from __main__ import arrdiv3, arr1", number=timeit_iterations)) print("\nfloat32 arrays") print_time("arrmult", timeit.timeit("arrmult(arr2)", "from __main__ import arrmult, arr2", number=timeit_iterations)) print_time("arrmult2", timeit.timeit("arrmult2(arr2)", "from __main__ import arrmult2, arr2", number=timeit_iterations)) print_time("arrmult3", timeit.timeit("arrmult3(arr2)", "from __main__ import arrmult3, arr2", number=timeit_iterations)) print_time("arrdiv", timeit.timeit("arrdiv(arr2)", "from __main__ import arrdiv, arr2", number=timeit_iterations)) print_time("arrdiv2", timeit.timeit("arrdiv2(arr2)", "from __main__ import arrdiv2, arr2", number=timeit_iterations)) print_time("arrdiv3", timeit.timeit("arrdiv3(arr2)", "from __main__ import arrdiv3, arr2", number=timeit_iterations)) This prints the following timings: uint8 arrays arrmult : 2.2004s arrmult2 : 3.0589s arrmult3 : 0.0014s arrdiv : 1.1540s arrdiv2 : 2.0780s arrdiv3 : 0.0027s float32 arrays arrmult : 1.2708s arrmult2 : 2.4120s arrmult3 : 0.0009s arrdiv : 1.5771s arrdiv2 : 2.3843s arrdiv3 : 0.0009s I always thought a multiplication is computationally cheaper than a division. However, for uint8 a division seems to be nearly twice as effective. Does this somehow relate to the fact, that * 0.5 has to calculate the multiplication in a float and then casting the result back to to an integer? At least for floats multiplications seem to be faster than divisions. Is this generally true? Why is a multiplication in uint8 more expansive than in float32? I thought an 8-bit unsigned integer should be much faster to calculate than 32-bit floats?! Can someone "demystify" this? 
EDIT: to have more data, I've included vectorized functions (as suggested) and added index iterators as well. The vectorized functions are much faster, thus not really comparable. However, if timeit_iterations is set much higher for the vectorized functions, it turns out that multiplication is faster for both uint8 and float32. I guess this confuses even more?! Maybe multiplication is in fact always faster than division, but the main performance leak in the for-loops is not the arithmetic operation, but the loop itself. Although this does not explain why the loops behave differently for different operations. EDIT2: Like @jotasi already stated, we are looking for a full explanation of division vs. multiplication and int (or uint8) vs. float (or float32). Additionally, explaining the different trends of the vectorized approaches and the iterators would be interesting, as in the vectorized case, the division seems to be slower, whereas it is faster in the iterator case.
The problem is your assumption that you measure the time needed for division or multiplication, which is not true. You are measuring the overhead that comes with a division or multiplication. One really has to look at the exact code to explain every effect, which can vary from version to version. This answer can only give an idea of what one has to consider. The problem is that a simple int is not simple at all in Python: it is a real object which must be registered in the garbage collector, and it grows in size with its value - for all that you have to pay: for example, an 8-bit integer needs 24 bytes of memory! The same goes for Python floats. On the other hand, a numpy array consists of simple C-style integers/floats without overhead, so you save a lot of memory, but pay for it during the access to an element of the numpy array. a[i] means: a Python integer must be constructed, registered in the garbage collector, and only then can it be used - there is a lot of overhead. Consider this code: li1=[x%256 for x in xrange(10**4)] arr1=np.array(li1, np.uint8) def arrmult(a): for i in xrange(len(a)): a[i]*=5; arrmult(li1) is 25 times faster than arrmult(arr1) because the integers in the list are already Python ints and don't have to be created! The lion's share of the calculation time is needed for the creation of the objects - everything else can be almost neglected. Let's take a look at your code, first the multiplication: def arrmult2(a): ... b[i, j] = (b[i, j] + 5) * 0.5 In the case of uint8 the following must happen (I neglect +5 for simplicity): a Python int must be created, it must be cast to a float (Python-float creation) in order to be able to do float multiplication, and then it is cast back to a Python int or/and uint8. For float32, there is less work to do (multiplication does not cost much): 1. a Python float is created, 2. it is cast back to float32. So the float version should be faster, and it is. Now let's take a look at the division: def arrdiv2(a): ... b[i, j] = (b[i, j] + 5) / 2 The pitfall here: all operations are integer operations. So compared to the multiplication there is no need to cast to a Python float, thus we have less overhead than in the case of the multiplication. Division is "faster" for uint8 than multiplication in your case. However, division and multiplication are equally fast/slow for float32, because almost nothing has changed in this case - we still need to create a Python float. Now the vectorized versions: they work with C-style "raw" float32s/uint8s without conversion (and its cost!) to the corresponding Python objects under the hood. To get meaningful results you should increase the number of iterations (right now the running time is too small to say anything with certainty). Division and multiplication for float32 could have the same running time, because I would expect numpy to replace the division by 2 with a multiplication by 0.5 (but to be sure one has to look into the code). Multiplication for uint8 should be slower, because every uint8 integer must be cast to a float prior to the multiplication with 0.5 and then cast back to uint8 afterwards. For the uint8 case, numpy cannot replace the division by 2 with a multiplication by 0.5 because it is an integer division. Integer division is slower than float multiplication on a lot of architectures - this is the slowest vectorized operation. PS: I would not dwell too much on the cost of multiplication vs. division - there are too many other things that can have a bigger hit on the performance.
For example, creating unnecessary temporary objects; or, if the numpy array is large and does not fit into the cache, the memory access will be the bottleneck - and then you will see no difference between multiplication and division at all.
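A minimal timing sketch (numbers vary by machine) of the claim that boxing raw array values into Python objects, not the arithmetic, dominates the loop cost - the two comprehensions below do identical arithmetic, only the source of the elements differs:

import timeit
import numpy as np

li = [x % 256 for x in range(10**4)]    # already Python ints
arr = np.array(li, dtype=np.uint8)      # raw C-style uint8 values

print(timeit.timeit(lambda: [x * 5 for x in li], number=200))
print(timeit.timeit(lambda: [x * 5 for x in arr], number=200))  # noticeably slower: each x must be boxed first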
Matching Unicode word boundaries in Python
In order to match the Unicode word boundaries [as defined in the Annex #29] in Python, I have been using the regex package with flags regex.WORD | regex.V1 (regex.UNICODE should be default since the pattern is a Unicode string) in the following way: >>> s="here are some words" >>> regex.findall(r'\w(?:\B\S)*', s, flags = regex.V1 | regex.WORD) ['here', 'are', 'some', 'words'] It works well in this rather simple cases. However, I was wondering what is the expected behavior in case the input string contains certain punctuation. It seems to me that WB7 says that for example the apostrophe in x'z does not qualify as a word boundary which seems to be indeed the case: >>> regex.findall(r'\w(?:\B\S)*', "x'z", flags = regex.V1 | regex.WORD) ["x'z"] However, if there is a vowel, the situation changes: >>> regex.findall(r'\w(?:\B\S)*', "l'avion", flags = regex.V1 | regex.WORD) ["l'", 'avion'] This would suggest that the regex module implements the rule WB5a mentioned in the standard in the Notes section. However, this rule also says that the behavior should be the same with \u2019 (right single quotation mark) which I can't reproduce: >>> regex.findall(r'\w(?:\B\S)*', "l\u2019avion", flags = regex.V1 | regex.WORD) ['l’avion'] Moreover, even with "normal" apostrophe, a ligature (or y) seems to behave as a "non-vowel": >>> regex.findall(r'\w(?:\B\S)*', "l'œil", flags = regex.V1 | regex.WORD) ["l'œil"] >>> regex.findall(r'\w(?:\B\S)*', "J'y suis", flags = regex.V1 | regex.WORD) ["J'y", 'suis'] Is this the expected behavior? (all examples above were executed with regex 2.4.106 and Python 3.5.2)
1- RIGHT SINGLE QUOTATION MARK ’ seems to be just simply missed in source file: /* Break between apostrophe and vowels (French, Italian). */ /* WB5a */ if (pos_m1 >= 0 && char_at(state->text, pos_m1) == '\'' && is_unicode_vowel(char_at(state->text, text_pos))) return TRUE; 2- Unicode vowels are determined with is_unicode_vowel() function which translates to this list: a, à, á, â, e, è, é, ê, i, ì, í, î, o, ò, ó, ô, u, ù, ú, û So a LATIN SMALL LIGATURE OE œ character is not considered as a unicode vowel: Py_LOCAL_INLINE(BOOL) is_unicode_vowel(Py_UCS4 ch) { #if PY_VERSION_HEX >= 0x03030000 switch (Py_UNICODE_TOLOWER(ch)) { #else switch (Py_UNICODE_TOLOWER((Py_UNICODE)ch)) { #endif case 'a': case 0xE0: case 0xE1: case 0xE2: case 'e': case 0xE8: case 0xE9: case 0xEA: case 'i': case 0xEC: case 0xED: case 0xEE: case 'o': case 0xF2: case 0xF3: case 0xF4: case 'u': case 0xF9: case 0xFA: case 0xFB: return TRUE; default: return FALSE; } } This bug is now fixed in regex 2016.08.27 after a bug report. [_regex.c:#1668]
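Until a fixed regex build is available, one possible workaround (an illustrative sketch, not the library's own fix) is to normalize the curly apostrophe to a plain one before matching:

import regex

s = "l\u2019avion"
normalized = s.replace("\u2019", "'")   # RIGHT SINGLE QUOTATION MARK -> ASCII apostrophe
print(regex.findall(r'\w(?:\B\S)*', normalized, flags=regex.V1 | regex.WORD))
# ["l'", 'avion'], matching the behaviour shown for the plain apostrophe above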
Imported a Python module; why does reassigning a member in it also affect an import elsewhere?
I am seeing Python behavior that I don't understand. Consider this layout: project | main.py | test1.py | test2.py | config.py main.py: import config as conf import test1 import test2 print(conf.test_var) test1.test1() print(conf.test_var) test2.test2() test1.py: import config as conf def test1(): conf.test_var = 'test1' test2.py: import config as conf def test2(): print(conf.test_var) config.py: test_var = 'initial_value' so, python main.py produce: initial_value test1 test1 I am confused by the last line. I thought that it would print initial_value again because I'm importing config.py in test2.py again, and I thought that changes that I've made in the previous step would be overwritten. Am I misunderstanding something?
Python caches imported modules. The second import call doesn't reload the file; test1 and test2 both get the same config module object from the cache (sys.modules), so the reassignment of test_var made via test1 is also visible through test2's reference.
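A small sketch that makes the caching visible (it could be appended to main.py): there is exactly one config module object, shared by every importer:

import sys
import test1
import test2
import config as conf

print(test1.conf is test2.conf)        # True - both names point at the same module object
print(conf is sys.modules['config'])   # True - subsequent imports are served from this cache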
How to get a python script to invoke "python -i" when called normally?
I have a python script that I like to run with python -i script.py, which runs the script and then enters interactive mode so that I can play around with the results. Is it possible to have the script itself invoke this option, such that I can just run python script.py and the script will enter interactive mode after running? Of course, I can simply add the -i, or if that is too much effort, I can write a shell script to invoke this.
From within script.py, set the PYTHONINSPECT environment variable to any nonempty string. Python will recheck this environment variable at the end of the program and enter interactive mode. import os # This can be placed at top or bottom of the script, unlike code.interact os.environ['PYTHONINSPECT'] = 'TRUE'
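For comparison, here is the code.interact alternative that the answer alludes to - a sketch that must be placed at the exact point where the interactive session should begin (unlike the environment-variable trick, which can go anywhere):

import code

# ... the script's real work runs first ...
code.interact(local=dict(globals(), **locals()))  # opens a REPL with the script's names available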
interactive conditional histogram bucket slicing data visualization
I have a df that looks like: df.head() Out[1]: A B C city0 40 12 73 city1 65 56 10 city2 77 58 71 city3 89 53 49 city4 33 98 90 An example df can be created by the following code: df = pd.DataFrame(np.random.randint(100,size=(1000000,3)), columns=list('ABC')) indx = ['city'+str(x) for x in range(0,1000000)] df.index = indx What I want to do is: a) determine appropriate histogram bucket lengths for column A and assign each city to a bucket for column A b) determine appropriate histogram bucket lengths for column B and assign each city to a bucket for column B Maybe the resulting df looks like (or is there a better built in way in pandas?) df.head() Out[1]: A B C Abkt Bbkt city0 40 12 73 2 1 city1 65 56 10 4 3 city2 77 58 71 4 3 city3 89 53 49 5 3 city4 33 98 90 2 5 Where Abkt and Bbkt are histogram bucket identifiers: 1-20 = 1 21-40 = 2 41-60 = 3 61-80 = 4 81-100 = 5 Ultimately, I want to better understand the behavior of each city with respect to columns A, B and C and be able to answer questions like: a) What does the distribution of Column A (or B) look like - i.e. what buckets are most/least populated. b) Conditional on a particular slice/bucket of Column A, what does the distribution of Column B look like - i.e. what buckets are most/least populated. c) Conditional on a particular slice/bucket of Column A and B, what does the behavior of C look like. Ideally, I want to be able to visualize the data (heat maps, region identifiers etc). I'm a relative pandas/python newbie and don't know what is possible to develop. If the SO community can kindly provide code examples of how I can do what I want (or a better approach if there are better pandas/numpy/scipy built in methods) I would be grateful. As well, any pointers to resources that can help me better summarize/slice/dice my data and be able to visualize at intermediate steps as I proceed with my analysis. UPDATE: I am following some of the suggestions in the comments. I tried: 1) df.hist() ValueError: The first argument of bincount must be non-negative 2) df[['A']].hist(bins=10,range=(0,10)) array([[<matplotlib.axes._subplots.AxesSubplot object at 0x000000A2350615C0>]], dtype=object) Isn't #2 suppose to show a plot? instead of producing an object that is not rendered? I am using jupyter notebook. Is there something I need to turn-on / enable in Jupyter Notebook to render the histogram objects? UPDATE2: I solved the rendering problem by: in Ipython notebook, Pandas is not displying the graph I try to plot. UPDATE3: As per suggestions from the comments, I started looking through pandas visualization, bokeh and seaborn. However, I'm not sure how I can create linkages between plots. Lets say I have 10 variables. I want to explore them but since 10 is a large number to explore at once, lets say I want to explore 5 at any given time (r,s,t,u,v). If I want an interactive hexbin with marginal distributions plot to examine the relationship between r & s, how do I also see the distribution of t, u and v given interactive region selections/slices of r&s (polygons). I found hexbin with marginal distribution plot here hexbin plot: But: 1) How to make this interactive (allow selections of polygons) 2) How to link region selections of r & s to other plots, for example 3 histogram plots of t,u, and v (or any other type of plot). This way, I can navigate through the data more rigorously and explore the relationships in depth.
In order to get the interaction effect you're looking for, you must bin all the columns you care about, together. The cleanest way I can think of doing this is to stack into a single series then use pd.cut Considering your sample df df_ = pd.cut(df[['A', 'B']].stack(), 5, labels=list(range(5))).unstack() df_.columns = df_.columns.to_series() + 'bkt' pd.concat([df, df_], axis=1) Let's build a better example and look at a visualization using seaborn df = pd.DataFrame(dict(A=(np.random.randn(10000) * 100 + 20).astype(int), B=(np.random.randn(10000) * 100 - 20).astype(int))) import seaborn as sns df.index = df.index.to_series().astype(str).radd('city') df_ = pd.cut(df[['A', 'B']].stack(), 30, labels=list(range(30))).unstack() df_.columns = df_.columns.to_series() + 'bkt' sns.jointplot(x=df_.Abkt, y=df_.Bbkt, kind="scatter", color="k") Or how about some data with some correlation mean, cov = [0, 1], [(1, .5), (.5, 1)] data = np.random.multivariate_normal(mean, cov, 100000) df = pd.DataFrame(data, columns=["A", "B"]) df.index = df.index.to_series().astype(str).radd('city') df_ = pd.cut(df[['A', 'B']].stack(), 30, labels=list(range(30))).unstack() df_.columns = df_.columns.to_series() + 'bkt' sns.jointplot(x=df_.Abkt, y=df_.Bbkt, kind="scatter", color="k") Interactive bokeh Without getting too complicated from bokeh.io import show, output_notebook, output_file from bokeh.plotting import figure from bokeh.layouts import row, column from bokeh.models import ColumnDataSource, Select, CustomJS output_notebook() # generate random data flips = np.random.choice((1, -1), (5, 5)) flips = np.tril(flips, -1) + np.triu(flips, 1) + np.eye(flips.shape[0]) half = np.ones((5, 5)) / 2 cov = (half + np.diag(np.diag(half))) * flips mean = np.zeros(5) data = np.random.multivariate_normal(mean, cov, 10000) df = pd.DataFrame(data, columns=list('ABCDE')) df.index = df.index.to_series().astype(str).radd('city') # Stack and cut to get dependent relationships b = 20 df_ = pd.cut(df.stack(), b, labels=list(range(b))).unstack() # assign default columns x and y. These will be the columns I set bokeh to read df_[['x', 'y']] = df_.loc[:, ['A', 'B']] source = ColumnDataSource(data=df_) tools = 'box_select,pan,box_zoom,wheel_zoom,reset,resize,save' p = figure(plot_width=600, plot_height=300) p.circle('x', 'y', source=source, fill_color='olive', line_color='black', alpha=.5) def gcb(like, n): code = """ var data = source.get('data'); var f = cb_obj.get('value'); data['{0}{1}'] = data[f]; source.trigger('change'); """ return CustomJS(args=dict(source=source), code=code.format(like, n)) xcb = CustomJS( args=dict(source=source), code=""" var data = source.get('data'); var colm = cb_obj.get('value'); data['x'] = data[colm]; source.trigger('change'); """ ) ycb = CustomJS( args=dict(source=source), code=""" var data = source.get('data'); var colm = cb_obj.get('value'); data['y'] = data[colm]; source.trigger('change'); """ ) options = list('ABCDE') x_select = Select(options=options, callback=xcb, value='A') y_select = Select(options=options, callback=ycb, value='B') show(column(p, row(x_select, y_select)))
Better way to swap elements in a list?
I have a bunch of lists that look like this one: l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] I want to swap elements as follows: final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] The size of the lists may vary, but they will always contain an even number of elements. I'm fairly new to Python and am currently doing it like this: l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) I know this isn't really Pythonic and would like to use something more efficient. Maybe a list comprehension?
No need for complicated logic, simply rearrange the list with slicing and step: In [1]: l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] In [2]: l[::2], l[1::2] = l[1::2], l[::2] In [3]: l Out[3]: [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]  TLDR; Edited with explanation I believe most viewers are already familiar with list slicing and multiple assignment. In case you don't I will try my best to explain what's going on (hope I do not make it worse). To understand list slicing, here already has an excellent answer and explanation of list slice notation. Simply put: a[start:end] # items start through end-1 a[start:] # items start through the rest of the array a[:end] # items from the beginning through end-1 a[:] # a copy of the whole array There is also the step value, which can be used with any of the above: a[start:end:step] # start through not past end, by step Let's look at OP's requirements: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # list l ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ 0 1 2 3 4 5 6 7 8 9 # respective index of the elements l[0] l[2] l[4] l[6] l[8] # first tier : start=0, step=2 l[1] l[3] l[5] l[7] l[9] # second tier: start=1, step=2 ----------------------------------------------------------------------- l[1] l[3] l[5] l[7] l[9] l[0] l[2] l[4] l[6] l[8] # desired output First tier will be: l[::2] = [1, 3, 5, 7, 9] Second tier will be: l[1::2] = [2, 4, 6, 8, 10] As we want to re-assign first = second & second = first, we can use multiple assignment, and update the original list in place: first , second = second , first that is: l[::2], l[1::2] = l[1::2], l[::2] As a side note, to get a new list but not altering original l, we can assign a new list from l, and perform above, that is: n = l[:] # assign n as a copy of l (without [:], n still points to l) n[::2], n[1::2] = n[1::2], n[::2] Hopefully I do not confuse any of you with this added explanation. If it does, please help update mine and make it better :-)
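Since the question explicitly asks about list comprehensions: a possible non-in-place variant is to zip the two slices back together and flatten, which leaves the original list untouched:

l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
final_l = [x for pair in zip(l[1::2], l[::2]) for x in pair]
print(final_l)  # [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]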
Can a line of Python code know its indentation nesting level?
From something like this: print(get_indentation_level()) print(get_indentation_level()) print(get_indentation_level()) I would like to get something like this: 1 2 3 Can the code read itself in this way? All I want is the output from the more nested parts of the code to be more nested. In the same way that this makes code easier to read, it would make the output easier to read. Of course I could implement this manually, using e.g. .format(), but what I had in mind was a custom print function which would print(i*' ' + string) where i is the indentation level. This would be a quick way to make readable output on my terminal. Is there a better way to do this which avoids painstaking manual formatting?
If you want indentation in terms of nesting level rather than spaces and tabs, things get tricky. For example, in the following code: if True: print( get_nesting_level()) the call to get_nesting_level is actually nested one level deep, despite the fact that there is no leading whitespace on the line of the get_nesting_level call. Meanwhile, in the following code: print(1, 2, get_nesting_level()) the call to get_nesting_level is nested zero levels deep, despite the presence of leading whitespace on its line. In the following code: if True: if True: print(get_nesting_level()) if True: print(get_nesting_level()) the two calls to get_nesting_level are at different nesting levels, despite the fact that the leading whitespace is identical. In the following code: if True: print(get_nesting_level()) is that nested zero levels, or one? In terms of INDENT and DEDENT tokens in the formal grammar, it's zero levels deep, but you might not feel the same way. If you want to do this, you're going to have to tokenize the whole file up to the point of the call and count INDENT and DEDENT tokens. The tokenize module would be very useful for such a function: import inspect import tokenize def get_nesting_level(): caller_frame = inspect.currentframe().f_back filename, caller_lineno, _, _, _ = inspect.getframeinfo(caller_frame) with open(filename) as f: indentation_level = 0 for token_record in tokenize.generate_tokens(f.readline): token_type, _, (token_lineno, _), _, _ = token_record if token_lineno > caller_lineno: break elif token_type == tokenize.INDENT: indentation_level += 1 elif token_type == tokenize.DEDENT: indentation_level -= 1 return indentation_level
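A quick usage sketch, assuming the get_nesting_level function above is saved in the same script file (it tokenizes the caller's source file, so it will not work from a bare interactive prompt):

# nesting_demo.py (hypothetical file name, with the function defined above it)
if True:
    if True:
        print(get_nesting_level())  # 2 - two INDENT tokens precede this line

print(get_nesting_level())          # 0 - the DEDENTs have balanced them out again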
Find all n-dimensional lines and diagonals with NumPy
Using NumPy, I would like to produce a list of all lines and diagonals of an n-dimensional array with lengths of k. Take the case of the following three-dimensional array with lengths of three. array([[[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8]], [[ 9, 10, 11], [12, 13, 14], [15, 16, 17]], [[18, 19, 20], [21, 22, 23], [24, 25, 26]]]) For this case, I would like to obtain all of the following types of sequences. For any given case, I would like to obtain all of the possible sequences of each type. Examples of desired sequences are given in parentheses below, for each case. 1D lines x axis (0, 1, 2) y axis (0, 3, 6) z axis (0, 9, 18) 2D diagonals x/y axes (0, 4, 8, 2, 4, 6) x/z axes (0, 10, 20, 2, 10, 18) y/z axes (0, 12, 24, 6, 12, 18) 3D diagonals x/y/z axes (0, 13, 26, 2, 13, 24) The solution should be generalized, so that it will generate all lines and diagonals for an array, regardless of the array's number of dimensions or length (which is constant across all dimensions).
This solution generalized over n Lets rephrase this problem as "find the list of indices". We're looking for all of the 2d index arrays of the form array[i[0], i[1], i[2], ..., i[n-1]] Let n = arr.ndim Where i is an array of shape (n, k) Each of i[j] can be one of: The same index repeated n times, ri[j] = [j, ..., j] The forward sequence, fi = [0, 1, ..., k-1] The backward sequence, bi = [k-1, ..., 1, 0] With the requirements that each sequence is of the form ^(ri)*(fi)(fi|bi|ri)*$ (using regex to summarize it). This is because: there must be at least one fi so the "line" is not a point selected repeatedly no bis come before fis, to avoid getting reversed lines def product_slices(n): for i in range(n): yield ( np.index_exp[np.newaxis] * i + np.index_exp[:] + np.index_exp[np.newaxis] * (n - i - 1) ) def get_lines(n, k): """ Returns: index (tuple): an object suitable for advanced indexing to get all possible lines mask (ndarray): a boolean mask to apply to the result of the above """ fi = np.arange(k) bi = fi[::-1] ri = fi[:,None].repeat(k, axis=1) all_i = np.concatenate((fi[None], bi[None], ri), axis=0) # inedx which look up every possible line, some of which are not valid index = tuple(all_i[s] for s in product_slices(n)) # We incrementally allow lines that start with some number of `ri`s, and an `fi` # [0] here means we chose fi for that index # [2:] here means we chose an ri for that index mask = np.zeros((all_i.shape[0],)*n, dtype=np.bool) sl = np.index_exp[0] for i in range(n): mask[sl] = True sl = np.index_exp[2:] + sl return index, mask Applied to your example: # construct your example array n = 3 k = 3 data = np.arange(k**n).reshape((k,)*n) # apply my index_creating function index, mask = get_lines(n, k) # apply the index to your array lines = data[index][mask] print(lines) array([[ 0, 13, 26], [ 2, 13, 24], [ 0, 12, 24], [ 1, 13, 25], [ 2, 14, 26], [ 6, 13, 20], [ 8, 13, 18], [ 6, 12, 18], [ 7, 13, 19], [ 8, 14, 20], [ 0, 10, 20], [ 2, 10, 18], [ 0, 9, 18], [ 1, 10, 19], [ 2, 11, 20], [ 3, 13, 23], [ 5, 13, 21], [ 3, 12, 21], [ 4, 13, 22], [ 5, 14, 23], [ 6, 16, 26], [ 8, 16, 24], [ 6, 15, 24], [ 7, 16, 25], [ 8, 17, 26], [ 0, 4, 8], [ 2, 4, 6], [ 0, 3, 6], [ 1, 4, 7], [ 2, 5, 8], [ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 13, 17], [11, 13, 15], [ 9, 12, 15], [10, 13, 16], [11, 14, 17], [ 9, 10, 11], [12, 13, 14], [15, 16, 17], [18, 22, 26], [20, 22, 24], [18, 21, 24], [19, 22, 25], [20, 23, 26], [18, 19, 20], [21, 22, 23], [24, 25, 26]]) Another good set of test data is np.moveaxis(np.indices((k,)*n), 0, -1), which gives an array where every value is its own index I've solved this problem before to implement a higher dimensional tic-tac-toe
How to make `[System.Console]::OutputEncoding/InputEncoding` work with Python?
Under Powershell v5, Windows 8.1, Python 3. Why these fails and how to fix? [system.console]::InputEncoding = [System.Text.Encoding]::UTF8; [system.console]::OutputEncoding = [System.Text.Encoding]::UTF8; chcp; "import sys print(sys.stdout.encoding) print(sys.stdin.encoding) sys.stdout.write(sys.stdin.readline()) " | sc test.py -Encoding utf8; [char]0x0422+[char]0x0415+[char]0x0421+[char]0x0422+"`n" | py -3 test.py prints: Active code page: 65001 cp65001 cp1251 п»ї????
You are piping data into Python; at that point Python's stdin is no longer attached to a TTY (your console) and won't guess at what the encoding might be. Instead, the default system locale is used; on your system that's cp1251 (the Windows Cyrillic codepage). Set the PYTHONIOENCODING environment variable to override: PYTHONIOENCODING If this is set before running the interpreter, it overrides the encoding used for stdin/stdout/stderr, in the syntax encodingname:errorhandler. Both the encodingname and the :errorhandler parts are optional and have the same meaning as in str.encode(). PowerShell doesn't appear to support per-command-line environment variables the way UNIX shells do; the easiest is to just set the variable first: Set-Item Env:PYTHONIOENCODING "UTF-8" or even Set-Item Env:PYTHONIOENCODING "cp65001" as the Windows UTF-8 codepage is apparently not quite UTF-8 really, depending on the Windows version and on whether or not pipe redirection is used.
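On newer interpreters (Python 3.7+, so not the versions current when this was asked) there is also an in-script alternative - a sketch of reconfiguring the already-wrapped standard streams directly:

import sys

sys.stdin.reconfigure(encoding='utf-8')   # available since Python 3.7
sys.stdout.reconfigure(encoding='utf-8')
sys.stdout.write(sys.stdin.readline())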
How to subclass list and trigger an event whenever the data change?
I would like to subclass list and trigger an event (data checking) every time any change happens to the data. Here is an example subclass: class MyList(list): def __init__(self, sequence): super().__init__(sequence) self._test() def __setitem__(self, key, value): super().__setitem__(key, value) self._test() def append(self, value): super().append(value) self._test() def _test(self): """ Some kind of check on the data. """ if not self == sorted(self): raise ValueError("List not sorted.") Here, I am overriding the methods __init__, __setitem__ and append to perform the check whenever the data changes. I think this approach is undesirable, so my question is: Is there a possibility of triggering the data checking automatically if any kind of mutation happens to the underlying data structure?
As you say, this is not the best way to go about it. To correctly implement this, you'd need to know about every method that can change the list. The way to go is to implement your own list (or rather a mutable sequence). The best way to do this is to use the abstract base classes from Python which you find in the collections.abc module. You have to implement only a minimum amount of methods and the module automatically implements the rest for you. For your specific example, this would be something like this: from collections.abc import MutableSequence class MyList(MutableSequence): def __init__(self, iterable=()): self._list = list(iterable) def __getitem__(self, key): return self._list.__getitem__(key) def __setitem__(self, key, item): self._list.__setitem__(key, item) # trigger change handler def __delitem__(self, key): self._list.__delitem__(key) # trigger change handler def __len__(self): return self._list.__len__() def insert(self, index, item): self._list.insert(index, item) # trigger change handler Performance Some methods are slow in their default implementation. For example __contains__ is defined in the Sequence class as follows: def __contains__(self, value): for v in self: if v is value or v == value: return True return False Depending on your class, you might be able to implement this faster. However, performance is often less important than writing code which is easy to understand. It can also make writing a class harder, because you're then responsible for implementing the methods correctly.
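A short usage sketch showing why this works: the mixin methods you did not write (append, extend, +=, remove, pop, ...) are all implemented by MutableSequence in terms of the five methods above, so change handlers placed in __setitem__, __delitem__ and insert cover every mutation:

ml = MyList([3, 1, 2])
ml.append(4)     # mixin method, routed through insert()
ml += [5, 6]     # __iadd__ -> extend() -> append() -> insert()
ml[0] = 0        # __setitem__
del ml[1]        # __delitem__
print(list(ml))  # [0, 2, 4, 5, 6]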
Why does list(next(iter(())) for _ in range(1)) == []?
Why does list(next(iter(())) for _ in range(1)) return an empty list rather than raising StopIteration? >>> next(iter(())) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration >>> [next(iter(())) for _ in range(1)] Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration >>> list(next(iter(())) for _ in range(1)) # ?! [] The same thing happens with a custom function that explicitly raises StopIteration: >>> def x(): ... raise StopIteration ... >>> x() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in x StopIteration >>> [x() for _ in range(1)] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in x StopIteration >>> list(x() for _ in range(1)) # ?! []
Assuming all goes well, the generator expression x() for _ in range(1) should raise StopIteration when it is finished iterating over range(1), to indicate that there are no more items to pack into the list. However, because x() itself raises StopIteration, the generator ends up exiting early; this behaviour is a bug in Python that is being addressed with PEP 479. In Python 3.6, or when using from __future__ import generator_stop in Python 3.5, a StopIteration that propagates out of a generator frame is converted into a RuntimeError, so that list doesn't register it as the end of the comprehension. When this is in effect the error looks like this: Traceback (most recent call last): File "/Users/Tadhg/Documents/codes/test.py", line 6, in <genexpr> stuff = list(x() for _ in range(1)) File "/Users/Tadhg/Documents/codes/test.py", line 4, in x raise StopIteration StopIteration The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/Tadhg/Documents/codes/test.py", line 6, in <module> stuff = list(x() for _ in range(1)) RuntimeError: generator raised StopIteration
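A small sketch of opting in to the new behaviour (the future import is required on 3.5/3.6 and a no-op from 3.7 on, where this is the default), so the leaked StopIteration surfaces loudly instead of silently truncating the result:

from __future__ import generator_stop

def x():
    raise StopIteration

def gen():
    yield x()   # the StopIteration escaping here is turned into a RuntimeError

list(gen())     # RuntimeError: generator raised StopIteration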
Build 2 lists in one go while reading from file, pythonically
I'm reading a big file with hundreds of thousands of number pairs representing the edges of a graph. I want to build 2 lists as I go: one with the forward edges and one with the reversed. Currently I'm doing an explicit for loop, because I need to do some pre-processing on the lines I read. However, I'm wondering if there is a more pythonic approach to building those lists, like list comprehensions, etc. But, as I have 2 lists, I don't see a way to populate them using comprehensions without reading the file twice. My code right now is: with open('SCC.txt') as data: for line in data: line = line.rstrip() if line: edge_list.append((int(line.rstrip().split()[0]), int(line.rstrip().split()[1]))) reversed_edge_list.append((int(line.rstrip().split()[1]), int(line.rstrip().split()[0])))
I would keep your logic as it is the Pythonic approach just not split/rstrip the same line multiple times: with open('SCC.txt') as data: for line in data: spl = line.split() if spl: i, j = map(int, spl) edge_list.append((i, j)) reversed_edge_list.append((j, i)) Calling rstrip when you have already called it is redundant in itself even more so when you are splitting as that would already remove the whitespace so splitting just once means you save doing a lot of unnecessary work. You can also use csv.reader to read the data and filter empty rows once you have a single whitespace delimiting: from csv import reader with open('SCC.txt') as data: edge_list, reversed_edge_list = [], [] for i, j in filter(None, reader(data, delimiter=" ")): i, j = int(i), int(j) edge_list.append((i, j)) reversed_edge_list.append((j, i)) Or if there are multiple whitespaces delimiting you can use map(str.split, data): for i, j in filter(None, map(str.split, data)): i, j = int(i), int(j) Whatever you choose will be faster than going over the data twice or splitting the sames lines multiple times.
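As a side note (not part of the original answer), if the forward list has already been built, the reversed one can also be derived afterwards in a single comprehension, trading the second append inside the loop for one extra pass over the data:

reversed_edge_list = [(j, i) for i, j in edge_list]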
How to parse an HTML table with rowspans in Python?
The problem I'm trying to parse an HTML table with rowspans in it, as in, I'm trying to parse my college schedule. I'm running into the problem where if the last row contains a rowspan, the next row is missing a TD where the rowspan is now that TD that is missing. I have no clue how to account for this and I hope to be able to parse this schedule. What I tried Pretty much everything I can think of. The result I get [ { 'blok_eind': 4, 'blok_start': 3, 'dag': 4, # Should be 5 'leraar': 'DOODF000', 'lokaal': 'ALK C212', 'vak': 'PROJ-T', }, ] As you can see, there's a vak key with the value PROJ-T in the output snippet above, dag is 4 while it's supposed to be 5 (a.k.a Friday/Vrijdag), as seen here: The result I want A Python dict() that looks like the one posted above, but with the right value Where: day/dag is an int from 1~5 representing Monday~Friday block_start/blok_start is an int that represents when the course starts (Time block, left side of table) block_end/blok_eind is an int that represent in what block the course ends classroom/lokaal is the classroom's code the course is in teacher/leraar is the teacher's ID course/vak is the ID of the course Basic HTML Structure for above data <center> <table> <tr> <td> <table> <tbody> <tr> <td> <font> TEACHER-ID </font> </td> <td> <font> <b> CLASSROOM ID </b> </font> </td> </tr> <tr> <td> <font> COURSE ID </font> </td> </tr> </tbody> </table> </td> </tr> </table> </center> The code HTML <CENTER><font size="3" face="Arial" color="#000000"> <BR></font> <font size="6" face="Arial" color="#0000FF"> 16AO4EIO1B &nbsp;</font> <font size="4" face="Arial"> IO1B </font> <BR> <TABLE border="3" rules="all" cellpadding="1" cellspacing="1"> <TR> <TD align="center"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" nowrap=1><font size="2" face="Arial" color="#000000"> Maandag 29-08 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> Dinsdag 30-08 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> Woensdag 31-08 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> Donderdag 01-09 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> Vrijdag 02-09 </font> </TD> </TR> </TABLE> </TD> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>1</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 8:30 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 9:20 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=4 align="center" nowrap="1"> <TABLE> <TR> <TD width="50%" nowrap=1><font size="2" face="Arial"> BLEEJ002 </font> </TD> <TD width="50%" nowrap=1><font size="2" face="Arial"> <B>ALK B021</B> </font> </TD> </TR> 
<TR> <TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial"> WEBD </font> </TD> </TR> </TABLE> </TD> </TR> <TR> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>2</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 9:20 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 10:10 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=4 align="center" nowrap="1"> <TABLE> <TR> <TD width="50%" nowrap=1><font size="2" face="Arial"> BLEEJ002 </font> </TD> <TD width="50%" nowrap=1><font size="2" face="Arial"> <B>ALK B021B</B> </font> </TD> </TR> <TR> <TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial"> WEBD </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> </TR> <TR> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>3</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 10:25 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 11:15 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=4 align="center" nowrap="1"> <TABLE> <TR> <TD width="50%" nowrap=1><font size="2" face="Arial"> DOODF000 </font> </TD> <TD width="50%" nowrap=1><font size="2" face="Arial"> <B>ALK C212</B> </font> </TD> </TR> <TR> <TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial"> PROJ-T </font> </TD> </TR> </TABLE> </TD> </TR> <TR> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>4</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 11:15 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 12:05 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=4 align="center" nowrap="1"> <TABLE> <TR> <TD width="50%" nowrap=1><font size="2" face="Arial"> BLEEJ002 </font> </TD> <TD width="50%" nowrap=1><font size="2" face="Arial"> <B>ALK B021B</B> </font> </TD> </TR> <TR> <TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial"> MENT </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> </TR> <TR> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>5</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 12:05 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 12:55 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 
align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> </TR> <TR> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>6</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 12:55 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 13:45 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=4 align="center" nowrap="1"> <TABLE> <TR> <TD width="50%" nowrap=1><font size="2" face="Arial"> JONGJ003 </font> </TD> <TD width="50%" nowrap=1><font size="2" face="Arial"> <B>ALK B008</B> </font> </TD> </TR> <TR> <TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial"> BURG </font> </TD> </TR> </TABLE> </TD> </TR> <TR> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>7</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 13:45 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 14:35 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=4 align="center" nowrap="1"> <TABLE> <TR> <TD width="50%" nowrap=1><font size="2" face="Arial"> FLUIP000 </font> </TD> <TD width="50%" nowrap=1><font size="2" face="Arial"> <B>ALK B004</B> </font> </TD> </TR> <TR> <TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial"> ICT algemeen Prakti </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> </TR> <TR> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>8</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 14:50 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 15:40 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=4 align="center" nowrap="1"> <TABLE> <TR> <TD width="50%" nowrap=1><font size="2" face="Arial"> KOOLE000 </font> </TD> <TD width="50%" nowrap=1><font size="2" face="Arial"> <B>ALK B008</B> </font> </TD> </TR> <TR> <TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial"> NED </font> </TD> </TR> </TABLE> </TD> </TR> <TR> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>9</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 
15:40 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 16:30 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> </TR> <TR> </TR> <TR> <TD rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial"> <B>10</B> </font> </TD> <TD align="center" nowrap=1><font size="2" face="Arial"> 16:30 </font> </TD> </TR> <TR> <TD align="center" nowrap=1><font size="2" face="Arial"> 17:20 </font> </TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> <TD colspan=12 rowspan=2 align="center" nowrap="1"> <TABLE> <TR> <TD></TD> </TR> </TABLE> </TD> </TR> <TR> </TR> </TABLE> <TABLE cellspacing="1" cellpadding="1"> <TR> <TD valign=bottom> <font size="4" face="Arial" color="#0000FF"></TR></TABLE><font size="3" face="Arial"> Periode1 29-08-2016 (35) - 04-09-2016 (35) G r u b e r &amp; P e t t e r s S o f t w a r e </font></CENTER> Python from pprint import pprint from bs4 import BeautifulSoup import requests r = requests.get("http://rooster.horizoncollege.nl/rstr/ECO/AMR/400-ECO/Roosters/36" "/c/c00025.htm") daytable = { 1: "Maandag", 2: "Dinsdag", 3: "Woensdag", 4: "Donderdag", 5: "Vrijdag" } timetable = { 1: ("8:30", "9:20"), 2: ("9:20", "10:10"), 3: ("10:25", "11:15"), 4: ("11:15", "12:05"), 5: ("12:05", "12:55"), 6: ("12:55", "13:45"), 7: ("13:45", "14:35"), 8: ("14:50", "15:40"), 9: ("15:40", "16:30"), 10: ("16:30", "17:20"), } page = BeautifulSoup(r.content, "lxml") roster = [] big_rows = 2 last_row_big = False # There are 10 blocks, each made up out of 2 TR's, run through them for block_count in range(2, 22, 2): # There are 5 days, first column is not data we want for day in range(2, 7): dayroster = { "dag": 0, "blok_start": 0, "blok_eind": 0, "lokaal": "", "leraar": "", "vak": "" } # This selector provides the classroom table_bold = page.select( "html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str( day) + ") > table > tr > td > font > b") # This selector provides the teacher's code and the course ID table = page.select( "html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str( day) + ") > table > tr > td > font") # This gets the rowspan on the current row and column rowspan = page.select( "html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str( day) + ")") try: if table or table_bold and rowspan[0].attrs.get("rowspan") == "4": last_row_big = True # Setting end of class dayroster["blok_eind"] = (block_count // 2) + 1 else: last_row_big = False # Setting end of class dayroster["blok_eind"] = (block_count // 2) except IndexError: pass if table_bold: x = table_bold[0] # Classroom ID dayroster["lokaal"] = x.contents[0] if table: iter = 0 for x in table: content = 
x.contents[0].lstrip("\r\n").rstrip("\r\n") # Cell has data if content != "": # Set start of class dayroster["blok_start"] = block_count // 2 # Set day of class dayroster["dag"] = day - 1 if iter == 0: # Teacher ID dayroster["leraar"] = content elif iter == 1: # Course ID dayroster["vak"] = content iter += 1 if table or table_bold: # Store the data roster.append(dayroster) # Remove duplicates seen = set() new_l = [] for d in roster: t = tuple(d.items()) if t not in seen: seen.add(t) new_l.append(d) pprint(new_l)
You'll have to track the rowspans on previous rows, one per column. You could do this simply by copying the integer value of a rowspan into a dictionary, and subsequent rows decrement the rowspan value until it drops to 1 (or we could store the integer value minus 1 and drop to 0 for ease of coding). Then you can adjust subsequent table counts based on preceding rowspans. Your table complicates this a little by using a default span of size 2, incrementing in steps of two, but that can easily be brought back to manageable numbers by dividing by 2. Rather than use massive CSS selectors, select just the table rows and we'll iterate over those: roster = [] rowspans = {} # track rowspanning cells # every second row in the table rows = page.select('html > body > center > table > tr')[1:21:2] for block, row in enumerate(rows, 1): # take direct child td cells, but skip the first cell: daycells = row.select('> td')[1:] rowspan_offset = 0 for daynum, daycell in enumerate(daycells, 1): # rowspan handling; if there is a rowspan here, adjust to find correct position daynum += rowspan_offset while rowspans.get(daynum, 0): rowspan_offset += 1 rowspans[daynum] -= 1 daynum += 1 # now we have a correct day number for this cell, adjusted for # rowspanning cells. # update the rowspan accounting for this cell rowspan = (int(daycell.get('rowspan', 2)) // 2) - 1 if rowspan: rowspans[daynum] = rowspan texts = daycell.select("table > tr > td > font") if texts: # class info found teacher, classroom, course = (c.get_text(strip=True) for c in texts) roster.append({ 'blok_start': block, 'blok_eind': block + rowspan, 'dag': daynum, 'leraar': teacher, 'lokaal': classroom, 'vak': course }) # days that were skipped at the end due to a rowspan while daynum < 5: daynum += 1 if rowspans.get(daynum, 0): rowspans[daynum] -= 1 This produces correct output: [{'blok_eind': 2, 'blok_start': 1, 'dag': 5, 'leraar': u'BLEEJ002', 'lokaal': u'ALK B021', 'vak': u'WEBD'}, {'blok_eind': 3, 'blok_start': 2, 'dag': 3, 'leraar': u'BLEEJ002', 'lokaal': u'ALK B021B', 'vak': u'WEBD'}, {'blok_eind': 4, 'blok_start': 3, 'dag': 5, 'leraar': u'DOODF000', 'lokaal': u'ALK C212', 'vak': u'PROJ-T'}, {'blok_eind': 5, 'blok_start': 4, 'dag': 3, 'leraar': u'BLEEJ002', 'lokaal': u'ALK B021B', 'vak': u'MENT'}, {'blok_eind': 7, 'blok_start': 6, 'dag': 5, 'leraar': u'JONGJ003', 'lokaal': u'ALK B008', 'vak': u'BURG'}, {'blok_eind': 8, 'blok_start': 7, 'dag': 3, 'leraar': u'FLUIP000', 'lokaal': u'ALK B004', 'vak': u'ICT algemeen Prakti'}, {'blok_eind': 9, 'blok_start': 8, 'dag': 5, 'leraar': u'KOOLE000', 'lokaal': u'ALK B008', 'vak': u'NED'}] Moreover, this code will continue to work even if courses span more than 2 blocks, or just one block; any rowspan size is supported.
Is there any reason for giving self a default value?
I was browsing through some code, and I noticed a line that caught my attention. The code is similar to the example below class MyClass: def __init__(self): pass def call_me(self=''): print(self) This looks like any other class that I have seen, however a str is being passed in as default value for self. If I print out self, it behaves as normal >>> MyClass().call_me() <__main__.MyClass object at 0x000002A12E7CA908> This has been bugging me and I cannot figure out why this would be used. Is there any reason to why a str instance would be passed in as a default value for self?
Not really, it's just an odd way of making it not raise an error when called via the class: MyClass.call_me() works fine since, even though nothing is implicitly passed as with instances, the default value for that argument is provided. If no default was provided, when called, this would of course raise the TypeError for args we all love. As to why he chose an empty string as the value, only he knows. Bottom line, this is more confusing than it is practical. If you need to do something similar I'd advise a simple staticmethod with a default argument to achieve a similar effect. That way you don't stump anyone reading your code (like the developer who wrote this did with you ;-): @staticmethod def call_me(a=''): print(a) If instead you need access to class attributes you could always opt for the classmethod decorator. Both of these (class and static decorators) also serve a secondary purpose of making your intent crystal clear to others reading your code.
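If class-level data is what the original author was after, a minimal classmethod sketch makes that intent explicit (the greeting attribute here is invented purely for illustration):

class MyClass:
    greeting = 'hello'  # hypothetical class attribute

    @classmethod
    def call_me(cls):
        # Works when called on the class or on an instance;
        # cls is always the class object itself.
        print(cls.greeting)

MyClass.call_me()    # hello
MyClass().call_me()  # hello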
Why is ''.join() faster than += in Python?
I'm able to find a bevy of information online (on Stack Overflow and otherwise) about how it's a very inefficient and bad practice to use + or += for concatenation in Python. I can't seem to find WHY += is so inefficient. Outside of a mention here that "it's been optimized for 20% improvement in certain cases" (still not clear what those cases are), I can't find any additional information. What is happening on a more technical level that makes ''.join() superior to other Python concatenation methods?
Let's say you have this code to build up a string from three strings: x = 'foo' x += 'bar' # 'foobar' x += 'baz' # 'foobarbaz' In this case, Python first needs to allocate and create 'foobar' before it can allocate and create 'foobarbaz'. So for each += that gets called, the entire contents of the string and whatever is getting added to it need to be copied into an entirely new memory buffer. In other words, if you have N strings to be joined, you need to allocate approximately N temporary strings and the first substring gets copied ~N times. The last substring only gets copied once, but on average, each substring gets copied ~N/2 times. With .join, Python can play a number of tricks since the intermediate strings do not need to be created. CPython figures out how much memory it needs up front and then allocates a correctly-sized buffer. Finally, it then copies each piece into the new buffer which means that each piece is only copied once. There are other viable approaches which could lead to better performance for += in some cases. E.g. if the internal string representation is actually a rope or if the runtime is actually smart enough to somehow figure out that the temporary strings are of no use to the program and optimize them away. However, CPython certainly does not do these optimizations reliably (though it may for a few corner cases) and since it is the most common implementation in use, many best-practices are based on what works well for CPython. Having a standardized set of norms also makes it easier for other implementations to focus their optimization efforts as well.
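To make the difference concrete, here is a minimal sketch of the two idioms side by side (illustrative only; actual timings depend on the interpreter and input sizes):

pieces = ['foo', 'bar', 'baz'] * 1000

# +=: each step may allocate a new string and copy everything built so far
s = ''
for piece in pieces:
    s += piece

# join: the total size is computed once and each piece is copied once
joined = ''.join(pieces)

assert s == joined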
Mimicking glib.spawn_async with Popen…
The function glib.spawn_async allows you to hook three callbacks which are called on event on stdout, stderr, and on process completion. How can I mimic the same functionality with subprocess with either threads or asyncio? I am more interested in the functionality rather than threading/asynio but an answer that contains both will earn a bounty. Here is a toy program that shows what I want to do: import glib import logging import os import gtk class MySpawn(object): def __init__(self): self._logger = logging.getLogger(self.__class__.__name__) def execute(self, cmd, on_done, on_stdout, on_stderr): self.pid, self.idin, self.idout, self.iderr = \ glib.spawn_async(cmd, flags=glib.SPAWN_DO_NOT_REAP_CHILD, standard_output=True, standard_error=True) fout = os.fdopen(self.idout, "r") ferr = os.fdopen(self.iderr, "r") glib.child_watch_add(self.pid, on_done) glib.io_add_watch(fout, glib.IO_IN, on_stdout) glib.io_add_watch(ferr, glib.IO_IN, on_stderr) return self.pid if __name__ == '__main__': logging.basicConfig(format='%(thread)d %(levelname)s: %(message)s', level=logging.DEBUG) cmd = '/usr/bin/git ls-remote https://github.com/DiffSK/configobj'.split() def on_done(pid, retval, *args): logging.info("That's all folks!…") def on_stdout(fobj, cond): """This blocks which is fine for this toy example…""" for line in fobj.readlines(): logging.info(line.strip()) return True def on_stderr(fobj, cond): """This blocks which is fine for this toy example…""" for line in fobj.readlines(): logging.error(line.strip()) return True runner = MySpawn() runner.execute(cmd, on_done, on_stdout, on_stderr) try: gtk.main() except KeyboardInterrupt: print('') I should add that since readlines() is blocking, the above will buffer all the output and send it at once. If this is not what one wants, then you have to use readline() and make sure that on end of command you finish reading all the lines you did not read before.
asyncio has subprocess_exec, there is no need to use the subprocess module at all: import asyncio class Handler(asyncio.SubprocessProtocol): def pipe_data_received(self, fd, data): # fd == 1 for stdout, and 2 for stderr print("Data from /bin/ls on fd %d: %s" % (fd, data.decode())) def pipe_connection_lost(self, fd, exc): print("Connection lost to /bin/ls") def process_exited(self): print("/bin/ls is finished.") loop = asyncio.get_event_loop() coro = loop.subprocess_exec(Handler, "/bin/ls", "/") loop.run_until_complete(coro) loop.close() With subprocess and threading, it's simple as well. You can just spawn a thread per pipe, and one to wait() for the process: import subprocess import threading class PopenWrapper(object): def __init__(self, args): self.process = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.DEVNULL) self.stdout_reader_thread = threading.Thread(target=self._reader, args=(self.process.stdout,)) self.stderr_reader_thread = threading.Thread(target=self._reader, args=(self.process.stderr,)) self.exit_watcher = threading.Thread(target=self._exit_watcher) self.stdout_reader_thread.start() self.stderr_reader_thread.start() self.exit_watcher.start() def _reader(self, fileobj): for line in fileobj: self.on_data(fileobj, line) def _exit_watcher(self): self.process.wait() self.stdout_reader_thread.join() self.stderr_reader_thread.join() self.on_exit() def on_data(self, fd, data): return NotImplementedError def on_exit(self): return NotImplementedError def join(self): self.process.wait() class LsWrapper(PopenWrapper): def on_data(self, fd, data): print("Received on fd %r: %s" % (fd, data)) def on_exit(self): print("Process exited.") LsWrapper(["/bin/ls", "/"]).join() However, mind that glib does not use threads to asynchroneously execute your callbacks. It uses an event loop, just as asyncio does. The idea is that at the core of your program is a loop that waits until something happens, and then synchronously executes an associated callback. In your case, that's "data becomes available for reading on one of the pipes", and "the subprocess has exited". In general, its also stuff like "the X11-server reported mouse movement", "there's incoming network traffic", etc. You can emulate glib's behaviour by writing your own event loop. Use the select module on the two pipes. If select reports that the pipes are readable, but read returns no data, the process likely exited - call the poll() method on the subprocess object in this case to check whether it is completed, and call your exit callback if it has, or an error callback elsewise.
Why does Python's set difference method take time with an empty set?
Here is what I mean: > python -m timeit "set().difference(xrange(0,10))" 1000000 loops, best of 3: 0.624 usec per loop > python -m timeit "set().difference(xrange(0,10**4))" 10000 loops, best of 3: 170 usec per loop Apparently python iterates through the whole argument, even if the result is known to be the empty set beforehand. Is there any good reason for this? The code was run in python 2.7.6. (Even for nonempty sets, if you find that you've removed all of the first set's elements midway through the iteration, it makes sense to stop right away.)
IMO it's a matter of specialisation, consider: In [18]: r = range(10 ** 4) In [19]: s = set(range(10 ** 4)) In [20]: %time set().difference(r) CPU times: user 387 µs, sys: 0 ns, total: 387 µs Wall time: 394 µs Out[20]: set() In [21]: %time set().difference(s) CPU times: user 10 µs, sys: 8 µs, total: 18 µs Wall time: 16.2 µs Out[21]: set() Apparently difference has a specialised implementation for set - set. Note that the - operator requires the right-hand argument to be a set, while the difference method allows any iterable. Per @wim, the implementation is at https://github.com/python/cpython/blob/master/Objects/setobject.c#L1553-L1555
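A quick check of that operator/method distinction (a small sketch, separate from the timings above):

s = set()

s.difference(range(10))   # fine: the method accepts any iterable
s - set(range(10))        # fine: the operator with a set on the right

try:
    s - range(10)         # the operator rejects a non-set right-hand side
except TypeError as exc:
    print(exc)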
How do I make a custom model Field call to_python when the field is accessed immediately after initialization (not loaded from DB) in Django >=1.10?
After upgrading from Django 1.9 to 1.10, I've experienced a change in behaviour with a field provided by the django-geoposition package. This is the change that was made for 1.10 compatibility that broke the behaviour: https://github.com/philippbosch/django-geoposition/commit/689ff1651a858d81b2d82ac02625aae8a125b9c9 Previously, if you initialized a model with a GeopositionField, and then immediately accessed that field, you would get back a Geoposition object. Now you just get back the string value that you provided at initialization. How do you achieve the same behaviour with Django 1.10? Is there another method like from_db_value that needs to be overridden to call to_python?
After lots of digging it turns out that in 1.8 the behaviour of custom fields was changed in such a way that to_python is no longer called on assignment to a field. https://docs.djangoproject.com/en/1.10/releases/1.8/#subfieldbase The new approach doesn’t call the to_python() method on assignment as was the case with SubfieldBase. If you need that behavior, reimplement the Creator class from Django’s source code in your project. Here's a Django ticket with some more discussion on this change: https://code.djangoproject.com/ticket/26807 So in order to retain the old behaviour you need to do something like this: class CastOnAssignDescriptor(object): """ A property descriptor which ensures that `field.to_python()` is called on _every_ assignment to the field. This used to be provided by the `django.db.models.subclassing.Creator` class, which in turn was used by the deprecated-in-Django-1.10 `SubfieldBase` class, hence the reimplementation here. """ def __init__(self, field): self.field = field def __get__(self, obj, type=None): if obj is None: return self return obj.__dict__[self.field.name] def __set__(self, obj, value): obj.__dict__[self.field.name] = self.field.to_python(value) And then add this to the custom field: def contribute_to_class(self, cls, name): super(MyField, self).contribute_to_class(cls, name) setattr(cls, name, CastOnAssignDescriptor(self)) Solution was taken from this pull request: https://github.com/hzdg/django-enumfields/pull/61
When should I use list.count(0), and how do I discount the "False" item?
a.count(0) always returns 11, so what should I do to discount the False and return 10? a = ["a",0,0,"b",None,"c","d",0,1,False,0,1,0,3,[],0,1,9,0,0,{},0,0,9]
Python 2.x interprets False as 0 and vice versa. AFAIK even None and "" can be considered False in conditions. Redefine the count as follows: sum(1 for item in a if item == 0 and type(item) == int) or (thanks to Kevin and Bakuriu for their comments): sum(1 for item in a if item == 0 and type(item) is type(0)) or, as suggested by ozgur in the comments (which is not recommended and is considered wrong, see this), simply: sum(1 for item in a if item is 0) It may work for small integers, which CPython caches (the “is” operator behaves unexpectedly with integers in general), but if your list contains objects, please consider what the is operator does: From the documentation for the is operator: The operators is and is not test for object identity: x is y is true if and only if x and y are the same object. More information about the is operator: Understanding Python's "is" operator
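Another common way to phrase the same filter relies on the fact that bool is a subclass of int, so excluding bool explicitly keeps plain integer zeros (a sketch equivalent to the type checks above):

a = ["a", 0, 0, "b", None, "c", "d", 0, 1, False, 0, 1, 0, 3, [], 0, 1, 9, 0, 0, {}, 0, 0, 9]

# Counts values equal to 0 while skipping False, which is a bool
print(sum(1 for item in a if item == 0 and not isinstance(item, bool)))  # 10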
How to traverse cyclic directed graphs with modified DFS algorithm
OVERVIEW I'm trying to figure out how to traverse directed cyclic graphs using some sort of DFS iterative algorithm. Here's a little mcve version of what I currently got implemented (it doesn't deal with cycles): class Node(object): def __init__(self, name): self.name = name def start(self): print '{}_start'.format(self) def middle(self): print '{}_middle'.format(self) def end(self): print '{}_end'.format(self) def __str__(self): return "{0}".format(self.name) class NodeRepeat(Node): def __init__(self, name, num_repeats=1): super(NodeRepeat, self).__init__(name) self.num_repeats = num_repeats def dfs(graph, start): """Traverse graph from start node using DFS with reversed childs""" visited = {} stack = [(start, "")] while stack: # To convert dfs -> bfs # a) rename stack to queue # b) pop becomes pop(0) node, parent = stack.pop() if parent is None: if visited[node] < 3: node.end() visited[node] = 3 elif node not in visited: if visited.get(parent) == 2: parent.middle() elif visited.get(parent) == 1: visited[parent] = 2 node.start() visited[node] = 1 stack.append((node, None)) # Maybe you want a different order, if it's so, don't use reversed childs = reversed(graph.get(node, [])) for child in childs: if child not in visited: stack.append((child, node)) if __name__ == "__main__": Sequence1 = Node('Sequence1') MtxPushPop1 = Node('MtxPushPop1') Rotate1 = Node('Rotate1') Repeat1 = NodeRepeat('Repeat1', num_repeats=2) Sequence2 = Node('Sequence2') MtxPushPop2 = Node('MtxPushPop2') Translate = Node('Translate') Rotate2 = Node('Rotate2') Rotate3 = Node('Rotate3') Scale = Node('Scale') Repeat2 = NodeRepeat('Repeat2', num_repeats=3) Mesh = Node('Mesh') cyclic_graph = { Sequence1: [MtxPushPop1, Rotate1], MtxPushPop1: [Sequence2], Rotate1: [Repeat1], Sequence2: [MtxPushPop2, Translate], Repeat1: [Sequence1], MtxPushPop2: [Rotate2], Translate: [Rotate3], Rotate2: [Scale], Rotate3: [Repeat2], Scale: [Mesh], Repeat2: [Sequence2] } dfs(cyclic_graph, Sequence1) print '-'*80 a = Node('a') b = Node('b') dfs({ a : [b], b : [a] }, a) The above code is testing a couple of cases, the first would be some sort of representation of the below graph: The second one is the simplest case of one graph containing one "infinite" loop {a->b, b->a} REQUIREMENTS There won't exist such a thing like "infinite cycles", let's say when one "infinite cycle" is found, there will be a maximum threshold (global var) to indicate when to stop looping around those "pseudo-infinite cycles" All graph nodes are able to create cycles but there will exist a special node called Repeat where you can indicate how many iterations to loop around the cycle The above mcve I've posted is an iterative version of the traversal algorithm which doesn't know how to deal with cyclic graphs. Ideally the solution would be also iterative but if there exists a much better recursive solution, that'd be great The data structure we're talking about here shouldn't be called "directed acyclic graphs" really because in this case, each node has its children ordered, and in graphs node connections have no order. Everything can be connected to anything in the editor. You'll be able to execute any block combination and the only limitation is the execution counter, which will overflow if you made neverending loop or too many iterations. The algorithm will preserve start/middle/after node's method execution similarly than the above snippet QUESTION Could anyone provide some sort of solution which knows how to traverse infinite/finite cycles? 
REFERENCES If the question is not clear at this point, you can read more about this problem in this article; the whole idea is to use the traversal algorithm to implement a tool similar to the one shown in that article. Here's a screenshot showing the whole power of this type of data structure I want to figure out how to traverse & run:
Before I start, Run the code on CodeSkulptor! I also hope that the comments elaborate what I have done enough. If you need more explanation, look at my explanation of the recursive approach below the code. # If you don't want global variables, remove the indentation procedures indent = -1 MAX_THRESHOLD = 10 INF = 1 << 63 def whitespace(): global indent return '| ' * (indent) class Node: def __init__(self, name, num_repeats=INF): self.name = name self.num_repeats = num_repeats def start(self): global indent if self.name.find('Sequence') != -1: print whitespace() indent += 1 print whitespace() + '%s_start' % self.name def middle(self): print whitespace() + '%s_middle' % self.name def end(self): global indent print whitespace() + '%s_end' % self.name if self.name.find('Sequence') != -1: indent -= 1 print whitespace() def dfs(graph, start): visits = {} frontier = [] # The stack that keeps track of nodes to visit # Whenever we "visit" a node, increase its visit count frontier.append((start, start.num_repeats)) visits[start] = visits.get(start, 0) + 1 while frontier: # parent_repeat_count usually contains vertex.repeat_count # But, it may contain a higher value if a repeat node is its ancestor vertex, parent_repeat_count = frontier.pop() # Special case which signifies the end if parent_repeat_count == -1: vertex.end() # We're done with this vertex, clear visits so that # if any other node calls us, we're still able to be called visits[vertex] = 0 continue # Special case which signifies the middle if parent_repeat_count == -2: vertex.middle() continue # Send the start message vertex.start() # Add the node's end state to the stack first # So that it is executed last frontier.append((vertex, -1)) # No more children, continue # Because of the above line, the end method will # still be executed if vertex not in graph: continue ## Uncomment the following line if you want to go left to right neighbor #### graph[vertex].reverse() for i, neighbor in enumerate(graph[vertex]): # The repeat count should propagate amongst neighbors # That is if the parent had a higher repeat count, use that instead repeat_count = max(1, parent_repeat_count) if neighbor.num_repeats != INF: repeat_count = neighbor.num_repeats # We've gone through at least one neighbor node # Append this vertex's middle state to the stack if i >= 1: frontier.append((vertex, -2)) # If we've not visited the neighbor more times than we have to, visit it if visits.get(neighbor, 0) < MAX_THRESHOLD and visits.get(neighbor, 0) < repeat_count: frontier.append((neighbor, repeat_count)) visits[neighbor] = visits.get(neighbor, 0) + 1 def dfs_rec(graph, node, parent_repeat_count=INF, visits={}): visits[node] = visits.get(node, 0) + 1 node.start() if node not in graph: node.end() return for i, neighbor in enumerate(graph[node][::-1]): repeat_count = max(1, parent_repeat_count) if neighbor.num_repeats != INF: repeat_count = neighbor.num_repeats if i >= 1: node.middle() if visits.get(neighbor, 0) < MAX_THRESHOLD and visits.get(neighbor, 0) < repeat_count: dfs_rec(graph, neighbor, repeat_count, visits) node.end() visits[node] = 0 Sequence1 = Node('Sequence1') MtxPushPop1 = Node('MtxPushPop1') Rotate1 = Node('Rotate1') Repeat1 = Node('Repeat1', 2) Sequence2 = Node('Sequence2') MtxPushPop2 = Node('MtxPushPop2') Translate = Node('Translate') Rotate2 = Node('Rotate2') Rotate3 = Node('Rotate3') Scale = Node('Scale') Repeat2 = Node('Repeat2', 3) Mesh = Node('Mesh') cyclic_graph = { Sequence1: [MtxPushPop1, Rotate1], MtxPushPop1: [Sequence2], Rotate1: [Repeat1], 
Sequence2: [MtxPushPop2, Translate], Repeat1: [Sequence1], MtxPushPop2: [Rotate2], Translate: [Rotate3], Rotate2: [Scale], Rotate3: [Repeat2], Scale: [Mesh], Repeat2: [Sequence2] } dfs(cyclic_graph, Sequence1) print '-'*40 dfs_rec(cyclic_graph, Sequence1) print '-'*40 dfs({Sequence1: [Translate], Translate: [Sequence1]}, Sequence1) print '-'*40 dfs_rec({Sequence1: [Translate], Translate: [Sequence1]}, Sequence1) The input and (well formatted and indented) output can be found here. If you want to see how I formatted the output, please refer to the code, which can also be found on CodeSkulptor. Right, on to the explanation. The easier to understand but much more inefficient recursive solution, which I'll use to help explain, follows: def dfs_rec(graph, node, parent_repeat_count=INF, visits={}): visits[node] = visits.get(node, 0) + 1 node.start() if node not in graph: node.end() return for i, neighbor in enumerate(graph[node][::-1]): repeat_count = max(1, parent_repeat_count) if neighbor.num_repeats != INF: repeat_count = neighbor.num_repeats if i >= 1: node.middle() if visits.get(neighbor, 0) < MAX_THRESHOLD and visits.get(neighbor, 0) < repeat_count: dfs_rec(graph, neighbor, repeat_count, visits) node.end() visits[node] = 0 The first thing we do is visit the node. We do this by incrementing the number of visits of the node in the dictionary. We then raise the start event of the node. We do a simple check to see if the node is a childless (leaf) node or not. If it is, we raise the end event and return. Now that we've established that the node has neighbors, we iterate through each neighbor. Side Note: I reverse the neighbor list (by using graph[node][::-1]) in the recursive version to maintain the same order (right to left) of traversal of neighbors as in the iterative version. For each neighbor, we first calculate the repeat count. The repeat count propagates (is inherited) through from the ancestor nodes, so the inherited repeat count is used unless the neighbor contains a repeat count value. We raise the middle event of the current node (not the neighbor) if the second (or greater) neighbor is being processed. If the neighbor can be visited, the neighbor is visited. The visitability check is done by checking whether the neighbor has been visited less than a) MAX_THRESHOLD times (for pseudo-infinite cycles) and b) the above calculated repeat count times. We're now done with this node; raise the end event and clear its visits in the hashtable. This is done so that if some other node calls it again, it does not fail the visitability check and/or execute for less than the required number of times.
What's the closest I can get to calling a Python function using a different Python version?
Say I have two files: # spam.py import library_Python3_only as l3 def spam(x, y): return l3.bar(x).baz(y) and # beans.py import library_Python2_only as l2 ... Now suppose I wish to call spam from within beans. It's not directly possible since both files depend on incompatible Python versions. Of course I can Popen a different python process, but how could I pass in the arguments and retrieve the results without too much stream-parsing pain?
Here is a complete example implementation using subprocess and pickle that I actually tested. Note that you need to use protocol version 2 explicitly for pickling on the Python 3 side (at least for the combo Python 3.5.2 and Python 2.7.3). # py3bridge.py import sys import pickle import importlib import io import traceback import subprocess class Py3Wrapper(object): def __init__(self, mod_name, func_name): self.mod_name = mod_name self.func_name = func_name def __call__(self, *args, **kwargs): p = subprocess.Popen(['python3', '-m', 'py3bridge', self.mod_name, self.func_name], stdin=subprocess.PIPE, stdout=subprocess.PIPE) stdout, _ = p.communicate(pickle.dumps((args, kwargs))) data = pickle.loads(stdout) if data['success']: return data['result'] else: raise Exception(data['stacktrace']) def main(): try: target_module = sys.argv[1] target_function = sys.argv[2] args, kwargs = pickle.load(sys.stdin.buffer) mod = importlib.import_module(target_module) func = getattr(mod, target_function) result = func(*args, **kwargs) data = dict(success=True, result=result) except Exception: st = io.StringIO() traceback.print_exc(file=st) data = dict(success=False, stacktrace=st.getvalue()) pickle.dump(data, sys.stdout.buffer, 2) if __name__ == '__main__': main() The Python 3 module (using the pathlib module for the showcase) # spam.py import pathlib def listdir(p): return [str(c) for c in pathlib.Path(p).iterdir()] The Python 2 module using spam.listdir # beans.py import py3bridge delegate = py3bridge.Py3Wrapper('spam', 'listdir') py3result = delegate('.') print py3result
How to break conversation data into pairs of (Context , Response)
I'm using Gensim Doc2Vec model, trying to cluster portions of a customer support conversations. My goal is to give the support team an auto response suggestions. Figure 1: shows a sample conversations where the user question is answered in the next conversation line, making it easy to extract the data: during the conversation "hello" and "Our offices are located in NYC" should be suggested Figure 2: describes a conversation where the questions and answers are not in sync during the conversation "hello" and "Our offices are located in NYC" should be suggested Figure 3: describes a conversation where the context for the answer is built over time, and for classification purpose (I'm assuming) some of the lines are redundant. during the conversation "here is a link for the free trial account" should be suggested I have the following data per conversation line (simplified): who wrote the line (user or agent), text, time stamp I'm using the following code to train my model: from gensim.models import Doc2Vec from gensim.models.doc2vec import TaggedLineDocument import datetime print('Creating documents',datetime.datetime.now().time()) context = TaggedLineDocument('./test_data/context.csv') print('Building model',datetime.datetime.now().time()) model = Doc2Vec(context,size = 200, window = 10, min_count = 10, workers=4) print('Training...',datetime.datetime.now().time()) for epoch in range(10): print('Run number :',epoch) model.train(context) model.save('./test_data/model') Q: How should I structure my training data and what heuristics could be applied in order to extract it from the raw data?
To train a model I would start by concatenating consecutive sequences of messages. What I would do is, using the timestamps, concatenate the messages without any message in between from the other entity. For instance: Hello I have a problem I cannot install software X Hi What error do you get? would be: Hello I have a problem I cannot install software X Hi What error do you get? Then I would train a model with sentences in that format. I would do that because I am assuming that the conversations have a "single topic" all the time between interactions from the entities. And in that scenario suggesting a single message Hi What error do you get? would be totally fine. Also, take a look at the data. If the questions from the users are usually single-sentenced (as in the examples) sentence detection could help a lot. In that case I would apply sentence detection on the concatenated strings (nltk could be an option) and use only single-sentenced questions for training. That way you can avoid the out-of-sync problem when training the model, at the price of reducing the size of the dataset. On the other hand, I would really consider starting with a very simple method. For example you could score questions by tf-idf and, to get a suggestion, take the most similar question in your dataset with respect to some metric (e.g. cosine similarity) and suggest the answer for that question. That will perform very badly on sentences that need context information (e.g. how do you do it?) but can perform well on sentences like where are you based?. I make this last suggestion because traditional methods perform even better than complex NN methods when the dataset is small. How big is your dataset? How you train a NN method is also crucial; there are a lot of hyper-parameters, and tuning them properly can be difficult, which is why having a baseline with a simple method can help you a lot to check how well you are doing. In this other paper they compare the different hyper-parameters for doc2vec, maybe you'll find it useful. Edit: a completely different option would be to train a model to "link" questions with answers. But for that you should manually tag each question with the corresponding answer and then train a supervised learning model on that data. That could potentially generalize better, but with the added effort of manually labelling the sentences, and it still doesn't look like an easy problem to me.
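A minimal sketch of that tf-idf baseline (assuming scikit-learn is available; the questions and answers below are stand-ins for pairs extracted from your own conversations):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (question, answer) pairs extracted from past conversations
questions = ["where are you based?", "I cannot install software X"]
answers = ["Our offices are located in NYC", "What error do you get?"]

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def suggest(new_message):
    # Score the incoming message against known questions and
    # return the answer attached to the most similar one.
    scores = cosine_similarity(vectorizer.transform([new_message]), question_vectors)[0]
    return answers[scores.argmax()]

print(suggest("where is your office located?"))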
How do you organise a python project that contains multiple packages so that each file in a package can still be run individually?
TL;DR Here's an example repository that is set up as described in the first diagram (below): https://github.com/Poddster/package_problems If you could please make it look like the second diagram in terms of project organisation and can still run the following commands, then you've answered the question: $ git clone https://github.com/Poddster/package_problems.git $ cd package_problems <do your magic here> $ nosetests $ ./my_tool/my_tool.py $ ./my_tool/t.py $ ./my_tool/d.py (or for the above commands, $ cd ./my_tool/ && ./my_tool.py is also acceptable) Alternatively: Give me a different project structure that allows me to group together related files ('package'), run all of the files individually, import the files into other files in the same package, and import the packages/files into other package's files. Current situation I have a bunch of python files. Most of them are useful when callable from the command line i.e. they all use argparse and if __name__ == "__main__" to do useful things. Currently I have this directory structure, and everything is working fine: . ├── config.txt ├── docs/ │   ├── ... ├── my_tool.py ├── a.py ├── b.py ├── c.py ├── d.py ├── e.py ├── README.md ├── tests │   ├── __init__.py │   ├── a.py │   ├── b.py │   ├── c.py │   ├── d.py │   └── e.py └── resources ├── ... Some of the scripts import things from other scripts to do their work. But no script is merely a library, they are all invokable. e.g. I could invoke ./my_tool.py, ./a.by, ./b.py, ./c.py etc and they would do useful things for the user. "my_tool.py" is the main script that leverages all of the other scripts. What I want to happen However I want to change the way the project is organised. The project itself represents an entire program useable by the user, and will be distributed as such, but I know that parts of it will be useful in different projects later so I want to try and encapsulate the current files into a package. In the immediate future I will also add other packages to this same project. To facilitate this I've decided to re-organise the project to something like the following: . ├── config.txt ├── docs/ │   ├── ... ├── my_tool │   ├── __init__.py │   ├── my_tool.py │   ├── a.py │   ├── b.py │   ├── c.py │   ├── d.py │   ├── e.py │   └── tests │   ├── __init__.py │   ├── a.py │     ├── b.py │   ├── c.py │   ├── d.py │   └── e.py ├── package2 │   ├── __init__.py │   ├── my_second_package.py | ├── ... ├── README.md └── resources ├── ... However, I can't figure out an project organisation that satisfies the following criteria: All of the scripts are invokable on the command line (either as my_tool\a.py or cd my_tool && a.py) The tests actually run :) Files in package2 can do import my_tool The main problem is with the import statements used by the packages and the tests. Currently, all of the packages, including the tests, simply do import <module> and it's resolved correctly. But when jiggering things around it doesn't work. Note that supporting py2.7 is a requirement so all of the files have from __future__ import absolute_import, ... at the top. What I've tried, and the disastrous results 1 If I move the files around as shown above, but leave all of the import statements as they currently are: $ ./my_tool/*.py works and they all run properly $ nosetests run from the top directory doesn't work. The tests fail to import the packages scripts. 
pycharm highlights import statements in red when editing those files :( 2 If I then change the test scripts to do: from my_tool import x $ ./my_tool/*.py still works and they all run properly $ nosetests run from the top directory doesn't work. Then tests can import the correct scripts, but the imports in the scripts themselves fail when the test scripts import them. pycharm highlights import statements in red in the main scripts still :( 3 If I keep the same structure and change everything to be from my_tool import then: $ ./my_tool/*.py results in ImportErrors $ nosetests runs everything ok. pycharm doesn't complain about anything e.g. of 1.: Traceback (most recent call last): File "./my_tool/a.py", line 34, in <module> from my_tool import b ImportError: cannot import name b 4 I also tried from . import x but that just ends up with ValueError: Attempted relative import in non-package for the direct running of scripts. Looking at some other SO answers: I can't just use python -m pkg.tests.core_test as a) I don't have main.py. I guess I could have one? b) I want to be able to run all of the scripts, not just main? I've tried: if __name__ == '__main__' and __package__ is None: from os import sys, path sys.path.append(path.dirname(path.dirname(path.abspath(__file__)))) but it didn't help. I also tried: __package__ = "my_tool" from . import b But received: SystemError: Parent module 'loading_tool' not loaded, cannot perform relative import adding import my_tool before from . import b just ends up back with ImportError: cannot import name b Fix? What's the correct set of magical incantations and directory layout to make all of this work?
Once you move to your desired configuration, the absolute imports you are using to load the modules that are specific to my_tool no longer work. You need three modifications after you create the my_tool subdirectory and move the files into it: Create my_tool/__init__.py. (You seem to already do this but I wanted to mention it for completeness.) In the files directly under in my_tool: change the import statements to load the modules from the current package. So in my_tool.py change: import c import d import k import s to: from . import c from . import d from . import k from . import s You need to make a similar change to all your other files. (You mention having tried setting __package__ and then doing a relative import but setting __package__ is not needed.) In the files located in my_tool/tests: change the import statements that import the code you want to test to relative imports that load from one package up in the hierarchy. So in test_my_tool.py change: import my_tool to: from .. import my_tool Similarly for all the other test files. With the modifications above, I can run modules directly: $ python -m my_tool.my_tool C! D! F! V! K! T! S! my_tool! my_tool main! |main tool!||detected||tar edit!||installed||keys||LOL||ssl connect||parse ASN.1||config| $ python -m my_tool.k F! V! K! K main! |keys||LOL||ssl connect||parse ASN.1| and I can run tests: $ nosetests ........ ---------------------------------------------------------------------- Ran 8 tests in 0.006s OK Note that I can run the above both with Python 2.7 and Python 3. Rather than make the various modules under my_tool be directly executable, I suggest using a proper setup.py file to declare entry points and let setup.py create these entry points when the package is installed. Since you intend to distribute this code, you should use a setup.py to formally package it anyway. Modify the modules that can be invoked from the command line so that, taking my_tool/my_tool.py as example, instead of this: if __name__ == "__main__": print("my_tool main!") print(do_something()) You have: def main(): print("my_tool main!") print(do_something()) if __name__ == "__main__": main() Create a setup.py file that contains the proper entry_points. For instance: from setuptools import setup, find_packages setup( name="my_tool", version="0.1.0", packages=find_packages(), entry_points={ 'console_scripts': [ 'my_tool = my_tool.my_tool:main' ], }, author="", author_email="", description="Does stuff.", license="MIT", keywords=[], url="", classifiers=[ ], ) The file above instructs setup.py to create a script named my_tool that will invoke the main method in the module my_tool.my_tool. On my system, once the package is installed, there is a script located at /usr/local/bin/my_tool that invokes the main method in my_tool.my_tool. It produces the same output as running python -m my_tool.my_tool, which I've shown above.
ImportError: cannot import name 'QtCore'
I am getting the below error with the following imports. It seems to be related to pandas import. I am unsure how to debug/solve this. Imports: import pandas as pd import numpy as np import pdb, math, pickle import matplotlib.pyplot as plt Error: In [1]: %run NN.py --------------------------------------------------------------------------- ImportError Traceback (most recent call last) /home/abhishek/Desktop/submission/a1/new/NN.py in <module>() 2 import numpy as np 3 import pdb, math, pickle ----> 4 import matplotlib.pyplot as plt 5 6 class NN(object): /home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/pyplot.py in <module>() 112 113 from matplotlib.backends import pylab_setup --> 114 _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup() 115 116 _IP_REGISTERED = None /home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/__init__.py in pylab_setup() 30 # imports. 0 means only perform absolute imports. 31 backend_mod = __import__(backend_name, ---> 32 globals(),locals(),[backend_name],0) 33 34 # Things we pull in from all backends /home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/backend_qt4agg.py in <module>() 16 17 ---> 18 from .backend_qt5agg import FigureCanvasQTAggBase as _FigureCanvasQTAggBase 19 20 from .backend_agg import FigureCanvasAgg /home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/backend_qt5agg.py in <module>() 14 15 from .backend_agg import FigureCanvasAgg ---> 16 from .backend_qt5 import QtCore 17 from .backend_qt5 import QtGui 18 from .backend_qt5 import FigureManagerQT /home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/backend_qt5.py in <module>() 29 figureoptions = None 30 ---> 31 from .qt_compat import QtCore, QtGui, QtWidgets, _getSaveFileName, __version__ 32 from matplotlib.backends.qt_editor.formsubplottool import UiSubplotTool 33 /home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/qt_compat.py in <module>() 135 # have been changed in the above if block 136 if QT_API in [QT_API_PYQT, QT_API_PYQTv2]: # PyQt4 API --> 137 from PyQt4 import QtCore, QtGui 138 139 try: ImportError: cannot import name 'QtCore' Debugging: $ python -c "import PyQt4" $ python -c "from PyQt4 import QtCore" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: cannot import name 'QtCore' $ conda list | grep qt jupyter-qtconsole-colorschemes 0.7.1 <pip> pyqt 5.6.0 py35_0 qt 5.6.0 0 qtawesome 0.3.3 py35_0 qtconsole 4.2.1 py35_0 qtpy 1.0.2 py35_0 I found other answers but all related to Windows. I am using ubuntu 16.04 with anaconda distribution of python 3.
Downgrading pyqt version 5.6.0 to 4.11.4, and qt from version 5.6.0 to 4.8.7 fixes this: $ conda install pyqt=4.11.4 $ conda install qt=4.8.7 The issue itself is being resolved here: https://github.com/ContinuumIO/anaconda-issues/issues/1068
Remove the first N items that match a condition in a Python list
If I have a function matchCondition(x), how can I remove the first n items in a Python list that match that condition? One solution is to iterate over each item, mark it for deletion (e.g., by setting it to None), and then filter the list with a comprehension. This requires iterating over the list twice and mutates the data. Is there a more idiomatic or efficient way to do this? n = 3 def condition(x): return x < 5 data = [1, 10, 2, 9, 3, 8, 4, 7] out = do_remove(data, n, condition) print(out) # [10, 9, 8, 4, 7] (1, 2, and 3 are removed, 4 remains)
One way using itertools.filterfalse and itertools.count: from itertools import count, filterfalse data = [1, 10, 2, 9, 3, 8, 4, 7] output = filterfalse(lambda L, c=count(): L < 5 and next(c) < 3, data) Then list(output) gives you: [10, 9, 8, 4, 7]
Short-circuit evaluation like Python's "and" while storing results of checks
I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of and. I could nest if statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of and while also storing the results for later use? def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... This doesn't short-circuit: a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c This is messy if there are many checks: if a: b = check_b() if b: c = check_c() if c: return a, b, c Is there a shorter way to do this?
Just use a plain old for loop: results = {} for function in [check_a, check_b, ...]: results[function.__name__] = result = function() if not result: break The results will be a mapping of the function names to their return values, and you can do what you want with the values after the loop breaks. Use an else clause on the for loop if you want special handling for the case where all of the functions have returned truthy results, as in the sketch below.
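For illustration, a minimal for/else sketch (check_c is assumed to exist alongside check_a and check_b):

def run_checks():
    results = {}
    for function in [check_a, check_b, check_c]:
        results[function.__name__] = result = function()
        if not result:
            break                            # a check failed; later checks never run
    else:
        return tuple(results.values())       # only reached if no break: every check passed
    return None                              # at least one check failed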
Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't?
I know that most decimals don't have an exact floating point representation (Is floating point math broken?). But I don't see why 4*0.1 is printed nicely as 0.4, but 3*0.1 isn't, when both values actually have ugly decimal representations: >>> 3*0.1 0.30000000000000004 >>> 4*0.1 0.4 >>> from decimal import Decimal >>> Decimal(3*0.1) Decimal('0.3000000000000000444089209850062616169452667236328125') >>> Decimal(4*0.1) Decimal('0.40000000000000002220446049250313080847263336181640625')
The simple answer is because 3*0.1 != 0.3 due to quantization (roundoff) error (whereas 4*0.1 == 0.4 because multiplying by a power of two is usually an "exact" operation). You can use the .hex method in Python to view the internal representation of a number (basically, the exact binary floating point value, rather than the base-10 approximation). This can help to explain what's going on under the hood. >>> (0.1).hex() '0x1.999999999999ap-4' >>> (0.3).hex() '0x1.3333333333333p-2' >>> (0.1*3).hex() '0x1.3333333333334p-2' >>> (0.4).hex() '0x1.999999999999ap-2' >>> (0.1*4).hex() '0x1.999999999999ap-2' 0.1 is 0x1.999999999999a times 2^-4. The "a" at the end means the digit 10 - in other words, 0.1 in binary floating point is very slightly larger than the "exact" value of 0.1 (because the final 0x0.99 is rounded up to 0x0.a). When you multiply this by 4, a power of two, the exponent shifts up (from 2^-4 to 2^-2) but the number is otherwise unchanged, so 4*0.1 == 0.4. However, when you multiply by 3, the little tiny difference between 0x0.99 and 0x0.a0 (0x0.07) magnifies into a 0x0.15 error, which shows up as a one-digit error in the last position. This causes 0.1*3 to be very slightly larger than the rounded value of 0.3. Python 3's float repr is designed to be round-trippable, that is, the value shown should be exactly convertible into the original value. Therefore, it cannot display 0.3 and 0.1*3 exactly the same way, or the two different numbers would end up the same after round-tripping. Consequently, Python 3's repr engine chooses to display one with a slight apparent error.
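You can see the round-trip requirement and the underlying inequality directly in the interpreter:

>>> 0.1 * 3 == 0.3
False
>>> 0.1 * 4 == 0.4
True
>>> repr(0.1 * 3)
'0.30000000000000004'
>>> float(repr(0.1 * 3)) == 0.1 * 3   # the printed form converts back exactly
True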
Python vectorizing nested for loops
I'd appreciate some help in finding and understanding a pythonic way to optimize the following array manipulations in nested for loops: def _func(a, b, radius): "Return 0 if a>b, otherwise return 1" if distance.euclidean(a, b) < radius: return 1 else: return 0 def _make_mask(volume, roi, radius): mask = numpy.zeros(volume.shape) for x in range(volume.shape[0]): for y in range(volume.shape[1]): for z in range(volume.shape[2]): mask[x, y, z] = _func((x, y, z), roi, radius) return mask Where volume.shape (182, 218, 200) and roi.shape (3,) are both ndarray types; and radius is an int
Approach #1 Here's a vectorized approach - m,n,r = volume.shape x,y,z = np.mgrid[0:m,0:n,0:r] X = x - roi[0] Y = y - roi[1] Z = z - roi[2] mask = X**2 + Y**2 + Z**2 < radius**2 Possible improvement : We can probably speedup the last step with numexpr module - import numexpr as ne mask = ne.evaluate('X**2 + Y**2 + Z**2 < radius**2') Approach #2 We can also gradually build the three ranges corresponding to the shape parameters and perform the subtraction against the three elements of roi on the fly without actually creating the meshes as done earlier with np.mgrid. This would be benefited by the use of broadcasting for efficiency purposes. The implementation would look like this - m,n,r = volume.shape vals = ((np.arange(m)-roi[0])**2)[:,None,None] + \ ((np.arange(n)-roi[1])**2)[:,None] + ((np.arange(r)-roi[2])**2) mask = vals < radius**2 Simplified version : Thanks to @Bi Rico for suggesting an improvement here as we can use np.ogrid to perform those operations in a bit more concise manner, like so - m,n,r = volume.shape x,y,z = np.ogrid[0:m,0:n,0:r]-roi mask = (x**2+y**2+z**2) < radius**2 Runtime test Function definitions - def vectorized_app1(volume, roi, radius): m,n,r = volume.shape x,y,z = np.mgrid[0:m,0:n,0:r] X = x - roi[0] Y = y - roi[1] Z = z - roi[2] return X**2 + Y**2 + Z**2 < radius**2 def vectorized_app1_improved(volume, roi, radius): m,n,r = volume.shape x,y,z = np.mgrid[0:m,0:n,0:r] X = x - roi[0] Y = y - roi[1] Z = z - roi[2] return ne.evaluate('X**2 + Y**2 + Z**2 < radius**2') def vectorized_app2(volume, roi, radius): m,n,r = volume.shape vals = ((np.arange(m)-roi[0])**2)[:,None,None] + \ ((np.arange(n)-roi[1])**2)[:,None] + ((np.arange(r)-roi[2])**2) return vals < radius**2 def vectorized_app2_simplified(volume, roi, radius): m,n,r = volume.shape x,y,z = np.ogrid[0:m,0:n,0:r]-roi return (x**2+y**2+z**2) < radius**2 Timings - In [106]: # Setup input arrays ...: volume = np.random.rand(90,110,100) # Half of original input sizes ...: roi = np.random.rand(3) ...: radius = 3.4 ...: In [107]: %timeit _make_mask(volume, roi, radius) 1 loops, best of 3: 41.4 s per loop In [108]: %timeit vectorized_app1(volume, roi, radius) 10 loops, best of 3: 62.3 ms per loop In [109]: %timeit vectorized_app1_improved(volume, roi, radius) 10 loops, best of 3: 47 ms per loop In [110]: %timeit vectorized_app2(volume, roi, radius) 100 loops, best of 3: 4.26 ms per loop In [139]: %timeit vectorized_app2_simplified(volume, roi, radius) 100 loops, best of 3: 4.36 ms per loop So, as always broadcasting showing its magic for a crazy almost 10,000x speedup over the original code and more than 10x better than creating meshes by using on-the-fly broadcasted operations!
Not nesting version of @atomic() in Django?
From the docs of atomic(): atomic blocks can be nested This sounds like a great feature, but in my use case I want the opposite: I want the transaction to be durable as soon as the block decorated with @atomic() gets left successfully. Is there a way to ensure durability in django's transaction handling? Background Transactions are ACID. The "D" stands for durability. That's why I think transactions can't be nested without losing feature "D". Example: If the inner transaction is successful, but the outer transaction is not, then the outer and the inner transaction get rolled back. The result: The inner transaction was not durable. I use PostgreSQL, but AFAIK this should not matter much.
You can't do that through any API. Transactions can't be nested while retaining all ACID properties, and not all databases support nested transactions. Only the outermost atomic block creates a transaction. Inner atomic blocks create a savepoint inside the transaction, and release or roll back the savepoint when exiting the inner block. As such, inner atomic blocks provide atomicity, but as you noted, not e.g. durability. Since the outermost atomic block creates a transaction, it must provide atomicity, and you can't commit a nested atomic block to the database if the containing transaction is not committed. The only way to ensure that the inner block is committed, is to make sure that the code in the transaction finishes executing without any errors.
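To make the structure concrete, a small sketch (the models are hypothetical and this assumes a configured Django project):

from django.db import transaction

with transaction.atomic():          # outermost block: opens the real transaction
    Order.objects.create(total=10)  # hypothetical model
    with transaction.atomic():      # inner block: only creates a savepoint
        Payment.objects.create(amount=10)  # hypothetical model
    # Leaving the inner block releases the savepoint, but nothing is durable yet.
# Only here, when the outer block exits successfully, does the COMMIT happen
# and both writes become durable together.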
PYTHONPATH order on Ubuntu 14.04
I have two computers running Ubuntu 14.04 server (let's call them A and B). B was initially a 10.04 but it has received two upgrades to 12.04 and 14.04. I do not understand why the python path is different on the two computers. As you can see on the two paths below, the pip installation path /usr/local/lib/python2.7/dist-packages comes before the apt python packages path /usr/lib/python2.7/dist-packages on Ubuntu A, but it comes after on Ubuntu B. This leads to several problems if a python package is installed both via apt and pip. As you can see below, if both python-six apt package and six pip package are installed, they may be two different library versions. The installation of packages system is not always my choice, but might be some dependencies of other packages that are installed. This problem could probably be solved with a virtualenv, but for reasons I will not detail, I cannot use virtualenv here, and must install pip packages system-wide. Ubuntu A >>> import sys, six >>> sys.path ['', '/usr/local/bin', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/local/lib/python2.7/dist-packages/IPython/extensions'] >>> six <module 'six' from '/usr/local/lib/python2.7/dist-packages/six.pyc'> Ubuntu B >>> import sys, six >>> sys.path ['', '/usr/local/bin', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/local/lib/python2.7/dist-packages/IPython/extensions'] >>> six >>> <module 'six' from '/usr/lib/python2.7/dist-packages/six.pyc'> For both machines $PATH is the same, and $PYTHONPATH is empty. Why are those PYTHONPATHS different? How can I fix the pythonpath order in "Ubuntu B" so it will load pip packages before the system ones, in a system-wide way? Is there a apt package I should reinstall or reconfigure so the PYTHONPATH would prioritize pip packages ?
As we cannot explore into your system, I am trying to analysis your first question by illustrating how sys.path is initialized. Available references are where-does-sys-path-starts and pyco-reverse-engineering(python2.6). The sys.path comes from the following variables(in order): $PYTHONPATH (highest priority) sys.prefix-ed stdlib sys.exec_prefix-ed stdlib site-packages *.pth in site-packages (lowest priority) Now let's describe each of these variables: $PYTHONPATH, this is just a system environment variable. & 3. sys.prefix and sys.exec_prefix are determined before any python script is executed. It is actually coded in the source Module/getpath.c. The logic is like this: IF $PYTHONHOME IS set: RETURN sys.prefix AND sys.exec_prefix as $PYTHONHOME ELSE: current_dir = directory of python executable; DO: current_dir = parent(current_dir) IF FILE 'lib/pythonX.Y/os.py' EXSITS: sys.prefix = current_dir IF FILE 'lib/pythonX.Y/lib-dynload' EXSITS: sys.exec_prefix = current_dir IF current_dir IS '/': BREAK WHILE(TRUE) IF sys.prefix IS NOT SET: sys.prefix = BUILD_PREFIX IF sys.exec_prefix IS NOT SET: sys.exec_prefix = BUILD_PREFIX & 5. site-packages and *.pth are added by import of site.py. In this module you will find the docs: This will append site-specific paths to the module search path. On Unix (including Mac OSX), it starts with sys.prefix and sys.exec_prefix (if different) and appends lib/python/site-packages as well as lib/site-python. ... ... For Debian and derivatives, this sys.path is augmented with directories for packages distributed within the distribution. Local addons go into /usr/local/lib/python/dist-packages, Debian addons install into /usr/{lib,share}/python/dist-packages. /usr/lib/python/site-packages is not used. A path configuration file is a file whose name has the form .pth; its contents are additional directories (one per line) to be added to sys.path. ... ... And a code snippet for important function getsitepackages: sitepackages.append(os.path.join(prefix, "local/lib", "python" + sys.version[:3], "dist-packages")) sitepackages.append(os.path.join(prefix, "lib", "python" + sys.version[:3], "dist-packages")) Now I try to fig out where may be this odd problem comes from: $PYTHONPATH, impossible, because it is empty both A and B sys.prefix and sys.exec_prefix, maybe, please check them and as well as $PYTHONHOME site.py, maybe, check the file. The sys.path output of B is quite odd, dist-package (site-package) goes before sys.exec_prefix (lib-dynload). Please try to investigate each step of sys.path initialization of machine B, you may find out something. Very sorry that I cannot replicate your problem. By the way, about your question title, I think SYS.PATH is better than PYTHONPATH, which makes me misinterpretation as $PYTHONPATH at first glance.
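To compare the two machines step by step, a small diagnostic script along these lines may help (Python 2 style to match your setup; getsitepackages may be missing inside a virtualenv):

import os
import site
import sys

print "PYTHONHOME =", os.environ.get("PYTHONHOME")
print "PYTHONPATH =", os.environ.get("PYTHONPATH")
print "sys.prefix =", sys.prefix
print "sys.exec_prefix =", sys.exec_prefix
print "site module =", site.__file__
try:
    print "getsitepackages() =", site.getsitepackages()
except AttributeError:
    pass
print "sys.path:"
for p in sys.path:
    print "   ", p

Running it on both A and B should show at which of the steps above the two orderings start to diverge.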
Re-compose a Tensor after tensor factorization
I am trying to decompose a 3D matrix using python library scikit-tensor. I managed to decompose my Tensor (with dimensions 100x50x5) into three matrices. My question is how can I compose the initial matrix again using the decomposed matrix produced with Tensor factorization? I want to check if the decomposition has any meaning. My code is the following: import logging from scipy.io.matlab import loadmat from sktensor import dtensor, cp_als import numpy as np //Set logging to DEBUG to see CP-ALS information logging.basicConfig(level=logging.DEBUG) T = np.ones((400, 50)) T = dtensor(T) P, fit, itr, exectimes = cp_als(T, 10, init='random') // how can I re-compose the Matrix T? TA = np.dot(P.U[0], P.U[1].T) I am using the canonical decomposition as provided from the scikit-tensor library function cp_als. Also what is the expected dimensionality of the decomposed matrices?
The CP product of, for example, 4 matrices can be expressed using Einstein notation as or in numpy as numpy.einsum('az,bz,cz,dz -> abcd', A, B, C, D) so in your case you would use numpy.einsum('az,bz->ab', P.U[0], P.U[1]) or, in your 3-matrix case numpy.einsum('az,bz,cz->abc', P.U[0], P.U[1], P.U[2]) sktensor.ktensor.ktensor also have a method totensor() that does exactly this: np.allclose(np.einsum('az,bz->ab', P.U[0], P.U[1]), P.totensor()) >>> True
How to get lineno of "end-of-statement" in Python ast
I am trying to work on a script that manipulates another script in Python; the script to be modified has a structure like: class SomethingRecord(Record): description = 'This records something' author = 'john smith' I use ast to locate the description line number, and I use some code to change the original file with a new description string based on the line number. So far so good. Now the only issue is that description occasionally is a multi-line string, e.g. description = ('line 1' 'line 2' 'line 3') or description = 'line 1' \ 'line 2' \ 'line 3' and I only have the line number of the first line, not the following lines. So my one-line replacer would do description = 'new value' 'line 2' \ 'line 3' and the code is broken. I figured that if I knew both the lineno of the start and the end/number of lines of the description assignment, I could repair my code to handle such a situation. How do I get such information with the Python standard library?
I looked at the other answers; it appears people are doing backflips to get around the problems of computing line numbers, when your real problem is one of modifying the code. That suggests the baseline machinery is not helping you the way you really need. If you use a program transformation system (PTS), you could avoid a lot of this nonsense. A good PTS will parse your source code to an AST, and then let you apply source-level rewrite rules to modify the AST, and will finally convert the modified AST back into source text. Generically PTSes accept transformation rules of essentially this form: if you see *this*, replace it by *that* [A parser that builds an AST is NOT a PTS. They don't allow rules like this; you can write ad hoc code to hack at the tree, but that's usually pretty awkward. Not do they do the AST to source text regeneration.] (My PTS, see bio, called) DMS is a PTS that could accomplish this. OP's specific example would be accomplished easily by using the following rewrite rule: source domain Python; -- tell DMS the syntax of pattern left hand sides target domain Python; -- tell DMS the syntax of pattern right hand sides rule replace_description(e: expression): statement -> statement = " description = \e " -> " description = ('line 1' 'line 2' 'line 3')"; The one transformation rule is given an name replace_description to distinguish it from all the other rule we might define. The rule parameters (e: expression) indicate the pattern will allow an arbitrary expression as defined by the source language. statement->statement means the rule maps a statement in the source language, to a statement in the target language; we could use any other syntax category from the Python grammar provided to DMS. The " used here is a metaquote, used to distinguish the syntax of the rule language form the syntax of the subject language. The second -> separates the source pattern this from the target pattern that. You'll notice that there is no need to mention line numbers. The PTS converts the rule surface syntax into corresponding ASTs by actually parsing the patterns with the same parser used to parse the source file. The ASTs produced for the patterns are used to effect the pattern match/replacement. Because this is driven from ASTs, the actual layout of the orginal code (spacing, linebreaks, comments) don't affect DMS's ability to match or replace. Comments aren't a problem for matching because they are attached to tree nodes rather than being tree nodes; they are preserved in the transformed program. DMS does capture line and precise column information for all tree elements; just not needed to implement transformations. Code layout is also preserved in the output by DMS, using that line/column information. Other PTSes offer generally similar capabilities.
Python button functions oddly not doing the same
I currently have 2 buttons hooked up to my Raspberry Pi (these are the ones with ring LED's in them) and I'm trying to perform this code #!/usr/bin/env python import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BCM) GPIO.setwarnings(False) GPIO.setup(17, GPIO.OUT) #green LED GPIO.setup(18, GPIO.OUT) #red LED GPIO.setup(4, GPIO.IN, GPIO.PUD_UP) #green button GPIO.setup(27, GPIO.IN, GPIO.PUD_UP) #red button def remove_events(): GPIO.remove_event_detect(4) GPIO.remove_event_detect(27) def add_events(): GPIO.add_event_detect(4, GPIO.FALLING, callback=green, bouncetime=800) GPIO.add_event_detect(27, GPIO.FALLING, callback=red, bouncetime=800) def red(pin): remove_events() GPIO.output(17, GPIO.LOW) print "red pushed" time.sleep(2) GPIO.output(17, GPIO.HIGH) add_events() def green(pin): remove_events() GPIO.output(18, GPIO.LOW) print "green pushed" time.sleep(2) GPIO.output(18, GPIO.HIGH) add_events() def main(): while True: print "waiting" time.sleep(0.5) GPIO.output(17, GPIO.HIGH) GPIO.output(18, GPIO.HIGH) GPIO.add_event_detect(4, GPIO.FALLING, callback=green, bouncetime=800) GPIO.add_event_detect(27, GPIO.FALLING, callback=red, bouncetime=800) if __name__ == "__main__": main() On the surface it looks like a fairly easy script. When a button press is detected: remove the events print the message wait 2 seconds before adding the events and turning the LED's back on Which normally works out great when I press the green button. I tried it several times in succession and it works without fail. With the red, however, it works well the first time, and the second time, but after it has completed it second red(pin) cycle the script just stops. Considering both events are fairly similar, I can't explain why it fails on the end of the 2nd red button. EDIT: I have changed the pins from red and green respectively (either to different pin's completely or swap them). Either way, it's always the red button code (actually now green button) causes an error. So it seems its' not a physical red button problem, nor a pin problem, this just leaves the code to be at fault...
I was able to reproduce your problem on my Raspberry Pi 1, Model B by running your script and connecting a jumper cable between ground and GPIO27 to simulate red button presses. (Those are pins 25 and 13 on my particular Pi model.) The Python interpreter is crashing with a Segmentation Fault in the thread dedicated to polling GPIO events after red returns from handling a button press. After looking at the implementation of the Python GPIO module, it is clear to me that it is unsafe to call remove_event_detect from within an event handler callback, and this is causing the crash. In particular, removing an event handler while that event handler is currently running can lead to memory corruption, which will result in crashes (as you have seen) or other strange behaviors.

I suspect you are removing and re-adding the event handlers because you are concerned about getting a callback during the time when you are handling a button press. There is no need to do this. The GPIO module spins up a single polling thread to monitor GPIO events, and will wait for one callback to return before calling another, regardless of the number of GPIO events you are watching. I suggest you simply make your calls to add_event_detect as your script starts, and never remove the callbacks. Simply removing add_events and remove_events (and their invocations) from your script will correct the problem.

If you are interested in the details of the problem in the GPIO module, you can take a look at the C source code for that module. Take a look at run_callbacks and remove_callbacks in the file RPi.GPIO-0.6.2/source/event_gpio.c. Notice that both of these functions use a global chain of struct callback nodes. run_callbacks walks the callback chain by grabbing one node, invoking the callback, and then following that node's link to the next callback in the chain. remove_callbacks will walk the same callback chain, and free the memory associated with the callbacks on a particular GPIO pin. If remove_callbacks is called in the middle of run_callbacks, the node currently held by run_callbacks can be freed (and have its memory potentially reused and overwritten) before the pointer to the next node is followed.

The reason you see this problem only for the red button is likely that the order of calls to add_event_detect and remove_event_detect causes the memory previously used by the callback node for the red button to be reclaimed for some other purpose and overwritten before the memory used for the green button callback node is similarly reclaimed. However, be assured that the problem exists for both buttons -- it is just luck that the memory associated with the green button callback isn't changed before the pointer to the next callback node is followed.

More generally, there is a concerning lack of thread synchronization around the callback chain use in the GPIO module, and I suspect similar problems could occur if remove_event_detect or add_event_detect are called while an event handler is running, even if events are removed from another thread! I would suggest that the author of the RPi.GPIO module should use some synchronization to ensure that the callback chain can't be modified while callbacks are being made. (Perhaps, in addition to checking whether the chain is being modified on the polling thread itself, pthread_mutex_lock and pthread_mutex_unlock could be used to prevent other threads from modifying the callback chain while it is in use by the polling thread.)
Unfortunately, that is not currently the case, and for this reason I suggest you avoid calling remove_event_detect entirely if you can avoid it.
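For reference, here is a minimal sketch of what the suggested fix looks like -- the script from the question with add_events/remove_events (and their invocations) removed, so each callback is registered exactly once. Pin numbers and LED assignments are copied from the question as-is; the exact module-level layout is an assumption.

#!/usr/bin/env python
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(17, GPIO.OUT)              # green LED
GPIO.setup(18, GPIO.OUT)              # red LED
GPIO.setup(4, GPIO.IN, GPIO.PUD_UP)   # green button
GPIO.setup(27, GPIO.IN, GPIO.PUD_UP)  # red button

def red(pin):
    # Callbacks run on the single polling thread, so no other callback
    # fires while this one sleeps -- no need to remove/re-add events.
    GPIO.output(17, GPIO.LOW)
    print "red pushed"
    time.sleep(2)
    GPIO.output(17, GPIO.HIGH)

def green(pin):
    GPIO.output(18, GPIO.LOW)
    print "green pushed"
    time.sleep(2)
    GPIO.output(18, GPIO.HIGH)

def main():
    while True:
        print "waiting"
        time.sleep(0.5)

GPIO.output(17, GPIO.HIGH)
GPIO.output(18, GPIO.HIGH)
GPIO.add_event_detect(4, GPIO.FALLING, callback=green, bouncetime=800)   # registered once
GPIO.add_event_detect(27, GPIO.FALLING, callback=red, bouncetime=800)    # never removed

if __name__ == "__main__":
    main()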
Regular Expression Matching First Non-Repeated Character
TL;DR re.search("(.)(?!.*\1)", text).group() doesn't match the first non-repeating character contained in text (it always returns a character at or before the first non-repeated character, or before the end of the string if there are no non-repeated characters. My understanding is that re.search() should return None if there were no matches). I'm only interested in understanding why this regex is not working as intended using the Python re module, not in any other method of solving the problem Full Background The problem description comes from https://www.codeeval.com/open_challenges/12/. I've already solved this problem using a non-regex method, but revisited it to expand my understanding of Python's re module. The regular expressions i thought would work (named vs unnamed backreferences) are: (?P<letter>.)(?!.*(?P=letter)) and (.)(?!.*\1) (same results in python2 and python3) My entire program looks like this import re import sys with open(sys.argv[1], 'r') as test_cases: for test in test_cases: print(re.search("(?P<letter>.)(?!.*(?P=letter))", test.strip() ).group() ) and some input/output pairs are: rain | r teetthing | e cardiff | c kangaroo | k god | g newtown | e taxation | x refurbished | f substantially | u According to what I've read at https://docs.python.org/2/library/re.html: (.) creates a named group that matches any character and allows later backreferences to it as \1. (?!...) is a negative lookahead which restricts matches to cases where ... does not match. .*\1 means any number (including zero) of characters followed by whatever was matched by (.) earlier re.search(pattern, string) returns only the first location where the regex pattern produces a match (and would return None if no match could be found) .group() is equivalent to .group(0) which returns the entire match I think these pieces together should solve the stated problem, and it does work like I think it should for most inputs, but failed on teething. Throwing similar problems at it reveals that it seems to ignore repeated characters if they are consecutive: tooth | o # fails on consecutive repeated characters aardvark | d # but does ok if it sees them later aah | a # verified last one didn't work just because it was at start heh | e # but it works for this one hehe | h # What? It thinks h matches (lookahead maybe doesn't find "heh"?) heho | e # but it definitely finds "heh" and stops "h" from matching here hahah | a # so now it won't match h but will match a hahxyz | a # but it realizes there are 2 h characters here... hahxyza | h # ... Ok time for StackOverflow I know lookbehind and negative lookbehind are limited to 3-character-max fixed length strings, and cannot contain backreferences even if they evaluate to a fixed length string, but I didn't see the documentation specify any restrictions on negative lookahead.
Well, let's take your tooth example - here is what the regex engine does (a lot simplified for better understanding):

Start with t, then look ahead in the string - and fail the lookahead, as there is another t.

tooth
^  °

Next take o, look ahead in the string - and fail, as there is another o.

tooth
 ^°

Next take the second o, look ahead in the string - no other o present - match it, return it, work done.

tooth
  ^

So your regex doesn't match the first non-repeated character, but the first character that has no further repetitions towards the end of the string.
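To see the same thing from Python rather than by hand, it helps to print every position where the pattern matches instead of only the first one:

import re

pattern = re.compile(r"(.)(?!.*\1)")
print(pattern.search("tooth").group())   # 'o' -- the *second* o, at index 2

# Listing all match positions makes the behaviour obvious:
print([(m.start(), m.group()) for m in pattern.finditer("tooth")])
# [(2, 'o'), (3, 't'), (4, 'h')] -- the first 't' and the first 'o' never match,
# because another 't'/'o' still lies ahead of them at that point in the string.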
How to use the `pos` argument in `networkx` to create a flowchart-style Graph? (Python 3)
I am trying create a linear network graph using Python (preferably with matplotlib and networkx although would be interested in bokeh) similar in concept to the one below. How can this graph plot be constructed efficiently (pos?) in Python using networkx? I want to use this for more complicated examples so I feel that hard coding the positions for this simple example won't be useful :( . Does networkx have a solution to this? pos (dictionary, optional) – A dictionary with nodes as keys and positions as values. If not specified a spring layout positioning will be computed. See networkx.layout for functions that compute node positions. I haven't seen any tutorials on how this can be achieved in networkx which is why I believe this question will be a reliable resource for the community. I've extensively gone through the networkx tutorials and nothing like this is on there. The layouts for networkx would make this type of network impossible to interpret without careful use of the pos argument... which I believe is my only option. None of the precomputed layouts on the https://networkx.github.io/documentation/networkx-1.9/reference/drawing.html documentation seem to handle this type of network structure well. Simple Example: (A) every outer key is the iteration in the graph moving from left to the right (e.g. iteration 0 represents samples, iteration 1 has groups 1 - 3, same with iteration 2, iteration 3 has Groups 1 - 2, etc.). (B) The inner dictionary contains the current grouping at that particular iteration, and the weights for the previous groups merging that represent the current group (e.g. iteration 3 has Group 1 and Group 2 and for iteration 4 all of iteration 3's Group 2 has gone into iteration 4's Group 2 but iteration 3's Group 1 has been split up. The weights always sum to 1. My code for the connections w/ weights for the plot above: D_iter_current_previous = { 1: { "Group 1":{"sample_0":0.5, "sample_1":0.5, "sample_2":0, "sample_3":0, "sample_4":0}, "Group 2":{"sample_0":0, "sample_1":0, "sample_2":1, "sample_3":0, "sample_4":0}, "Group 3":{"sample_0":0, "sample_1":0, "sample_2":0, "sample_3":0.5, "sample_4":0.5} }, 2: { "Group 1":{"Group 1":1, "Group 2":0, "Group 3":0}, "Group 2":{"Group 1":0, "Group 2":1, "Group 3":0}, "Group 3":{"Group 1":0, "Group 2":0, "Group 3":1} }, 3: { "Group 1":{"Group 1":0.25, "Group 2":0, "Group 3":0.75}, "Group 2":{"Group 1":0.25, "Group 2":0.75, "Group 3":0} }, 4: { "Group 1":{"Group 1":1, "Group 2":0}, "Group 2":{"Group 1":0.25, "Group 2":0.75} } } This is what happened when I made the Graph in networkx: import networkx import matplotlib.pyplot as plt # Create Directed Graph G = nx.DiGraph() # Iterate through all connections for iter_n, D_current_previous in D_iter_current_previous.items(): for current_group, D_previous_weights in D_current_previous.items(): for previous_group, weight in D_previous_weights.items(): if weight > 0: # Define connections using `|__|` as a delimiter for the names previous_node = "%d|__|%s"%(iter_n - 1, previous_group) current_node = "%d|__|%s"%(iter_n, current_group) connection = (previous_node, current_node) G.add_edge(*connection, weight=weight) # Draw Graph with labels and width thickness nx.draw(G, with_labels=True, width=[G[u][v]['weight'] for u,v in G.edges()]) Note: The only other way, I could think of to do this would be in matplotlib creating a scatter plot with every tick representing a iteration (5 including the initial samples) then connecting the points to each other with different weights. 
This would be some pretty messy code especially trying to line up the edges of the markers w/ the connections...However, I'm not sure if this and networkx is the best way to do it or if there is a tool (e.g. bokeh or plotly) that is designed for this type of plotting.
Networkx has decent plotting facilities for exploratory data analysis, it is not the tool to make publication quality figures, for various reason that I don't want to go into here. I hence rewrote that part of the code base from scratch, and made a stand-alone drawing module called netgraph that can be found here (like the original purely based on matplotlib). The API is very, very similar and well documented, so it should not be too hard to mold to your purposes. Building on that I get the following result: I chose colour to denote the edge strength as you can 1) indicate negative values, and 2) distinguish small values better. However, you can also pass an edge width to netgraph instead (see netgraph.draw_edges()). The different order of the branches is a result of your data structure (a dict), which indicates no inherent order. You would have to amend your data structure and the function _parse_input() below to fix that issue. Code: import itertools import numpy as np import matplotlib.pyplot as plt import netgraph; reload(netgraph) def plot_layered_network(weight_matrices, distance_between_layers=2, distance_between_nodes=1, layer_labels=None, **kwargs): """ Convenience function to plot layered network. Arguments: ---------- weight_matrices: [w1, w2, ..., wn] list of weight matrices defining the connectivity between layers; each weight matrix is a 2-D ndarray with rows indexing source and columns indexing targets; the number of sources has to match the number of targets in the last layer distance_between_layers: int distance_between_nodes: int layer_labels: [str1, str2, ..., strn+1] labels of layers **kwargs: passed to netgraph.draw() Returns: -------- ax: matplotlib axis instance """ nodes_per_layer = _get_nodes_per_layer(weight_matrices) node_positions = _get_node_positions(nodes_per_layer, distance_between_layers, distance_between_nodes) w = _combine_weight_matrices(weight_matrices, nodes_per_layer) ax = netgraph.draw(w, node_positions, **kwargs) if not layer_labels is None: ax.set_xticks(distance_between_layers*np.arange(len(weight_matrices)+1)) ax.set_xticklabels(layer_labels) ax.xaxis.set_ticks_position('bottom') return ax def _get_nodes_per_layer(weight_matrices): nodes_per_layer = [] for w in weight_matrices: sources, targets = w.shape nodes_per_layer.append(sources) nodes_per_layer.append(targets) return nodes_per_layer def _get_node_positions(nodes_per_layer, distance_between_layers, distance_between_nodes): x = [] y = [] for ii, n in enumerate(nodes_per_layer): x.append(distance_between_nodes * np.arange(0., n)) y.append(ii * distance_between_layers * np.ones((n))) x = np.concatenate(x) y = np.concatenate(y) return np.c_[y,x] def _combine_weight_matrices(weight_matrices, nodes_per_layer): total_nodes = np.sum(nodes_per_layer) w = np.full((total_nodes, total_nodes), np.nan, np.float) a = 0 b = nodes_per_layer[0] for ii, ww in enumerate(weight_matrices): w[a:a+ww.shape[0], b:b+ww.shape[1]] = ww a += nodes_per_layer[ii] b += nodes_per_layer[ii+1] return w def test(): w1 = np.random.rand(4,5) #< 0.50 w2 = np.random.rand(5,6) #< 0.25 w3 = np.random.rand(6,3) #< 0.75 import string node_labels = dict(zip(range(18), list(string.ascii_lowercase))) fig, ax = plt.subplots(1,1) plot_layered_network([w1,w2,w3], layer_labels=['start', 'step 1', 'step 2', 'finish'], ax=ax, node_size=20, node_edge_width=2, node_labels=node_labels, edge_width=5, ) plt.show() return def test_example(input_dict): weight_matrices, node_labels = _parse_input(input_dict) fig, ax = plt.subplots(1,1) 
plot_layered_network(weight_matrices, layer_labels=['', '1', '2', '3', '4'], distance_between_layers=10, distance_between_nodes=8, ax=ax, node_size=300, node_edge_width=10, node_labels=node_labels, edge_width=50, ) plt.show() return def _parse_input(input_dict): weight_matrices = [] node_labels = [] # initialise sources sources = set() for v in input_dict[1].values(): for s in v.keys(): sources.add(s) sources = list(sources) for ii in range(len(input_dict)): inner_dict = input_dict[ii+1] targets = inner_dict.keys() w = np.full((len(sources), len(targets)), np.nan, np.float) for ii, s in enumerate(sources): for jj, t in enumerate(targets): try: w[ii,jj] = inner_dict[t][s] except KeyError: pass weight_matrices.append(w) node_labels.append(sources) sources = targets node_labels.append(targets) node_labels = list(itertools.chain.from_iterable(node_labels)) node_labels = dict(enumerate(node_labels)) return weight_matrices, node_labels # -------------------------------------------------------------------------------- # script # -------------------------------------------------------------------------------- if __name__ == "__main__": # test() input_dict = { 1: { "Group 1":{"sample_0":0.5, "sample_1":0.5, "sample_2":0, "sample_3":0, "sample_4":0}, "Group 2":{"sample_0":0, "sample_1":0, "sample_2":1, "sample_3":0, "sample_4":0}, "Group 3":{"sample_0":0, "sample_1":0, "sample_2":0, "sample_3":0.5, "sample_4":0.5} }, 2: { "Group 1":{"Group 1":1, "Group 2":0, "Group 3":0}, "Group 2":{"Group 1":0, "Group 2":1, "Group 3":0}, "Group 3":{"Group 1":0, "Group 2":0, "Group 3":1} }, 3: { "Group 1":{"Group 1":0.25, "Group 2":0, "Group 3":0.75}, "Group 2":{"Group 1":0.25, "Group 2":0.75, "Group 3":0} }, 4: { "Group 1":{"Group 1":1, "Group 2":0}, "Group 2":{"Group 1":0.25, "Group 2":0.75} } } test_example(input_dict) pass
Stuck implementing simple neural network
I've been bashing my head against this brick wall for what seems like an eternity, and I just can't seem to wrap my head around it. I'm trying to implement an autoencoder using only numpy and matrix multiplication. No theano or keras tricks allowed. I'll describe the problem and all its details. It is a bit complex at first since there are a lot of variables, but it really is quite straightforward. What we know 1) X is an m by n matrix which is our inputs. The inputs are rows of this matrix. Each input is an n dimensional row vector, and we have m of them. 2)The number of neurons in our (single) hidden layer, which is k. 3) The activation function of our neurons (sigmoid, will be denoted as g(x)) and its derivative g'(x) What we don't know and want to find Overall our goal is to find 6 matrices: w1 which is n by k, b1 which is m by k, w2 which is k by n, b2 which is m by n, w3 which is n by n and b3 which is m by n. They are initallized randomly and we find the best solution using gradient descent. The process The entire process looks something like this First we compute z1 = Xw1+b1. It is m by k and is the input to our hidden layer. We then compute h1 = g(z1), which is simply applying the sigmoid function to all elements of z1. naturally it is also m by k and is the output of our hidden layer. We then compute z2 = h1w2+b2 which is m by n and is the input to the output layer of our neural network. Then we compute h2 = g(z2) which again is naturally also m by n and is the output of our neural network. Finally, we take this output and perform some linear operator on it: Xhat = h2w3+b3 which is also m by n and is our final result. Where I am stuck The cost function I want to minimize is the mean squared error. I already implemented it in numpy code def cost(x, xhat): return (1.0/(2 * m)) * np.trace(np.dot(x-xhat,(x-xhat).T)) The problem is finding the derivatives of cost with respect to w1,b1,w2,b2,w3,b3. Let's call the cost S. After deriving myself and checking myself numerically, I have established the following facts: 1) dSdxhat = (1/m) * np.dot(xhat-x) 2) dSdw3 = np.dot(h2.T,dSdxhat) 3) dSdb3 = dSdxhat 4) dSdh2 = np.dot(dSdxhat, w3.T) But I can't for the life of me figure out dSdz2. It's a brick wall. From chain-rule, it should be that dSdz2 = dSdh2 * dh2dz2 but the dimensions don't match. What is the formula to compute the derivative of S with respect to z2? Edit - This is my code for the entire feed forward operation of the autoencoder. import numpy as np def g(x): #sigmoid activation functions return 1/(1+np.exp(-x)) #same shape as x! def gGradient(x): #gradient of sigmoid return g(x)*(1-g(x)) #same shape as x! 
def cost(x, xhat): #mean squared error between x the data and xhat the output of the machine return (1.0/(2 * m)) * np.trace(np.dot(x-xhat,(x-xhat).T)) #Just small random numbers so we can test that it's working small scale m = 5 #num of examples n = 2 #num of features in each example k = 2 #num of neurons in the hidden layer of the autoencoder x = np.random.rand(m, n) #the data, shape (m, n) w1 = np.random.rand(n, k) #weights from input layer to hidden layer, shape (n, k) b1 = np.random.rand(m, k) #bias term from input layer to hidden layer (m, k) z1 = np.dot(x,w1)+b1 #output of the input layer, shape (m, k) h1 = g(z1) #input of hidden layer, shape (m, k) w2 = np.random.rand(k, n) #weights from hidden layer to output layer of the autoencoder, shape (k, n) b2 = np.random.rand(m, n) #bias term from hidden layer to output layer of autoencoder, shape (m, n) z2 = np.dot(h1, w2)+b2 #output of the hidden layer, shape (m, n) h2 = g(z2) #Output of the entire autoencoder. The output layer of the autoencoder. shape (m, n) w3 = np.random.rand(n, n) #weights from output layer of autoencoder to entire output of the machine, shape (n, n) b3 = np.random.rand(m, n) #bias term from output layer of autoencoder to entire output of the machine, shape (m, n) xhat = np.dot(h2, w3)+b3 #the output of the machine, which hopefully resembles the original data x, shape (m, n)
OK, here's a suggestion. In the vector case, if you have x as a vector of length n, then g(x) is also a vector of length n. However, g'(x) is not a vector, it's the Jacobian matrix, and will be of size n X n. Similarly, in the minibatch case, where X is a matrix of size m X n, g(X) is m X n but g'(X) is n X n. Try: def gGradient(x): #gradient of sigmoid return np.dot(g(x).T, 1 - g(x)) @Paul is right that the bias terms should be vectors, not matrices. You should have: b1 = np.random.rand(k) #bias term from input layer to hidden layer (k,) b2 = np.random.rand(n) #bias term from hidden layer to output layer of autoencoder, shape (n,) b3 = np.random.rand(n) #bias term from output layer of autoencoder to entire output of the machine, shape (n,) Numpy's broadcasting means that you don't have to change your calculation of xhat. Then (I think!) you can compute the derivatives like this: dSdxhat = (1/float(m)) * (xhat-x) dSdw3 = np.dot(h2.T,dSdxhat) dSdb3 = dSdxhat.mean(axis=0) dSdh2 = np.dot(dSdxhat, w3.T) dSdz2 = np.dot(dSdh2, gGradient(z2)) dSdb2 = dSdz2.mean(axis=0) dSdw2 = np.dot(h1.T,dSdz2) dSdh1 = np.dot(dSdz2, w2.T) dSdz1 = np.dot(dSdh1, gGradient(z1)) dSdb1 = dSdz1.mean(axis=0) dSdw1 = np.dot(x.T,dSdz1) Does this work for you? Edit I've decided that I'm not at all sure that gGradient is supposed to be a matrix. How about: dSdxhat = (xhat-x) / m dSdw3 = np.dot(h2.T,dSdxhat) dSdb3 = dSdxhat.sum(axis=0) dSdh2 = np.dot(dSdxhat, w3.T) dSdz2 = h2 * (1-h2) * dSdh2 dSdb2 = dSdz2.sum(axis=0) dSdw2 = np.dot(h1.T,dSdz2) dSdh1 = np.dot(dSdz2, w2.T) dSdz1 = h1 * (1-h1) * dSdh1 dSdb1 = dSdz1.sum(axis=0) dSdw1 = np.dot(x.T,dSdz1)
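When the dimensions stop lining up like this, a numerical gradient check against the full forward pass is the quickest way to see which analytic formula is wrong. A rough sketch follows; the forward and numeric_grad helpers are hypothetical additions that reuse the globals (x, w1, b1, ..., g, cost) from the question's code.

def forward():
    # Recompute the cost from scratch with the current parameter values.
    h1 = g(np.dot(x, w1) + b1)
    h2 = g(np.dot(h1, w2) + b2)
    xhat = np.dot(h2, w3) + b3
    return cost(x, xhat)

def numeric_grad(param, eps=1e-6):
    # Central finite differences, one parameter entry at a time.
    grad = np.zeros_like(param)
    for idx in np.ndindex(param.shape):
        old = param[idx]
        param[idx] = old + eps
        plus = forward()
        param[idx] = old - eps
        minus = forward()
        param[idx] = old
        grad[idx] = (plus - minus) / (2 * eps)
    return grad

# e.g. compare with the analytic gradient computed above:
# print(np.max(np.abs(numeric_grad(w2) - dSdw2)))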
cryptography AssertionError: sorry, but this version only supports 100 named groups
I'm installing several python packages via pip install on travis, language: python python: - '2.7' install: - pip install -r requirements/env.txt Everything worked fine, but today I started getting following error: Running setup.py install for cryptography Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-build-hKwMR3/cryptography/setup.py", line 334, in <module> **keywords_with_side_effects(sys.argv) File "/opt/python/2.7.9/lib/python2.7/distutils/core.py", line 111, in setup _setup_distribution = dist = klass(attrs) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/setuptools/dist.py", line 269, in __init__ _Distribution.__init__(self,attrs) File "/opt/python/2.7.9/lib/python2.7/distutils/dist.py", line 287, in __init__ self.finalize_options() File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/setuptools/dist.py", line 325, in finalize_options ep.load()(self, ep.name, value) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/setuptools_ext.py", line 181, in cffi_modules add_cffi_module(dist, cffi_module) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/setuptools_ext.py", line 48, in add_cffi_module execfile(build_file_name, mod_vars) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/setuptools_ext.py", line 24, in execfile exec(code, glob, glob) File "src/_cffi_src/build_openssl.py", line 81, in <module> extra_link_args=extra_link_args(compiler_type()), File "/tmp/pip-build-hKwMR3/cryptography/src/_cffi_src/utils.py", line 61, in build_ffi_for_binding extra_link_args=extra_link_args, File "/tmp/pip-build-hKwMR3/cryptography/src/_cffi_src/utils.py", line 70, in build_ffi ffi.cdef(cdef_source) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/api.py", line 105, in cdef self._cdef(csource, override=override, packed=packed) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/api.py", line 119, in _cdef self._parser.parse(csource, override=override, **options) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 299, in parse self._internal_parse(csource) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 304, in _internal_parse ast, macros, csource = self._parse(csource) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 260, in _parse ast = _get_parser().parse(csource) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 40, in _get_parser _parser_cache = pycparser.CParser() File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/c_parser.py", line 87, in __init__ outputdir=taboutputdir) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/c_lexer.py", line 66, in build self.lexer = lex.lex(object=self, **kwargs) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/ply/lex.py", line 911, in lex lexobj.readtab(lextab, ldict) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/ply/lex.py", line 233, in readtab titem.append((re.compile(pat, lextab._lexreflags | re.VERBOSE), _names_to_funcs(func_name, fdict))) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/re.py", line 194, in compile return _compile(pattern, flags) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/re.py", line 249, in _compile p = 
sre_compile.compile(pattern, flags) File "/home/travis/virtualenv/python2.7.9/lib/python2.7/sre_compile.py", line 583, in compile "sorry, but this version only supports 100 named groups" AssertionError: sorry, but this version only supports 100 named groups Solutions?
There is a bug with pycparser - see https://github.com/pyca/cryptography/issues/3187. The workaround is to use another version or to not use the binary distribution:

pip install git+https://github.com/eliben/pycparser@release_v2.14

or

pip install --no-binary pycparser pycparser
How does one add an item to GTK's "recently used" file list from Python?
I'm trying to add to the "recently used" files list from Python 3 on Ubuntu. I am able to successfully read the recently used file list like this: from gi.repository import Gtk recent_mgr = Gtk.RecentManager.get_default() for item in recent_mgr.get_items(): print(item.get_uri()) This prints out the same list of files I see when I look at "Recent" in Nautilus, or look at the "Recently Used" place in the file dialog of apps like GIMP. However, when I tried adding an item like this (where /home/laurence/foo/bar.txt is an existing text file)... recent_mgr.add_item('file:///home/laurence/foo/bar.txt') ...the file does not show up in the Recent section of Nautilus or in file dialogs. It doesn't even show up in the results returned by get_items(). How can I add a file to GTK's recently used file list from Python?
A Gtk.RecentManager needs to emit the changed signal for the update to be written to a private attribute of the underlying C object. To use a RecentManager object in an application, you need to start the event loop by calling Gtk.main:

from gi.repository import Gtk

recent_mgr = Gtk.RecentManager.get_default()
uri = r'file:/path/to/my/file'
recent_mgr.add_item(uri)
Gtk.main()

If you don't call Gtk.main(), the changed signal is not emitted and nothing happens.

To answer @andlabs' query, the reason why RecentManager.add_item returns a boolean is because the g_file_query_info_async function is called. The callback function gtk_recent_manager_add_item_query_info then gathers the mimetype, application name and command into a GtkRecentData struct and finally calls gtk_recent_manager_add_full. The source is here. If anything goes wrong, it is well after add_item has finished, so the method just returns True if the object it is called from is a RecentManager and if the uri is not NULL, and False otherwise.

The documentation is inaccurate in saying:

Returns TRUE if the new item was successfully added to the recently used resources list

as returning TRUE only means that an asynchronous function was called to deal with the addition of a new item.
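If the script has nothing else to do, sitting in Gtk.main() forever is awkward; one possible refinement (a sketch, not part of the original answer) is to leave the loop again once the changed signal has been emitted:

from gi.repository import Gtk

recent_mgr = Gtk.RecentManager.get_default()
recent_mgr.add_item('file:///home/laurence/foo/bar.txt')

# Quit the main loop as soon as the manager reports that the change was recorded.
recent_mgr.connect('changed', lambda mgr: Gtk.main_quit())
Gtk.main()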
How to make an integer larger than any other integer?
Note: while the accepted answer achieves the result I wanted, and @ecatmur answer provides a more comprehensive option, I feel it's very important to emphasize that my use case is a bad idea in the first place. This is explained very well in @Jason Orendorff answer below. Note: this question is not a duplicate of the question about sys.maxint. It has nothing to do with sys.maxint; even in python 2 where sys.maxint is available, it does NOT represent largest integer (see the accepted answer). I need to create an integer that's larger than any other integer, meaning an int object which returns True when compared to any other int object using >. Use case: library function expects an integer, and the only easy way to force a certain behavior is to pass a very large integer. In python 2, I can use sys.maxint (edit: I was wrong). In python 3, math.inf is the closest equivalent, but I can't convert it to int.
Since python integers are unbounded, you have to do this with a custom class: import functools @functools.total_ordering class NeverSmaller(object): def __le__(self, other): return False class ReallyMaxInt(NeverSmaller, int): def __repr__(self): return 'ReallyMaxInt()' Here I've used a mix-in class NeverSmaller rather than direct decoration of ReallyMaxInt, because on Python 3 the action of functools.total_ordering would have been prevented by existing ordering methods inherited from int. Usage demo: >>> N = ReallyMaxInt() >>> N > sys.maxsize True >>> isinstance(N, int) True >>> sorted([1, N, 0, 9999, sys.maxsize]) [0, 1, 9999, 9223372036854775807, ReallyMaxInt()] Note that in python2, sys.maxint + 1 is bigger than sys.maxint, so you can't rely on that. Disclaimer: This is an integer in the OO sense, it is not an integer in the mathematical sense. Consequently, arithmetic operations inherited from the parent class int may not behave sensibly. If this causes any issues for your intended use case, then they can be disabled by implementing __add__ and friends to just error out.
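To make the closing disclaimer concrete, here is one possible way (a sketch, not part of the original answer; the class name is made up) to disable the inherited int arithmetic rather than let it silently operate on the underlying value, which is 0 here:

class SaferReallyMaxInt(NeverSmaller, int):
    def __repr__(self):
        return 'SaferReallyMaxInt()'

    # Refuse to take part in arithmetic instead of quietly behaving like int(0).
    def __add__(self, other):
        raise TypeError('SaferReallyMaxInt does not support arithmetic')
    __radd__ = __sub__ = __rsub__ = __mul__ = __rmul__ = __add__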
Is there a more Pythonic way to combine an Else: statement and an Except:?
I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (eg. "overall_weight" in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the "overall_weight" keywords should be replaced with "N/A". I was wondering if there was a more pythonic way to combine the KeyError exception and the else to both go to nObject.TextString = "N/A" so its not typed twice. if nObject.TextString == "overall_weight": try: if self.var.jobDetails["Overall Weight"]: nObject.TextString = self.var.jobDetails["Overall Weight"] else: nObject.TextString = "N/A" except KeyError: nObject.TextString = "N/A" Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding. dict[key] exists and points to a non-empty string. TextString replaced with the value assigned to dict[key]. dict[key] exists and points to a empty string. TextString replaced with "N/A". dict[key] doesn't exist. TextString replaced with "N/A".
Use dict.get() which will return the value associated with the given key if it exists otherwise None. (Note that '' and None are both falsey values.) If s is true then assign it to nObject.TextString otherwise give it a value of "N/A". if nObject.TextString == "overall_weight": nObject.TextString = self.var.jobDetails.get("Overall Weight") or "N/A"
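To check that this one-liner really covers all three cases from the edit, here is a quick sketch with a stand-in jobDetails dictionary (the values are made up for illustration):

jobDetails = {"Overall Weight": "1200 kg"}
print(jobDetails.get("Overall Weight") or "N/A")   # '1200 kg' -- key exists, non-empty value

jobDetails = {"Overall Weight": ""}
print(jobDetails.get("Overall Weight") or "N/A")   # 'N/A' -- key exists, empty string is falsey

jobDetails = {}
print(jobDetails.get("Overall Weight") or "N/A")   # 'N/A' -- key missing, get() returns None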
Is there a way to compile python application into static binary?
What I'm trying to do is ship my code to a remote server, that may have different python version installed and/or may not have packages my app requires. Right now to achieve such portability I have to build relocatable virtualenv with interpreter and code. That approach has some issues (for example, you have to manually copy a bunch of libraries into your virtualenv, since --always-copy doesn't work as expected) and generally slow. There's (in theory) a way to build python itself statically. I wonder if I could pack interpreter with my code into one binary and run my application as module. Something like that: ./mypython -m myapp run or ./mypython -m gunicorn -c ./gunicorn.conf myapp.wsgi:application.
There are two ways you could go about solving your problem:

Use a static builder, like freeze, or pyinstaller, or py2exe
Compile using cython

I will explain how you can go about doing it using the second, since the first method is not portable across platforms and Python versions, and has been explained in other answers. Also, using programs like pyinstaller typically results in huge file sizes, whereas using cython will result in a file that's KBs in size.

First, install cython. Then, rename your python file (say test.py) to a .pyx file:

$ sudo pip install cython
$ mv test.py test.pyx

Then, you can use cython along with GCC to compile it (Cython generates a C file out of a Python .pyx file, and then GCC compiles the C file) (in reference to http://stackoverflow.com/a/22040484/5714445):

$ cython test.pyx --embed
$ gcc -Os -I /usr/include/python3.5m -o test test.c -lpython3.5m -lpthread -lm -lutil -ldl

NOTE: Depending on your version of python, you might have to change the last command. To know which version of python you are using, simply use

$ python -V

You will now have a binary file 'test', which is what you are looking for.

NOTE: Cython is used to allow C-type variable definitions for static memory allocation to speed up Python programs. In your case, however, you will still be using traditional Python definitions.

NOTE2: If you are using additional libraries (like opencv, for example), you might have to provide the directory to them using -L and then specify the name of the library using -l in the GCC flags. For more information on this, please refer to GCC flags.
What are variable annotations in Python 3.6?
Python 3.6 is about to be released. PEP 494 -- Python 3.6 Release Schedule mentions the end of December, so I went through What's New in Python 3.6 to see they mention the variable annotations: PEP 484 introduced standard for type annotations of function parameters, a.k.a. type hints. This PEP adds syntax to Python for annotating the types of variables including class variables and instance variables: primes: List[int] = [] captain: str # Note: no initial value! class Starship: stats: Dict[str, int] = {} Just as for function annotations, the Python interpreter does not attach any particular meaning to variable annotations and only stores them in a special attribute __annotations__ of a class or module. In contrast to variable declarations in statically typed languages, the goal of annotation syntax is to provide an easy way to specify structured type metadata for third party tools and libraries via the abstract syntax tree and the __annotations__ attribute. So from what I read they are part of the type hints coming from Python 3.5, described in What are Type hints in Python 3.5. I follow the captain: str and class Starship example, but not sure about the last one: How does primes: List[int] = [] explain? Is it defining an empty list that will just allow integers?
Everything between : and the = is a type hint, so primes is indeed defined as List[int], and initially set to an empty list (and stats is an empty dictionary initially, defined as Dict[str, int]). List[int] and Dict[str, int] are not part of the new syntax, however; these were already defined in the Python 3.5 typing hints PEP. The 3.6 PEP 526 – Syntax for Variable Annotations proposal only defines the syntax to attach the same hints to variables; before, you could only attach type hints to variables with comments (e.g. primes = [] # List[int]). Both List and Dict are Generic types, indicating that you have a list or dictionary mapping with specific (concrete) contents. For List, there is only one 'argument' (the elements in the [...] syntax), the type of every element in the list. For Dict, the first argument is the key type, and the second the value type. So all values in the primes list are integers, and all key-value pairs in the stats dictionary are (str, int) pairs, mapping strings to integers. See the typing.List and typing.Dict definitions, the section on Generics, as well as PEP 483 – The Theory of Type Hints. Like type hints on functions, their use is optional and they are also considered annotations (provided there is an object to attach these to, so globals in modules and attributes on classes, but not locals in functions) which you could introspect via the __annotations__ attribute. You can attach arbitrary info to these annotations; you are not strictly limited to type hint information. You may want to read the full proposal; it contains some additional functionality above and beyond the new syntax; it specifies when such annotations are evaluated, how to introspect them and how to declare something as a class attribute vs. instance attribute, for example.
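To make the introspection point concrete, here is a small sketch (it only runs on Python 3.6+) showing that the annotations end up in __annotations__ while the values themselves stay ordinary objects:

from typing import Dict, List

primes: List[int] = []
captain: str                      # annotation only; no value is bound

class Starship:
    stats: Dict[str, int] = {}

print(__annotations__)            # {'primes': typing.List[int], 'captain': <class 'str'>}
print(Starship.__annotations__)   # {'stats': typing.Dict[str, int]}
print(primes)                     # [] -- at runtime it is just a plain empty list
# print(captain)                  # NameError: the annotation alone creates no variable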
Dictionaries are ordered in Python 3.6
Dictionaries are ordered in Python 3.6, unlike in previous Python incarnations. This seems like a substantial change, but it's only a short paragraph in the documentation. It is described as an implementation detail rather than a language feature, but also implies this may become standard in the future. How does the Python 3.6 dictionary implementation perform better than the older one while preserving element order? Here is the text from the documentation: dict() now uses a “compact” representation pioneered by PyPy. The memory usage of the new dict() is between 20% and 25% smaller compared to Python 3.5. PEP 468 (Preserving the order of **kwargs in a function.) is implemented by this. The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5). (Contributed by INADA Naoki in issue 27350. Idea originally suggested by Raymond Hettinger.)
How does the Python 3.6 dictionary implementation perform better than the older one while preserving element order? Essentially by keeping two arrays, one holding the entries for the dictionary in the order that they were inserted and the other holding a list of indices. In the previous implementation a sparse array of type dictionary entries had to be allocated; unfortunately, it also resulted in a lot of empty space since that array was not allowed to be more than 2/3s full. This is not the case now since only the required entries are stored and a sparse array of type integer 2/3s full is kept. Obviously creating a sparse array of type "dictionary entries" is much more memory demanding than a sparse array for storing ints (sized 8 bytes tops in cases of really large dictionaries) In the original proposal made by Raymond Hettinger, a visualization of the data structures used can be seen which captures the gist of the idea. For example, the dictionary: d = {'timmy': 'red', 'barry': 'green', 'guido': 'blue'} is currently stored as: entries = [['--', '--', '--'], [-8522787127447073495, 'barry', 'green'], ['--', '--', '--'], ['--', '--', '--'], ['--', '--', '--'], [-9092791511155847987, 'timmy', 'red'], ['--', '--', '--'], [-6480567542315338377, 'guido', 'blue']] Instead, the data should be organized as follows: indices = [None, 1, None, None, None, 0, None, 2] entries = [[-9092791511155847987, 'timmy', 'red'], [-8522787127447073495, 'barry', 'green'], [-6480567542315338377, 'guido', 'blue']] As you can visually now see, in the original proposal, a lot of space is essentially empty to reduce collisions and make look-ups faster. With the new approach, you reduce the memory required by moving the sparseness where it's really required, in the indices. Should you depend on it and/or use it? As noted in the documentation, this is considered an implementation detail meaning it is subject to change and you shouldn't depend on it. Different implementations of Python aren't required to make the dictionary ordered, rather, just support an ordered mapping where that is required (Notable examples are PEP 520: Preserving Class Attribute Definition Order and PEP 468: Preserving Keyword Argument Order) If you want to write code that preserves the ordering and want it to not break on previous versions/different implementations you should always use OrderedDict. Besides, OrderedDict will most likely eventually become a thin-wrapper around the new dict implementation.
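A short sketch of the practical upshot -- rely on OrderedDict when the order is part of the program's meaning, and treat the CPython 3.6 behaviour as a bonus:

from collections import OrderedDict

d = {'timmy': 'red', 'barry': 'green', 'guido': 'blue'}
print(list(d))    # ['timmy', 'barry', 'guido'] on CPython 3.6 -- an implementation detail

od = OrderedDict([('timmy', 'red'), ('barry', 'green'), ('guido', 'blue')])
print(list(od))   # same order, but guaranteed by the language on every implementation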
Why does map return a map object instead of a list in Python 3?
I am interested in understanding the new language design of Python 3.x. I do enjoy, in Python 2.7, the function map:

Python 2.7.12
In[2]: map(lambda x: x+1, [1,2,3])
Out[2]: [2, 3, 4]

However, in Python 3.x things have changed:

Python 3.5.1
In[2]: map(lambda x: x+1, [1,2,3])
Out[2]: <map at 0x4218390>

I understand the how, but I could not find a reference to the why. Why did the language designers make this choice, which, in my opinion, introduces a great deal of pain? Was this to arm-wrestle developers into sticking to list comprehensions? IMO, lists can naturally be thought of as functors; and I have somehow been taught to think this way: fmap :: (a -> b) -> f a -> f b
Because it returns an iterator, it omits storing the full list in memory, so you can easily iterate over it later without straining memory. Possibly you don't even need the full list, but only part of it, until your condition is matched. You can find these docs useful, because iterators are awesome.

An object representing a stream of data. Repeated calls to the iterator's __next__() method (or passing it to the built-in function next()) return successive items in the stream. When no more data are available a StopIteration exception is raised instead. At this point, the iterator object is exhausted and any further calls to its __next__() method just raise StopIteration again. Iterators are required to have an __iter__() method that returns the iterator object itself so every iterator is also iterable and may be used in most places where other iterables are accepted. One notable exception is code which attempts multiple iteration passes. A container object (such as a list) produces a fresh new iterator each time you pass it to the iter() function or use it in a for loop. Attempting this with an iterator will just return the same exhausted iterator object used in the previous iteration pass, making it appear like an empty container.
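A small illustration of the lazy, single-pass behaviour, and of how to get the old eager result back when you really want it:

m = map(lambda x: x + 1, [1, 2, 3])
print(next(m))     # 2 -- computed on demand, nothing else evaluated yet
print(list(m))     # [3, 4] -- consuming the rest exhausts the iterator
print(list(m))     # [] -- a second pass yields nothing
print(list(map(lambda x: x + 1, [1, 2, 3])))   # [2, 3, 4] -- explicit list() restores the 2.7 result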
list() uses more memory than list comprehension
So I was playing with list objects and found a strange little thing: a list created with list() uses more memory than a list comprehension. I'm using Python 3.5.2.

In [1]: import sys
In [2]: a = list(range(100))
In [3]: sys.getsizeof(a)
Out[3]: 1008
In [4]: b = [i for i in range(100)]
In [5]: sys.getsizeof(b)
Out[5]: 912
In [6]: type(a) == type(b)
Out[6]: True
In [7]: a == b
Out[7]: True
In [8]: sys.getsizeof(list(b))
Out[8]: 1008

From the docs:

Lists may be constructed in several ways:
Using a pair of square brackets to denote the empty list: []
Using square brackets, separating items with commas: [a], [a, b, c]
Using a list comprehension: [x for x in iterable]
Using the type constructor: list() or list(iterable)

But it seems that using list() uses more memory. And the bigger the list, the bigger the gap. Why does this happen?

UPDATE #1

Test with Python 3.6.0b2:

Python 3.6.0b2 (default, Oct 11 2016, 11:52:53) [GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getsizeof(list(range(100)))
1008
>>> sys.getsizeof([i for i in range(100)])
912

UPDATE #2

Test with Python 2.7.12:

Python 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getsizeof(list(xrange(100)))
1016
>>> sys.getsizeof([i for i in xrange(100)])
920
I think you're seeing over-allocation patterns this is a sample from the source: /* This over-allocates proportional to the list size, making room * for additional growth. The over-allocation is mild, but is * enough to give linear-time amortized behavior over a long * sequence of appends() in the presence of a poorly-performing * system realloc(). * The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... */ new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6); Printing the sizes of list comprehensions of lengths 0-88 you can see the pattern matches: # create comprehensions for sizes 0-88 comprehensions = [sys.getsizeof([1 for _ in range(l)]) for l in range(90)] # only take those that resulted in growth compared to previous length steps = zip(comprehensions, comprehensions[1:]) growths = [x for x in list(enumerate(steps)) if x[1][0] != x[1][1]] # print the results: for growth in growths: print(growth) Results (format is (list length, (old total size, new total size))): (0, (64, 96)) (4, (96, 128)) (8, (128, 192)) (16, (192, 264)) (25, (264, 344)) (35, (344, 432)) (46, (432, 528)) (58, (528, 640)) (72, (640, 768)) (88, (768, 912)) The over-allocation is done for performance reasons allowing lists to grow without allocating more memory with every growth (better amortized performance). A probable reason for the difference with using list comprehension, is that list comprehension can not deterministically calculate the size of the generated list, but list() can. This means comprehensions will continuously grow the list as it fills it using over-allocation until finally filling it. It is possible that is will not grow the over-allocation buffer with unused allocated nodes once its done (in fact, in most cases it wont, that would defeat the over-allocation purpose). list(), however, can add some buffer no matter the list size since it knows the final list size in advance. Another backing evidence, also from the source, is that we see list comprehensions invoking LIST_APPEND, which indicates usage of list.resize, which in turn indicates consuming the pre-allocation buffer without knowing how much of it will be filled. This is consistent with the behavior you're seeing. To conclude, list() will pre-allocate more nodes as a function of the list size >>> sys.getsizeof(list([1,2,3])) 60 >>> sys.getsizeof(list([1,2,3,4])) 64 List comprehension does not know the list size so it uses append operations as it grows, depleting the pre-allocation buffer: # one item before filling pre-allocation buffer completely >>> sys.getsizeof([i for i in [1,2,3]]) 52 # fills pre-allocation buffer completely # note that size did not change, we still have buffered unused nodes >>> sys.getsizeof([i for i in [1,2,3,4]]) 52 # grows pre-allocation buffer >>> sys.getsizeof([i for i in [1,2,3,4,5]]) 68
Removing elements from an array that are in another array
Say I have these 2D arrays A and B. How can I remove elements from A that are in B?

A=np.asarray([[1,1,1], [1,1,2], [1,1,3], [1,1,4]])
B=np.asarray([[0,0,0], [1,0,2], [1,0,3], [1,0,4], [1,1,0], [1,1,1], [1,1,4]])
#output = [[1,1,2], [1,1,3]]

To be more precise, I would like to do something like this:

data = some numpy array
label = some numpy array
A = np.argwhere(label==0)   #[[1 1 1], [1 1 2], [1 1 3], [1 1 4]]
B = np.argwhere(data>1.5)   #[[0 0 0], [1 0 2], [1 0 3], [1 0 4], [1 1 0], [1 1 1], [1 1 4]]
out = np.argwhere(label==0 and data>1.5)   #[[1 1 2], [1 1 3]]
Here is a Numpythonic approach with broadcasting: In [83]: A[np.all(np.any((A-B[:, None]), axis=2), axis=0)] Out[83]: array([[1, 1, 2], [1, 1, 3]]) Here is a timeit with other answer: In [90]: def cal_diff(A, B): ....: A_rows = A.view([('', A.dtype)] * A.shape[1]) ....: B_rows = B.view([('', B.dtype)] * B.shape[1]) ....: return np.setdiff1d(A_rows, B_rows).view(A.dtype).reshape(-1, A.shape[1]) ....: In [93]: %timeit cal_diff(A, B) 10000 loops, best of 3: 54.1 µs per loop In [94]: %timeit A[np.all(np.any((A-B[:, None]), axis=2), axis=0)] 100000 loops, best of 3: 9.41 µs per loop # Even better with Divakar's suggestion In [97]: %timeit A[~((A[:,None,:] == B).all(-1)).any(1)] 100000 loops, best of 3: 7.41 µs per loop Well, if you are looking for a faster way you should looking for ways that reduce the number of comparisons. In this case (without considering the order) you can generate a unique number from your rows and compare the numbers which can be done with summing the items power of two. Here is the benchmark with Divakar's in1d approach: In [144]: def in1d_approach(A,B): .....: dims = np.maximum(B.max(0),A.max(0))+1 .....: return A[~np.in1d(np.ravel_multi_index(A.T,dims),\ .....: np.ravel_multi_index(B.T,dims))] .....: In [146]: %timeit in1d_approach(A, B) 10000 loops, best of 3: 23.8 µs per loop In [145]: %timeit A[~np.in1d(np.power(A, 2).sum(1), np.power(B, 2).sum(1))] 10000 loops, best of 3: 20.2 µs per loop You can use np.diff to get the an order independent result: In [194]: B=np.array([[0, 0, 0,], [1, 0, 2,], [1, 0, 3,], [1, 0, 4,], [1, 1, 0,], [1, 1, 1,], [1, 1, 4,], [4, 1, 1]]) In [195]: A[~np.in1d(np.diff(np.diff(np.power(A, 2))), np.diff(np.diff(np.power(B, 2))))] Out[195]: array([[1, 1, 2], [1, 1, 3]]) In [196]: %timeit A[~np.in1d(np.diff(np.diff(np.power(A, 2))), np.diff(np.diff(np.power(B, 2))))] 10000 loops, best of 3: 30.7 µs per loop Benchmark with Divakar's setup: In [198]: B = np.random.randint(0,9,(1000,3)) In [199]: A = np.random.randint(0,9,(100,3)) In [200]: A_idx = np.random.choice(np.arange(A.shape[0]),size=10,replace=0) In [201]: B_idx = np.random.choice(np.arange(B.shape[0]),size=10,replace=0) In [202]: A[A_idx] = B[B_idx] In [203]: %timeit A[~np.in1d(np.diff(np.diff(np.power(A, 2))), np.diff(np.diff(np.power(B, 2))))] 10000 loops, best of 3: 137 µs per loop In [204]: %timeit A[~np.in1d(np.power(A, 2).sum(1), np.power(B, 2).sum(1))] 10000 loops, best of 3: 112 µs per loop In [205]: %timeit in1d_approach(A, B) 10000 loops, best of 3: 115 µs per loop Timing with larger arrays (Divakar's solution is slightly faster): In [231]: %timeit A[~np.in1d(np.diff(np.diff(np.power(A, 2))), np.diff(np.diff(np.power(B, 2))))] 1000 loops, best of 3: 1.01 ms per loop In [232]: %timeit A[~np.in1d(np.power(A, 2).sum(1), np.power(B, 2).sum(1))] 1000 loops, best of 3: 880 µs per loop In [233]: %timeit in1d_approach(A, B) 1000 loops, best of 3: 807 µs per loop