UbuntuIRC / 2012/07/02 / #juju-dev.txt
[03:18] <Aram> morning.
[06:19] <TheMue> Morning
[06:21] <davecheney> howdy
[06:23] <Aram> hey.
[06:29] <fwereade> davecheney, Aram, TheMue, morning
[06:39] <davecheney> fwereade: morning
[06:39] <davecheney> fwereade: looking forward to your testing change getting merged, we have a lot of overlap
[06:39] <fwereade> davecheney, just saw your mail
[06:40] <fwereade> davecheney, I'm feeling very conflicted over it
[06:40] <fwereade> davecheney, part of me is saying "just break it into 6 separate CLs"
[06:40] <davecheney> fwereade: i don't think that will reduce the wall time of merging it
[06:41] <fwereade> davecheney, but a bigger part is saying "this is really all one change: it's actually going to make it harder and take longer"
[06:41] <fwereade> davecheney, cool, glad it seems that way to you too :)
[06:41] <davecheney> fwereade: if there are improvements to be made, they can be done after this change is merged
[06:42] <fwereade> davecheney, let's hope niemeyer sees it that way :)
[06:42] <davecheney> aye, there's the rub
[06:50] <TheMue> Hmm, once again morning, connection seems to be broken.
[07:01] <fwereade> TheMue, heyhey
[07:04] <TheMue> fwereade: How has the weekend been?
[07:12] <fwereade> TheMue, very nice thanks
[07:13] <fwereade> TheMue, went to a charmingly low-rent charity event on saturday -- bunch of different foods and presentations from various nationalities
[07:16] <davecheney> have a nice evening folks, i'll be online later
[07:24] <TheMue> fwereade: We've been outside a lot, at friends on Saturday and in our own garden and a park on Sunday.
[07:24] <fwereade> TheMue, lovely :)
[07:24] <TheMue> fwereade: Enjoyed it a lot.
[07:25] <fwereade> TheMue, I bet, sounds pleasingly idyllic
[07:26] <TheMue> fwereade: Yep. We've got a park here near our home town which was founded for a larger exhibition more than 10 years ago. And since then we've been there several times each year.
[07:27] <TheMue> fwereade: https://plus.google.com/photos/107694490695522997974/albums/5760290760673390033 shows some pics from yesterday
[07:28] <fwereade> TheMue, awesome!
[07:29] <fwereade> TheMue, Malta is a bit lacking in green spaces, they're probably what I miss most
[07:29] <TheMue> fwereade: Hehe, yes, can imagine. Your change hasn't been small. Where in England did you live before?
[07:31] <fwereade> TheMue, london, which is pretty green really all things considered
[07:31] <fwereade> TheMue, I grew up in the countryside in gloucestershite though so that's really what I feel is a "correct" environment, if you know what I mean
[07:32] <fwereade> er, gloucestershi*r*e
[07:32] <TheMue> fwereade: *rofl*
[07:32] <fwereade> TheMue, it's a nice place, a little village in the Cotswolds called Eastleach
[07:33] <TheMue> fwereade: I sadly haven't had the chance to visit Britain yet, but there are so many places I want to see.
[07:33] <TheMue> fwereade: From crowded London to the Outer Hebrides.
[07:33] <fwereade> TheMue, never been on the outlying islands
[07:34] <fwereade> TheMue, been camping by the sea pretty far north in scotland
[07:34] <TheMue> fwereade: So far I've only been in Heathrow for transit. *lol*
[07:34] <fwereade> TheMue, haha
[07:34] <fwereade> TheMue, the lake district is gorgeous
[07:35] <TheMue> fwereade: We've seen so many fantastic locations on TV. I think we'll take a longer time and a Land Rover Defender and then cruise all over the UK.
[07:37] <fwereade> TheMue, I can think of worse ways to spend a month :)
[07:38] <TheMue> fwereade: I would also take another great British car, but that would be a bit expensive: an Aston Martin.
[07:39] <fwereade> TheMue, haha
[07:41] <TheMue> fwereade: I've got a part of a single malt cask on the Isle of Arran. We bought it a few years ago with some friends.
[07:42] <fwereade> TheMue, I remember talking about it in budapest :)
[07:43] <TheMue> fwereade: I can't hide that I've got a passion for British culture, even though I've not been there. Funny, isn't it?
[07:47] <fwereade> TheMue, it has good bits and bad bits, but the good bits somehow seem to have generated a lot of good PR ;)
[07:48] <TheMue> fwereade: You definitely have better insights than me. OTOH I'm only a tourist. :D
[08:14] <TheMue> fwereade: I'm off for about an hour, routine visit at the dentist.
[09:00] <hazmat> morning folks, tool question, does lbox submit -adopt notice local modifications you've had to make to someone else's branch?
[09:32] <hazmat> it does
[10:42] <Aram> I hate it when I fix bugs but I don't understand my own fix.
[10:44] <Aram> heh.
[10:44] <Aram> got it.
[10:44] <Aram> wow, this one was subtle.
[11:13] <hazmat> fwereade, are there any docs extant for the various cli changes in gojuju?
[11:13] <hazmat> actually even auto-generated go docs at a public site would be nice
[11:15] <fwereade> hazmat, heyhey
[11:16] <fwereade> hazmat, --help will tell you what exists, but it is not otherwise explicitly available anywhere; that is a sensible idea
[11:21] * hazmat works through compiling the tree
[11:21] <hazmat> oh.. install is now get
[11:22] <hazmat> bson in charm urls?
[11:24] <hazmat> has install always been get? it's been a while i guess
[11:24] <hazmat> oh.. the bson is the incremental serialization work
[11:24] <hazmat> for mongo
[11:25] <hazmat> i thought most of that was in mstate
[11:29] <fwereade> hazmat, sorry, I am missing a little bit of context
[11:29] <hazmat> fwereade, ./cloudinit_test.go:210: undefined: Commentf ? do i need a more recent version of go? or am i missing a lib
[11:29] <fwereade> hazmat, that's in gocheck
[11:29] <hazmat> fwereade, shouldn't i have gotten an import error then?
[11:30] <fwereade> hazmat, probably out-of-date
[11:31] <fwereade> hazmat, `go get -u launchpad.net/juju-core/...`, I think (but make sure you don't have local changes you want to keep if you do this)
[11:31] <fwereade> hazmat, also worth checking what version of go you have
[11:31] <hazmat> 1.0.1
[11:32] <hazmat> fwereade, re go get juju-core what's the ... ?
[11:32] <Aram> literal ...
[11:32] <hazmat> sure.. just curious what it meant
[11:32] <fwereade> hazmat, and everything underneath it
[11:32] <hazmat> ah
[11:32] <hazmat> fwereade, thanks
[11:32] <fwereade> hazmat, (or imported by it, but that's go get's work not the ...'s, if you see what I mean)
[11:33] <fwereade> hazmat, not sure whether 1.0.1 will pick all the latest library versions, if you still have trouble consider updating that
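For reference, Commentf comes from gocheck, so an out-of-date checkout of that library is exactly what makes it undefined. A minimal sketch of how it is normally used, assuming the launchpad.net/gocheck import path of the time (the test body itself is illustrative):

    package cloudinit_test

    import (
        "testing"

        . "launchpad.net/gocheck"
    )

    // Hook gocheck into the standard go test runner.
    func Test(t *testing.T) { TestingT(t) }

    type S struct{}

    var _ = Suite(&S{})

    func (s *S) TestLines(c *C) {
        for i, line := range []string{"#cloud-config", "apt_upgrade: true"} {
            // Commentf attaches extra context to the failure message; it is
            // undefined if the local gocheck checkout is stale.
            c.Assert(line, Not(Equals), "", Commentf("line %d was empty", i))
        }
    }

As discussed above, the trailing ... in `go get -u launchpad.net/juju-core/...` means juju-core and everything underneath it, and -u also refreshes the packages those import, gocheck included.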
[11:59] <fwereade> hazmat, offhand, do you have any idea of the approximate range of ratios between zookeeper time and wall clock time at the client end of a specific connection?
[12:02] <fwereade> hazmat, s/zookeeper time and/the apparent rates of progression of zookeeper time and of/
[12:04] <hazmat> fwereade, not sure what you mean.. notifications from zk are delivered in order
[12:05] <hazmat> but the delay in delivery is subject to the quality of the network connection
[12:05] <fwereade> hazmat, they are delivered in order but 2 conns do not necessarily have the same idea of "now"
[12:05] <hazmat> fwereade, right
[12:05] <fwereade> hazmat, not just that, I'm pretty sure the docs state that two conns can be out of sync by order-of 10s of seconds
[12:05] <hazmat> fwereade, each conn is an independent view of the ordered stream
[12:06] <fwereade> hazmat, quite so; this indicates that a client may see two events that are 100ms apart in ZK time arrive only 50ms apart in wall clock time
[12:06] <fwereade> hazmat, or possibly 1ms apart
[12:06] <hazmat> fwereade, that sounds odd, but given a conn on a separate server of the zk quorum that's a little out of date perhaps.. the conn can request the server catch up via sync
[12:07] <hazmat> fwereade, in what context is that an issue?
[12:07] <fwereade> hazmat, yeah, I know about sync, it's not in gozk AFAICS
[12:07] <hazmat> are you trying to have multiple conns in a single server?
[12:07] <hazmat> er. process
[12:07] <fwereade> hazmat, testing of the presence nodes using two separate connections
[12:08] <hazmat> fwereade, the possibility of long deltas in independent views of time is greatly diminished if they're connected to the same server with the same network quality
[12:08] <hazmat> fwereade, there are numerous tests in txzk that i've looped tens of thousands of times that exercise multiple conns fwiw
[12:09] <hazmat> fwereade, is this a theoretical concern or something you've seen in practice?
[12:09] <fwereade> hazmat, ok; but I have directly observed an alternate connection, running in test X, to have a conception of ZK "now" that corresponds to a state that was current during a previous test
[12:10] <hazmat> fwereade, are the tests building state across tests?
[12:10] <fwereade> hazmat, this happens very unpredictably
[12:10] <fwereade> hazmat, no
[12:10] <fwereade> hazmat, the main connection nukes everything between test cases
[12:10] <hazmat> so it sounds like a test framework issue then with cleanup
[12:11] <hazmat> fwereade, or perhaps a bug in gozk delivering events on closed conns
[12:11] <hazmat> fwereade, does the conn get closed/open for teardown/setup?
[12:11] <fwereade> hazmat, no; I'll try doing that
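In outline, the close-and-reopen-per-test suggestion would look something like this in a gocheck suite; the gozk import path, the Dial/Close signatures and the server address are assumptions for illustration, not the actual juju presence test code:

    package presence_test

    import (
        "testing"
        "time"

        . "launchpad.net/gocheck"
        zk "launchpad.net/gozk/zookeeper"
    )

    func Test(t *testing.T) { TestingT(t) }

    type PresenceSuite struct {
        conn *zk.Conn
    }

    var _ = Suite(&PresenceSuite{})

    // SetUpTest dials a fresh connection for every test, so that watch events
    // queued against a previous test's state cannot be delivered on it.
    func (s *PresenceSuite) SetUpTest(c *C) {
        conn, _, err := zk.Dial("localhost:2181", 5*time.Second) // address and timeout assumed
        c.Assert(err, IsNil)
        s.conn = conn
    }

    // TearDownTest closes the connection again; the recursive delete of test
    // state mentioned later in the conversation would also live here.
    func (s *PresenceSuite) TearDownTest(c *C) {
        c.Assert(s.conn.Close(), IsNil)
    }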
[12:12] <hazmat> fwereade, if it doesn't then yes.. it's quite possible you're getting event delivery on old state
[12:12] <fwereade> hazmat, but based on what I've seen that will only add more uncertainty...
[12:12] <hazmat> keep in mind you've only got one execution thread
[12:12] <fwereade> hazmat, new conns get events from older state
[12:13] <hazmat> fwereade, was the old conn closed?
[12:13] <fwereade> hazmat, no, the old conn is still being used to perform new operations, which I expect the alternate conn to respond to
[12:13] <fwereade> hazmat, or vice versa in some tests
[12:13] <hazmat> fwereade, that sounds like an event dispatch issue
[12:14] <hazmat> i'd try instrumenting gozk
[12:14] <fwereade> hazmat, is it not behaviour consistent with the somewhat loose guarantees that ZK makes?
[12:14] <hazmat> and printing out with handle info
[12:14] <fwereade> hazmat, yeah, I will look into that, it's perfectly plausible
[12:14] <hazmat> fwereade, old events on new conns? no
[12:15] <hazmat> that's not part of the guarantee, the event only happens on state observation
[12:15] <hazmat> and temporarily if that state is new, it should never see an old event
[12:15] <fwereade> hazmat, potentially old view of history implies potentially old state changes, doesn't it?
[12:15] <hazmat> i.e. seeing an old event on new state would be a violation of the guarantee zk makes
[12:15] <fwereade> hazmat, does a new conn guarantee up-to-date view of history?
[12:16] <hazmat> fwereade, temporarily it should be up to date
[12:16] <fwereade> hazmat, it explicitly does not AIUI
[12:16] <hazmat> fwereade, connected to the same server yes it does
[12:16] <hazmat> fwereade, the only exception is if you're running a quorum of servers
[12:16] <hazmat> and connecting to a not quite up to date server with the new conn
[12:17] <fwereade> hazmat, hold on though: it guarantees that a single client will only see a single view of history, and that that view is independent of the server it connects to
[12:17] <hazmat> it's not eventually consistent
[12:17] <fwereade> hazmat, therefore, surely, it is possible that two clients connected to the same server may have an alternate view of history
[12:17] <hazmat> fwereade, feel free to verify, but it sounds like a code bug not a zk bug
[12:18] <hazmat> fwereade, the state is in memory on the zk server and modified by each op
[12:18] <hazmat> fwereade, and flushed to disk; a new client will see current state
[12:18] <hazmat> as i said there are exceptions, but not to a single server setup
[12:19] <fwereade> hazmat, Single System Image
[12:19] <fwereade> A client will see the same view of the service regardless of the server that it connects to.
[12:19] <hazmat> and even then the limiting factor is the overall speed of the quorum to propagate changes
[12:19] <hazmat> fwereade, history doesn't exist from a client perspective, there is only present state and future observation
[12:20] <fwereade> hazmat, yes: but for it to have the single system image, history surely *must* exist at the server level?
[12:20] <hazmat> fwereade, yes.. but you're asking about the delta between multiple clients
[12:20] <hazmat> fwereade, it does but it's not exposed
[12:20] <hazmat> fwereade, and it's only the delta on disk, not in mem
[12:24] <fwereade> hazmat, will look into it further, but still unconvinced that the fact the server is standalone guarantees consistency of client connections
[12:24] <hazmat> fwereade, the watch notifications for the client are in mem and are queued up
[12:24] <hazmat> again the notification carries no state
[12:25] <hazmat> only the change info, observation is required to capture state
[12:25] <hazmat> fwereade, feel free to verify, but it sounds like a code bug not a zk bug
[12:26] <hazmat> fwereade, the zk lists are pretty helpful
[12:26] <fwereade> hazmat, sure, that's the plan -- but I'm not actually claiming a ZK bug, I'm just claiming that this surprising behaviour is not actually inconsistent with the guarantees made by the docs
[12:27] <hazmat> fwereade, that a new client sees non current state
[12:27] <hazmat> against a single server
[12:28] <hazmat> i don't think so
[12:28] <fwereade> hazmat, I agree that the explanation for this bit:
[12:28] <fwereade> Sometimes developers mistakenly assume one other guarantee that ZooKeeper does not in fact make. This is:
[12:28] <fwereade> Simultaneously Conistent Cross-Client Views
[12:28] <fwereade> hazmat, *does* always mention multiple servers
[12:30] <hazmat> fwereade, like i said.. on a single server.. that's not possible.. we have many tests in python to verify that
[12:31] <hazmat> but i'm just repeating myself at this point...
[12:31] <hazmat> fwereade, and the form of consistency i'm referencing is weaker than what's in there
[12:32] <fwereade> hazmat, ok, I misunderstood your statement that you had verified this precise behaviour, and I accept your diagnosis of the likely cause; that is why I'm looking into it ;)
[12:32] <fwereade> hazmat, the actual original question I asked though is different, and does potentially involve multiple servers
[12:32] <fwereade> hazmat, and client connections from separate machines
[12:33] <hazmat> fwereade, perhaps you should backup and explain what the goal is?
[12:36] <fwereade> hazmat, the goal is to understand what could be causing unpredictable test failures in which two separate zk connections on the client are respectively seeing snapshots of state that appear to indicate that deleting a node on conn A does not guarantee that the next request for state made on conn B will see that the node has been deleted
[12:36] <hazmat> fwereade, has the delete node op completed before B makes a request?
[12:37] <hazmat> fwereade, and you don't want to restrict to single server?
[12:37] <fwereade> hazmat, the calls are performed synchronously; my understanding is that the call completing without error indicates that the operation has completed successfully
[12:38] <fwereade> hazmat, in the general case of the problem, assuming multiple zookeeper servers, my initial question about relative rates of time progression may be relevant, but just for now we can worry about single servers
[12:38] <hazmat> fwereade.. well it's still likely async.. there are two results to check
[12:39] <hazmat> the api call, and the result call
[12:40] <hazmat> given that you've got a single server (ignoring multi-server for a moment) for your tests it would appear to be a bug in gozk
[12:40] <hazmat> the unpredictable nature of the failures reinforces that guess, namely that something isn't properly waiting on results
[12:41] <fwereade> hazmat, ok, so, by "still likely async", do you mean "a line of code immediately following `err := conn.Delete("/some/path"); c.Assert(err, IsNil)` is not guaranteed to see the change"
[12:41] <hazmat> fwereade, you need to go deeper
[12:41] <hazmat> fwereade, conn.Delete is what?
[12:41] <hazmat> a gozk binding to the libzk
[12:41] <hazmat> underneath the hood its doing what
[12:41] <fwereade> hazmat, that is what I intend to do
[12:41] <hazmat> i'd guess adelete
[12:41] <hazmat> which is async
[12:42] * hazmat takes a look
[12:43] <hazmat> interesting
[12:43] <fwereade> hazmat, zoo_delete which I presume is not adelete
[12:43] <Aram> Delete does zoo_delete which is synchronous.
[12:43] <Aram> gozk is sync.
[12:43] <hazmat> wow
[12:43] <hazmat> ok..
[12:44] <hazmat> fwereade, so i'd suggest instrumenting delete and the subsequent client op with some prints
[12:44] <Aram> I suspect you simply check against the wrong version.
[12:45] <Aram> that's why you don't see the change.
[12:45] * Aram didn't read much of the backlog.
[12:45] <hazmat> so gozk is sync, and gojuju runs with a single thread ?
[12:45] <hazmat> Aram, interesting idea
[12:48] <hazmat> Aram, versions are only passed for modifications
[12:48] <hazmat> in this case its an observation that shows old state
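To pin down what is being claimed: gozk's Conn.Delete is a binding to the synchronous zoo_delete, so once it returns without error the deletion has been applied on the server, and an observation on a second connection to the same standalone server is then expected to miss the node. A rough, test-shaped sketch in the style of the fixture above; the gozk signatures, the ACL helper and the address are assumptions here:

    // A test-shaped sketch of the scenario under discussion.
    func (s *PresenceSuite) TestDeleteSeenByOtherConn(c *C) {
        // Second, independent connection to the same standalone server.
        connB, _, err := zk.Dial("localhost:2181", 5*time.Second)
        c.Assert(err, IsNil)
        defer connB.Close()

        _, err = s.conn.Create("/some/path", "", 0, zk.WorldACL(zk.PERM_ALL))
        c.Assert(err, IsNil)

        // Delete wraps the synchronous zoo_delete; -1 means "any version".
        c.Assert(s.conn.Delete("/some/path", -1), IsNil)

        // conn B's next observation should no longer find the node.
        stat, err := connB.Exists("/some/path")
        c.Assert(err, IsNil)
        c.Assert(stat, IsNil)
    }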
[12:49] <fwereade> hazmat, sorry, lunch
[12:49] * hazmat moves onto openstack provider review
[13:10] <hazmat> fwereade, could you pastebin the test in question?
[13:18] <fwereade> hazmat, the clearest situation is in line 10 of http://paste.ubuntu.com/1071241/ -- when it occurs, other connections are known to have been going around creating the node we're watching in the past
[13:20] <hazmat> fwereade, that looks like the same connection?
[13:20] <fwereade> hazmat, yes, it is an alternate connection in a previous test about which I am concerned
[13:21] <niemeyer> Gooooood morning!
[13:21] <Aram> hi niemeyer.
[13:21] <Aram> how was your trip?
[13:22] <hazmat> niemeyer, g'morning, how was i/o?
[13:22] <Aram> and how was SF?
[13:22] <niemeyer> hazmat: Superb
[13:22] <niemeyer> Aram: Superb too :)
[13:23] <niemeyer> Aram: Great to meet all the folks
[13:25] <TheMue> niemeyer: Heya
[13:25] <Aram> niemeyer: yeah, that must have been great.
[13:25] <hazmat> fwereade, what happens to the conn in setup/teardown?
[13:27] <hazmat> is there any way to get the go test to be verbose about test cases being run?
[13:27] <fwereade> hazmat, in TearDownTest, recursively delete everything and (IIRC, checking) panic on error except nonode
[13:27] <niemeyer> http://arethegovideosupyet.com/ < This is great :)
[13:27] <hazmat> fwereade, so it is the same open connection for multiple tests?
[13:27] <fwereade> hazmat, -test.v -gocheck.vv should give you plenty
[13:27] <fwereade> hazmat, yes
[13:28] <TheMue> niemeyer: Yep, funny idea. Waiting for the next ones to be online.
[13:29] <niemeyer> How're things going in juju-dev land?
[13:32] <TheMue> niemeyer: First a hurdle but now moving forward.
[14:42] <niemeyer> fwereade: So, hazmat tells me we're not testing our code properly because we have a bug. What's up there?
[14:42] <fwereade> niemeyer, I am still trying to characterize it properly
[14:43] <fwereade> niemeyer, the stars appear to be aligned such that I can repro more often than not
[14:43] <niemeyer> fwereade: Ah, this is perfect
[14:43] <fwereade> niemeyer, but I am still trying to coax an "aha" out of the data
[14:43] <niemeyer> fwereade: http://paste.ubuntu.com/1071241/ is this the test?
[14:43] <fwereade> niemeyer, that is one of the many that *can* exhibit anomalous behaviour
[14:44] <fwereade> niemeyer, but the vast majority of the presence suite has been *slightly* flaky for a while, and I now appear to be close to pinning down the problem
[14:44] <niemeyer> fwereade: Ah, that's awesome
[14:47] <niemeyer> fwereade: Does it fail if you run just presence in isolation?
[14:47] <fwereade> niemeyer, very very very rarely
[14:47] <niemeyer> fwereade: Your more-often-than-not is achieved with a few packages, or just with the whole suite?
[14:48] <fwereade> niemeyer, just state
[14:48] <niemeyer> Nice
[14:49] <niemeyer> fwereade: What's the liveness timing being used by the tests?
[14:49] <fwereade> niemeyer, 50ms, and there is certainly something tricksy there which I think I am close to accounting for
[14:50] <fwereade> niemeyer, ie sometimes we get mtimes more than 100ms apart when we do that
[14:50] <niemeyer> fwereade: This may well be the issue
[14:50] <niemeyer> fwereade: The GC may be stopping to collect stuff
[14:50] <fwereade> niemeyer, this is surely *part* of the issue
[14:50] <fwereade> niemeyer, I am trying to eliminate it and see whether I can goose the weirder one into existence
[14:55] <niemeyer> fwereade: Does it change the situation if you set GOMAXPROCS=4?
[15:03] <fwereade> niemeyer, hmm, quite possibly; but that's another dimension of phase space that I don't need right now I think
[15:04] <niemeyer> fwereade: Well, perhaps without the parallel collection that went into tip it wouldn't make much of a difference anyway
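For completeness, GOMAXPROCS=4 in the environment is roughly equivalent to setting it from the test binary itself; a minimal sketch (the package name is illustrative):

    package presence_test

    import "runtime"

    func init() {
        // Roughly equivalent to running the tests with GOMAXPROCS=4 in the
        // environment: allow up to four OS threads to execute Go code at once.
        runtime.GOMAXPROCS(4)
    }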
[15:10] <fwereade> niemeyer, well, we need timing tweaks, but the real important ones will come in real usage I think
[15:11] <fwereade> niemeyer, if we make them noticeably more generous than whatever I find to be rock-solid in test usage we will hopefully be ok
[15:11] <niemeyer> fwereade: Agreed
[15:12] <niemeyer> fwereade: Is there anything more unusual than the timing bits, that you'd like a second pair of eyes over?
[15:15] <fwereade> niemeyer, if I can't figure it out today I will cry uncle, but I feel like I'm converging on something
[15:16] <niemeyer> fwereade: Superb, you have me excited on the other side meanwhile ;-D
[15:16] <fwereade> niemeyer, cool :)
=== niemeyer_ is now known as niemeyer
=== Aram2 is now known as Aram
[18:22] <niemeyer> fwereade: Any luck there?
[18:33] <fwereade> niemeyer, broke for supper at an opportune point; it looks like there was a case we'd missed in the code, subtly distinct from the normal failures due to gc pauses/whatever
[18:34] <fwereade> niemeyer, state seems to be solid, given a fix for that
[18:34] <fwereade> niemeyer, trying a few full runs
[18:34] <niemeyer> fwereade: Oh, so happy that you found it.. even if I don't know what the issue really is yet :-)
[18:35] <fwereade> niemeyer, there's another bit of the issue which is trivial and embarrassing and kinda contributed to some of the confusion
[18:35] <fwereade> niemeyer, not all the tests were nicely cleaning up pingers on failure
[18:36] <fwereade> niemeyer, so I plan to do a pass for that too
[18:36] <niemeyer> fwereade: Aha, that was my initial guess at the problem
[18:36] <niemeyer> fwereade: Not to be embarrassed, though.. it's kind of easy to miss tear downs in any testing
[18:37] <fwereade> niemeyer, the thing is, I knew about that -- the first failure often causes a cascade -- but there was always a particular mode of initial failure that didn't seem to match reality
[18:38] <niemeyer> fwereade: Well, I appreciate you going after root cause.. a lot of people just ignore the obvious hints and go for the trivial solutions
[18:38] <fwereade> niemeyer, has to be done really
[18:38] <fwereade> niemeyer, fwiw there's a very occasional store failure that I never remember to capture
[18:38] <fwereade> niemeyer, primarily because the sheer weight of mongo logs is intimidating
[18:39] <fwereade> niemeyer, I promise that next time I see it I will make a proper bug
[18:39] <fwereade> ;)
[18:39] <niemeyer> fwereade: If it's the one I'm thinking of, it's just timing
[18:40] <fwereade> niemeyer, sounds very plausible
[18:40] <niemeyer> fwereade: I feel bad for it.. I've been kind of postponing increasing the timing to see if it will force me to get the test to run faster rather than going for the easy solution
[18:40] <niemeyer> fwereade: I can tell it's not working so far
[18:40] <fwereade> niemeyer, I can sympathise
=== philipballew_ is now known as philipballew
[19:14] <niemeyer> brb
=== robbiew is now known as robbiew-afk
[21:59] <niemeyer> fwereade: Dude
[21:59] <niemeyer> fwereade: There?
=== robbiew-afk is now known as robbiew
[22:37] <niemeyer> davecheney: Morning!
[22:38] <davecheney> niemeyer: howdy!
[22:39] <niemeyer> davecheney: Good to see you around from the usual time zone :)
[22:39] <niemeyer> davecheney: Less overlap, but at least it's easy to actually talk :)
[22:39] <davecheney> indeed
[22:39] <davecheney> hows things ?
[22:39] <niemeyer> Pretty good. Just having a look at William's monster branch
[22:39] <niemeyer> Looks very nice
[22:40] <davecheney> niemeyer: it would be awesome if that got a green light
[22:40] <davecheney> i need his refactorings of zksuite etc for the local ec2 tests
[22:40] <niemeyer> davecheney: Thanks for reviewing it too, btw.. really appreciate having more eyes
[22:40] <davecheney> niemeyer: anytime
[22:40] <niemeyer> davecheney: It will surely get a green light
[22:40] <davecheney> it is big, and a lot of the changes are one line per file
[22:40] <niemeyer> davecheney: huge improvement overall
[22:41] <davecheney> indeed
[22:41] <niemeyer> davecheney: The concerns I have so far are lateral.. e.g., mstate needs to be included
[22:46] <davecheney> niemeyer: in related news, I have a fix for the location constraint issue in goamz
[22:46] <davecheney> but am unsure how to write tests for it
[22:57] <niemeyer> davecheney: Hmm
[22:58] <niemeyer> davecheney: Can you please open a CL with the fix?
[22:58] <niemeyer> davecheney: I can have a look and suggest something
[22:58] <davecheney> niemeyer: twosecs
[22:59] <davecheney> niemeyer: https://codereview.appspot.com/6344050
[23:00] <niemeyer> davecheney: Checking
[23:01] <davecheney> niemeyer: i can add support for LocationConstraint parsing into s3 test if you like
[23:03] <niemeyer> davecheney: Okay, so
[23:03] <niemeyer> davecheney: The testing seems to be easy to do inside s3_test.go
[23:04] <niemeyer> davecheney: We mock the server, and can easily compare the result against something we own.
[23:04] <niemeyer> davecheney: We don't even need to parse it
[23:04] <niemeyer> davecheney: Check out.. hmmm..
[23:05] <niemeyer> davecheney: Well, we actually don't have an example yet
[23:05] <niemeyer> davecheney: But the req we get out of WaitRequest is a normal http.Request
[23:05] <niemeyer> davecheney: With a Body and all
[23:05] <davecheney> niemeyer: yeah, I can address the TODO in the s3_test server
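The approach being described, roughly: the s3 tests run against a fake in-process server, so after the client call the test can pull the captured *http.Request back out and compare its body with an expected payload directly, no parsing needed. A sketch only; the testServer fixture and its Response/WaitRequest methods follow the pattern of goamz's existing s3 tests, but their exact shape here, the test name and the s.s3 field are assumptions (io/ioutil import elided):

    func (s *S) TestPutBucketSendsLocationConstraint(c *C) {
        testServer.Response(200, nil, "") // queue a canned reply from the fake server

        b := s.s3.Bucket("bucket")
        c.Assert(b.PutBucket(s3.Private), IsNil)

        // WaitRequest hands back the ordinary *http.Request the client sent.
        req := testServer.WaitRequest()
        body, err := ioutil.ReadAll(req.Body)
        c.Assert(err, IsNil)

        // Compare against something we own rather than parsing the XML out.
        c.Assert(string(body), Matches, "(?s).*<LocationConstraint>.*</LocationConstraint>.*")
    }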
[23:06] <niemeyer> davecheney: A second detail: it looks like this info is well suited for a Name field
[23:06] <niemeyer> davecheney: Region.name
[23:06] <niemeyer> davecheney: Region.Name
[23:07] <davecheney> niemeyer: yes, there are a number of places where we want to convert from the Region type back to its canonical name
[23:39] <davecheney> niemeyer: https://codereview.appspot.com/6347059 << adds Region.Name
[23:40] <niemeyer> davecheney: The map is a nice touch, thanks
[23:40] <niemeyer> davecheney: LGTM
[23:40] <davecheney> niemeyer: that was a TODO from environs/ec2
[23:41] <niemeyer> davecheney: Ah, I didn't recall.. still looks like a good idea then! ;-)
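Illustratively, the shape of the addition being reviewed; the authoritative version is in the CL linked above, and the field list and endpoint strings here are trimmed for the sketch. A Name field on aws.Region plus a map keyed by name gives the canonical-name round trip that environs/ec2 wanted:

    package aws

    // Region describes an AWS region; only the fields relevant here are shown.
    type Region struct {
        Name       string // canonical name, e.g. "us-east-1"
        S3Endpoint string
    }

    var (
        USEast = Region{Name: "us-east-1", S3Endpoint: "https://s3.amazonaws.com"}
        EUWest = Region{Name: "eu-west-1", S3Endpoint: "https://s3-eu-west-1.amazonaws.com"}
    )

    // Regions maps canonical names back to Region values, so callers holding
    // a name can recover the full Region without a hand-rolled switch.
    var Regions = map[string]Region{
        USEast.Name: USEast,
        EUWest.Name: EUWest,
    }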
[23:42] <davecheney> niemeyer: i'll address the todo in juju after I commit the location constraint fix
[23:42] <davecheney> so that people get a hint to go update goamz
[23:42] <niemeyer> Super
[23:43] * niemeyer => dinner.. back soon
[23:47] <davecheney> niemeyer: would you mind committing William's lp:~fwereade/juju-core/vast-zookeeper-tests-cleanup ?
[23:57] <niemeyer> davecheney: Hmm.. there are a few trivial details there to be sorted out.. I think I'd prefer to let him consider these details, including the mstate stuff, to see how to push it forward before getting it in