=== 21WAAD3MF is now known as wallyworld
[00:52] anyone know his way around state/open.go ?
[01:24] perrito666: it depends on what you want to know, i might be able to help
[01:29] cmars: hah guess you figured i based my plugin off yours :)
[01:29] axw: mornin'. can we have a hangout now instead of in an hour?
[01:29] was gonna give you credit once i got something working
[01:29] morning wallyworld. sure thing, just give me a moment
[01:31] wallyworld: erm, my sound isn't working. gotta fix that first...
[01:31] ok
[01:36] wallyworld: tx, sadly my head is falling on the kb so I better hit the bed before I introduce a bug instead of fixing the current one
[01:37] perrito666: np, i'm on call anyway now. if you have a question, feel free to email to the list or ask again later
[01:46] wallyworld: ok, I am more curious about fixing this bug than about going to sleep :p so here I go
[01:47] I am trying to fix the restore functionality
[01:48] now, at some point the restore calls state.Open(), I tried to replace it by using juju.NewConn and NewConnFromName and in all cases, it times out at mgo.DialWithInfo while trying to make Ping()
[01:49] perrito666: ok. there may also be someone else looking into that from juju-core
[01:49] "that" being?
[01:50] i think Horacio Durán
[01:51] sadly that would be me
[01:51] he's started to fix some of the backup bugs and was also going to look at restore
[01:51] oh
[01:51] hi
[01:51] hi
[01:51] i didn't realise!
[01:52] perrito666: give me a couple of minutes to finish this call
[01:52] sure
[02:00] perrito666: sorry, back now
[02:00] i'm not across the restore stuff specifically
[02:01] wallyworld: I think the restore part of my explanation can be safely ignored
[02:01] wallyworld: gotta go to the shops for a little while, bbs
[02:01] I just provided it for context
[02:01] axw: sure, np
[02:02] perrito666: so you are looking to, in general, replace calls to state.Open() with juju.NewConn ?
[02:02] to use the api
[02:03] so you definitely have a state server running?
[02:03] api server even
[02:05] wallyworld: well I am pretty sure I do, I try to query mongo by hand and it responds, yet when juju tries to dial it just times out
[02:05] mongo != api server though
[02:06] the api server listens on port 17070
[02:06] true, although I am pretty sure this breaks before getting to state
[02:06] what code are you changing?
[02:07] well, current existing code calls open, open in turn calls DialWithInfo
[02:07] which file?
[02:08] DialWithInfo creates a session
[02:08] ah sorry
[02:08] state/open.go
[02:08] sure, but the caller to that
[02:08] which caller of state.Open() is being replaced?
[02:08] cmd/plugins/juju-restore/restore.go
[02:09] around :187
[02:09] so at the time restore runs, is there a bootstrap node running?
[02:09] i don't think there is
[02:10] ah there may be
[02:10] cause looks like it calls rebootstrap()
[02:10] there is
[02:10] but you might find that it is just that the api server has not started yet
[02:11] cause it can take a while to spin up the bootstrap node and then start the services
[02:12] maybe to see if that's the issue, pause the restore script or add in a big attempt loop to see if it just needs more time
[02:12] wallyworld: mm I tried looping on that
[02:12] I waited 30 mins total
[02:12] that is a lot
[02:12] can you do a juju status when it fails?
[02:12] ie does juju status work?
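The "big attempt loop" wallyworld suggests above might look something like the following minimal sketch. The connect function, retry interval, and deadline are all illustrative; the real restore plugin would pass in its actual juju.NewConnFromName or state.Open call.

package main

import (
	"fmt"
	"time"
)

// tryConnect wraps an arbitrary connect function (state.Open, juju.NewConn,
// ...) in the "big attempt loop" suggested above, retrying while the freshly
// bootstrapped node may still be starting its services. The ten-second
// interval is illustrative, not anything juju-core actually uses.
func tryConnect(connect func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	var err error
	for {
		if err = connect(); err == nil {
			return nil // the server answered
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("state server did not come up: %v", err)
		}
		time.Sleep(10 * time.Second) // services may still be starting
	}
}

func main() {
	err := tryConnect(func() error {
		// In the real plugin this would be a juju.NewConnFromName or
		// state.Open call; stubbed out in this sketch.
		return fmt.Errorf("connection stub: not implemented")
	}, time.Minute)
	fmt.Println(err)
}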
[02:12] that would need an api server connection
[02:13] mm, it does not
[02:13] so if juju status is broken also, then there's an issue with the bootstrap node
[02:13] you would need to ssh in and look at the log file
[02:14] cause it could be the node itself starts but then the juju services fail to start
[02:15] mm, the service seems to be running, I even restarted it by hand
[02:15] in what port should the state server be listening?
[02:15] 37017
[02:15] 17070
[02:15] 37017 is mongo
[02:16] perrito666: when you say you restarted the state service by hand, that doesn't make sense to me because the state service runs inside the machine agent - did you start jujud?
[02:17] wallyworld: yes
[02:17] and the machine log file is good?
[02:18] and yet juju status fails also
[02:18] there's gotta be something logged which shows the problem
[02:18] until something like juju status is happy, then the code changes to restore.go won't work either
[02:19] wallyworld: interesting though, restore is trying to open a state server on 37017
[02:20] the current restore using state.open()?
[02:20] it will because it connects straight to mongo
[02:20] the new juju.NewConn() methods instead go via the api server on port 17070
[02:21] aghh, juju.NewConn fails just as Open, so something is definitely broken in my recently restored node
[02:22] wallyworld: is that in trunk yet?
[02:22] my logs show NewConnFromName accessing mongo directly on 37017
[02:22] stokachu: the api server stuff?
[02:22] yea
[02:22] yes, been there since 1.16
[02:22] used universally since 1.18
[02:23] perrito666: i'd be surprised and sad if the log files on that node didn't show what was wrong
[02:24] * perrito666 runs the extremely tedious setup script
[02:25] perrito666: it will still be waiting for you tomorrow after you get some sleep :-)
[02:26] wallyworld: certainly but now it's personal
[02:26] lol
[02:26] feel free to pastebin log files if you want some more eyes
[02:26] * perrito666 paints canonical logo on his face and yells mel gibson style
[02:26] woot i actually a juju plugin to do something in go
[02:28] stokachu: I sense a verb missing there :p
[02:29] would have been funnier if you said "i a missing verb there" :-)
[02:29] hah
[02:29] back...
[02:29] too much time looking at juju core code
[02:30] wallyworld: axw: I'm here for standup
[02:30] wallyworld: my wife is watching tv in spanish next to me, when the 2-language module is enabled in my head I lose capacity for witty sentences in both languages
[02:30] waigani: huh? i thought you were on holidays so we had it early :-)
[02:30] waigani: we already had it early, weren't expecting you
[02:30] but we can have another
[02:30] :(
[02:31] I'm in auk airport
[02:31] okay, maybe I can talk through what I'm doing?
[02:31] waigani: sure, i'm in the hangout
[02:31] brt
[02:41] wallyworld: https://pastebin.canonical.com/108967/
[02:44] perrito666: looking, sorry was otp
[02:44] on the same note https://pastebin.canonical.com/108968/
[02:47] perrito666: is there any more in machine-0.log?
[02:52] wallyworld: well, there is before that although I am not sure if I can distinguish between pre/post restore (restore is a particularly ugly thing)
[02:53] perrito666: what i mean is, after the output you logged. that log looks ok i think.
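A quick way to check which of the two listeners discussed above is actually answering -- mongo (state) on 37017 and the API server on 17070 -- is a plain TCP dial with a timeout. The host address below is a placeholder; if mongo accepts the connection while 17070 times out, the problem is the API server rather than the database.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	host := "10.0.3.1" // hypothetical state-server address
	for _, port := range []string{"37017", "17070"} {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
		if err != nil {
			fmt.Printf("port %s: unreachable: %v\n", port, err)
			continue
		}
		conn.Close()
		fmt.Printf("port %s: listening\n", port)
	}
}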
there was one timeout with the api client connecting but that can happen and it appeared to be ok after that but i wanted to be sure by looking at subsequent logging
[02:55] nope, after that it just loops with https://pastebin.canonical.com/108969/
[02:56] hmmm, ok. so that says there is an issue with the api server
[02:57] you may need to enable trace level logging and/or add extra logging to see why it's failing. i wonder if netstat shows the port as open
[02:57] tcp 0 1 10.140.171.13:59925 10.150.60.153:17070 SYN_SENT 4001/jujud
[02:58] that's a different ip address to what is being dialled
[02:58] oh no
[02:58] it's not
[02:59] nope, just without the dns name
[02:59] yeah
[02:59] if it were me, i'd have to add lots of extra debug logging at this point to see what's happening as i'm out of ideas
[03:00] but you can see even internally the machine agent api client can't start
[03:00] so there's a core issue with starting the api server itself
[03:01] axw: local provider is sorta ok. it doesn't like starting precise containers on trusty although it used to. and if i start a precise container first and it fails, subsequent trusty containers also fail, but starting a trusty container first works
[03:01] wallyworld: well, I think the restore step is actually breaking the state api server
[03:01] since it works right before
[03:01] likely
[03:01] (restore bootstraps a machine and then untars the backup on top of it)
[03:01] roger wrote all that so i have no insight off the top of my head as to what might be wrong
[03:02] wallyworld: ah ok. there have been a few bugs flying around about host vs. container series mismatch not working
[03:03] axw: yeah, i'm going to try explicitly setting default series to see if i can get precise to work. but precise failing should not also then kill trusty :-(
[03:04] wallyworld: I think there might be something wrong with the backup, tomorrow I will strip one into pieces and see what is wrong, as for me I am now officially out or tomorrow I will be sleeping on the kb at the standup
[03:04] np, good night :-)
[03:04] wallyworld: oh I didn't see that bit... weird
[03:04] yeah
[03:04] wallyworld: I think you can also bootstrap --series=trusty,precise to get it to work
[03:05] not sure why trying precise would fail trusty tho
[03:05] ta, will try that also to try and get a handle on it
[03:05] * wallyworld -> food
=== wallyworld_ is now known as wallyworld
[03:43] wallyworld: I just pasted the output I see from destroy-environment with manual
[03:43] wallyworld: it's as I expected
[03:43] axw: i missed it as my laptop got disconnected
[03:43] wallyworld: I mean I pasted it in the bug
[03:43] ah, looking
[03:43] #1306357
[03:43] <_mup_> Bug #1306357: destroy environment fails for manual provider
[03:45] axw: clearly then i need to get my eyes tested as i had thought i included it all, sorry :-(
[03:45] although i wish the last error was first
[03:45] wallyworld: nps. it does kinda get lost down there...
[03:45] as it would read much nicer that way
[03:46] ie root cause, followed by option to fix
=== vladk|offline is now known as vladk
[04:06] wallyworld: I'm going to look at fixing these openstack tests.
If you do have any spare time, it would still be useful if you could review the placement CL
[04:06] but if you're busy then that's okay
[04:07] axw: funny you should mention that - just finished another review and am looking right now
[04:07] wallyworld: cool :)
[04:17] axw: this is a personal view, but i tend to think that if a method returning a (value, error) returns err != nil, then the value should be considered invalid. so this bit irks me:
[04:17] if c.Placement != nil && err == instance.ErrPlacementScopeMissing {
[04:17] i would use an out of band signal like a bool or something
[04:17] wallyworld: err was originally nil, that was something william wanted
[04:18] I suppose I could change it to return a nil placement, and have the caller construct one
[04:18] hmmm. is there value in adding a bool to the return values
[04:18] or something
[04:19] I don't really think so, then you may as well just check if the placement has a non-empty scope
[04:19] i sorta think that err != nil meaning the value is bad is kinda idiomatic Go
[04:20] yeah... probably should have just left it as it was
[04:20] change it since he isn't here :-)
[04:22] wallyworld: I think I will just change it to return a nil Placement, and then the caller will create a Placement with empty scope and the input string as the directive field
[04:22] ok
[04:22] i think that sounds good
[04:22] the caller needs to know the rule anyway, at least this way it's the usual case of nil value iff error
[04:22] sorta best of both worlds
[04:23] ta
[04:28] axw: with these lines in addmachine
[04:29] if params.IsCodeNotImplemented(err) {
[04:29]
[04:29] 135 if c.Placement != nil {
[04:29] is there any point trying again if c.Placement is nil?
[04:29] should it just be a single if ... && ... ?
[04:29] wallyworld: yes we should try again, because we're calling a new API method
[04:30] wallyworld: client.AddMachines now calls a new API method by default
[04:30] wallyworld: and client.AddMachines1dot18 calls the old one
[04:30] oh, right. hadn't got to that bit yet, i recalled it was the same api from earlier review
[04:30] it was, I fixed it :)
[04:30] but i guess versioning
[04:30] wish we had it
[04:30] indeed
[04:32] do i have to invoke "scp" with the ssh.Copy function in utils/ssh?
[04:34] stokachu: the openssh client impl will delegate to scp, if that's what you're asking
[04:34] https://github.com/battlemidget/juju-sos/blob/master/main.go#L89-L94
[04:34] so im trying to replicate juju scp within my plugin
[04:35] this is my log output : http://paste.ubuntu.com/7312090/
[04:35] i think my actual copyStr is incorrect as i was following what is required by juju scp
[04:35] * axw looks
[04:36] stokachu: I think you want the target and source in separate args
[04:36] im a newb with golang as well so if i got stupid stuff in there
[04:37] lemme try that
[04:37] stokachu: i.e. a length-2 slice
[04:37] ok lemme see if i can make that happen
[04:38] axw: is there a reason why we store placement as a string and not a parsed object. and hence precheck takes a string and not a parsed struct etc. i would normally look to parse on the way in and then pass around the parsed struct etc so we fail as close to the system boundary as possible. am i missing a design decision?
[04:39] sweet, gotten farther http://paste.ubuntu.com/7312102/
[04:39] wallyworld: originally I did that, william wanted it changed.
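For reference, the convention axw settles on above can be sketched as follows. This is illustrative code, not juju-core's actual placement package: ErrPlacementScopeMissing is the sentinel named in the chat, but splitScope and parseOrDefault are hypothetical helpers showing the "nil value iff error" style.

package main

import (
	"errors"
	"fmt"
)

// ErrPlacementScopeMissing mirrors the sentinel discussed above.
var ErrPlacementScopeMissing = errors.New("placement scope missing")

type Placement struct {
	Scope     string
	Directive string
}

// ParsePlacement returns a nil value on *any* error, including a missing
// scope, so err != nil always means "ignore the value".
func ParsePlacement(s string) (*Placement, error) {
	scope, directive, ok := splitScope(s)
	if !ok {
		return nil, ErrPlacementScopeMissing
	}
	return &Placement{Scope: scope, Directive: directive}, nil
}

// Callers that allow scope-less directives handle the sentinel themselves,
// constructing a Placement with an empty scope and the input as directive.
func parseOrDefault(s string) (*Placement, error) {
	p, err := ParsePlacement(s)
	if err == ErrPlacementScopeMissing {
		return &Placement{Directive: s}, nil
	}
	return p, err
}

// splitScope is a hypothetical helper splitting "scope:directive".
func splitScope(s string) (scope, directive string, ok bool) {
	for i, c := range s {
		if c == ':' {
			return s[:i], s[i+1:], true
		}
	}
	return "", "", false
}

func main() {
	p, _ := parseOrDefault("zone=us-east-1a")
	fmt.Printf("%+v\n", p) // directive only, empty scope
	p, _ = parseOrDefault("maas:name0")
	fmt.Printf("%+v\n", p) // scope "maas", directive "name0"
}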
it should not get to the environment if the scope doesn't match
[04:39] though maybe i should be using the instance.SelectPublicAddress of machine?
[04:39] axw: hmmmm. ok. i disagree with william here then :-(
[04:40] stokachu: cool. ahh, "juju scp" does the magic of converting machine IDs to addresses
[04:40] wallyworld: why? the environment should not need the scope
[04:40] ive got an execssh that i borrowed from someone that uses instance.selectpublicaddress
[04:40] going to try that
[04:41] axw: what i mean is that the string should be parsed into whatever internal representation makes sense at the system boundary ie a struct of some sort, possibly different to what is used on the client ie minus the scope
[04:41] stokachu: see juju-core/cmd/juju/scp.go, hostFromTarget -- that's where it maps machine IDs to addresses
[04:41] and internal apis should then use that typed struct
[04:42] axw: ahh i see that now
[04:42] not an "untyped" string
[04:42] but, doesn't matter, it's already been changed to get approval
[04:42] too bad expandArgs isn't public
[04:42] wallyworld: the directive string is free-form, so how are you going to do that?
[04:43] wallyworld: it's up to the provider to decide what makes sense in directives
[04:43] axw: ah bollocks, i was thinking there was more to it than just a string. but you are saying that by the time it's stored, it represents a maas name or whatever
[04:44] that makes more sense. i hadn't fully re-groked the implementation
[04:44] wallyworld: as far as the infrastructure is concerned, it's an opaque blob of bytes. the provider will interpret it. provider/maas will interpret it as maas-name to start with
[04:45] ok
[04:45] we may converge on some convention, like thing=value
[04:45] az=uswest-1 or whatever
[04:46] stokachu: it's also worth noting that some providers (e.g. azure) require proxying through machine 0
[04:46] stokachu: so you may want to just shell out to "juju scp" if you can...
[04:47] axw: ah good point
[04:47] cleaner than what im doing
[04:47] is there a shell function in juju-core that's exposed?
[04:47] or should i just use os.Exec
[04:48] stokachu: os/exec is as good as anything
[04:48] axw: good deal
[04:48] ill do that instead
[04:48] there are some utils in juju, but I don't think they'd be useful
[04:48] cool no worries
[04:48] axw: yeah, i'm a fan of a little more structure. but none the less, land that f*cker
[04:49] hazmat: fwiw the first line that api-endpoints returns is the one that we last connected to, so if you just do "head -n1" you can get the same output we used to give
[04:50] wallyworld: thanks
[04:50] np. sorry if i went over old ground
[04:50] nope, that's cool
[04:51] jam: i was going to get your opinion on that bug - i'd like to close now as "invalid" or whatever given the other fix has landed
[04:51] wallyworld: sorry, which bug?
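Shelling out to "juju scp", as axw suggests above, avoids reimplementing the machine-id-to-address mapping inside the plugin. A minimal sketch with example paths; the "--" separator that keeps juju from swallowing the -r flag is the detail worked out just below.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// copyFromMachine shells out to "juju scp" rather than driving utils/ssh
// directly. The "--" keeps juju from interpreting -r itself (see the
// discussion below); machine and path arguments are illustrative.
func copyFromMachine(machine, remotePath, localPath string) error {
	cmd := exec.Command("juju", "scp", "--", "-r",
		fmt.Sprintf("%s:%s", machine, remotePath), localPath)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := copyFromMachine("1", "/tmp/sosreport*.xz", "."); err != nil {
		fmt.Fprintln(os.Stderr, "copy failed:", err)
	}
}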
[04:51] jam: the one you just remarked on above
[04:51] bug 1311227
[04:51] <_mup_> Bug #1311227: juju api-endpoints cli regression on trunk/1.19
[04:52] wallyworld: localhost shouldn't be in the output
[04:52] and I would be fine pruning ipv6 by default
[04:53] jam: it can be for local provider since localhost is the public address for local provider
[04:53] jam: martin's branch does prune ip6 by default
[04:53] wallyworld: sure, I'm not saying don't print localhost when that's the address, but *don't* print localhost for ec2
[04:54] we shouldn't have localhost for ec2, but we would have 127.0.0.1 and that'll get pruned
[04:54] jam: martin's branch probably ensures that's the case, since for ec2 localhost is machinelocal isn't it?
[04:54] wallyworld: hmmm... I don't know that Martin's patch is *quite* right. I'd rather still cache IPv6, but just not display them on api-endpoints
[04:54] we don't use any scope heuristics for hostnames
[04:54] wallyworld: right, I think his patch is what we want, and we do want to be caching the network scope data instead of just addrs
[04:54] jam: it's ok for now i think since we don't need/use ip6 yet
[04:55] jam: so, i think then that kapil's bug has 2 bits: 1. the ip6/127.0.0.1 stuff which martin's bug fixes, and 2. the multiple api address thing which is new and intended
[04:56] so therefore we can mark the bug as invalid
[04:56] right ?
[04:57] wallyworld: so I still think there are bits that we can evolve on api-endpoints. Namely, to change what we cache from just addrs to being the full HostPort content (which includes network scope), and then api-endpoints can grow flags to do --network-scope=public
[04:57] wallyworld: so while I think we've addressed the regression today
[04:57] I don't think the bug is "just closed"
[04:57] sure, but that's not the bug as described
[04:58] we can get it off 1.19.1 at least
[04:58] wallyworld: right, i think the *regression* portion is stuff that we intend (multiple addresses, even per server), because we think they might be routable
[04:58] and we don't save enough information (yet) to be able to provide --network-scope
[04:58] yep, i don't see any regression at all
[04:58] (and then default it to public)
[04:59] wallyworld: giving private addresses in api-endpoints by default is wrong
[04:59] but "good enough" for now.
[04:59] And hazmat has a point about actually grouping the data by server, so you have a feeling for what machine is a fallback
[04:59] ok, so let's retarget off 1.19.1 then
[05:00] SGTM
[05:00] jam: 2.0 or 1.20?
[05:00] 2.0 i guess?
[05:03] I'd be ok with 2.0
[05:04] axw: when I use restore with patchValue I get this error: http://pastebin.ubuntu.com/7312196/
[05:05] so here's my latest change using juju scp https://github.com/battlemidget/juju-sos/blob/master/main.go#L89-L96
[05:05] and the error output http://paste.ubuntu.com/7312200/
[05:05] i verified that juju ssh 1 and /tmp/sosreport*xz exists on the machine
[05:07] anyway, I need to go catch a plane
[05:08] waigani: sorry, need more context. show me in vegas :)
[05:09] axw: -r doesn't work with machine num it seems
[05:09] juju scp 1:/tmp/test . works
[05:09] but juju scp -r 1:/tmp/test* . fails
[05:09] stokachu: you need to separate the command out into individual args
[05:09] stokachu: i.e. "juju", "scp", ...
[05:09] this is manually running the command from the shell
[05:10] stokachu: there are some limitations with juju scp, I forget exactly how to pass extra args...
lemme see
[05:10] http://paste.ubuntu.com/7312211/
[05:10] that's what i've tested manually
[05:13] stokachu: stick "--" before -r
[05:14] axw: you da man
[05:14] axw: is that juju 1.16? as 1.18 is a bit broken wrt scp
[05:14] stokachu: in 1.18 (for a while until it gets fixed) args for just scp must come at the end and be grouped
[05:15] jam: well I'm on trunk... I forget which versions do what wrt scp
[05:15] so: juju scp 1:foo 2:bar "-r -o SSH SpecialSauc"
[05:15] jam: what I just described does work on trunk, so presumably on 1.18 too?
[05:15] ah
[05:15] jam: i.e. I just tested "juju scp -- -r 0:/tmp/foo /tmp/bar"
[05:16] axw: https://bugs.launchpad.net/juju-core/+bug/1306208 was fixed in 1.18.1 I guess
[05:16] <_mup_> Bug #1306208: juju scp no longer allows multiple extra arguments to pass throug
[05:16] axw: trunk just lets you pass everything, and you shouldn't need "--" I thought
[05:16] you do need --, otherwise juju tries to interpret the args
[05:17] axw: fairy nuff
[05:20] yea i had to use -- with 1.18.1-trusty
[05:20] axw: that worked :D:D
[05:21] stokachu: cool :)
[05:29] jam: morning
[05:30] morning vladk, it's early for you, isn't it ?
[05:30] well, early for you to be on IRC :)
[05:42] good mornings
[05:50] fwereade: morning :)
[05:53] morning fwereade, we've missed you
[05:54] waigani, jam: it's nice to be back :)
[05:54] heh, easter holiday?
[05:55] brb
[05:57] hey fwereade
[05:58] fwereade: I was about to approve https://codereview.appspot.com/85040046 (placement directives) - do you want another look first?
[05:58] axw, I'll cast a quick eye over it :)
[05:58] okey dokey
[06:01] axw, ok, based on a quick read of your responses I think I'm fine -- my only question is exactly what happens with the internal API change as we upgrade
[06:02] fwereade: the provisioner will be unhappy until it has upgraded
[06:02] axw, I *think* that it's fine, given that the environment provisioner only runs on the leader state server, and therefore the upgrade happens in lockstep
[06:02] axw, but other provisioners?
[06:02] axw, hm, I have a little bit of a concern about error messages during upgrade
[06:02] fwereade: it will be the same for the container provisioners, I think
[06:02] back
[06:02] * axw checks
[06:02] axw, *we* might know they're fine
[06:03] axw, but people who read our logs don't get quite such a sunny prospect of our general competence
[06:03] axw: so we talked about having EnsureAvailability with a value of say 0 just preserve the existing desired num of servers
[06:03] AFAICT, we never *record* the desired number of servers
[06:03] we just have a number of things that are running.
[06:04] jam: it's implied by what's in stateServerInfo
[06:04] and we have stuff like WantsVote() but I can't see anywhere that sets NoVote=true to indicate that we no longer want to be voting.
[06:04] jam: len(VotingStateMachineIds)
[06:04] jam: that's done in EnsureAvailability, in state/addmachine.go
[06:04] axw: sure, but isn't that the actual ones that are voting? I guess it would be an availability check?
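A sketch of the convention this ensure-ha thread converges on: 0 is never a valid request, so it can be reused to mean "preserve the existing desired number of servers", which is implied by the machines that currently want to vote. The names follow the chat (StateServerInfo, VotingMachineIds); the surrounding code is illustrative, not state/addmachine.go as it actually landed.

package main

import "fmt"

// stateServerInfo stands in for st.StateServerInfo(); VotingMachineIds are
// the machines that currently *want* to vote.
type stateServerInfo struct {
	VotingMachineIds []string
}

// desiredStateServers resolves the requested count: 0 means "keep the
// current desired count", anything else must be odd and non-negative.
func desiredStateServers(requested int, info stateServerInfo) (int, error) {
	if requested < 0 || (requested != 0 && requested%2 != 1) {
		return 0, fmt.Errorf("number of state servers must be odd and non-negative")
	}
	if requested == 0 {
		return len(info.VotingMachineIds), nil
	}
	return requested, nil
}

func main() {
	info := stateServerInfo{VotingMachineIds: []string{"0", "1", "2"}}
	n, err := desiredStateServers(0, info)
	fmt.Println(n, err) // 3 <nil>: preserve the current count
}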
[06:04] axw, this must ofc be balanced against the hassle of maintaining the multiple code paths
[06:05] jam: VotingMachineIds is really the ones that *want* to vote
[06:05] fwereade: just checking still, sorry
[06:05] axw, np
[06:06] axw, what I did with the unit agent the other day was just to leave it blocking until the state server it's connected to *does* understand the message, and then continue as usual
[06:06] fwereade: yeah, this is common to all provisioners - it will cause an error on upgrade for container provisioners
[06:06] hmm ok
[06:06] I'll take a look at that code
[06:06] fwereade: worker/uniter?
[06:06] axw, it's not the best code in the world but it seemed to work
[06:06] just a sec yeah somewhere there
[06:07] fwereade: got it I think
[06:07] logger.Infof("waiting for state server to be upgraded")
[06:07] yeah okay, I can add that in
[06:07] axw, cool
[06:08] * axw senses another need for API versioning imminently
[06:08] although I suppose we can just see that fields are zero values...
[06:09] fwereade: yuck, this means threading the tomb all the way through... oh well.
[06:09] I suppose it's for the best
[06:09] * fwereade glances pointedly at jam re API versioning
[06:09] * jam ducks and pretends to catch a plane
[06:10] * fwereade does understand
[06:10] fwereade: I made sure it was in the topics list
[06:10] jam, great, thanks :)
[06:10] jam: sorry, back to ensure-ha: if you just send 0 or -1 to state.EnsureAvailability, then it can load st.StateServerInfo() and set numStateServers=len(VotingMachineIds)
[06:12] axw: I'm going to use 0, because it isn't otherwise valid, and we don't have to worry about negative numbers.
[06:12] sounds good
[06:12] axw: I was thinking to do that originally, but trying to verify the actual meaning of the various values was ... tricky
[06:12] oh I don't have to thread the tomb, hooray
[06:13] jam: it's not super clear, I agree
[06:13] axw: I was reading through the code and trying to figure out what the actual invariants are
[06:13] axw: I was really surprised that ensureAvailabilityIntentions doesn't take into account the new request
[06:13] so we end up with 2 passes at it
[06:14] also, the WantsVote vs HasVote split is confusing. Probably necessary, but very confusing
[06:14] jam: yeah, we need to know what the existing ones want to do
[06:15] jam: we certainly could do with some developer docs on this
[06:15] I don't understand what the peergrouper does, haven't looked at it at all
[06:16] I know what EnsureAvailability does, but it's easy to forget :)
[06:16] axw: one advantage of "-1" is that it is odd :)
[06:17] heh
[06:17] axw: I took out the <= 0 and it still failed, and had to remember 0 is even
[06:20] axw: non-negative or nonnegative ?
[06:21] our error message currently says >0
[06:21] and "greater than or equal to 0" is long
[06:21] jam: non-negative looks good to me
[06:21] though non-math people won't get non-negative, I guess
[06:21] really?
[06:21] number of state servers must be odd >= 0
[06:21] number of state servers must be odd and >= 0
[06:21] ?
[06:22] will non-math people understand >= ? ;) sure, I guess so
[06:22] axw: non-engineering/scientists sort of people don't distinguish "positive" from "nonnegative"
[06:23] axw: I can't even say "must not be even"...
-1 for clarity :)
[06:23] only not
[06:23] hehe
[06:44] fwereade: updated https://codereview.appspot.com/85040046/patch/120001/130035
[06:58] axw: updated "juju ensure-availability" defaults 3 https://codereview.appspot.com/90160044
[06:58] jam: looking
[07:03] axw: note that I merged my default-series branch in there
[07:03] to get the test cases right
[07:03] but that didn't end up landing in the mean time
[07:03] ok
[07:03] so there is a bit of diff that should be ignored, but you can't really add a prereq after the fact
[07:12] jam: reviewed
[07:20] jam, wallyworld: review for a goose fix please https://codereview.appspot.com/90540043
[07:21] looking
[07:22] axw: lgtm
[07:22] ta
[07:24] fwereade: am I okay to land that branch, or are you still looking?
[07:30] * axw takes silence as acquiescence
[07:41] axw, sorry, yes, it looks fine :)
[07:42] cool
[07:46] jam: is the bot awake?
[07:47] axw: checking
[07:47] axw: it is currently running on addmachine-placement
[07:47] perhaps there was a queue?
[07:47] it's been going for 14 min
[07:47] okey dokey, thanks
[07:47] I thought my goose one would go through first
[07:48] axw: I don't think there is relative ordering, and the bot only runs one at a time based on what it finds when it wakes up every minute
[07:48] so if you approve both, but it hasn't seen it
[07:48] then it will wake up, get the list, and start on one
[07:48] ok
[07:53] wheee, placement is in
[07:53] * axw does the maas bits
[07:56] * fwereade bbiab
[08:29] jam: the bot does do goose MPs, right?
[08:30] axw: it does
[08:30] wallyworld: thanks for landing my branch
[08:30] mgz: np, pleased to help
[08:30] i also tested with local provider just in case
[08:37] morning all
[08:40] morning voidspace
[08:43] axw: so the bot has "landed" your code, but the branch isn't a proper checkout, so it didn't get pushed back to LP
[08:43] I'll fix it
[08:43] doh
[08:43] jam1: thanks
[08:45] axw: should be merged now
[08:46] right, time to get a train to a plane, see you all next week!
[08:46] mgz: see you soon
[08:47] have a good trip
[08:47] you'll see some of us tomorrow at gophercon, right?
[08:48] jam1: thanks! and yeah, some this week
[09:01] axw: lgtm on your dependencies branch
[09:01] jam1: ta
[09:01] we'll have to make the bot get the latest version, though
[09:01] fortunately, I know someone who is currently logged in
[09:01] :)
[09:01] I thought the bot updated now?
[09:02] axw: it runs godeps
[09:02] but that won't pull in new data
[09:02] it does do go get -u when you poke config
[09:02] axw: I can't *quite* go get -u to not screw up the directory under test
[09:03] jam1: it does godeps? "godeps -u" updates the code though...?
[09:05] jam1: please, take a look https://codereview.appspot.com/90580043
[09:05] I will be offline until meeting
=== vladk is now known as vladk|offline
[09:15] woop, add-machine works... now the fun of updating the test service
[09:28] axw: it sets the version of an existing tree to that revision. It does not *pull* data from remote sources.
[09:28] so if it isn't present locally, godeps -u doesn't work
[09:28] jam1: ah right, I see
[09:29] axw: so I haven't gotten a chance to dig into it thoroughly, but are we writing "/var/lib/juju/system-identity" via cloud-init? Or are we only using the cloud-initty stuff to get it on there via SSH bootstrap ?
[09:30] jam1: yes, that is how it is done now. I'm not a fan
[09:31] jam1: actually...
[09:31] jam1: sorry, no, we SSH in and then put it in place
[09:32] jam1: anything inside environs/cloudinit.ConfigureJuju happens after cloud-init, but only for the bootstrap node
[09:34] hello, could someone help me build juju from source pls?
[09:34] I'm getting http://paste.ubuntu.com/7313347/ when i run go install -v launchpad.net/juju-core/...
[09:38] psivaa: I'm just doing a pull and trying now
[09:38] psivaa: works for me
[09:39] psivaa: so I suspect you're using a "too old" version of Go
[09:39] psivaa: what does "go version" say?
[09:39] psivaa: I'm on 1.2.1 (built from source)
[09:39] voidspace: 'go version xgcc (Ubuntu 4.9-20140406-0ubuntu1) 4.9.0 20140405 (experimental) [trunk revision 209157] linux/amd64' is the output for go version
[09:40] fwereade: maas-name support -> https://codereview.appspot.com/90470044/
[09:40] psivaa: actually that looks like an incompatible version of go crypto
[09:40] fwereade: still need to support it in bootstrap
[09:40] axw, awesome :)
[09:40] (and add-unit and deploy, but they're coming later)
[09:40] psivaa: if you "go get launchpad.net/godeps" you can run "godeps -u dependencies.tsv" and it should grab the right versions of dependencies
[09:41] jam: ack, i did 'hg clone https://code.google.com/p/go.crypto/' to get go crypto.
[09:41] jam: voidspace: thanks. i'll try your suggestion
[09:42] psivaa: gccgo 4.9 should be new enough
[09:42] psivaa: My guess is that go crypto updated their apis, which broke our use of their code
[09:43] and we haven't caught up yet
[09:43] which is why we have dependencies.tsv to ensure we can get compat versions
[09:43] jam: ahh ack, i'll use that. thanks
[09:43] psivaa: if you don't want godeps, then you can hg update --revision 6478cc9340cbbe6c04511280c5007722269108e9
[09:43] I think
[09:44] psivaa: looks like just "hg update 6478cc9340cbbe6c04511280c5007722269108e9"
[09:48] axw, LGTM, it's really nice to see it implemented with such a small amount of new code :)
[09:48] fwereade: :) thanks
[09:49] fwereade: sadly the bootstrap one will be a bit larger - I'll need to change Environ.Bootstrap
[09:50] axw, sure, but it's absolutely a desirable change, and subsequent ones (like zone on ec2) will themselves then basically come for free :)
[09:50] yup
[09:52] vladk|offline, ping me when you're back please -- wondering whether we should really share an identity across state servers, or whether we should be creating one each
=== axw is now known as axw-away
[09:54] vladk, ah, forget it, I made bad assumptions in the first reading
=== vladk|offline is now known as vladk
[10:06] my parents have just turned up for coffee
[10:06] fwereade: ping
[10:06] be afk for 15 minutes :-)
[10:06] vladk, pong
[10:07] vladk, I see we have separate identities, sorry I misread; but I don't see when we'll rerun those upgrade steps. perhaps we'll definitely never need them?
[10:17] good soon to be morning everyone
[10:18] fwereade: I just used a formatter struct, my code does nothing with upgrade. I don't know whether the SSH key will be distributed on tools upgrade. It wasn't my task.
[10:18] But the SSH key will be installed on every new machine with a state agent.
[10:18] Should I investigate what occurs during upgrade?
[10:19] vladk, ahh, I see
[10:20] vladk, yes, please see if you can find a way to break it by upgrading at a bad time
[10:21] vladk, if you can't, then LGTM, just note it in the CL and ping me to give it the official stamp ;)
[10:21] perrito666, heyhey
[10:21] perrito666, sorry I left you hanging last week, I think I managed to send you another review a day or two ago though -- was it useful?
[10:22] fwereade: AFAIK we don't have different identities, do we?
[10:22] fwereade: https://codereview.appspot.com/90580043/patch/1/10013 concerns me
[10:22] are we actually writing that to userdata ?
[10:22] (exposing the secret ssh id)
[10:22] I think axw-away claimed that we didn't actually do that during bootstrap
[10:22] fwereade: It was, although right now I put that on hold since I am juggling with a brand new set of restore bugs :p
[10:23] jam1, it does indeed look like we were, grrmbl grrmbl; but it looks to me like what we do now is generate a fresh id and add that to the system, as one of N keys for the state-server "user", per state-server-machine
[10:24] jam1, so I think it's solid -- did I miss something
[10:24] perrito666, ok, great -- I'm here to talk further if you need me
[10:25] fwereade: I haven't yet found that bit that you're talking about (where we actually generate the new value)
[10:25] I see the code that if we have the value we write it onto disk
[10:26] fwereade: but while we remove this: https://codereview.appspot.com/90580043/patch/1/10012
[10:26] I don't see the SystemPrivateSSHKey being removed from MachineCfg
[10:27] nor have I yet found anything that creates or populates the contents of identity
[10:27] but I could easily just be missing it, though I've gone over the patch a few times now
[10:27] jam1, hum, yes, I now think I was seeing that bit in the upgrade instructions alone
[10:28] jam1, yeah, I think that's the only place -- vladk, thoughts? ^^
[10:29] jam1, but fwiw, I suspect that the stuff in cloudinit is actually not in *cloudinit*, only in the bit that gets rendered as a script when we ssh in at bootstrap time
[10:29] fwereade: and we are calling AddKeys(config.JujuSystemKey, publicKey) and setting it to exactly 1 key
[10:29] fwereade: right, so I'm not very sure about the cloudinit stuff because we did the bad thing and punned it
[10:30] jam1, AddKeys is meant to *add*, not update -- did that change?
[10:30] so that sometimes cloud-init is rendered to actual cloud-init
[10:30] and sometimes it is rendered to an ssh script
[10:30] fwereade: ah, it might
[10:30] jam1, believe me, I told the affected parties when they wrote the environs/cloudinit module *waaay* back in the day -- cloudinit is just one possible output format
[10:31] jam1, sadly I was not in an official tantrum-throwing position at that time ;p
[10:31] fwereade: also, I think we have a point that steps118.go is only run when upgrading from 1.16 to 1.18, so it *won't* be run when upgrading to 1.20 (from 1.18)
[10:31] but I don't think that actually matters here
[10:32] as we don't actually need to fix upgrade
[10:32] because HA is new in 1.19, so we don't have anything that we're upgrading
[10:32] jam1: jfyi, the godeps method made installing from source work for me.
thanks
[10:32] jam1, I think that, yeah, upgrade is irrelevant except in that it's the one place that actually sets up the keys
[10:32] fwereade: the issue is that if we are going to give each one a unique identity (which I think is better, fwiw, but I'm not sure if it breaks some assumptions)
[10:32] I would expect us to see a change in AddMachine()
[10:32] or EnsureAvailability
[10:33] fwereade: it sets up the first key
[10:33] fwereade: I really don't see how his patch would populate the new "identity" field in agent.conf
[10:34] fwereade: but the fact that we have 3 or 4 types with a StateServingInfo method, and each gets its data from somewhere else
[10:34] (might be API, might be agent.conf, might be ...)
[10:34] fwereade, jam1: about https://codereview.appspot.com/90580043/patch/1/10012
[10:34] This is a part of ssh-init script construction.
[10:34] Now the ssh key is passed inside the agent.conf file. So I removed its direct creation.
[10:35] vladk: right, I think that line is great
[10:35] vladk: but I haven't managed to find the part that actually sets the contents of the agent.conf file
[10:35] here https://codereview.appspot.com/90580043/patch/1/10005
[10:36] via yaml marshaling
[10:36] vladk: but what is setting it on the struct
[10:36] (I'm also not sure that we're allowed to change the content of an agent.conf without bumping the format number, but that is a later concern)
[10:37] vladk: I see a lot of stuff that "if we have the data set" gets it written to the right places, which all looks good
[10:37] I just haven't managed to find a line that is "SystemIdentity = XXXXX"
[10:39] vladk: going the route you did, I would expect to see a change in state/addmachine.go
[10:39] to something in either EnsureAvailability or elsewhere
[10:39] to create the system-identity data that the machine agent then reads from agent.conf later
[10:40] jam1: https://codereview.appspot.com/90580043/patch/1/10008 set to StateServingInfo
[10:40] https://codereview.appspot.com/90580043/patch/1/10005 set to formatter of agent.conf
[10:41] vladk: thanks, fwereade^^ your original assumption is wrong, they all get the same value, and it is being written via cloud-init (from what I can tell)
[10:41] which is sad news, I believe
[10:41] vladk: I expected that we would be actually calling an API to get that data during cmd/jujud/machine.go
[10:41] if we are only reading it from disk
[10:41] then we wrote it to disk via cloud-init
[10:41] which means we are passing our ssh secret key to EC2
[10:41] to hand back to us
[10:42] we got away with it (slightly) with "bootstrap" because bootstrap actually SSH's onto the machine to write those files
[10:42] well fuck
[10:42] but all other provisioning is done via cloud-init and follow-up calls to the API
[10:42] honestly I'd expect us to just generate it at runtime
[10:43] jam1, wait, we're writing state-server info to new state servers we provision?
[10:43] wwitzel3: can you see me?
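The agent.conf round-trip being hunted for here -- fetch StateServingInfo once, persist it via ChangeConfig, then have everything else read it back from disk -- is quoted from machine.go just below; this stripped-down sketch shows the shape of that flow. The types are pared-down stand-ins, not juju-core's real agent package.

package main

import "fmt"

// StateServingInfo is trimmed to a few fields; the real struct is
// params.StateServingInfo in juju-core.
type StateServingInfo struct {
	APIPort        int
	StatePort      int
	SystemIdentity string
}

type ConfigSetter interface {
	SetStateServingInfo(info StateServingInfo)
}

// agentConf is a stand-in for the on-disk agent.conf representation.
type agentConf struct {
	info *StateServingInfo
}

func (c *agentConf) SetStateServingInfo(info StateServingInfo) { c.info = &info }

// StateServingInfo mirrors agentConfig.StateServingInfo() in machine.go:
// everything except the initial fetch reads from here.
func (c *agentConf) StateServingInfo() (StateServingInfo, bool) {
	if c.info == nil {
		return StateServingInfo{}, false
	}
	return *c.info, true
}

// ChangeConfig mirrors the a.ChangeConfig(...) call quoted below: mutate
// the config through the setter, then (in the real code) rewrite agent.conf
// so every other worker can read the result from disk.
func (c *agentConf) ChangeConfig(mutate func(ConfigSetter)) error {
	mutate(c)
	return nil // the real implementation persists to agent.conf here
}

func main() {
	conf := &agentConf{}
	// In machine.go this value comes from st.Agent().StateServingInfo().
	fetched := StateServingInfo{APIPort: 17070, StatePort: 37017}
	conf.ChangeConfig(func(cs ConfigSetter) {
		cs.SetStateServingInfo(fetched)
	})
	info, ok := conf.StateServingInfo()
	fmt.Println(ok, info.APIPort) // true 17070
}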
fwereade: I had originally thought they should be shared, but honestly, I like your idea to have the agent come up
[10:43] check that it doesn't have one
[10:43] generate it
[10:43] jam1, that's *all* meant to come over the API
[10:43] and add the public key only to the list of accepted keys
[10:43] jam1, and indeed in this case there's no reason not to do it on the agent
[10:43] fwereade: *I* don't understand the code very well
[10:43] we do some crazy shit
[10:43] about writing agent.conf
[10:43] and then reading it back in
[10:44] fwereade: all of the code in machine.go uses agentConfig.StateServingInfo()
[10:44] fwereade: except line 240
[10:44] where we call st.Agent().StateServingInfo()
[10:45] and then call: err = a.ChangeConfig(func(config agent.ConfigSetter) {
[10:45] config.SetStateServingInfo(info)
[10:45] })
[10:45] to get it written to disk
[10:45] for everything else to read
[10:45] fwereade: but I *think* there is a bug that you have to have it written to agent.conf first, so that you come up thinking you want to be an API server
[10:46] fwereade: also see machine.go line 458
[10:46] that says "this is not recoverable, so we kill it, in the future we might get it from the API"
[10:46] there *is* an issue with bootstrap, the first API server obviously has to get it from agent.conf
[10:46] so there is some sense in which we can't just always read from the api
[10:46] I guess
[10:46] but the swings and roundabouts make it hard for me to reason
[10:47] anyway, standup time, switching machines
[10:48] fwereade: standup ?
[10:59] Horacio Durán
[10:59] jam:
[11:38] jam: on the logging, the theory is that all the state servers should have *all* the logging - so when bringing up a new state server it really shouldn't need to connect to *all* state servers to get existing logging. Any one (that is fully active) should do.
[11:38] voidspace: I understand that, but when you go from 1 to 3, you'll probably see the other api server that is coming up at the same time, and then it is just random chance if you get the full log or not
[11:39] (similarly going from 3-5)
[11:39] though not going from degraded-2 to 3
[11:39] jam: right, so being able to determine if it's fully active or not would help - but if we can't do that then maybe there's no other way
[11:39] voidspace: I certainly understand why it might work, but my point would still be "we can iron out getting the backlog later, because it isn't the most important thing right now"
[11:39] jam: ok, understood
[11:40] connecting to all state servers and filtering out duplicate logging offends me though
[11:40] (and it's O(n^2) if you bring up lots of state servers)
[11:41] voidspace: it's O(n) if the data was properly sorted :)
[11:42] definitely just ignore the backlog for now. We'll get a real logging framework set up that will do more than rsyslog. There's a topic for it in Vegas.
[11:42] though you only ever have 7 state servers (because we use mongo, and mongo has that limit)
[11:42] ah
[11:42] still, I'm sure we can do better
[11:42] jam: in theory you can have up to 12 as long as only 7 are voting.
[11:46] jam: 1) do we need different identities on different machines?
[11:46] 2) should I find places where agent.conf is written and where SystemIdentity is assigned?
[11:46] do we already have any clue why add-machine doesn't work for local providers anymore ?
[11:53] ghartmann: I hadn't heard that that was the case
[11:53] is there a bug/context/paste ?
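To make jam's complexity point above concrete: if each state server's log backlog is already sorted, duplicates fall out during an ordinary k-way merge, one pass over the data instead of an all-pairs comparison. A sketch with placeholder sequence-keyed lines:

package main

import "fmt"

// mergeLogs merges pre-sorted log streams, dropping identical entries seen
// from more than one server. With k servers (at most 7 here, per the mongo
// limit mentioned above) this is a single pass over the data, rather than
// the O(n^2) cross-filtering being complained about. The string keys are
// placeholders for real timestamps.
func mergeLogs(streams ...[]string) []string {
	idx := make([]int, len(streams))
	var out []string
	last := ""
	for {
		best := -1
		for i, s := range streams {
			if idx[i] >= len(s) {
				continue
			}
			if best == -1 || s[idx[i]] < streams[best][idx[best]] {
				best = i
			}
		}
		if best == -1 {
			return out // all streams exhausted
		}
		line := streams[best][idx[best]]
		idx[best]++
		if line != last { // duplicate entries from other servers are dropped
			out = append(out, line)
			last = line
		}
	}
}

func main() {
	a := []string{"001 start", "002 deploy", "003 expose"}
	b := []string{"002 deploy", "003 expose", "004 status"}
	fmt.Println(mergeLogs(a, b)) // [001 start 002 deploy 003 expose 004 status]
}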
[11:53] I don't get any logs at all
[11:53] the machines just stick on pending
[11:54] I tried installing on the VM and saw the same issue
[11:54] I decided to roll back to 1.18
[11:54] and it's kinda working
[11:55] I can't boot precise but trusty works
[11:58] by the way
[11:58] I am willing to help but I am struggling a bit on how to debug the code
[12:01] ghartmann, sorry, my internet is up and down, I am missing context
[12:01] ghartmann, but I would like to help you if I can
[12:05] I am currently using juju for local provider only
[12:05] best way to prototype and fix charms
[12:05] but since I updated juju I am unable to start any machines
[12:05] or they start but take way too long
[12:06] 30 minutes if they do start
[12:06] ghartmann, hmm, that "way too long" is really interesting, to begin with it sounded like it might be https://bugs.launchpad.net/juju-core/+bug/1306537
[12:06] <_mup_> Bug #1306537: LXC provider fails to provision precise instances from a trusty host
[12:08] I would imagine that someone would have reported it because being unable to start machines is a breaking issue
[12:09] I am trying to understand why this happens and how I can help
[12:12] ghartmann, ok, the best way to collect information is to `juju set-env logging-config="<root>=DEBUG"`; and then to look in /var/log/juju-
[12:14] ghartmann, in fact looking at the lxc code you might want to set juju.container.lxc=TRACE
[12:15] fwereade: I think if you "juju bootstrap --debug" it does that level of logging, doesn't it ?
[12:15] DEBUG (not TRACE)
[12:15] jam1, yeah, I was assuming an existing environment
[12:16] jam1, but if it's not working I guess there's not much reason to keep the old one around
[12:16] jam1, and in particular a lot of the lxc stuff is only logged at trace level, I now observe
[12:18] vladk: so having unique identities is more of a "it would be nice if they did" rather than "they must"
[12:18] ghartmann, if you're struggling to find *where* in the code I would start poking around in the container/lxc package -- specifically CreateContainer in lxc.go -- but I'm not sure if that's what you're asking
[12:20] the debug helps a little bit but it seems it believes that it worked ... "2014-04-23 12:16:50 INFO juju.cmd.juju addmachine.go:152 created machine 4"
[12:21] ghartmann: created machine is creating a record in the DB for a new machine
[12:21] ghartmann, that just indicates that it recorded we'd like to start the container
[12:21] != actually started a machine
[12:21] ah ok
[12:22] ghartmann, it's possible that the provisioner is implicated, but in particular the slowness STM to point to the actual nuts and bolts of the container work
[12:22] fwereade: so I think his statement was "it isn't working after 30 minutes" which means it hasn't actually worked yet
[12:22] jam1, ok, I see :)
[12:22] fwereade: ghartmann: if it *was* working, it would still need to download the precise/trusty cloud image, but that download should only need to happen once
[12:23] I will try looking on lxc
[12:23] ghartmann, do you see any lines mentioning the provisioner in the logs?
[12:24] ghartmann, in particular "started machine as instance ..."
[12:24] opening environment local
[12:24] no started machine
[12:25] you mean on .juju/local/log right ?
[12:26] I am stopping/starting the machine manually
[12:27] it seems that the machine can't start a network device
[12:28] ghartmann, ah! you get a container created but it won't do anything?
[12:32] it seems that lxc-start doesn't start the machine
[12:33] I will try to get it working first
[12:33] it is something related to the network
[12:34] it seems that the network of the machine doesn't start
[12:34] I will try making it as a bridge
[12:34] will let you know once I finish it
[12:34] thanks for the ideas
[12:34] ghartmann, there's a "network-bridge" setting for the local provider which defaults to lxcbr0 -- that works for most people, but possibly you have a different setup there?
[12:34] I am using the standard
[12:35] but I will change a few things on my network
[12:35] will take a while
[12:37] fwereade: so there is a bug that deploying precise on trusty will fail because of "no matching tools found"
[12:37] fwereade: 2014-04-23 12:36:43 ERROR juju runner.go:220 worker: exited "environ-provisioner": failed to process updated machines: cannot start machine 1: no matching tools available
[12:37] jam, is that different from the one I linked?
[12:38] fwereade: it might be the root cause of the one linked, I'm not sure
[12:42] fwereade: ghartmann: so one option is to try running "juju bootstrap --series precise,trusty" or possibly "juju upgrade-juju --series=precise,trusty --upload-tools" to see if that gets things unstuck. But for *me* the provisioner is spinning on not creating an LXC instance because it cannot find the right tools
[12:42] if you got past that part
[13:21] fwereade: so it would seem that if the provisioner cannot provision machine 1 because of no tools, it won't try to provision machine 2
[13:21] (in this case, the former is precise, the latter is trusty)
[13:23] jam, I think the core of it all is tools.HasTools
[13:23] jam, oh, wait, it actually can't be here, can it
[13:25] jam, but the provisioner task's possibleTools method is all messed up anyway :/
[13:26] fwereade: the check we have that all machines are running the same agent version also fails when you have dead machines (since nil != "1.18.1.1")
[13:26] so you can't use "juju upgrade-juju --upload-tools --series precise,trusty" to trick it
[13:26] jam, not without force-destroying the machines, yeah
[13:26] fwereade: but for *me* if I "juju bootstrap -e local --upload-tools --series precise,trusty" it works
[13:27] without the --series trick, it gets stuck never finding tools for the precise charm
[13:27] and then never getting to try for the trusty charm
[13:27] seemingly
=== BradCrittenden is now known as bac
[13:28] jam, it seems reasonably likely that the provisioner is just failing out on the first one, and then trying again in the same order when it comes back up
[13:29] fwereade: right
[13:29] fwereade: I would have thought the provisioner would fail and keep trying the next one
[13:29] though perhaps the idea is that if tools aren't available yet, it isn't worth trying until later?
[13:30] jam, yeah, unless explicitly handled otherwise we assume that errors might fix themselves if we try again later
[13:30] jam, frankly it's insane that the provisioner even knows about tools in the first place
[13:33] fwereade: well, it needs to pass them to cloud init
[13:33] so that the machine that is starting up can get them
[13:33] fwereade: why is that insane ?
[13:34] jam, the environ *already knows about the tools*. we *ask it where to find the tools*.
[13:34] lunch
[13:34] jam, a bit more than a year ago, we managed to refactor some of the way, but not all
[13:34] fwereade: is it intended to stay that way?
Given we've talked about object storage in mongo
[13:37] jam, tools-in-state would indeed change the picture significantly, it's true
[13:38] jam, but even then the provisioner would just be a dumb pipe wrt tools, I think
[13:39] fwereade: I thought "juju destroy-machine --force" was intended to prevent this status:
[13:39] "2":
[13:39] instance-id: pending
[13:39] life: dead
[13:39] series: trusty
[13:40] jam, hmm, yeah, the provisioner ought to be able to kill all the dead machines before it starts worrying about the live ones
[13:40] fwereade: well it is possible that it will get to it soon, but it is stuck downloading the cloud-image template
[13:40] which is a few MB
[13:40] like 100 or so
[13:40] jam, btw, I don't suppose you know where that "instance-id: pending" business comes from?
[13:40] jam, either we have an instance-id or we don't
[13:41] fwereade: in that particular case, the "trusty-template" fslock was left stale
[13:41] when I called "destroy-environment" while not waiting for trusty to come up.
[13:41] jam: just saw your message about system-identity in cloud-init. that test you linked to is a bit misleading; it's running Configure, when it should be running ConfigureBasic
[13:41] jam: IOW, the test does not reflect what we really do on bootstrap
[13:41] oh WTF
[13:41] fwereade: I'm also seeing: 2014-04-23 13:41:08 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
[13:42] * axw-away goes back away
[13:44] jam, looks like m.InstanceId is not erroring when it should?
[13:47] fwereade: perhaps
[13:53] fwereade: so from what I can sort out, vladk's patch is worth landing. I'm still confused by bits of it (why is it working), but I can accept that it might just be because I don't understand the swings and roundabouts
[13:53] certainly he said he confirmed that secrets aren't going to EC2
[13:54] fwereade: a potential fix for bug #1306537: https://codereview.appspot.com/90640043
[13:54] <_mup_> Bug #1306537: LXC local provider fails to provision precise instances from a trusty host
[13:57] question via email this morning.. local provider (using lxc).. doing deploy --to kvm:0 is supported?
[13:58] hazmat: my understanding is that it has worked, perhaps accidentally but it was working
[14:00] voidspace: I'm going to grab an early lunch and do an errand and we can sync up with where we are at when I get back.
[14:02] jam, I'm worried about that because tim added a hack somewhere else in an attempt to resolve essentially the same problem
[14:03] jam, except it's not quite the same *enough* I guess
[14:03] fwereade: so there is certainly a bit of "this worked for me" vs feeling good about the change. but I have the strong feeling that feeling good about the change means a much bigger overhaul of our internals
[14:04] fwereade: so I filed bug #1311677
[14:04] <_mup_> Bug #1311677: if the provisioner fails to find tools for one machine it fails to provision the others
[14:04] and looking at it
[14:04] (the startMachines code)
[14:04] it does exit on the first failure
[14:04] and we have the fact that on "normal" provisioning failures
[14:04] we call "task.setErrorStatus"
[14:04] so if one fails
[14:04] we mark it failing
[14:05] and then just go back to doing the next thing when we wake up again
[14:05] however, if possibleTools fails
[14:05] we *don't* call setErrorStatus
[14:05] so that machine stays around blocking up all other work
[14:06] fwereade: my concerns:
1) We could try to keep provisioning even on errors, but if we are getting RateLimitExceeded, we really should just shut up and go sleep for a while
[14:06] 2) Do we expect that possibleTools is actually going to resolve itself RealSoonNow ?
[14:06] now that we have the idea of Transient failures, could we treat no tools as one there ?
[14:08] jam, still thinking
[14:08] jam, re (1), I really think we have to do the rate-limiting inside the Environ, and use a common Environ for the various workers that need one
[14:09] fwereade: so even with that we are likely to eventually exceed our retries
[14:09] (say we retry up to 3 times, do we want to come back tomorrow?)
[14:09] I don't think we want to block a worker thread completely in Environ for more than ... minutes?
[14:10] * jam gets called away to actually be part of a family
[14:13] jam, if you come back sometime soon: I don't think that tools failure is transient, so I don't think treating it as such will really help -- setErrorStatus is probably the right answer to the problem (apart from anything else, precise/trusty are not the only series people will use even if they are *today*)
[14:13] to *that* problem
[14:14] fwereade: definitely, no tools is likely to be a semi-permanent problem for all intents and purposes, certainly not something likely to get fixed within a small number of minutes, which is the most amount of time I can conceive of actually waiting for something to succeed.
[14:21] jam, it works, the question is whether it's supported, i thought thumper had said that it was, but various folks are getting mixed signals on it
[14:21] so there's some confusion in that regard
[14:22] jam, fwereade, I think we are 2+ weeks away from a stable 1.20. I want to try for a 1.18.2 release this week.
[14:22] hazmat: it works by accident. I wouldn't say it is "supported"
[14:23] sinzui: so my understanding is that there is very strong political pressure to get something out that has HA in a 'stable' release by the end of the week. We don't have to close all the High bugs to get there.
[14:23] hazmat: which is to say, I wouldn't rely on it working in the future.
[14:23] I think we might be able to do a 1.19.1 today
[14:23] which will be missing debug-log in HA, and backup/restore, I think
[14:23] but I think we can land Vladk's patch to get "juju run" to work in 1.19.1 and HA
[14:24] jam1, You cannot have a stable release until after users have given feedback. If I release today, you still don't get feedback until next week
[14:24] natefinch, so if we have folks that need a working solution for lxc and kvm today that need a supported solution, the answer is you're out of luck? and we don't support lxc and kvm in the same local provider.
[14:25] fwereade: sinzui: alexisb (if around) I'm not the one who has the specifics for why we need HA available for April 25th, can you give more context ?
[14:25] jam1, also CI still doesn't pass HA. Someone might need to work with abentley to make the test pass or find the bug that might be in the code
[14:25] hazmat, I don't *like* it, but ISTM that it's (1) useful and (2) used, so we don't have any reasonable option for breaking it without providing an alternative
[14:25] fwereade, there's an extant bug on the latter to support kvm and lxc containers in the same provider, which would also work, but it's a bit more work.
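The setErrorStatus fix fwereade points to above (for bug #1311677) amounts to recording the failure against the machine and moving on, instead of returning early and starving every later machine. A pared-down sketch of that control flow; the real types live in the provisioner task:

package main

import "fmt"

// Machine and provisionerTask are stand-ins for the worker/provisioner
// types; only the control flow matters here.
type Machine struct{ Id string }

type provisionerTask struct{}

func (t *provisionerTask) startMachine(m Machine) error {
	if m.Id == "1" { // the precise machine from the discussion above
		return fmt.Errorf("no matching tools available")
	}
	return nil
}

func (t *provisionerTask) setErrorStatus(m Machine, err error) {
	fmt.Printf("machine %s marked broken: %v\n", m.Id, err)
}

// startMachines records each failure and keeps going, so a tool-less
// machine no longer blocks the ones behind it.
func (t *provisionerTask) startMachines(machines []Machine) {
	for _, m := range machines {
		if err := t.startMachine(m); err != nil {
			t.setErrorStatus(m, err)
			continue // machine "2" (trusty) still gets provisioned
		}
		fmt.Printf("machine %s started\n", m.Id)
	}
}

func main() {
	t := &provisionerTask{}
	t.startMachines([]Machine{{Id: "1"}, {Id: "2"}})
}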
[14:25] fwereade: hazmat: I would agree with the "we shouldn't break it without providing another way"
[14:26] hazmat: you still have the problem with spelling "I want to deploy the next one into KVM", unless we go all the way and make all the things you deploy prefixed
[14:26] ok.. so supported for now .. till we have something better :-)
[14:26] jam, any placement effectively bypasses constraints
[14:27] fwereade, jam1, thanks
[14:27] jam1, alexisb, fwereade: I am not here to be the voice of idealism. I am the voice of pragmatism. We know developers, users, and CI find bugs, and all three need to affirm the feature works. There is not enough information to call HA stable for release
[14:27] jam1, hazmat: or we bite the bullet and get multi-provider environments going; at which point it's just another pseudo-provider and should Just Work
[14:27] jam1, hazmat: but I'm not confident that'll happen any time soon
[14:27] fwereade: then there is the argument that cross-env relations is better than multi-provider ones
[14:28] fwereade: if only because for most of them, you actually still want to run an agent local to that provider
[14:28] jam1, the 4/25 date for the 1.20 release was set because the target for a release with HA is ODS and jamespage needs some time to integrate
[14:28] long term that sounds great, manual provider with cross region worked well enough for most of those cases for me till 1.19 (the address stuff breaks it)
[14:29] but as sinzui points out it has to be ready, which it is not
[14:29] alexisb: fwiw, it is probably ready enough for jamespage to look into integrating it
[14:30] jam1, ok, we should connect with jamespage then
[14:30] alexisb, jamespage: If you get juju 1.19.1 with HA this week, is that good enough to test?
[14:30] jam1, alexisb: that was going to be my thought as well. There's some edge case stuff that should be fixed, but the main workings are all there
[14:30] sinzui: though probably we'll want to get 1.19.1 rather than have him running trunk
[14:30] sinzui: I was trying to assign someone to work on the HA bug today, I think natefinch is the one that volunteered to get the test running
[14:30] sinzui, jam1 how close are we to a 19.1 release?
[14:31] I see 2 critical bugs still being worked
[14:31] alexisb, jam1, you are actually on schedule for a Friday release
[14:31] alexisb: one of those should have a patch that should be landing, I don't know for sure why it hasn't
[14:31] I just don't see that release being called 1.20
[14:31] the other is "juju backup" which is also supposed to have something from perrito666, but may not have to block 1.19.1
[14:31] sinzui, agreed
[14:31] sinzui: I agree, I don't think 1.19.1 is 1.20
[14:31] but it is HA out for testing
[14:31] * perrito666 feels conjured
[14:32] to get feedback to drive a proper 1.20
[14:32] perrito666: so you were working to get "juju backup" to find /usr/lib/juju/bin/mongod when available, did that get done?
[14:32] jamespage, would a 1.19.1 development release be enough for you to begin testing and integration?
[14:32] jam1 yep
[14:33] alexisb: I know of 2 things that are just-broken when you run HA (juju debug-log and juju run), but we have a patch for the latter, and wwitzel3 and voidspace on the former.
[14:33] jam1, I'm not sure how important it is to have a local state-server in the *long* term, but in the short term it is true that we benefit a lot from it
[14:34] natefinch: did you get to look into the HA CI test suite?
[14:34] jam1: I am actually trying to fix the whole thing together (backup/restore); since the test takes time, I try to make the best of it, but I can propose the backup fix alone if you want
[14:34] jam1, returning to 1.18.2. You have diligently landed some fixes to it. I think there were a few more bugs that would be lovely to include. May I propose some merges to 1.18 to prepare a 1.18.2 that Ubuntu will love?
[14:34] jam1: looking at it now, late start to my day today, but i still have a lot of time to put into it.
[14:35] perrito666: please never block getting incremental improvements on getting the whole thing. In general everyone benefits, as long as it doesn't regress things in the meantime.
[14:35] perrito666, I like small branches -- I know that a backup that can't be restored is no backup at all, but I'd still rather see a few branches that we merge all at once if we have to
[14:35] sinzui: I have the strong feeling that 1.18 is going to stick in Trusty and we're going to be supporting it for a while.
[14:35] ack
[14:35] sinzui: so while I'm not currently focused on it, because of 1.19 and HA stuff filling my queue
[14:35] :)
[14:35] sinzui: patches to 1.18 seem most welcome
[14:35] perrito666, jam1: indeed, the only reason to hold off on landing one of those branches is if it does, in isolation, regress something
[14:36] jam1, are you thinking that 1.18 will be the long term solution for Trusty?
[14:36] jam1: okay. I will make plans for 1.18.2
[14:36] sinzui: how do I investigate a CI failure? I believe functional-ha-recovery-devel is the one I'm supposed to be fixing
[14:37] alexisb: 1.18 doesn't have HA support, and will likely be missing lots of stuff. I just think that given our track record with actually getting stuff into the main archive, we really can't trust it
[14:38] natefinch, abentley in canonical's #juju is seeing errors like this... http://ec2-54-84-137-170.compute-1.amazonaws.com:8080/job/functional-ha-recovery-devel/64/console
[14:38] alexisb: so likely we'll want something like cloud-archive for Trusty that provides the latest set of tools that we like
[14:38] natefinch, abentley believes the problem is the test: it is not waiting for confirmation that juju is in HA.
[14:38] but I don't think we can actually expect to get things into the Ubuntu archive.
[14:39] natefinch, abentley will ask for assistance if the test continues to fail after assuring itself that HA is up
[14:39] sinzui: cool. I'm more than willing to help. I know that working with mongo can be hairy
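A sketch, in Go for illustration only, of the kind of wait abentley describes the test as missing; haReady is an assumed probe (e.g. "does juju status show every state server holding a vote?"), not a real juju or CI function:

    package sketch

    import (
        "errors"
        "time"
    )

    // haReady is an assumed probe for "has the environment actually reached HA?".
    func haReady() bool { return false }

    // waitForHA polls until HA is confirmed instead of asserting right away,
    // failing only after a generous deadline.
    func waitForHA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if haReady() {
                return nil
            }
            time.Sleep(10 * time.Second)
        }
        return errors.New("timed out waiting for juju to reach HA")
    }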
[14:40] jam1, yes we are working with the foundations team/TB to define the process for updating the juju-core package in Trusty
[14:40] I don't know yet what the process will be
[14:40] alexisb: i might be being jaded, but cloud-tools:archive still has 1.16.3 because it never got 1.16.5 landed in Saucy
[14:41] and that is... 6 months old?
[14:41] and it could very well come via cloud-tools
[14:43] alexisb: though again, we've struggled to get stuff in there, too
[14:44] are there any tricks to compiling juju with gccgo?
[14:45] jam1, alexisb: I thought jamespage had made progress getting juju 1.16.4..1.16.6 into old ubuntu. The issue was the backup and restore plugins... since the backup plugin wasn't in the code, we elected to not package it.
[14:46] jam1, re https://codereview.appspot.com/90640043 -- how about fixing environs/bootstrap.SeriesToUpload instead?
[14:46] sinzui: so cloud-archive:tools still has 1.16.3 as the best you can get: http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/cloud-tools/main/binary-amd64/Packages
[14:46] well HA is really important so we will need to fight the battles to get it into Trusty
[14:47] fwereade: so instead of LatestLTSSeries it would do AllLTSSeries ?
[14:47] jam1, essentially, yeah
[14:48] jam1, if we were smart we'd only upload a single binary anyway but I'm not sure we got that far yet
[14:48] fwereade: so at this point, I think using LatestLTSSeries is still a bit wonky since we really can't expect anything about T+4
[14:48] fwereade: we're not
[14:48] alexisb, jam1, we have never tested upgrade from 1.16.3 to 1.18.x. We need to test that if jamespage fails to get 1.16.6 into the cloud-archive... and hope it works
[14:48] if you bootstrap --debug you can see the double upload
[14:48] jam1, yeah, thought so
[14:50] sinzui: AIUI, the issue was that once Trusty releases, then the version in Trusty becomes the version in cloud-tools, so it will jump from 1.16.3 to 1.18.1 (?)
[14:50] jam1, right, that was jamespage's fear.
[14:50] fwereade: I would be fine moving it to SeriesToUpload, and *I* would be fine just making that function put Add("precise"), Add("trusty")
[14:51] fwereade: but *I'm* way past EOD here
[14:51] jam1, but regardless, I think we're better off fixing SeriesToUpload (and maybe improving the double-upload, now that it's potentially a triple-upload) than adding another tweak to a code path that is in itself pretty-much straight-up evil in the first place
[14:51] fwereade: so happy to LGTM a patch that does that :)
[14:52] even better that it could *actually* be tested
[14:52] jam1, quite so, that was my other quibble there ;)
[14:53] jam1, ok, I have a meeting in a few minutes and am not sure I will get to it today myself, but I'll make sure you know if I do
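A sketch of the fix fwereade floats above -- making SeriesToUpload add every LTS explicitly instead of consulting LatestLTSSeries. The signature and set handling are assumptions, not the real environs/bootstrap code:

    package sketch

    // seriesToUpload returns the requested series plus each LTS we care
    // about, de-duplicated, per the Add("precise"), Add("trusty") idea above.
    func seriesToUpload(requested []string) []string {
        seen := make(map[string]bool)
        var out []string
        add := func(s string) {
            if !seen[s] {
                seen[s] = true
                out = append(out, s)
            }
        }
        for _, s := range requested {
            add(s)
        }
        // Instead of add(LatestLTSSeries()): enumerate the LTSes we support,
        // since we can't assume anything about the LTS after trusty yet.
        add("precise")
        add("trusty")
        return out
    }

As fwereade notes, part of the appeal is that a plain function like this can actually be tested, and it avoids guessing at future LTS names.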
[14:56] sinzui: so the swift fix was a mirage?
[14:56] bac: yes
[14:56] drats
[14:57] bac: and the corrupt admin-secret theory is crushed
[14:58] bac, also, staging machine-0 has been stuck in hard reboot for a week. I think we can say it is dead.
[15:15] fwereade: I gave a summary of why vladk's patch works, mostly boiling down to the fact that what we write to the DB is the params.StateServingInfo struct, unlike most of our code, which uses separate types for the API and the DB
[15:15] https://codereview.appspot.com/90580043/
[15:16] vladk: are you able to land that patch today before sinzui can put together a release ?
[15:16] (and get CI to pass on it, I guess)
[15:16] jam1: yes
[15:16] vladk: great
[15:16] LGTM
[15:17] vladk: can I ask that you file a "tech-debt" bug to track that we may want to have each API server have its own system identity?
[15:17] jam1: ok
[15:17] I think as long as we have the api StateServingInfo we can actually notice who's calling and give them a different value if we want
[15:19] it looks like the 1.18 branch has deps on both github.com/loggo/loggo and github.com/juju/loggo -- are those the same ?
[15:21] hazmat: there needs to be only one, otherwise the objects internally are not compatible
[15:21] it should all be "github.com/juju/loggo"
[15:25] jam1, 1.18 stable branch -> state/apiserver/usermanager/usermanager.go: "github.com/loggo/loggo"
[15:25] jam1, thanks.. i'll mod locally
[15:25] hazmat: please propose a fix if you could
[15:26] jam1, sure.. just need to get through the morning
[15:26] jam1: ping, if you have 5 minutes
[15:27] jam1: it can wait until tomorrow if not
=== BradCrittenden is now known as bac
[15:42] ooh, precise only has version 5 of rsyslog so we can only use the "legacy" configuration format
[15:42] lovely
[15:47] jam1: cancel my ping :-)
[15:47] natefinch: ping
[15:50] voidspace: howdy
[16:00] fwereade: where do I go to approve time off?
[16:05] jam1: fwereade sinzui https://codereview.appspot.com/90660043 this fixes the backup part of the issue
[16:06] so ptal?
[16:07] anyone is encouraged to, although be warned, it's bash
[16:11] natefinch, canonicaladmin.com is all I know
=== vladk is now known as vladk|offline
[17:04] does anyone know why we are dragging the logs along in the backup? (and more precisely, why are we restoring them?) I mean, I know we might want to back them up for analysis purposes, but restoring the old logs pollutes the information a bit
[17:17] natefinch: you should be able to log into Canonical Admin and have "Team Requests" under the Administration section
[17:18] perrito666: if you want to investigate why something failed in the past, you need the log
[17:19] jam1: exactly, but if you restore the log from the previous machine you are lying about the current one
[17:19] perrito666: but it also contains the whole history of your actual environment
[17:19] vs just this new thing that I just brought up
[17:19] I would be fine moving the existing file to the side
[17:19] but all the juicy history is what you are restoring
[17:19] perrito666: did you test the backup stuff live against a Trusty bootstrap?
[17:20] perrito666: nate's patch landed at r2662
[17:27] jam1: sorry, I was at the door
[17:27] I did, let me re-check that the env that is being backed up actually has the proper mongodb
[17:30] jam1: re your comment, I could try to assert MONGO* is executable or fail instead
[17:39] going jogging, back shortly
[17:41] perrito666: I don't really think we need to spend many cycles worrying about it.
[17:42] It may be that just using '-f' will give better failure modes (more obvious if we try to execute something that isn't executable than trying to run a command that isn't in $PATH)
[17:42] perrito666: anyway, not a big deal, don't spend too much time on it, focus on getting it landed and on to restore
[17:42] yeah, if you have those and they are not executable you most likely already noticed other problems
[17:44] * perrito666 repeats himself when he stops writing a sentence in the middle and then restarts
[17:44] that is certainly a common thing
[17:45] well, I did a version of restore that backs up the old config just so I get to discover what part of our backup restoration breaks the state server
[17:49] * perrito666 's kingdom for an aws location in south america
=== vladk|offline is now known as vladk
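The mongod lookup discussed above lands as a bash script, but the fallback logic is easy to see in a Go sketch; mongodPath is a hypothetical name, and the only assumption is "prefer /usr/lib/juju/bin/mongod when it exists and is executable, else take whatever is on $PATH":

    package sketch

    import (
        "os"
        "os/exec"
    )

    // mongodPath prefers the juju-shipped mongod and falls back to the system one.
    func mongodPath() (string, error) {
        const jujuMongod = "/usr/lib/juju/bin/mongod"
        if info, err := os.Stat(jujuMongod); err == nil && info.Mode()&0111 != 0 {
            return jujuMongod, nil
        }
        // Not present (or not executable): fall back to a $PATH lookup.
        return exec.LookPath("mongod")
    }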
[18:32] EOD folks
[18:32] g'night
[18:32] bye
[18:32] voidspace: see ya
[18:32] is juju add-relation smart enough to handle add-relations to non-existent services that may become available in the future?
[18:32] for example, if I deploy 3 charms and charm 1 relies on charm 3, so i add the relation during charm 1's deployment
[18:33] is it smart enough to retry the add-relation once it sees charm 3 come online?
[18:34] marcoceppi: ^ curious if you know this?
[18:35] stokachu: no
[18:35] marcoceppi: no as in not smart enough, or no as in you aren't sure?
[18:35] not smart enough: if you run add-relation, it won't actually work if one of the two services isn't there
[18:36] so that makes it difficult for me to do juju deploy ; juju add-relation ; juju deploy
[18:37] stokachu: not difficult, impossible.
[18:37] stokachu: you should run add-relation once you have all your services deployed
[18:37] so if i deploy an openstack cloud i'd have to deploy all the charms, then re-loop through those charms and add relations
[18:38] stokachu: or, use juju deployer
[18:39] stokachu: or better yet, deploy charms, mount volumes, then add relations, as many charms expect the volumes to already be configured in the joined hook
[19:01] bloodearnest: interesting, i'll look into that
[19:02] stokachu: on account of juju having no way yet to detect/react to volumes changing, AIUI
[19:03] i wonder if it'd be worth it to have add-relations kept in a queue, and when a service comes online it just checks for pending ones
[19:07] stokachu: note that you don't need to wait for the charms to be deployed to add relations. You can fire off deploy deploy deploy add-relation add-relation add-relation, and juju will eventually catch up. It's just that you have to run the deploy command before the add-relation command
[19:07] natefinch: yea, that's what i'm doing now
[19:07] just iterating through the charms twice is all
[19:08] stokachu: iterate through charms once and then through relations once ;)
[19:08] gotta run, car needs to be inspected, back in 45 mins
=== natefinch is now known as natefinch-afk
=== natefinch-afk is now known as natefinch
[19:35] wwitzel3, natefinch: CI cursed the most recent juju because of a unit-test failure on precise. Do either of you think the test can be tuned to be reliable on precise? https://bugs.launchpad.net/juju-core/+bug/1311825
[19:35] <_mup_> Bug #1311825: test failure UniterSuite.TestUniterUpgradeConflicts
[19:37] sinzui: looking
[19:37] sinzui: also taking a look
[19:40] man, I hate overly refactored tests
[19:47] wwitzel3: can you even tell what sub-test is failing?
[19:47] all I see is "step 8", which doesn't tell me diddly
[19:55] natefinch: not really, I've got as far as the fixUpgradeError step
[19:56] natefinch: but it is all nested so I can't tell in which one that is happening
=== vladk is now known as vladk|offline
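The "step 8" complaint above is the classic failure mode of a scenario-style test. A generic illustration -- not the actual uniter test harness -- of naming steps so failures identify themselves:

    package sketch

    import "testing"

    // step is one named unit of a long scenario test.
    type step struct {
        name string
        run  func(t *testing.T)
    }

    // runSteps logs each step's name, so a failure reads
    // "step 8: fix upgrade error" instead of just an index.
    func runSteps(t *testing.T, steps []step) {
        for i, s := range steps {
            t.Logf("step %d: %s", i, s.name)
            s.run(t)
            if t.Failed() {
                t.Fatalf("scenario aborted at step %d (%s)", i, s.name)
            }
        }
    }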