UbuntuIRC / 2014/07/01 / #juju-dev.txt
[00:09] <wwitzel3> wallyworld: ok, will fixup the er's .. I can probably do it as two branches, I will look in to that. As for the results, I already define a params.go to hold the result, so I can just store the value that I am marshalling in there instead of manually marshalling it and the RPC layer will do that for me?
[00:10] <wallyworld> wwitzel3: i believe so yes
[00:11] <wallyworld> the gui makes other api calls which just return the structs
[00:12] <wallyworld> wwitzel3: don't waste time on 2 branches if it's too difficult to split out. bigger branches can be harder to land though since one bit might be ok but other bits may need fixes and the ok bit can't land separately
[00:14] <wallyworld> wwitzel3: also, don't forget to add SupportedArchitectures to the results
[00:14] <wwitzel3> wallyworld: I actually removed it, since it is sent along with each instanceType
[00:14] <wallyworld> we also need to handle that some providers cannot supply InstanceTypes but do have SupportedArchitectures etc
[00:15] <wallyworld> oh, let me look
[00:16] <wallyworld> wwitzel3: i think we need supported arches as an explicit field since otherwise the caller will have to iterate over the instance types and pull out the arches, also not all providers will support returning instance types but will be able to supply supported arches
[00:16] <wwitzel3> wallyworld: which ones? I didn't see that, it is easy to re-add them as their own top level key in the map
[00:17] <wallyworld> manual provider for example
[00:17] <wallyworld> maas also
[00:18] <wallyworld> the supported arches value will be used to guide the user in creating valid constraints with arch=blah
[00:18] <wwitzel3> ahh ok
[00:18] <wwitzel3> should I still include it with the instanceType data?
[00:18] <wallyworld> no, it is a separate concept
[00:18] <wallyworld> each provider has a SupportedArchitectures() api
[00:19] <wwitzel3> wallyworld: right, I just notice that InstanceType had an Arches field and it was the same type as the return value of SupportedArchitectures()
[00:20] <wallyworld> wwitzel3: yes, that's right. for ec2 (only i think), instances can run on i386 or amd64. i think for all other providers instance types run on one arch
[00:20] <wallyworld> so we record for individual instance types what arch(es) that instance type can run on
[00:21] <wwitzel3> wallyworld: ok, so that needs to be specific to the instance, not the provider level SupportedArchitectures()
[00:21] <wallyworld> no, it's at the provider level
[00:22] <wallyworld> the supported arches for the provider comes from reading the simplestreams image metadata
[00:22] <wallyworld> and adding together all the arches listed
[00:23] <wallyworld> so that the system can disallow arch=foo constraints for arches for which there is no available image
[00:23] <wwitzel3> wallyworld: right and that is different than the supported arches for an instance
[00:23] <wallyworld> wwitzel3: yes, so when an image is chosen, it needs to be matched up with an instance type and the arches also need to match there too
[00:24] <wwitzel3> wallyworld: right, so really what I am doing now in the API InstanceType.Arches = provider.SupportedArchitectures() is wrong, I need to move supported up to its own field and only populate Arches from the specific provider instance/package/flavor response
[00:24] <thumper> wallyworld: found one leak
[00:24] <wallyworld> wwitzel3: so when the user says mem=64G arch=i386, the matching finds an i386 image from simplestreams and then looks at all the instance types which can provide 64G which can run on i386
[00:25] <wallyworld> wwitzel3: you don't really need all the instance type data - just the valid names
[00:25] <wallyworld> wwitzel3: the gui only cares about knowing what instance type names are valid
[00:25] <wwitzel3> wallyworld: oh, ok, so I'll just drop it from the results then and move SupportedArchitectures up to its own field
[00:26] <wallyworld> i guess we could return all the instance type data
[00:26] <wallyworld> not just the names
[00:26] <wallyworld> but names is all that's needed for now
[00:26] <wallyworld> and regardless, yes, we do need supported arches as a top level field
[00:26] <wwitzel3> wallyworld: sounds good
[00:27] <wallyworld> wwitzel3: since we have api versioning, let's start with just names, and add stuff later if needed
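A rough sketch of the result shape this exchange converges on: instance type names only, with supported architectures as its own top-level field. The type and field names below are illustrative assumptions, not juju's actual params types.

    // Hypothetical params result for the call discussed above: just the
    // valid instance type names, plus supported architectures as an
    // explicit top-level field (names here are assumptions, not juju's API).
    package params

    type InstanceTypesResult struct {
        InstanceTypeNames      []string
        SupportedArchitectures []string
    }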
[00:27] <wwitzel3> wallyworld: also if you have any suggestions of where I should look to create a proper testing env to exercise this code that would be helpful.
[00:27] <thumper> maybe not...
[00:27] <thumper> geez
[00:27] <wwitzel3> wallyworld: well I already have all the other stuff in there, but I can remove it if you prefer
[00:27] <wallyworld> wwitzel3: i'm wary of yagni
[00:28] <wallyworld> and the extra testing, maintenance etc for stuff thats not used
[00:28] <wallyworld> wwitzel3: which bit specifically do you need help testing? the client calls?
[00:28] <wallyworld> thumper: be with you in a sec
[00:29] <wwitzel3> wallyworld: yes, starting from state/api .. the env I am creating is throwing nil pointer exceptions because it doesn't have the InstanceTypes method or AvailabilityZones methods .. but I've added them to the dummy provider ..
[00:30] <wallyworld> wwitzel3: i normally add an assertion that a struct implements an interface so i get a compile time error if i haven't got the right methods eg var _ environs.EnvironProvider = (*environProvider)(nil)
[00:30] <wwitzel3> wallyworld: https://github.com/wwitzel3/juju/commit/e1598a5300cd20c4c23c02e09952d1b6136fdad3 here is my broken test if that helps
[00:31] <wallyworld> the above says that the environProvider struct implements the EnvironProvider interface
[00:31] <wallyworld> and it won't compile if it doesn't
[00:31] <wallyworld> we have a bunch of those in the various providers
[00:31] <wwitzel3> wallyworld: ok
[00:31] <wallyworld> since there's several interfaces that can be implemented
[00:32] <wallyworld> wwitzel3: dummy provider has these already
[00:32] <wallyworld> var _ imagemetadata.SupportsCustomSources = (*environ)(nil)
[00:32] <wallyworld> var _ tools.SupportsCustomSources = (*environ)(nil)
[00:32] <wallyworld> var _ environs.Environ = (*environ)(nil)
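Those var _ declarations are compile-time interface checks. A minimal, self-contained sketch of the pattern, using hypothetical names rather than the real juju interfaces:

    // Compile-time assertion pattern: the blank-identifier declaration below
    // refuses to compile if *environ stops satisfying Capabilities.
    // Capabilities and environ are stand-ins for interfaces like
    // environs.Environ in the real code.
    package example

    type Capabilities interface {
        InstanceTypes() ([]string, error)
    }

    type environ struct{}

    func (e *environ) InstanceTypes() ([]string, error) { return nil, nil }

    var _ Capabilities = (*environ)(nil)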
[00:33] <wallyworld> i can't see trivially what's wrong with your implementation
[00:33] <wwitzel3> wallyworld: ok, thanks, I'll keep poking at it
[00:34] <wallyworld> wwitzel3: thank you. let me know how you go
[00:45] <wwitzel3> wallyworld: your tip helped, added common.EnvironCapabilities like you mentioned and it revealed that the dummy provider wasn't properly implementing the interface
[00:45] <wallyworld> cool :-)
[00:45] <wwitzel3> wallyworld: I guess now I actually have to write some tests that assert something useful
[00:45] <wwitzel3> bugger :P
[00:45] <wallyworld> lol yeah
[00:46] <thumper> wallyworld: chat when you have a minute
[00:46] <wallyworld> thumper: sure, 2 mins
[00:51] <davecheney> https://github.com/juju/juju/pull/195
[00:51] <davecheney> i am blocked on this review
[00:52] <thumper> wallyworld: https://github.com/juju/juju/pull/201
[00:52] <wallyworld> thumper: https://plus.google.com/hangouts/_/canonical.com/tanzanite-daily
[01:05] <axw> wallyworld: gonna spend some time today looking into test failures
[01:06] <axw> clearly something is very broken
[01:06] <wallyworld> axw: hey, yes please, i was going to ask you to do that. thumper already has one fix landing https://github.com/juju/juju/pull/201
[01:06] <axw> cool
[01:06] <menn0> axw: I'm just testing that precise upgrades fix now
[01:07] <axw> menn0: cool, thanks
[01:18] <davecheney> grr, version package
[01:28] <wallyworld> axw: also, don't forget to backport stuff to the 1.20 branch eg i think the ssh timeout fix was proposed for master only
[01:30] <davecheney> wallyworld: should i move version.SeriesVersion so it is only visible when compiled on linux ?
[01:30] <axw> wallyworld: I've been waiting for them to merge into master first. I guess I'd better do it now before I forget
[01:30] <wallyworld> axw: np :-)
[01:31] <wallyworld> davecheney: it's needed on other clients too
[01:31] <wallyworld> to set up ubuntu workloads using custom image metadata
[01:31] <davecheney> wallyworld: that is a problem
[01:31] <davecheney> how can I run it on osx ?
[01:31] <davecheney> fuckit
[01:31] <davecheney> i'll just remove that comment
[01:31] <wallyworld> davecheney: see my comment?
[01:31] <davecheney> yup
[01:31] <wallyworld> it can run
[01:31] <davecheney> i'll just remove my comment on that function and back away
[01:32] <axw> ;)
[01:32] <wallyworld> why not just make the trivial change i suggested?
[01:32] <wallyworld> then it can run on other clients
[01:32] <davecheney> looked harder than just backing out a comment I wrote that was a mistake
[01:32] <davecheney> my comment was in error
[01:32] <davecheney> i have deleted it
[01:32] <wallyworld> sure, but it seems like there's a legitimate problem there that is trivially fixable
[01:32] <wallyworld> so lets just fix it
[01:33] <davecheney> wallyworld: i'll fix it in a followu
[01:33] <wallyworld> ok
[01:33] <davecheney> the goal is to fix 1.20
[01:33] <wallyworld> i didn't think this was broken in 1.20
[01:33] <davecheney> wallyworld: i am only going on the reports I have been given
[01:33] <wallyworld> the 1.20 branch was forked off master before this change?
[01:34] <davecheney> i cannot validate the problem or the fix myself
[01:34] <wallyworld> i could be wrong
[01:34] <wallyworld> let me check
[01:38] <menn0> axw: did this make it in to 1.20 as well? https://github.com/juju/juju/pull/188
[01:39] <menn0> axw: my manual upgrade test worked fine btw
[01:39] <axw> menn0: sweet. not yet, I've just created a PR and sent it to the bot now
[01:40] <menn0> axw: great, just making sure
[01:40] <axw> menn0: thanks for testing
[01:43] <wallyworld> davecheney: 1.20 was forked from trunk right before that os_version change that broke stuff
[01:43] <wallyworld> so any fixes just need to go to trunk
[01:44] <wallyworld> axw: once stuff lands in 1.20, mark the bugs as fix committed as well
[01:45] <wallyworld> i think there are 2 branches for bug 1334273
[01:45] <axw> wallyworld: will do
[01:45] <_mup_> Bug #1334273: Upgrades of precise localhost fail <local-provider> <precise> <regression> <upgrade-juju> <juju-core:Triaged by axwalk> <juju-core 1.20:Triaged by axwalk> <https://launchpad.net/bugs/1334273>
[01:45] <axw> wallyworld: yep, there's one more coming for 1.20
[01:45] <menn0> axw: I've tested the 1.18.4 to 1.19.5 upgrade on precise 3 times now and all looks well
[01:45] <wallyworld> \o/
[01:45] <axw> menn0: awesome
[01:58] <davecheney> wallyworld: i don't understand
[01:59] <wallyworld> how can i help?
[01:59] <davecheney> wallyworld: dunno, does 1.20 cli work on windows ?
[01:59] <davecheney> if so, job done
[02:00] <wallyworld> it should i think since the landings to trunk which broke things happened after 1.20 was forked
[02:00] <wallyworld> but
[02:00] <wallyworld> even on 1.20, if SupportedSeries is called on bootstrap, it will fail
[02:01] <davecheney> wallyworld: the bug i have is 1.20 cli doesn't work on windows
[02:01] <davecheney> i cannot validate this myself
[02:01] <davecheney> only go on the bug report I have
[02:01] <davecheney> and at the moment, things are getting less clear by the second
[02:01] <wallyworld> hmmm. i don't know about that then. perhaps that bug was raised before the decision was made to branch 1.20 off an earlier rev from master
[02:02] <wallyworld> we'd need to check with curtis
[02:03] <davecheney> sinzui: ping ?
[02:05] <wallyworld> davecheney: what bug number?
[02:07] <davecheney> lp # 1334493
[02:07] <_mup_> Bug #1334493: Cannot compile/exec win client <regression> <windows> <juju-core:Fix Committed by dave-cheney> <https://launchpad.net/bugs/1334493>
[02:08] <wallyworld> davecheney: that bug is not marked as affecting 1.20
[02:08] <wallyworld> it was taken off 1.20 on the 28th
[02:08] <davecheney> wallyworld: mate all I can tell you is what is in the bug
[02:08] <wallyworld> i'm guessing that's when the 1.20 branch was cut off an earlier rev before that bug was introduced
[02:08] <davecheney> something's broken
[02:09] <davecheney> i fixed something
[02:09] <davecheney> maybe it helped
[02:09] <davecheney> the bug was introduced in https://github.com/juju/juju/pull/95/files
[02:09] <wallyworld> davecheney: sure, but look at the affected series - 1.20 was removed
[02:09] <davecheney> which was 17 days ago
[02:10] * thumper has father in law drop in for a quick visit
[02:11] <davecheney> thumper: is that a euphemism for something ?
[02:12] <lifeless> boom tish
[02:13] <wallyworld> davecheney: 1.20 was forked before that rev from what i can tell, which is why the bug was changed to no longer affecting it
[02:14] <davecheney> 1.20 was forked on the 15th of June ?
[02:14] <davecheney> really ?
[02:14] <wallyworld> no
[02:14] <wallyworld> last common rev was merging pull request 160
[02:14] <wallyworld> i think your date is a bit out
[02:15] <wallyworld> pr 160 was 5 days ago
[02:15] <davecheney> wallyworld: oh
[02:15] <davecheney> you are right
[02:15] <davecheney> i am sorry
[02:15] <wallyworld> np
[02:15] <davecheney> i was looking at the first comment on that PR
[02:15] <davecheney> not when it finally merged
[02:15] <davecheney> so
[02:15] <davecheney> ok
[02:15] <davecheney> so 1.20 isn't broken and trunk isn't blocked then
[02:16] <wallyworld> so, i *think* that bug was originally raised as affecting 1.20, and then retracted
[02:16] <wallyworld> yes, 1.20 is good, trunk not blocked
[02:16] <davecheney> phew
[02:16] * wallyworld is not 100% sure the issue is fixed on trunk, but seems to be
[02:17] <wallyworld> with the pr to be merged
[02:17] <davecheney> ok, here's the plan
[02:17] <davecheney> i'll merge this one
[02:18] <davecheney> then i have a followup waiting on it that adds more tests
[02:18] <wallyworld> ok
[02:18] <davecheney> we can keep the fun going there
[02:18] <davecheney> deal ?
[02:18] <wallyworld> deal, thanks :-)
[02:18] <davecheney> \o/
[02:18] <wallyworld> sorry for confusion
[02:18] <davecheney> nope
[02:18] <davecheney> it was my fault
[02:19] <wallyworld> np
[02:46] <thumper> wallyworld: I guess my branch didn't fix the panic
[02:46] <thumper> wallyworld: sorry...
[02:46] <wallyworld> thumper: hey, thanks for trying. you did a lot of good fixes regardless
[03:04] <thumper> oh fark...
[03:04] * thumper had an annoying thought
[03:11] <thumper> menn0: are you around to chat?
[03:11] <menn0> thumper: yep
[03:13] <menn0> thumper: hang out?
[03:14] <thumper> menn0: yeah
[03:15] <menn0> thumper: https://plus.google.com/hangouts/_/gy36zsk3j2lx4ffgfvotsvzxmya
[03:25] <wallyworld> thumper: did you foresee a charm's resources directory on disk being a flat list of files?
[03:26] <wallyworld> i'd think we'd want a dir hierarchy
[03:26] <thumper> wallyworld: yes...
[03:26] <wallyworld> hmmm
[03:26] <thumper> it was strongly suggested that the dir be flat by the time a charm hook is using it
[03:27] <wallyworld> by whom? our fearless leader?
[03:28] <wallyworld> thumper: also, there seem to be devel and stable streams. but what about charm revisions? would we have a stream for each revision, or namespace the resources under each stream by revision?
[03:35] <wallyworld> axw: do my eyes deceive me, your branch landed
[03:35] <axw> wow, first attempt
[04:03] <thumper> wallyworld: charm revisions are independent of stream revisions
[04:03] <thumper> wallyworld: the full charm version would then be "charm revision - stream - stream revision" tuple
[04:03] <thumper> all mashed together into a string
[04:04] <thumper> that was the idea anyway
[04:47] <davechen1y> umm
[04:47] <davechen1y> https://github.com/juju/juju/pull/195
[04:47] <davechen1y> the bot ate my merge request
[04:48] <wallyworld> looks like our landing instance might have gone down
[04:48] <wallyworld> it was processing a pr and got interrupted
[04:50] <axw> weird, mine too
[04:50] <wallyworld> hmmm, and now the lander says no pull requests to merge
[04:50] <axw> wallyworld: it thinks it's still processing them
[04:51] <axw> gotta add a "Build failed: whatever" message
[04:51] <wallyworld> i'll poke around and see if i can find out where it keeps its in progress queue
[04:52] <axw> wallyworld: it's dumb, it just checks for "Status: merge request accepted." and then a "Build failed:"
[04:52] <wallyworld> oh wait, i see what you're saying
[04:52] <wallyworld> sigh
[04:53] <axw> I think what we should do is add a link to the Jenkins job, so the lander can periodically check the status of the jobs it thinks are running
[04:53] <wallyworld> yep, let's talk to martin about it
[04:53] <axw> atm we just get a link to the job type, but not the actual run
[04:54] <axw> until it fails anyway
[04:54] <davechen1y> so, should i $$merge$$ again ?
[04:54] <axw> davechen1y: already done
[04:54] <davechen1y> thanks mate
[05:52] <davechen1y> can someone kill this job
[05:52] <davechen1y> http://juju-ci.vapour.ws:8080/job/github-merge-juju/310/console
[05:52] <davechen1y> it's not going to pass
[05:53] <wallyworld> gooone
[05:54] <davechen1y> today is giving me the shits
[05:55] <wallyworld> yeah :-(
[06:07] <davechen1y> win12
=== vladk|offline is now known as vladk
[06:27] <axw> wallyworld: https://github.com/juju/juju/pull/205
[06:28] <wallyworld> looking
[06:29] <davechen1y> # github.com/juju/juju/state
[06:29] <davechen1y> state/action.go:86: too many arguments in call to names.NewActionTag
[06:29] <davechen1y> state/state.go:1389: tag.UnitTag undefined (type names.ActionTag has no field or method UnitTag)
[06:29] <davechen1y> state/state.go:1390: tag.Sequence undefined (type names.ActionTag has no field or method Sequence)
[06:29] <davechen1y> is anyone seeing this on master ?
[06:29] <davechen1y> yes, i've run godeps
[06:29] <axw> davechen1y: I was before I ran godeps
[06:29] * axw tries again
[06:29] <davechen1y> lucky(~/src/github.com/juju/juju) % godeps -u dependencies.tsv
[06:29] <davechen1y> "/home/dfc/src/github.com/juju/names" now at b2e06a0ab1c09f138853d1ef6b11f94ca9f7b675
[06:30] <davechen1y> commit 780947ad0e66382af782c65eec6b86796409f0c7
[06:30] <davechen1y> Author: Roger Peppe <rogpeppe@gmail.com>
[06:30] <davechen1y> Date: Thu Jun 26 15:46:34 2014 +0100
[06:30] <davechen1y> use earlier names dependency
[06:30] <davechen1y> umm
[06:30] <davechen1y> why ?
[06:30] <axw> no problems here
[06:30] <davechen1y> axw: what sha do you have for juju/names ?
[06:30] <axw> b2e06a0ab1c09f138853d1ef6b11f94ca9f7b675
[06:31] <davechen1y> ta
[06:31] <davechen1y> oh fuck
[06:31] <davechen1y> godeps is broken
[06:31] <davechen1y> lucky(~/src/github.com/juju/juju) % godeps -u dependencies.tsv
[06:31] <davechen1y> "/home/dfc/src/github.com/juju/names" now at b2e06a0ab1c09f138853d1ef6b11f94ca9f7b675
[06:32] <davechen1y> lucky(~/src/github.com/juju/names) % git log | head -n2
[06:32] <davechen1y> commit d6e9f06b936da18e4feeef3e788bf0dde0cc2d99
[06:32] <davechen1y> Merge: 01e6ac7 f52c443
[06:32] <davechen1y> great, godeps doesn't work
[06:32] <davechen1y> great, that's fucking great
[06:32] <davechen1y> now the new version of godeps
[06:32] <davechen1y> lies
[06:33] <davechen1y> rather than telling you it can't find that rev
[06:33] <axw> hmm. I'm sure it failed for me last time I didn't specify -f
[06:33] <davechen1y> ಠ╭╮ಠ
[06:33] <davechen1y> axw: i recently upgraded to rog's new version
[06:33] <davechen1y> imma gonna downgrade again
[06:34] <axw> davechen1y: yeah I'm on the new version too
[06:34] <davechen1y> it didn't work for me
[06:34] <axw> unless there's a new new version
[06:34] <davechen1y> it kept saying "i've updated"
[06:34] <davechen1y> but it didn't
[06:34] <davechen1y> see above
=== vladk is now known as vladk|offline
[06:46] <davechen1y> shit .
=== vladk|offline is now known as vladk
[07:02] <davechen1y> axw: https://github.com/juju/juju/pull/206
[07:06] <davechen1y> https://github.com/juju/loggo/pull/2
[07:12] <davechen1y> there are SO MANY races in the apiserver package
[07:15] <mattyw> morning all
[07:15] <axw> davechen1y: sorry went to pick up my daughter. looking now
[07:15] <axw> davechen1y: does the detector pick them up? didn't find any last time I ran it
[07:15] <axw> mattyw: morning
[07:16] <mattyw> axw, thanks for taking a look at my branch - sorry about putting the wrong bug link
[07:16] <mattyw> A few minutes before I wrote that I commented on the wrong bug in lp as well
[07:16] <axw> mattyw: that's ok, I figured it out from the commit description :)
[07:17] <davechen1y> axw: it does, if you're persistent enough
[07:17] <davechen1y> http://paste.ubuntu.com/7730028/
[07:17] <axw> davechen1y: ah, thanks
[07:17] <davechen1y> PR's for both in play
[07:17] <axw> sweet
[07:31] <voidspace> morning all
[07:33] <mattyw> morning
[07:34] <davechen1y> axw: shit
[07:34] <davechen1y> that race is worse than I thought
[07:35] <axw> davechen1y: which one?
[07:36] <vladk> dimitern: ping
[07:37] <dimitern> vladk, pong
[07:37] <davechen1y> axw: the one on the loggo/TestLogWriter
[07:38] <davechen1y> axw: i'm fixing it now
[07:38] <axw> davechen1y: ah, I see.
[07:38] <axw> hrm, thought I fixed that.. maybe it was a similar case
[07:38] <vladk> dimitern: I wrote a test for WatchInterfaces API client, It gives a 5 sec delay between state modification and notify signal
[07:39] <vladk> dimitern: test is working fast on API server, but with delays on API client
[07:39] <davechen1y> axw: i can't see a way we can assert that the TestLogWriter is single threaded
[07:40] <axw> davechen1y: perhaps just call loggo.RemoveWriter before ranging over tw.Log
[07:40] <davechen1y> axw: fuck it, i've gone and made tw.Log a function that returns a copy of writer.Log
[07:40] <axw> okey dokey
[07:40] <dimitern> vladk, probably the statetesting code used for the watcher needs tweaking a bit - can i have a look?
[07:40] <davechen1y> if it's worth doing; it's worth over doing
[07:40] <axw> less band-aid-ish
[07:40] <axw> sgtm
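A sketch of the fix being described (the actual loggo change may differ): guard the test writer's slice with a mutex and expose it through a Log() method that returns a copy, so test code can range over the entries without racing against concurrent writes.

    package loggotest

    import "sync"

    // Entry is a recorded log message; a simplified stand-in for loggo's type.
    type Entry struct {
        Level   string
        Message string
    }

    // TestWriter records entries and is safe for concurrent use.
    type TestWriter struct {
        mu  sync.Mutex
        log []Entry
    }

    func (w *TestWriter) Write(level, message string) {
        w.mu.Lock()
        defer w.mu.Unlock()
        w.log = append(w.log, Entry{level, message})
    }

    // Log returns a copy of the recorded entries, so callers never share
    // the underlying slice with concurrent writers.
    func (w *TestWriter) Log() []Entry {
        w.mu.Lock()
        defer w.mu.Unlock()
        out := make([]Entry, len(w.log))
        copy(out, w.log)
        return out
    }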
[07:43] <davechen1y> axw: PTAL
[07:44] <davechen1y> this will need some work to integrate into juju and other callers
[07:44] <fwereade> vladk, look for BackingState and StartSync it once you're sure there's a notification waiting
[07:45] <fwereade> vladk, (even if that's not *directly* applicable, the point is that the *State that's driving the api server needs to be synced -- and that's not necessarily the *State you're usually manipulating in the tests)
[07:47] <dimitern> fwereade, statetesting does that internally for the Assert methods
[07:47] <dimitern> ..IIRC
[07:47] <fwereade> dimitern, indeed, but it's not necessarily using the right *State
[07:47] <fwereade> dimitern, and if you're using the wrong one you'll see those 5s delays
[07:47] <dimitern> fwereade, right
[07:48] <davechen1y> https://bugs.launchpad.net/juju-core/+bug/1336180
[07:48] <_mup_> Bug #1336180: state/apiserver: yet more data races <juju-core:In Progress by dave-cheney> <https://launchpad.net/bugs/1336180>
[07:48] <dimitern> fwereade, vladk, it uses c.State.StartSync()
[07:49] <fwereade> dimitern, vladk: so it *should* just be a matter of starting it with s.BackingState, if that's accessible; and figuring out how to set one up if it's not
[07:49] <vladk> dimitern, fwereade: https://github.com/juju/juju/pull/207
[07:50] <dimitern> vladk, line 278 in networker_test.go - use s.BackingState instead of s.State for NewNotifyWatcherC
[07:51] <dimitern> (and in other similar cases)
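An illustrative fragment of the test pattern being recommended here. The suite field and helper names (s.BackingState, statetesting.NewNotifyWatcherC) come from the conversation and are not verified against the real code; the two helpers marked as hypothetical are placeholders, and imports and suite setup are omitted.

    func (s *networkerSuite) TestWatchInterfaces(c *gc.C) {
        w := s.openInterfacesWatcher(c) // hypothetical helper: opens the API-side watcher

        // Drive the watcher through the *State that backs the API server.
        // Passing s.State here instead of s.BackingState is what causes the
        // ~5s delays mentioned above: StartSync never reaches the server's
        // own state, so asserts wait for the polling interval instead.
        wc := statetesting.NewNotifyWatcherC(c, s.BackingState, w)
        wc.AssertOneChange()

        s.changeNetworkInterfaces(c) // hypothetical helper: mutates state
        wc.AssertOneChange()
    }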
=== rogpeppe2 is now known as rogpeppe
[08:06] <TheMue> morning
[08:06] <TheMue> *yawn*
[08:17] <vladk> dimitern: thanks, that works, could you review my PR
[08:24] <dimitern> vladk, will do in a bit
=== rogpeppe1 is now known as rogpeppe
[09:05] <voidspace> dimitern: so a network interface always has a subnet (in the juju model)
[09:05] <voidspace> dimitern: what's the default subnet?
[09:08] <voidspace> dimitern: and why do we reference count networks?
[09:09] <dimitern> voidspace, sorry, in a call, will get back to you a bit later
[09:09] <voidspace> dimitern: no problem
[09:32] * fwereade bbiab
[09:33] <mattyw> is juju-bot on holiday today?
[09:34] <dimitern> voidspace, back
[09:34] <dimitern> voidspace, so there are two default networks - juju-private and juju-public (if available)
[09:35] <dimitern> voidspace, they are discovered and created at bootstrap time, if the provider supports that
[09:36] <dimitern> voidspace, we use refcounts for networks, because they can be referenced by services and/or machines (i.e. they are considered in-use, when the refcount>0)
[09:37] <dimitern> voidspace, removing networks in use is not allowed; refcounts are used as a simple sanity check on the same doc as the network, as opposed to checking multiple documents in other collections
[09:41] <mattyw> axw, are you still awake/ working?
[09:43] <axw> mattyw: I am here
[09:43] <axw> the bot is not on holiday, it's just a bit sick
[09:44] <axw> mattyw: which PR are you trying to land?
[09:44] <mattyw> axw, https://github.com/juju/juju/pull/108
[09:44] <mattyw> axw, but I was going to ask you about this one: https://github.com/juju/juju/pull/198/
[09:45] <dimitern> vladk, reviewed
[09:45] <axw> mattyw: nfi why it didn't pick that up...
[09:45] <axw> mattyw: ask away
[09:46] <mattyw> axw, just wanted to make sure I understand your feedback properly
[09:46] <mattyw> axw, do you mean if you specifiy add-machine -n you shouldn't be able to specify lxc:2 as well?
[09:47] <axw> mattyw: right
[09:47] <axw> mattyw: just like with "juju deploy", which doesn't allow you to do --to and -n together
[09:49] <mattyw> axw, deploy uses the UnitCommandBase stuff for all of that, I did wonder if I should be using something generic to provide the -n flag, but I figured it probably wouldn't be worth it, I guess I should just implement the same logic in AddMachineCommand.Init?
[09:50] <axw> mattyw: yeah I think so
[09:50] <mattyw> axw, also I wasn't sure if I should just be sending a list of items to the existing AddMachine api call - or making a new api that would take a NumMachine int, do you have an opinion either way?
[09:52] <TheMue> one small command change for review: https://github.com/juju/juju/pull/208
[09:52] <axw> mattyw: I think a list makes sense
[09:52] <voidspace> dimitern: thanks
[09:52] <voidspace> dimitern: but, I was asking specifically about subnets
[09:53] <voidspace> dimitern: every network must have a subnet
[09:53] <dimitern> voidspace, yes
[09:53] <voidspace> dimitern: so what is the "default subnet"?
[09:53] <dimitern> voidspace, a subnet of the default network
[09:53] <voidspace> dimitern: specifically
[09:53] <voidspace> dimitern: what will the netmask be
[09:53] <voidspace> dimitern: how do we determine it
[09:54] <dimitern> voidspace, the provider will implement ListNetworks() and/or ListSubnets(), depending on what's supported, and tell us
[09:54] <voidspace> dimitern: heh, so for local provider then
[09:54] <dimitern> voidspace, the network.BasicInfo struct used as the result will have an IsDefault field
[09:55] <voidspace> dimitern: what does the implementation of ListSubnets do? or what if the provider doesn't support subnets
[09:55] <voidspace> dimitern: what will we use as the default
[09:55] <dimitern> voidspace, local and manual are special (yet undefined fully wrt networking)
[09:55] <voidspace> dimitern: I just wonder if the concept of "you have to have a subnet" really reflects reality
[09:56] <voidspace> dimitern: and for refcounts, it seems to be the case that we want refcounts in order to destroy networks
[09:56] <dimitern> voidspace, there are SupportsNetworks() and there will be SupportsSubnets() as well, which each provider implements to tell juju what it can handle, and we call ListSubnets/Networks or both accordingly
[09:56] <voidspace> dimitern: i.e. know when they are not references
[09:56] <voidspace> dimitern: (sure, but as a network *must* have a subnet I wonder what we do if the provider doesn't support subnets - what is "the default")
[09:56] <dimitern> voidspace, read the doc :)
[09:56] <voidspace> dimitern: why do we need to destroy networks? as they're abstract concepts, what's the cost in just leaving them around
[09:57] <voidspace> dimitern: I am!
[09:57] <voidspace> dimitern: these are questions that don't *appear* to be easily answered
[09:57] <dimitern> voidspace, when the provider does not support subnets, but supports networks, we simulate a subnet by splitting the network.BasicInfo into a network + subnet
[09:57] <voidspace> dimitern: right, I'm asking what the subnet part is
[09:58] <voidspace> dimitern: it's quite likely that my understanding of the basic concept is flawed
[09:58] <dimitern> voidspace, CIDR + VLANTag (optional) + AvailabilityZone (if applicable) + ProviderId (if applicable) + IsDefault
[09:58] <dimitern> voidspace, sorry, not the last thing
[09:58] <voidspace> dimitern: but doesn't a subnet mask off part of the ip range
[09:58] <dimitern> voidspace, a network does not specify a CIDR range, it's a collection of subnets with a name
[09:59] <voidspace> dimitern: so do we simulate it by having a 0.0.0.0 netmask
[09:59] <dimitern> voidspace, sure, for example 172.20.0.0/16
[09:59] <voidspace> dimitern: so a network from a provider comes with *a subnet* as part of the specification
[10:00] <dimitern> voidspace, if the provider does not support subnets, only networks, and what we get from the provider contains 0.0.0.0/0 as CIDR, that's the "catch-all" case - everything goes
[10:00] <voidspace> dimitern: "not supporting subnets" just means we can't create new subnets, but there will always be a default
[10:00] <voidspace> dimitern: ok, fair enough
[10:00] <dimitern> voidspace, take MAAS as an example
[10:00] <voidspace> dimitern: ok
[10:00] <dimitern> voidspace, it supports networks, which have a label, CIDR, VLANTag and list of connected MACs
[10:01] <dimitern> voidspace, so even if subnets are not supported, we have all the info we need from the network to create a subnet for it
[10:01] <voidspace> dimitern: ok, cool
[10:01] <dimitern> (not supported as in "there's no such api-accessible entity for juju to handle")
[10:02] <voidspace> dimitern: much appreciated, thanks
[10:02] <dimitern> (effectively, there will always be some subnet involved, but we might not be able to know which, if the provider does not tell us - like manual and local)
[10:03] <dimitern> voidspace, np, glad you're asking the right questions :) i'll update the model doc some more today, including some clarifications around default networks, as discussed with fwereade
[10:04] <voidspace> great
[10:26] <jam1> voidspace: so to put it from my view, (just in case), we model it as having a subnet, even if that subnet just matches the whole network
[10:41] <voidspace> jam1: right - that specifically wasn't clear (to me) from the doc. That in order to match our model we might create a subnet that "just matches the whole network"
[10:42] <voidspace> jam1: I guess it's implied...
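A rough sketch of the mapping dimitern describes above: when a provider only exposes networks (as MAAS does), juju can still derive a subnet that simply covers the whole network. The types and fields below are illustrative, loosely based on the network.BasicInfo fields mentioned in the conversation, not the actual juju structs.

    package sketch

    // NetworkInfo is an illustrative stand-in for what a provider's
    // ListNetworks() might return.
    type NetworkInfo struct {
        Name       string
        ProviderId string
        CIDR       string // e.g. "172.20.0.0/16"; "0.0.0.0/0" is the catch-all case
        VLANTag    int
        IsDefault  bool
    }

    // SubnetInfo is an illustrative stand-in for juju's subnet record.
    type SubnetInfo struct {
        CIDR       string
        VLANTag    int
        ProviderId string
    }

    // subnetFromNetwork simulates a subnet for providers without a
    // subnet-level API: the subnet just matches the whole network.
    func subnetFromNetwork(n NetworkInfo) SubnetInfo {
        return SubnetInfo{CIDR: n.CIDR, VLANTag: n.VLANTag, ProviderId: n.ProviderId}
    }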
[10:42] <fwereade> voidspace, we refcount networks so we can know when to delete them
[10:42] <dimitern> voidspace, jam1, vladk, TheMue, I'm sorry I might miss the standup due to an unexpected minor emergency, will join later if it's still going
[10:42] <fwereade> voidspace, when we *can* delete them, rather
[10:43] <voidspace> fwereade: right, I wonder why we want to delete them
[10:43] <fwereade> voidspace, if they can be added, we should be able to remove them, I think
[10:43] <TheMue> dimitern: ok, hope it's not too bad
[10:44] <jam1> dimitern: np, take care of your stuff
[10:44] <jam1> voidspace: dimitern: I have the feeling it was in an earlier draft...? I thought I had read it in the subnet section.
[10:44] <voidspace> fwereade: in general I suppose I agree, refcounting can be a fair amount of work/hassle to get right
[10:44] <voidspace> jam1: I will read again (just gone back for a reread anyway)
[10:44] <fwereade> voidspace, refcounting *is* a hassle
[10:44] <voidspace> jam1: maybe I just missed it
[10:44] <voidspace> fwereade: :-)
[10:44] <jam1> voidspace: I don't see it in my quick searching.
[10:45] <fwereade> voidspace, but peculiarities of the mgo/txn model mean that it's the only tool we really have
[10:45] <jam1> Probably worth adding to the subnet/network sections somehow. As EC2 doesn't really have a Network model, just subnets, and MaaS doesn't have Subnets, just a Network.
[10:45] <jam1> speaking of which, TheMue, voidspace, vladk: standup?
[10:45] <fwereade> voidspace, in particular, we can only do asserts against specific documents
[10:46] <fwereade> voidspace, so there's no way to assert that "no service uses this network", for example
[10:46] <voidspace> fwereade: right
[10:46] <TheMue> jam1: coming, only quick jump to the bath ;)
[10:46] <voidspace> oh for a relational query I guess...
[10:46] <jam1> TheMue: ??
[10:46] <fwereade> voidspace, well we can do queries just fine
[10:46] <fwereade> voidspace, but we can't use them in txn asserts
[10:46] <mattyw> axw, don't suppose you're still around?
[10:46] <voidspace> fwereade: ah, I see
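A sketch of why the refcount helps, given that mgo/txn asserts can only target specific documents: removal asserts the count on the network document itself instead of querying other collections. The collection and field names are hypothetical, not juju's actual state schema.

    package sketch

    import (
        "gopkg.in/mgo.v2/bson"
        "gopkg.in/mgo.v2/txn"
    )

    // removeNetworkOps builds a transaction that only removes the network
    // document when its refcount says it is unused. There is no way to
    // assert "no service references this network" directly, so the refcount
    // on this one document stands in for that query.
    func removeNetworkOps(name string) []txn.Op {
        return []txn.Op{{
            C:      "networks",
            Id:     name,
            Assert: bson.D{{"refcount", 0}},
            Remove: true,
        }}
    }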
[11:16] <perrito666> gsamfira: ping?
[11:17] <gsamfira> hey
[11:17] <perrito666> sorry I got here a bit late, are you in today?
[11:17] <gsamfira> yup, but It will be a short day for me.
[11:17] <gsamfira> I'm in the hangout
[11:18] <perrito666> gsamfira: william and I are there too and we dont see you
[11:18] <perrito666> :)
[11:19] <gsamfira> hmm
[11:19] <gsamfira> can you give me the link?
[11:19] <perrito666> certainly
[11:20] <perrito666> https://plus.google.com/hangouts/_/canonical.com/cloudbase?authuser=3
=== jam is now known as jam1
[11:59] <axw> fwereade: can you confirm that we no longer need git in juju? we can remove it from our cloud-config?
[11:59] <axw> (I just deployed ubuntu to azure without git installed, and it worked... not sure what else I'd need to test)
[12:03] <ahasenack> axw: doesn't juju bootstrap install it?
[12:03] <axw> ahasenack: it does, I want to remove it
[12:03] <axw> and more importantly, not require it in windows bootstrap
[12:04] <ahasenack> axw: while you wait for an answer, try an experiment. In a deployed unit, remove git, make a silly change to the charm and run juju upgrade-charm
[12:04] <ahasenack> axw: or, go to /var/lib/juju/.../..../(don't remember)/charm, see if it's a git repo
[12:04] <axw> ahasenack: thanks
[12:04] <axw> I'll try that
[12:14] <vladk> dimitern: I fixed your notices in https://github.com/juju/juju/pull/207
[12:14] <dimitern> vladk, cheers, looking
=== psivaa_ is now known as psivaa-lunch
[12:19] <dimitern> vladk, LGTM
[12:19] <vladk> dimitern: thanks
=== vladk is now known as vladk|offline
[12:56] <jam1> TheMue: proposal reviewed
[12:56] <TheMue> jam1: thx
[12:57] <TheMue> jam1: yeah, will add a follow-up for those tests
=== psivaa-lunch is now known as psivaa
[13:30] <dimitern> fwereade, g+?
[13:39] <mattyw> does anyone know what juju-bot hates me? https://github.com/juju/juju/pull/108
[13:40] <perrito666> mattyw: well he seems to be hating axw too
[13:41] <sinzui> mattyw, perrito666 That looks like it was caused by my attempt to move testing to its dedicated server
[13:41] <sinzui> I am requeuing the tests
[13:54] <katco> hello team :)
[13:54] <wwitzel3> morning katco
[13:55] <perrito666> hey, are we having standup?
[13:57] <katco> is that a question for me?
[13:58] <sinzui> mattyw, your PR isn't among the ones that got rejected by the git-merge-juju job
[13:58] <mattyw> sinzui, afaik it's not even attempted to land yet
[13:58] <wwitzel3> perrito666: yes :)
[13:58] <mattyw> sinzui, juju-bot has certainly never told me it accepted a request to merge
[13:58] <sinzui> mattyw, I see you are a public member of the juju org, which is enough for the bot to know your $$merge$$ is good
[13:59] <mattyw> sinzui, it certainly worked before
[13:59] <sinzui> I see axwalk also tried to intercede.
[14:00] <sinzui> mgz, ^ the bot hates mattyw. Can you broker a peace?
[14:01] <mattyw> no treaty - just an armistice is ok
[14:06] <mgz> teh bot is in an unhappy place
[14:07] <mgz> we also have a massive queue, which is partly because nothing has been landing, but if I get the switch to your dedicated job done sinzui that'll be sorted at least
[14:09] <sinzui> mgz, I think we need to configure git on that machine too.
[14:09] <sinzui> maybe you have
[14:09] <mgz> sinzui: yeah, I need to do some sample runs to check
[14:10] <sinzui> mgz, have you started lxc containers instead of ami instances? I am very keen to get to it.
[14:11] <mgz> I've been trying it out, but just locally
[14:12] <sinzui> mgz, I can add lxc support to run-unit-tests within the next 24 hours. Do you intend to use clone?
[14:13] <mgz> I'd really like to
[14:14] <sinzui> mgz, I gave up hunting for a utopic AMI, so I switched the utopic unit tests to run directly on the utopic slave. I worry that I will have janitorial duties until something like lxc is available
[14:15] <mgz> hmmm
[14:16] <sinzui> mgz, I will add command support for lxc, setup and teardown. I doubt I can help in the creation of the template or change the test suite to like lxc envs
[14:17] <sinzui> mgz, and you want test ./... || test -p 2 ./... ?
[14:17] <mgz> sinzui: I think we need it for now, but it's not helping much today...
[14:19] <sinzui> mgz, not many branches pass when -p 2 is needed. I thought fixing the "panic session closed" bug would make -p 2 better
[14:21] <wwitzel3> ericsnow, perrito666: https://github.com/wwitzel3/juju/compare/013-environment-info-api
[14:22] <katco> small milestone; got some charms deployed locally! :)
[14:23] <katco> however, it seemed to take a long time for wordpress to start (10ish minutes). is that normal?
[14:26] <mgz> katco: with the local provider? probably, it may well be faster for you second time
[14:27] <katco> mgz: does jujud do any sort of caching of charms?
[14:27] <mgz> it does, but it was the lxc setup I was thinking of
[14:27] <katco> ahhh
[14:28] <katco> thank you :)
[14:31] <voidspace> jam1: ping
[14:31] <jam1> voidspace: not usually when I'm around, but I happen to be today, what can i help you with?
[14:31] <voidspace> jam1: I thought I'd give it a try... you seem to work about 18hours a day usually
[14:31] <jam1> voidspace: :)
[14:32] <voidspace> jam1: you suggested that to change mongo to use ipv6 in our test suite that testing/mgo.go would be a place to start
[14:32] <jam1> that was my thought, yes
[14:32] <voidspace> jam1: this is now a thin wrapper around gitjujutesting/mgo.go (or whatever it's called - horrible package name)
[14:33] <voidspace> jam1: it *starts* mongo, but it's open.go that handles the connection
[14:33] <voidspace> jam1: how did you have in mind changing the connection just for tests?
[14:33] <voidspace> jam1: were you thinking of binding to :: specifically (I'm hoping we don't need to do that and mongo will allow either - but I need to check)
[14:34] <jam1> voidspace: So it looks like (as you noticed) the old github.com/juju/juju/testing got pulled out into github.com/juju/testing/ but there is still the mgo.go file, they did already remove --bind_ip
[14:35] <jam1> but they are'nt passing '--ipv6' for mongo
[14:35] <voidspace> jam1: I thought it just used inst.run() to start it... I obviously need to look better
[14:35] <jam1> voidspace: thati s where it passes the args
[14:36] <jam1> MgoInstance.run needs to pass "--ipv6"
[14:36] <voidspace> jam1: gah, ok
[14:36] <voidspace> jam1: sorry, being dumb
[14:36] <jam1> voidspace: http://docs.mongodb.org/manual/reference/program/mongo/#cmdoption--ipv6
[14:36] <voidspace> yeah, I've added that in the actual code and the tests all pass
[14:36] <jam1> voidspace: presumably as long as we do that, mongo will be happy to start and bind to both networks
[14:36] <voidspace> yeah, I need to specifically test that
[14:37] <voidspace> jam1: for some reason I thought inst.run() called our standard code for starting mongo
[14:37] <jam1> voidspace: the other bit is that mongo seems to report "mongod:PORTNUM" for output and we wait to see that.
[14:37] <voidspace> not sure why I thought that...
[14:37] <jam1> voidspace: I could see why you'd think so, certainly it would make more sense if it was shared code
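A simplified stand-in for the change under discussion: adding "--ipv6" to the argument list the test helper (MgoInstance.run in github.com/juju/testing) passes to mongod. The surrounding helper here is illustrative, not the real one.

    package mgotest

    import "strconv"

    // mongoArgs sketches the kind of argument list the test helper builds.
    func mongoArgs(port int, dbDir string) []string {
        return []string{
            "--dbpath", dbDir,
            "--port", strconv.Itoa(port),
            "--nojournal",
            // The flag under discussion: lets mongod listen on IPv6 as well,
            // so tests can dial ::1 while the existing IPv4 paths keep working.
            "--ipv6",
        }
    }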
[14:39] <mgz> alexisb: you vanished
[14:41] * perrito666 notices that he should get food before the football game starts or he will not have food until after
[14:43] <katco> perrito666: which game?
[14:43] <perrito666> katco: apparently my country (ARG) against.. some other country :p
[14:43] <katco> haha
[14:44] <TheMue> perrito666: SWI, directly south to GER (which won yesterday)
[14:44] <perrito666> katco: that produces either a) food places closing or b) cook spitting into my food
[14:44] <katco> good luck to them :)
[14:44] <ericsnow> katco, perrito666: obviously perrito666 is a real patriot ;)
[14:44] <katco> haha
[14:44] <perrito666> TheMue: I know where SWI is, I did not know that it's the team against ARG
[14:44] <katco> i'm recording the US v Belgium game
[14:44] <TheMue> perrito666: imho ARG will do it, but it won’t be simple
[14:45] <perrito666> ericsnow: I pay taxes and have the flag on flag day, I dont mind that much about sports
[14:45] <perrito666> TheMue: well our politicians will have problems, bc they have their heart in ARG but their money in SWI
[14:45] <TheMue> katco: there USA will win, the second GER team in this championship *lol*
[14:45] <ericsnow> perrito666: next you'll tell me you don't tango ;)
[14:45] <katco> TheMue: i haven't heard it put like that lol. it's true!
[14:45] <TheMue> perrito666: that’s a problem of many politicians (and industrials)
[14:45] <katco> i hope they win
[14:46] <perrito666> ericsnow: I dont, requires far more motor ability than I have
[14:47] <ericsnow> perrito666: eh, I don't dance well either
[14:47] * ericsnow doesn't like where this conversation is heading
[14:47] * TheMue has to admit he absolutely failed when learning tango once in the past
[14:48] * katco often wonders if she's the only lady with no dancing skills whatsoever
[14:48] <perrito666> ericsnow: assuming all Argentinans know how to dance tango is like assuming every oriental person knows martial arts
[14:48] <perrito666> katco: oh no, my wife and I cannot put one step together
[14:48] <TheMue> katco: no, my wife neither
[14:48] <perrito666> lol
[14:49] <TheMue> perrito666: h5
[14:49] <katco> lol :D
[14:49] <ericsnow> perrito666: so true
[14:49] <ericsnow> katco: alas, my wife has a degree in dance
[14:49] <katco> ericsnow: oh how neat :)
[14:50] <perrito666> ericsnow: I lack practically all required skills to be Argentinian.
[14:50] <mgz> perrito666: you have the funny tea drink thing skilz right?
[14:51] <wwitzel3> haha
[14:51] <perrito666> mgz: I do :p
[14:51] <wwitzel3> that's all you need
[14:51] <ericsnow> perrito666: ah, but you have mate so it's okay :)
[14:51] <mgz> you'll need to do a demo next week
[14:53] <perrito666> mgz: I usually prefer not to go through the hassle of explaining to frontier authorities what is that green herb I carry and why is there a metallic tube on my baggage
[14:53] <mgz> :D
[14:55] <ericsnow> mgz: amazon carries everything you need: http://www.amazon.com/Taragui-Yerba-Mate-Bombilla-Leather/dp/B007V650EM/ref=sr_1_1?ie=UTF8&qid=1404226489&sr=8-1&keywords=mate+cup
=== vladk|offline is now known as vladk
[14:58] <perrito666> ericsnow: a bit overpriced :p but yes
[14:58] <perrito666> you also need a thermos
[14:58] <perrito666> or something that keeps water hot
[14:58] <ericsnow> perrito666: :)
[15:20] <rogpeppe1> fwereade: have you got a link to a doc for the specification for charm resources/blobs, by any chance,?
[15:24] <rogpeppe1> jam1, mgz, voidspace, wwitzel3: ^
[15:25] <rogpeppe1> i've *heard* about the "streams" stuff, but i don't think i've seen it written down
[15:25] <voidspace> rogpeppe1: afraid not, sorry
[15:25] <rogpeppe1> voidspace: thanks
[15:25] <ericsnow> speaking of blobs, we have 2 (soon 3) API client methods that deal with sending/receiving binary blobs over RPC...
[15:26] <ericsnow> do we anticipate more need for supporting that for more methods?
[15:27] <ericsnow> (should we consider adding support for binary data to our RPC implementation?)
[15:27] <rogpeppe1> ericsnow: with potentially large binary blobs, we generally tend to avoid RPC and use REST-style
[15:27] <rogpeppe1> ericsnow: which methods are you thinking of there?
[15:27] <ericsnow> AddLocalCharm and UploadTools
[15:27] <ericsnow> and soon Backup
[15:28] <ericsnow> and Restore
[15:28] <rogpeppe1> AddLocalCharm doesn't use RPC
[15:28] <rogpeppe1> ericsnow: i haven't looked at UploadTools, but I hope it's similar to AddLocalCharm
[15:28] <rogpeppe1> ericsnow: and Backup and Restore should be similar
[15:28] <ericsnow> rogpeppe1: it is basically copy-and-paste
[15:28] <ericsnow> rogpeppe1: right
[15:29] <rogpeppe1> ericsnow: well, there might be some common code that can be abstracted
[15:29] <ericsnow> for Backup I've factored the boilerplate into a common package
[15:29] <ericsnow> rogpeppe1: I plan on refactoring the other two methods to use it
[15:29] <rogpeppe1> ericsnow: i'm surprised there was much boilerplate, actually
[15:30] <rogpeppe1> ericsnow: have you got a link to your proposed common package?
[15:30] <ericsnow> rogpeppe1: there was enough that wholesale copy-and-paste was happening
[15:30] <ericsnow> rogpeppe1: https://github.com/juju/juju/pull/200
[15:31] <ericsnow> rogpeppe1: I called it RPC because it basically is...it just doesn't use our RPC implementation nor does it do JSON RPC
[15:31] <rogpeppe1> ericsnow: it's really just a REST request, right?
[15:32] <ericsnow> rogpeppe1: I wouldn't say REST. In each case it's a POST to a URL whose location ends with the method name
[15:33] <ericsnow> rogpeppe1: args are handled via URL values
[15:33] <rogpeppe1> ericsnow: just about anything can be considered REST if it does a POST :-)
[15:34] <ericsnow> rogpeppe1: :)
[15:34] <ericsnow> rogpeppe1: it's basically a form
[15:34] <rogpeppe1> ericsnow: yeah
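A sketch of the request shape being described: a plain HTTP POST to an endpoint named after the method, arguments carried as URL query values, and the binary payload as the request body. The path and parameter names are hypothetical.

    package sketch

    import (
        "io"
        "net/http"
        "net/url"
    )

    // sendBlob uploads a binary blob form-style rather than over the RPC layer.
    func sendBlob(client *http.Client, baseURL, method, series string, blob io.Reader) (*http.Response, error) {
        u, err := url.Parse(baseURL + "/" + method) // endpoint named after the method -- illustrative
        if err != nil {
            return nil, err
        }
        q := u.Query()
        q.Set("series", series) // args go in URL values, not a JSON body
        u.RawQuery = q.Encode()

        req, err := http.NewRequest("POST", u.String(), blob)
        if err != nil {
            return nil, err
        }
        req.Header.Set("Content-Type", "application/octet-stream")
        return client.Do(req)
    }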
[15:35] <rogpeppe1> ericsnow: BTW if you're implementing exported methods in a package, each exported function should a) deserve to be exported and b) have a doc comment describing what it does
[15:36] <ericsnow> rogpeppe1: good to know (I'll fix that)
[15:37] <rogpeppe1> ericsnow: i agree that that's useful boilerplate to abstract out
[15:37] <ericsnow> rogpeppe1: you mean in export_test.go?
[15:38] <rogpeppe1> ericsnow: no, export_test.go is an exception in lots of respects :-)
[15:38] <rogpeppe1> ericsnow: i'm looking at UnpackJSON
[15:38] <ericsnow> rogpeppe1: ah, got it
[15:38] <rogpeppe1> ericsnow: i'm not entirely sure about the form of the package there. let me think for a few moments.
[15:42] <rogpeppe1> ericsnow: do we not use a standard error struct type for all error returns?
[15:42] <ericsnow> rogpeppe1: not that I saw (I may have missed it)
[15:46] <ericsnow> rogpeppe1: you mean why did I make ErrorResult instead of using params.Error?
[15:47] <rogpeppe1> ericsnow: yeah
[15:47] <ericsnow> rogpeppe1: it was for testing
[15:48] <rogpeppe1> ericsnow: i think it should be easy enough to test without that
[15:48] <ericsnow> rogpeppe1: the interface allowed using the method rather than relying on the struct member (which is inconsistently an error or a string in various results types)
[15:49] <rogpeppe1> ericsnow: i'd suggest something like this: http://paste.ubuntu.com/7732030/
[15:50] <ericsnow> rogpeppe1: since it only mattered for the raw RPC calls, I figured ErrorResult was more appropriate in the rawrpc package than in params/apierror.go
[15:50] * ericsnow takes a look
[15:54] <ericsnow> rogpeppe1: FYI, my background is heavily Python with a little C; I wrote my first Go code around a month ago so pointers on idiomatic approaches are *always* appreciated :)
[15:54] <rogpeppe1> ericsnow: np
[15:57] <ericsnow> rogpeppe1: as to that code you pasted, are you suggesting that we use an approach like that to handle the errors in the unmarshalled results?
[15:57] <rogpeppe1> ericsnow: the point of that function, as i see it, is to take out the boilerplate involved in parsing the error result from http requests to the API
[15:58] <ericsnow> rogpeppe1: right, and that was my intention with UnpackJSON as well
[16:10] <rogpeppe1> ericsnow: how about something like this: http://paste.ubuntu.com/7732104/
[16:11] <ericsnow> rogpeppe1: I like that
[16:12] <rogpeppe1> ericsnow: cool
[16:12] <rogpeppe1> ericsnow: i'd also add a doc comment to the Doer, BTW.
[16:14] <rogpeppe1> ericsnow: something like: // Do makes an HTTP request. It is implemented by *http.Client, for example.
[16:14] <ericsnow> rogpeppe1: this does put certain constraints on the response, right?
[16:14] <ericsnow> rogpeppe1: yeah
[16:14] <ericsnow> rogpeppe1: result, rather
[16:14] <rogpeppe1> ericsnow: sure - it means we need to use a consistent error type for all error responses
[16:14] <rogpeppe1> ericsnow: but that seems like a good thing to me
[16:14] <ericsnow> rogpeppe1: I'm on board with that :)
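The pastebin links above have expired; what follows is a sketch of the shape the two converge on — a small Doer interface (with the doc comment suggested above) plus one helper that issues the request and decodes a shared error struct on failure. The apiError type is a placeholder for whatever common params error type is agreed on; this is not the actual pasted code.

    package sketch

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Doer makes an HTTP request. It is implemented by *http.Client, for example.
    type Doer interface {
        Do(req *http.Request) (*http.Response, error)
    }

    // apiError is a placeholder for a shared error result type.
    type apiError struct {
        Message string
        Code    string
    }

    func (e *apiError) Error() string { return e.Message }

    // doJSON issues req, decodes the common error struct on a non-OK status,
    // and otherwise decodes the body into result (which may be nil).
    func doJSON(doer Doer, req *http.Request, result interface{}) error {
        resp, err := doer.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            var errResult apiError
            if err := json.NewDecoder(resp.Body).Decode(&errResult); err != nil {
                return fmt.Errorf("request failed (%s): cannot decode error: %v", resp.Status, err)
            }
            return &errResult
        }
        if result == nil {
            return nil
        }
        return json.NewDecoder(resp.Body).Decode(result)
    }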
[16:16] <ericsnow> rogpeppe1: I swear the hardest thing we do in this industry is finding the balance between doing things the best way and getting things done (in the cases where you don't have the resources for both)
[16:17] <rogpeppe1> ericsnow: yeah
=== vladk is now known as vladk|offline
[16:20] <rogpeppe1> ericsnow: the other hardest thing we do is coping with the inevitable creeping complexity that comes from doing too much of the latter without enough of the former :-)
[16:21] <rogpeppe1> ericsnow: not that your case was an example of that though, i hasten to add
[16:22] <ericsnow> rogpeppe1: :)
[16:24] <ericsnow> rogpeppe1: I'll readily admit that I tend toward the latter but over time have become more cognizant of the realities of a world of limited resources and immediate needs :p
[16:25] <rogpeppe1> ericsnow: i *hope* you mean tend towards the former :-)
[16:25] <rogpeppe1> ericsnow: well, i guess "getting things done" is an admirable attribute too
[16:25] <ericsnow> rogpeppe1: oh, yeah :)
[16:25] <rogpeppe1> ericsnow: and one which i could do with more of :-)
[16:25] <ericsnow> rogpeppe1: I'm glad we have a mix of both in this world
[16:54] <voidspace> rogpeppe1: ping
[16:55] <rogpeppe1> voidspace: sprong
[16:55] <voidspace> rogpeppe1: :-)
=== rogpeppe1 is now known as rogpeppe
[16:55] <voidspace> rogpeppe1: can you tell me where in the code we store the replicaset (mongo) addresses
[16:55] <voidspace> rogpeppe: and where those addresses are created
[16:55] <voidspace> I've tried following a few trails but I thought you'd likely know pretty quickly
[16:56] <rogpeppe> client side or agent side?
[16:57] <voidspace> rogpeppe: agent side
[16:57] <voidspace> rogpeppe: client side shouldn't have them, right? just api server addresses
[16:57] <rogpeppe> voidspace: the primary source of info is in the state. APIAddresses
[16:57] <rogpeppe> voidspace: oh yeah
[16:57] <rogpeppe> voidspace: sorry, i was thinking of api addresses
[16:57] <voidspace> rogpeppe: I want the mongo addresses not api addresses
[16:57] <voidspace> right
[16:58] <voidspace> rogpeppe: I'm experimenting with having mongo use ipv6
[16:58] <voidspace> rogpeppe: so I want to tweak the way we create those addresses
[16:59] <voidspace> rogpeppe: it looks like adding the "--ipv6" flag is sufficient to allow us to connect with ipv6, whilst remaining compatible with our existing code
[17:00] <rogpeppe> voidspace: cool
[17:00] <rogpeppe> voidspace: i'd hope so
[17:00] <rogpeppe> voidspace: state.Machine has a MongoHostPorts method
[17:00] <rogpeppe> oh no it doesn't
[17:01] <rogpeppe> voidspace: ah
[17:01] <mfoord> rogpeppe: sorry, if you replied to me then I didn't see it - my connection was dropped
[17:01] <rogpeppe> mfoord: what was the last thing you saw me say?
[17:01] <mfoord> <rogpeppe> voidspace: sorry, i was thinking of api addresses
[17:03] <rogpeppe> mfoord: so we get the mongo addresses by adding the configuration's StatePort to the machine addresses
[17:03] <mfoord> rogpeppe: right, I remember that being the case
[17:03] <mfoord> rogpeppe: where do we do that?
[17:03] <rogpeppe> mfoord: that's not ideal - i would prefer it if a machine could choose its own mongo port
[17:03] <rogpeppe> mfoord: in worker/peergrouper
[17:03] <mfoord> (we have the same problem with rsyslog)
[17:03] <rogpeppe> mfoord: see worker/peergrouper/shim.go:/MongoHostPorts
[17:04] <mfoord> cool
[17:04] <mfoord> rogpeppe: thanks
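An illustrative sketch of the idea just described: the mongo endpoints are each machine address joined with the single configured state port. The helper below is a stand-in, not the actual worker/peergrouper code.

    package sketch

    import (
        "net"
        "strconv"
    )

    // mongoHostPorts joins each machine address with the configured StatePort.
    // net.JoinHostPort keeps IPv6 literals bracketed, e.g. "[::1]:37017".
    func mongoHostPorts(addresses []string, statePort int) []string {
        hostPorts := make([]string, 0, len(addresses))
        for _, addr := range addresses {
            hostPorts = append(hostPorts, net.JoinHostPort(addr, strconv.Itoa(statePort)))
        }
        return hostPorts
    }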
[17:06] <mfoord> defining methods on machineShim before we define machineShim is confusing
[17:06] <mfoord> if only briefly
[17:18] <rogpeppe> mfoord: yeah, it should be declared earlier
[17:19] <voidspace> so if I hardcode that to use ::1 then everything still seems to work
[17:19] <voidspace> I need to verify this address is actually being used directly though
=== vladk|offline is now known as vladk
=== alexisb is now known as alexisb_afk
=== urulama is now known as uru-away
[18:07] <katco> i just ran into this gem: unit-wordpress-0: 2014-07-01 18:07:02 INFO juju-log db:2: We are now single, ALL THE SINGLE UNITS ALL THE SINGLE UNITS
[18:08] <katco> kudos to whomever :)
[18:13] <perrito666> lol, I guess you could bzr blame that
[18:15] <ericsnow> voidspace: so I'm guessing they're upgrading your internet right now :)
[18:15] <perrito666> ericsnow: indeed, he looks taller
[18:15] <perrito666> :p
[18:17] <ericsnow> mfoord: nice internet you have there :)
[18:29] <mfoord> ericsnow: yeah, horrible
[18:30] <mfoord> and now... EOD
[18:30] <mfoord> g'night all
[18:30] <TheMue> so, added https://github.com/juju/juju/pull/211 as final PR for today
[18:31] * TheMue waves
[18:52] <bac> jamespage: ping
=== alexisb_afk is now known as alexisb
[19:51] <mattyw> sinzui, mgz ping?
[19:52] <sinzui> hi mattyw
[19:52] <mattyw> sinzui, I'm not sure if I'm reading this right but it looks like my branch test failed but still landed? http://juju-ci.vapour.ws:8080/job/github-merge-juju/332/console
[19:53] <sinzui> mattyw, It did
[19:54] <sinzui> mattyw, The test suite is soooo bad that the runner will try the tests as we think they should run, and fail over to a run with just two procs. Your branch passed the second try
[19:55] <mattyw> sinzui, well that's *good* news I guess
[19:55] <mattyw> thanks
[19:55] <mattyw> as long as the landing is working to plan
[19:55] <sinzui> mattyw, yes, your branch is not being failed because someone else landed a brittle test
[19:57] <mattyw> sinzui, sweet
[19:57] <mattyw> well - thats officially a day then
[19:57] <mattyw> night all
[20:47] <thumper> morning folks
[20:48] <katco> howdy thumper
[20:48] <thumper> katco: morning, how's the learning going?
[20:49] <katco> good, thanks for asking :) having a lot of fun
[20:49] <katco> working on a brilliant charm right now. it's going to revolutionize the way we say hello to the world.
[20:51] <alexisb> thumper, speaking of fun, this video totally reminds me of Jesse: https://www.youtube.com/watch?v=HYupUy7wiIU
[20:51] <alexisb> :)
[20:51] * katco has a very dry sense of humor
[20:51] <thumper> katco: hah :-)
[20:52] <thumper> alexisb: youtube hates me "an error occurred"
[20:52] <thumper> alexisb: but I can guess what it is about :-)
[20:52] <alexisb> o, you have to watch it, it is so fitting
[20:53] * alexisb waits for jesse to show up online
[20:54] <katco> alexisb: i hope this person is still here next week :)
[20:54] <alexisb> :)
[20:54] <alexisb> katco, wallyworld will have to fill you in on the back story next week
[20:54] <katco> hehe ok
[20:55] * wallyworld clicks the video
[20:55] * wallyworld can't stop laughing
[20:55] <wallyworld> alexisb: you are evil
[20:55] <alexisb> :)
[20:57] <ChrisW1> can I come program in Go yet? me and python are no longer friends :-/
[20:58] <ChrisW1> well, specifically, me, Jenkins and running python jobs are no longer friends
[20:59] <thumper> haha
[20:59] <thumper> ChrisW1: is that a whingy pom I hear?
[21:00] <ChrisW1> sea of red: http://jenkins.simplistix.co.uk/
[21:00] <thumper> ChrisW1: what are you doing to it?
[21:01] <thumper> ChrisW1: and three isn't really a sea
[21:01] <ChrisW1> http://jenkins.simplistix.co.uk/job/testfixtures-virtualenv/
[21:01] <ChrisW1> spot the pattern
[21:02] <thumper> ChrisW1: you'll have to tell me because I'm not going to go through it all hoping to see a pattern
[21:02] <thumper> still trying to go through my 200 odd emails
[21:03] <ChrisW1> there is no pattern
[21:03] <thumper> although to be honest, I spent the first six months of programming Go being very angry at it
[21:03] <thumper> coming from python
[21:03] <thumper> I think I have calmed down a lot now though
[21:03] <ChrisW1> you? angry? nah, don't buy it
[21:03] <ChrisW1> could never ever imagine that...
[21:04] <thumper> heh
[21:04] <ChrisW1> if I had hair I would be pulling it out right now
=== vladk is now known as vladk|offline
[21:04] <thumper> you could grow it longer just for the pleasure of pulling it out
[21:06] * ChrisW1 wonders if menn0 is up yet?
[21:06] <menn0> ChrisW1: yep :)
[21:07] <thumper> menn0: careful, I think he may be in a complaining mood
[21:08] * thumper pokes ChrisW1 with a long stick and runs away
[21:08] <menn0> he he
[21:09] <menn0> ChrisW1: so what brings you to these parts? :)
[21:10] <ChrisW1> my fault, attempting to support a package on 3 x os and 4 x python versions
[21:10] <ChrisW1> menn0: oh, no reason...
[21:10] <ChrisW1> how's nz?
[21:11] <thumper> very cold today
[21:11] <thumper> weather forecast is for 15-20cm of snow today
[21:11] <thumper> which is a lot for us
[21:11] <thumper> and likely to close schools
[21:11] <ChrisW1> that'd be a lot for the UK too!
[21:11] <thumper> however right now, it is just cold ~3°C and dry
[21:12] <menn0> that would bring the UK to a stand-still :)
[21:12] <thumper> I'm missing sunny
[21:12] <thumper> no.. not what I meant, missing summer
[21:12] <thumper> warmth
[21:12] <thumper> and sun
[21:13] <alexisb> thumper, it is 34 C here today
[21:13] <alexisb> hottest day of the year so far
[21:13] <thumper> :-(
[21:13] <alexisb> and I think I would rather have snow
[21:13] <ChrisW1> yeah, pretty toasty here too
[21:14] <ChrisW1> great weather for debugging weird on top of weird
[21:14] <alexisb> heh
[21:24] <uru-away> cmars: thanks for the info. gonna read the paper a 2nd time tomorrow morning with a fresh brain and try to deduce the number of interactions; given their design, and since section 7 shows their efficiency, there shouldn't be an issue with scaling
[21:32] <thumper> ChrisW1: I normally find a drink helps in the evening to lower the frustration levels
[21:33] <thumper> aim for the Ballmer peak: http://xkcd.com/323/
[21:34] <thumper> menn0: fwiw, I agree on the facade approach
[21:34] <thumper> menn0: if things go wrong, we want the user to be able to ssh in at least
[21:34] <thumper> and if we remove the api altogether, we can't do that easily
[21:34] <menn0> thumper: did you see the email I just sent?
[21:34] <thumper> yeah
[21:34] <thumper> just agreeing with you
[21:35] <thumper> finally through email backlog
[21:35] <menn0> you said "facade approach" and what I'm suggesting is actually not quite using facades - although the work done for facades has made it a lot easier
[21:35] <thumper> now I get about 25 minutes before the next meeting
[21:35] * thumper sighs
[21:35] <thumper> menn0: well, whatever you said makes sense, let's look at the implementation :-)
[21:37] <thumper> sinzui: where is the job page for the landing bot?
[21:38] <rick_h__> thumper: github.com/juju/jenkins-github-lander
[21:38] <thumper> sinzui: I'm curious to see if recent landings have improved the intermittent failure issue
[21:38] <thumper> it looks like it has
[21:38] <rick_h__> oh, diff bot :)
[21:38] <sinzui> thumper, http://juju-ci.vapour.ws:8080/job/github-merge-juju/
[21:38] <thumper> sinzui: cheers
[21:38] <sinzui> thumper, There was a misadventure 8 hours ago
[21:39] <sinzui> thumper, I tried to move the job to the dedicated slave, but I don't think git was set up.
[21:39] <thumper> sinzui: well five passes in a row is more than we have seen in some time
[21:39] <sinzui> thumper, so I requeued the jobs in my favour: all 1.20 jobs first
[21:39] <sinzui> thumper, mgz was also working on the runner
[21:40] * thumper nods
[21:40] <sinzui> I just landed lxc support in the run-unit-tests script. I hope that will help mgz
[22:00] <thumper> cmars: with you shortly, just making a pot of tea for my sick wife
[22:00] <thumper> :)
[23:00] <sinzui> wallyworld, I am concerned about 1.20. http://juju-ci.vapour.ws:8080/ shows the restore tests failing. I just changed one test to HP after it reached its final 3 tries. Either our restore code has spontaneously combusted in both branches or ec2 has changed. I don't want to say https://bugs.launchpad.net/juju-core/+bug/1336104 also affects 1.20
[23:00] <_mup_> Bug #1336104: cannot restore bootstrap machine: cannot get public address of bootstrap machine <backup-restore> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1336104>
[23:00] <sinzui> wallyworld, I was hoping to start the 1.20 release with a blessed revision in an hour
[23:01] <wallyworld> sinzui: ok, will look. i was hoping we were good to release as well :-(
[23:03] <wallyworld> sinzui: can you clarify - do you think this affects 1.20 also?
[23:04] <sinzui> This is 1.20 that is failing right now in a similar way
[23:04] <wallyworld> :-(
[23:04] <sinzui> wallyworld, none of the recent commits seem to be a cause
[23:04] <wallyworld> and it is running now against HP cloud?
[23:05] <wallyworld> in case it's an ec2 issue?
[23:05] <sinzui> wallyworld, I stopped running the functional test on HP at the start of June. This is the first time I am trying them with the new regions
[23:05] <wallyworld> ok
[23:06] <wallyworld> sinzui: was the bootstrap timeout config attribute you are using typed in as "bootstrap-timeout"?
[23:10] <sinzui> wallyworld, I tried this with all joyent envs
[23:10] <sinzui> bootstrap-timeout: 600
[23:10] <sinzui> I now have a job that kills any joyent proc older than 1 hour
[23:10] <wallyworld> hmmm, ok. that looks correct, so might be an issue
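(A tiny, hypothetical Go check for the concern above: confirm the environment config really carries the exact key "bootstrap-timeout", whose value is in seconds, rather than a misspelling that would be silently ignored. The embedded config snippet and the check itself are illustrative only, not part of the CI setup.)

    // check_timeout.go: illustrative only; envYAML stands in for one
    // environment section of environments.yaml, with made-up values.
    package main

    import (
        "fmt"
        "log"

        "gopkg.in/yaml.v2"
    )

    var envYAML = []byte("default-series: trusty\nbootstrap-timeout: 600\n")

    func main() {
        cfg := map[string]interface{}{}
        if err := yaml.Unmarshal(envYAML, &cfg); err != nil {
            log.Fatal(err)
        }
        v, ok := cfg["bootstrap-timeout"]
        if !ok {
            log.Fatal(`no "bootstrap-timeout" key found; check its spelling in the config`)
        }
        fmt.Printf("bootstrap-timeout is %v seconds\n", v)
    }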
[23:12] <sinzui> wallyworld, 1.20.0 is soooo much better than 1.18.0. Since I am not killing the 1.20.0 versions of juju I favour fixing this issue in 1.20.1
[23:12] <wallyworld> sounds good
[23:12] <wallyworld> sinzui: my first look into the restore code shows no recent changes in the part that is failing
[23:13] <sinzui> HP just started its restore http://juju-ci.vapour.ws:8080/job/functional-backup-restore/1050/console
[23:13] <sinzui> 20 minutes and I hope for a pass
[23:16] <sinzui> The other restore test just switched to HP
[23:17] <wallyworld> sinzui: the error seems to indicate the mongo db is empty, and further back in the console log i see "Restore failed:" but with no additional information
[23:18] <sinzui> :(
[23:24] <sinzui> wallyworld, the HP run failed with "no reachable servers". This might be caused by HP... but the error is still in juju-restore :( http://juju-ci.vapour.ws:8080/job/functional-backup-restore/1050/
[23:24] * sinzui moves the test back to aws
[23:25] <wallyworld> hmmmm
[23:25] <wallyworld> well, this kinda sucks
[23:26] <wallyworld> sinzui: i think "Restore failed:" is printed by the ci script
[23:26] <sinzui> yes
[23:26] <wallyworld> i wonder why it is not logging an error?
[23:29] <sinzui> wallyworld, the test prints "Restore Failed" then the error that was sent to stderr
[23:29] <wallyworld> which appears to be nothing :-(
[23:30] <sinzui> wallyworld, message about the lock is nothing? http://juju-ci.vapour.ws:8080/job/functional-backup-restore/1049/console
[23:31] <wallyworld> sinzui: didn't see that. that is a problem
[23:31] <wallyworld> so 2 things are running apt
[23:33] <sinzui> wallyworld, I have seen that several times. The problem with the test is that there is so much setup: we either see failures getting to HA, or restore dies early with a huge glob of text http://juju-ci.vapour.ws:8080/job/functional-backup-restore/1048/console
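(A minimal Go sketch of the capture pattern being discussed: run the restore command with stdout and stderr kept in separate buffers, so a failure is reported alongside whatever the tool actually printed rather than a bare "Restore failed:". The "juju-restore backup.tar.gz" command line is a placeholder, and the real CI harness is not written in Go.)

    // capture_restore.go: illustrative only.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Placeholder invocation; substitute the harness's real restore command.
        cmd := exec.Command("juju-restore", "backup.tar.gz")
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            fmt.Printf("Restore failed: %v\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
            return
        }
        fmt.Println("restore completed")
    }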
[23:35] <wallyworld> maybe a manual test is required
[23:37] <wallyworld> we'll also look into the rev where it first started failing
[23:38] <wallyworld> sinzui: how hard would it be to re-run a test with the rev previous to 7f77fc1?
[23:38] <wallyworld> just to get another data point