|
[01:14] <axw> menn0: hey, you're not looking at the manual provider bug atm are you? |
|
[01:14] <axw> I can see the problem now - just added a comment to the bug |
|
[01:16] <axw> oh, I see natefinch has a PR up... |
|
[01:43] <waigani> axw: I was going to see if reverting my 'manual provision with custom ssh key' branch fixes it. Should I keep on or have you got it? |
|
[01:44] <axw> waigani: I can repro in my env now, so leave it with me for now |
|
[01:44] <axw> waigani: I'm pretty sure it was failing before your change went in |
|
[01:44] <axw> will see tho |
|
[01:44] <waigani> axw: okey dokey |
|
[02:01] <axw> waigani: nothing to do with your change, it's related to removal of storage |
|
[02:01] <waigani> axw: okay, thanks |
|
[02:12] * thumper stares at this code trying to work out what is different |
|
[02:26] <menn0> axw: nate's change went in |
|
[02:27] <menn0> with it I can happily bootstrap using the manual provider |
|
[02:27] <menn0> the CI test is still failing, but in a different way |
|
[02:27] <menn0> the SSH key error that waigani was referring to |
|
[02:28] <menn0> axw: to answer your original question, no I'm not looking at this bug any more |
|
[02:29] <axw> menn0: thanks, no problems; I found the issue |
|
[02:32] <axw> https://github.com/juju/juju/pull/504 <- review please someone |
|
[02:32] <axw> fixes CI blocking bug |
|
[02:34] <menn0> axw: looking |
|
[02:41] <axw> menn0: agreed about using use-sshstorage. do you have any ideas of what else I can use there? |
|
[02:41] <menn0> axw: not off the top of my head |
|
[02:42] <menn0> axw: I'm concerned that if we change how SSH storage is used, this code is going to break |
|
[02:42] <menn0> (e.g. if we stop using it all together, or start using it for the bootstrap node) |
|
[02:43] <axw> menn0: I understand and agree, but at this point I don't think there's an alternative |
|
[02:43] <axw> probably we should have a provider independent way of determining whether you're running from inside the env |
|
[02:43] <menn0> axw: I'll trust your judgment on that |
|
[02:44] <axw> something like use-sshstorage, but for this purpose |
|
[02:44] <menn0> could we have some tests that ensure that SSH storage is used in the current way? |
|
[02:44] <axw> there are I'm pretty sure |
|
[02:44] <axw> I just added one, too |
|
[02:45] <axw> i.e. that verification is elided if use-sshstorage=false |
|
[02:45] <menn0> I'm thinking of something right next to these tests that emits a message if it fails, to remind us that this code needs to be updated |
|
[02:45] <menn0> I saw your test and that's obviously required |
|
[02:46] <menn0> but I'm also wondering if it's possible to have something that checks useSSHStorage on a bootstrap node and on a non-bootstrap node, and ensures it's what we expect here |
|
[02:47] <menn0> if it fails then it should error with something like: "useSSHStorage semantics have changed. Please update manualEnviron.StateServerInstances" |
|
[02:47] <menn0> maybe that's overkill |
|
[02:47] <menn0> or too hard |
|
[02:47] <menn0> but that's the kind of thing I'd aim for |
|
[02:48] <axw> menn0: there's also tests in provider_test.go that check that Prepare sets use-sshstorage, and Open doesn't. there should be one for Bootstrap too, I'll add one |
|
[02:48] <menn0> ok sounds good |
|
[02:48] <axw> tho testing Bootstrap may be a PITA, will see... |
|
[02:50] <menn0> axw: if it's going to be too hard then leave it |
|
[02:50] <menn0> it's probably more important to get this fix in at this point |
|
[02:50] <axw> menn0: shouldn't take long I think, I'll see how I go |
|
[02:50] <axw> won't waste too much time on it |
|
[02:50] <menn0> sweet |
|
[02:50] <menn0> well you have my LGTM anyway |
|
[02:51] <menn0> axw: just remembered... not sure if you need someone else's too. I'm a "junior reviewer". thumper? |
|
[02:52] * thumper sighs... |
|
[02:52] <menn0> :) |
|
[02:52] <thumper> I should sort that shit out |
|
[02:52] * thumper looks |
|
[02:52] <menn0> thumper: at least this one is a small change |
|
=== blackboxsw is now known as blackboxsw_away |
|
[03:19] * thumper needs to take kid to hockey |
|
[03:19] <thumper> bbl |
|
[04:16] <jcw4> thanks axw |
|
[04:16] <axw> jcw4: nps |
|
[04:34] <jcw4> is there a publicly accessible repo with the functional tests used by jenkins? |
|
[04:51] <jcw4> ah, I'm guessing it's https://code.launchpad.net/juju-ci-tools |
|
[05:51] <ericsnow> if anyone has some time, I'd really appreciate a review: https://github.com/juju/utils/pull/16 https://github.com/juju/utils/pull/19 https://github.com/juju/juju/pull/462 https://github.com/juju/juju/pull/453 |
|
[05:51] <voidspace> morning all |
|
[05:51] <ericsnow> :) |
|
[05:51] <voidspace> ericsnow: a little collection there! |
|
[05:51] <voidspace> ericsnow: morning |
|
[05:52] <ericsnow> voidspace: no one wants to review them :( |
|
[05:52] <voidspace> ericsnow: hehe, let me get coffee and I'll take a look |
|
[05:52] <ericsnow> voidspace: FYI, 55a9507 (drop direct mongo access) got reverted because it broke restore |
|
[05:53] <voidspace> ericsnow: restore needs direct mongo access? |
|
[05:53] <voidspace> that's horrible |
|
[05:53] <ericsnow> voidspace: apparently |
|
[05:53] <ericsnow> voidspace: for now (the new restore won't) |
|
[05:53] <ericsnow> and with that, I'm going to bed! |
|
[05:54] <voidspace> ericsnow: goodnight! |
|
[06:03] <dimitern> morning |
|
[06:11] <voidspace> dimitern: morning |
|
[06:11] <voidspace> dimitern: so "shutting off direct db access" got reverted |
|
[06:12] <voidspace> dimitern: as it was this change that broke restore :-( |
|
[06:12] <voidspace> dimitern: I thought restore used ssh rather than direct mongo access, but it seems I'm wrong |
|
[06:28] <dimitern> voidspace, oh, bugger :( |
|
[06:29] <dimitern> voidspace, I think restore needs to be smarter |
|
[06:31] <dimitern> voidspace, and use ssh to run mongo commands remotely |
|
[06:31] <voidspace> dimitern: right |
|
[06:31] <voidspace> dimitern: but restore is being changed anyway, so the "new restore" will be smarter |
|
[06:31] <voidspace> but until then... |
|
[06:32] <dimitern> yeah.. |
|
=== uru__ is now known as urulama |
|
[07:36] <TheMue> morning |
|
[07:43] <dimitern> morning TheMue |
|
[08:00] <voidspac_> TheMue: morning |
|
[08:22] <voidspac_> does anyone know the lxc-create magic invocation to get it to share home directory with the host? |
|
[08:30] <dimitern> voidspac_, why do you need this? |
|
[08:31] <voidspac_> dimitern: especially for nested lxc containers it makes experimenting simpler |
|
[08:31] <voidspac_> dimitern: shared access to scripts / ssh keys etc |
|
[08:31] <voidspac_> dimitern: only for experimentation |
|
[08:31] <dimitern> voidspac_, you can take a look at man lxc.container.conf - there is a way to specify additional mount points there; or just ask stgraber or hallyn in #server (@can) |
|
[08:31] <voidspac_> dimitern: there's a u1 development wiki page that explains it somewhere, I'm looking now |
|
[08:32] <voidspac_> it's how we used to do dev (inside an lxc container) |
|
[08:32] <dimitern> voidspac_, ah, nice |
|
[08:32] <voidspac_> very useful, if you screw up your dev environment just blow it away and create a new one |
|
[08:34] <dimitern> voidspac_, take a look at https://wiki.debian.org/LXC - "Bind mounts inside the container" section |
|
[08:34] <voidspac_> dimitern: thanks |
|
[08:40] <voidspac_> dimitern: https://wiki.canonical.com/UbuntuOne/Developer/LXC |
|
[08:40] <voidspac_> dimitern: sudo lxc-create -t ubuntu -n u1-precise -- -r precise -a i386 -b $USER |
|
[08:41] <voidspac_> obviously modifying appropriately for trusty / amd64 |
|
[08:41] <voidspac_> but it's the -b $USER |
|
[08:41] <dimitern> voidspac_, ah, even nicer, thanks! :) |
|
[08:41] <voidspac_> then start the container as a daemon, ssh in and do your dev work there |
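For reference, the full workflow voidspac_ describes might look like the following sketch. The container name is arbitrary, `<container-ip>` is a placeholder, and the commands assume the `ubuntu` lxc template accepts `-b` to bind-mount the named user's home directory (as in the wiki invocation above):

```sh
# Create a trusty/amd64 container; -b $USER bind-mounts your home
# directory into the container, sharing scripts/ssh keys with the host.
sudo lxc-create -t ubuntu -n juju-trusty -- -r trusty -a amd64 -b $USER

# Start it in the background (daemonized).
sudo lxc-start -n juju-trusty -d

# Find its IP and ssh in to do dev work.
sudo lxc-ls --fancy
ssh $USER@<container-ip>

# If the dev environment gets messed up, blow it away and start over.
sudo lxc-destroy -n juju-trusty
```

The appeal, per the discussion: you can only damage your shared /home from inside, and the container carries its own PPAs and packages.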
|
[08:42] <dimitern> voidspac_, so you can still mess up your /home from inside the container, but nothing else? |
|
[08:42] <voidspac_> dimitern: right |
|
[08:42] <voidspac_> dimitern: and you can have separate ppas and packages installed |
|
[08:46] * TheMue just wondered where his blocked PR is and then recognized that the bot merged it half an hour ago :) |
|
[08:47] <fwereade> dammit, I thought it was quiet -- irc client wasn't actually running :/ |
|
[08:48] <TheMue> *rofl* |
|
[08:48] <TheMue> morning fwereade |
|
[08:48] <axw> fwereade: I thought you were just hiding :) |
|
[08:49] <fwereade> axw, if I'd realised I was I would have been coding ;p |
|
[08:49] <menn0> hello peoples |
|
[08:49] <axw> :p |
|
[08:49] * menn0 is back for more |
|
[08:49] <fwereade> menn0, everybody, heyhey :) |
|
[08:49] <voidspac_> fwereade: morning :-) |
|
[08:49] <voidspac_> menn0: morning |
|
[09:00] <rogpeppe> wallyworld: just checking: have you seen the issues that i raised on juju/blobstore? i was wondering what your thoughts were there. |
|
[09:29] <wallyworld> rogpeppe: no, haven't seen them, have been focused on 1.20 issues and the sprint last week. i'm back at work tomorrow. do you have bug numbers? |
|
[09:31] <rogpeppe> wallyworld: https://github.com/juju/blobstore/issues |
|
[09:32] <wallyworld> rogpeppe: oh, ok. we should be using launchpad for raising bugs |
|
[09:32] <wallyworld> otherwise i don't see them |
|
[09:32] <rogpeppe> wallyworld: ah |
|
[09:32] <wallyworld> and we can't track them into milestones |
|
[09:32] <rogpeppe> wallyworld: i thought it was more appropriate to raise the issues against the repo itself |
|
[09:33] <rogpeppe> wallyworld: but i see the milestone issue too |
|
[09:33] <wallyworld> for sub repos, that may be a good point |
|
[09:33] <wallyworld> but, yeah, it's sort of messed up using two tools |
|
[09:33] <wallyworld> 9 issues :-( |
|
[09:34] <wallyworld> i may not get to look in detail till a bit later this week |
|
[09:37] <rogpeppe> wallyworld: (there's no way of tracking external bugs in lp?) |
|
[09:37] <rogpeppe> wallyworld: if you starred juju/blobstore, i think you might get email messages about issues etc with it. (but maybe that's another setting) |
|
[09:38] <rogpeppe> wallyworld: some are harder to fix than others |
|
[09:38] <wallyworld> rogpeppe: there is a way of importing bugs yes; it would need to be set up. but since we use lp for juju-core, i'm conflicted about introducing a separate tool for other things |
|
[09:39] <wallyworld> i may have got emails buried in my in box, will need to check - i didn't have filters set up |
|
[09:41] <rogpeppe> wallyworld: the other side of that coin is that if you're looking at a sub-repo, it makes sense to be able to trivially see all the bugs associated with it |
|
[09:41] <rogpeppe> wallyworld: but i'm happy to move the bugs to lp if you think that's better. |
|
[09:42] <wallyworld> rogpeppe: yeah, i would prefer other affected people help make that call, not just me. i suspect the answer will be juju-core stays in lp and the sub repos are in github since they are just libraries and don't have a release schedule as such |
|
[09:49] <wallyworld> Beret: hi, sorry i only just saw your ping in the back scroll (I've been off on leave for a few days). that work hasn't started yet but i hope to have something done by end of week for Juju 1.21 alpha |
|
[10:47] <gsamfira> hello folks. Anyone care to review: https://github.com/juju/juju/pull/499 ? :) |
|
[11:22] <mattyw> fwereade_, ping? |
|
[11:22] <fwereade_> mattyw, pong |
|
[11:45] * dimitern lunches |
|
[12:40] <perrito666> voidspac_: r u there? |
|
[12:49] <hazmat> fwereade_, if you can't make the txn meeting.. i'd prefer we just push it one day or alternatively move it to earlier today (+ natefinch) |
|
[12:55] <natefinch> hazmat: with the toasca thing at 10, we'd have to meet like right now to make it in earlier today. |
|
[12:56] <hazmat> natefinch, yup |
|
[12:56] <mattyw> dimitern, ping? |
|
[13:07] <dimitern> mattyw, pong |
|
[13:10] <fwereade_> hazmat, I'm about to go out now and *hope* I will be back by 5 |
|
[13:10] <voidspac_> perrito666: yep |
|
[13:10] <fwereade_> hazmat, and I haven't taken my swap day yet and was going to take it tomorrow |
|
[13:10] <fwereade_> hazmat, natefinch: I would hope you can be somewhat productive without me? |
|
[13:10] <perrito666> voidspac_: I reverted a PR from you last night |
|
[13:10] <voidspac_> perrito666: I saw |
|
[13:11] <voidspac_> *grrr* |
|
[13:11] <voidspac_> :-( |
|
[13:11] <voidspac_> perrito666: restore still requires direct db access |
|
[13:11] <voidspac_> so we can't close it off just yet |
|
[13:12] <perrito666> voidspac_: new restore does too, but I do accept ideas to change that |
|
[13:12] <voidspac_> perrito666: ssh |
|
[13:12] <voidspac_> perrito666: we want to close off direct db access |
|
[13:13] <perrito666> voidspac_: I am a bit curious, how is db access going to be done now? |
|
[13:13] <voidspac_> perrito666: not externally |
|
[13:13] <voidspac_> perrito666: db access externally should never be needed - all calls should go through the api |
|
[13:14] <voidspac_> perrito666: and the state server wouldn't need the port to be open to connect to it on the same machine |
|
[13:14] <voidspac_> perrito666: so if access to the db is needed an api endpoint should be created - or ssh used |
|
[13:16] <perrito666> voidspac_: wait, it means that the db would be listening locally? |
|
[13:16] <perrito666> as in localhost:FORMERSTATEPORT ? |
|
[13:16] <voidspac_> perrito666: yes |
|
[13:16] <voidspac_> perrito666: it already is |
|
[13:16] <voidspac_> I believe |
|
[13:16] <hazmat> fwereade_, k, hopefully we'll see you at 5 then, have fun. |
|
[13:16] <perrito666> voidspac_: well your patch changes that |
|
[13:17] <voidspac_> perrito666: I didn't believe so |
|
[13:17] <perrito666> voidspac_: since this stopped working: |
|
[13:17] <perrito666> mongo --ssl -u admin -p {{.AgentConfig.Credentials.OldPassword | shquote}} localhost:{{.AgentConfig.StatePort}}/admin --eval "$1" |
|
[13:17] <perrito666> :) |
|
[13:17] <perrito666> you might want to add a test for that |
|
[13:17] <perrito666> and that is run via ssh |
|
[13:17] <voidspac_> perrito666: show me where my patch changes that? |
|
[13:17] <perrito666> into the machine |
|
[13:18] <voidspac_> perrito666: in the code |
|
[13:18] <voidspac_> ... |
|
[13:18] <voidspac_> perrito666: it may just be that the template doesn't work now |
|
[13:18] <perrito666> voidspac_: possible |
|
[13:18] <voidspac_> the port (and port binding) didn't change |
|
[13:18] <perrito666> I also thought of that |
|
[13:18] <voidspac_> in which case that is much easier to fix I think |
|
[13:19] <perrito666> voidspac_: sorry I did not try to fix it more in depth, we really needed CI back |
|
[13:20] <voidspac_> perrito666: no problem - so long as new restore is implemented not needing external db access |
|
[13:20] <perrito666> voidspac_: well if you can guarantee that localhost:StatePort works, it should be no problem |
|
[13:20] <voidspac_> perrito666: cool |
|
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: None |
|
[13:23] <wwitzel3> woohoo .. None |
|
[13:23] <ericsnow> perrito666, voidspac_: If StatePort is gone then it definitely makes sense that {{.AgentConfig.StatePort}} no longer works in the template. |
|
[13:23] <voidspac_> ericsnow: it's not gone, it's just not opened externally |
|
[13:23] <perrito666> ericsnow: I am taking state port from the agent.config and it still is there |
|
[13:23] <voidspac_> I don't believe I actually removed it from AgentConfig |
|
[13:24] <voidspac_> trying to get back to the original PR as it's now closed |
|
[13:24] <perrito666> voidspac_: I wonder if it's a problem derived from the permission groups in ec2 |
|
[13:24] <ericsnow> voidspac_, perrito666: ah |
|
[13:24] <perrito666> which would not make much sense but hey, you never know |
|
[13:24] <perrito666> voidspac_: https://github.com/juju/juju/pull/449/files |
|
[13:24] <ericsnow> perrito666: weren't the failures on the HP cloud? |
|
[13:25] <perrito666> ericsnow: ah not sure actually |
|
[13:25] <perrito666> sinzui: ? |
|
[13:26] <sinzui> perrito666, at this hour there are no critical bugs affecting juju devel or stable. This is the first time in months |
|
[13:27] <perrito666> sinzui: dont jinx it |
|
[13:27] <perrito666> sinzui: was the error happening in hp too? |
|
[13:27] <ericsnow> rogpeppe: could you take another look at https://github.com/juju/utils/pull/16? |
|
[13:29] <ericsnow> perrito666: from http://juju-ci.vapour.ws:8080/job/functional-backup-restore/1309/console: "https://region-a.geo-1.objects.hpcloudsvc.com/v1/..." |
|
[13:29] <sinzui> perrito666, I am not sure what the question is. restore has failed on both aws and hpcloud. it is currently testing on hpcloud. I changed it last week to see if the recent bug was different on Hp |
|
[13:30] <perrito666> sinzui: this could be so much easier if we could all read each other thoughts |
|
[13:30] <perrito666> voidspac_: ok, its not permissions I have no clue then, I guess Ill have to find out |
|
[13:30] <voidspac_> I'm trying to look at it as well |
|
[13:31] <perrito666> voidspac_: I think the issue is restore line 287 |
|
[13:33] <voidspac_> perrito666: it's using the external address |
|
[13:33] <voidspac_> perrito666: it's pinging mongo to wait for it to come up |
|
[13:33] <perrito666> that will never work |
|
[13:34] <voidspac_> well, it used to work |
|
[13:34] <perrito666> yes I know |
|
[13:34] <voidspac_> hehe |
|
[13:34] <voidspac_> perrito666: so the restore probably succeeds - but then can't connect to mongo and thinks it has failed |
|
[13:35] <perrito666> voidspac_: sortof |
|
[13:35] <perrito666> the restore of state machine succeeds |
|
[13:35] <perrito666> but it fails when trying to update the other agents |
|
[13:35] <perrito666> bc it needs st.AllMachines |
|
[13:35] <voidspac_> right, before updateAllMachines |
|
[13:35] <perrito666> exactly |
|
[13:35] <voidspac_> does that need a new API endpoint then |
|
[13:35] <voidspac_> which can be used instead of directly connecting to mongo |
|
[13:36] <voidspac_> and the strategy can make repeated calls to that instead |
|
[13:36] <voidspac_> don't we already know AllMachines? |
|
[13:37] <voidspac_> or it could run that code on the state server or execute a mongo query |
|
[13:37] <perrito666> voidspac_: we dont, we try to work that out from the recently restored db |
|
[13:38] <voidspac_> perrito666: so restoreBootstrapMachine could run an extra command to get the info |
|
[13:38] <voidspac_> using runViaSsh |
|
[13:38] <voidspac_> and return the extra information |
|
[13:38] <voidspac_> or we could add a new endpoint and use apiState |
|
[13:39] <voidspac_> perrito666: which do you think would be better? |
|
[13:39] <perrito666> for the case of current restore implementation we can go with runViaSsh, for the new one I can do something prettier |
|
[13:40] <voidspac_> perrito666: shall I do this - I have some spare cycles |
|
[13:40] <sinzui> We are one unittest run away from having a passing devel. The anxiety is too much |
|
[13:40] <voidspac_> sinzui: :-) |
|
[13:41] <wwitzel3> :) |
|
[13:41] <katco> wallyworld: still there by chance? |
|
[13:41] <perrito666> voidspac_: please do, If I keep context switching I will never in my life finish the new restore implementation |
|
[13:41] <voidspac_> perrito666: ok |
|
[13:41] <voidspac_> perrito666: and for new restore you will take this into account? |
|
[13:41] <perrito666> I will |
|
[13:41] <voidspac_> so I'm working in restore.go still |
|
[13:41] <voidspac_> not the plugin |
|
[13:41] <perrito666> voidspac_: the plugin |
|
[13:41] <voidspac_> (just checking) |
|
[13:41] <voidspac_> ah... |
|
[13:42] <voidspac_> no wait |
|
[13:42] <voidspac_> restore.go is the plugin... |
|
[13:42] <perrito666> voidspac_: cmd/plugins/juju-restore/restore.go |
|
[13:42] <perrito666> voidspac_: makesit clearer? |
|
[13:42] <perrito666> :) |
|
[13:42] <voidspac_> yep, that's where I've been looking |
|
[13:42] <perrito666> yup |
|
[13:42] <voidspac_> thanks |
|
[13:46] <marcoceppi> OMG |
|
[13:47] <perrito666> marcoceppi: ? |
|
[13:47] <marcoceppi> wrong room, though it still applies, I got excited because the buildbot is unblocked |
|
[13:52] <rogpeppe> ericsnow: looking |
|
[13:53] <ericsnow> rogpeppe: thanks! |
|
[14:04] <rogpeppe> ericsnow: reviewed |
|
[14:04] <ericsnow> rogpeppe: much appreciated |
|
=== Ursinha is now known as Ursinha-afk |
|
=== Ursinha-afk is now known as Ursinha |
|
[14:24] <katco> when updating a library, should i specify the specific commit that fixes the issue, or the latest commit in a stable release? |
|
[14:24] <katco> sorry, in dependencies.tsv |
|
[14:25] <natefinch> it depends |
|
[14:25] <natefinch> generally... latest seems like a reasonable choice, as long as it doesn't break anything else. |
|
[14:26] <katco> yeah |
|
[14:26] <natefinch> on the assumption that other bugs may have been fixed in the meantime, and no sense waiting until we hit them to include them in our build |
|
[14:26] <katco> this is for goyaml, so i am assuming the v1 branch is _relatively_ stable |
|
[14:26] <natefinch> yeah |
|
[14:26] <katco> ok cool, i'll grab latest :) |
|
[14:26] <katco> ty nate! :) |
|
[14:27] <katco> er one more question |
|
[14:27] <katco> i switched goyaml over to gopkg.in... i noticed logger isn't in the dependencies.tsv yet, but some gopkg.in packages are |
|
[14:28] <katco> wasn't gopkg.in designed to obviate godeps? |
|
[14:28] <natefinch> not exactly |
|
[14:28] <natefinch> they're somewhat orthogonal, though related |
|
[14:28] <katco> ah ok, i misunderstood its purpose then |
|
[14:30] <natefinch> you need godeps to ensure that you have a repeatable build. Even supposedly non-breaking changes on a stable branch by definition change behavior. For a release you need to make sure it's possible to recreate the exact same binary multiple times. Godeps does that |
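For context, a dependencies.tsv entry pins each repo to an exact revision. As best I recall the godeps format, it is tab-separated: import path, VCS, revision id, revision timestamp (the hash and timestamp below are placeholders, not real values):

```
gopkg.in/yaml.v1	git	<commit-sha>	<commit-timestamp>
```

Pinning the exact revision is what makes the build repeatable, regardless of what lands on the upstream branch later.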
|
[14:32] <katco> yeah |
|
[14:32] <katco> somewhat misunderstood what gopkg.in was designed to solve |
|
[14:33] <katco> can someone land this for me? it's already been reviewed/approved, just needs to be landed into trunk: https://code.launchpad.net/~cox-katherine-e/goamz/lp1319475.v4.signature.support |
|
=== Ursinha is now known as Ursinha-afk |
|
[14:36] <katco> (i don't have permissions, or obviously i would do it myself :p) |
|
=== Ursinha-afk is now known as Ursinha |
|
[14:41] <natefinch> katco: what happened to getting goamz moved to github? |
|
[14:41] <katco> natefinch: we haven't found a home for it yet. |
|
[14:42] <katco> and didn't want it to impede development any further. we have a customer waiting on this functionality. |
|
[14:42] <natefinch> katco: I guess we don't control http://github.com/go-amz huh? |
|
[14:43] <katco> natefinch: no idea, but i would have thought wallyworld or niemeyer would have mentioned it :p |
|
[14:43] <katco> i am still not clear on why we don't have some sort of canonical repo that all code goes into |
|
[14:43] <katco> big C canonical |
|
[14:43] <katco> on github |
|
[14:44] <natefinch> katco: 'cause we don't control http://github.com/canonical either :) |
|
[14:44] <katco> is that preventing us from registering canonical-ltd, or canonical-code, or canonical-* lol |
|
[14:48] <TheMue> dimitern: did you mean the SupportNetworks capability? (which btw should be renamed to SupportsNetworks) |
|
[14:48] <niemeyer> natefinch: Define "we"? :) |
|
[14:49] <natefinch> niemeyer: "Gustavo" :) |
|
[14:49] <niemeyer> natefinch: Well, I registered go-amz, IIRC |
|
[14:50] <natefinch> I hoped so |
|
[14:50] <natefinch> but you never know in the wild west of internet name squatting |
|
[14:55] <katco> mgz: you around yet? |
|
[14:55] <mgz> katco: just back from lunch now |
|
[14:55] <katco> mgz: ah ok, see pm's pls :) |
|
[14:55] <mgz> hm, on irc? I may be being blind, don't see any |
|
[14:56] <katco> mgz: hrm yeah |
|
[14:57] <katco> mgz: sorry my mistake... window wasn't connected |
|
[15:01] <hazmat> natefinch, fwereade_ meeting time.. |
|
[15:04] <voidspac_> perrito666: I don't have access either |
|
[15:04] <voidspac_> anyone know *where* the October sprint is? |
|
[15:04] <voidspac_> I don't think I can make it |
|
[15:04] <perrito666> voidspac_: I think only people on the cc list of the mail by sarah can see it |
|
[15:05] <voidspac_> I already have that time booked off... |
|
[15:05] <perrito666> so natefinch would you tellus where it is? |
|
[15:05] <natefinch> I asked yesterday, she said they don't know yet |
|
[15:05] <ericsnow> voidspac_: if you can't be there we should reschedule :) |
|
[15:06] <voidspac_> ericsnow: definitely |
|
[15:06] <voidspac_> I land back in the UK on the Sunday 5th October |
|
[15:10] <voidspac_> ericsnow: actually, I could just fly from India to the sprint |
|
[15:10] <voidspac_> I'd be away for two weeks then, but ah well |
|
[15:11] <ericsnow> voidspac_: you'd already be packed ;) |
|
[15:11] <voidspac_> ericsnow: well yes, but I'd need to pack for two weeks instead of one |
|
[15:11] <voidspac_> but so be it I guess |
|
[15:11] <ericsnow> voidspac_: conference T-shirts FTW |
|
[15:11] <voidspac_> heh |
|
[15:11] <ericsnow> voidspac_: the price of being a Python luminary :) |
|
[15:11] <voidspac_> ericsnow: get them, wear them, throw them |
|
[15:12] <dimitern> TheMue, yes that one |
|
[15:12] <ericsnow> voidspac_: :) |
|
[15:12] <voidspac_> ericsnow: you should be with me... |
|
[15:12] <perrito666> voidspac_: when that kind of thing happens to me I pack two bags, leave one at home, and swap upon arrival |
|
[15:12] <dimitern> TheMue, it'll go away soon anyway, as the new model kicks in |
|
[15:12] <voidspac_> perrito666: I don't think I can do "land in uk then immediately fly out again" |
|
[15:12] <voidspac_> perrito666: but fly straight from India might be doable |
|
[15:13] <perrito666> voidspac_: if someone is waiting for you in the airport with the spare bag you might |
|
[15:13] <voidspac_> perrito666: heh, possible depending on flight times I guess :-) |
|
[15:15] <perrito666> natefinch: wwitzel3 standup? |
|
[15:18] <TheMue> dimitern: had been in a meeting, so answering now |
|
[15:19] <dimitern> TheMue, no worries, I was just replying to your earlier questions :) |
|
[15:20] <fwereade_> natefinch, hazmat: here now if there's still worthwhile time? |
|
[15:21] <TheMue> dimitern: currently I also have no more questions, only wanted a confirmation ;) |
|
[15:21] <natefinch> fwereade_: yep, 1 minute |
|
[15:24] <dimitern> TheMue, :) cheers |
|
[15:26] <natefinch> ericsnow: btw, I recommend "starring" docs you want to be able to find, so they're under the "Starred docs" in google drive |
|
[15:26] <ericsnow> natefinch: good tip :) |
|
[15:27] <natefinch> ericsnow: took me a while to figure that out, after having trouble finding docs again.... it's like google drive specific bookmarks :) |
|
[15:29] <perrito666> yup, seems that people behind google docs never used a filesystem in their lives |
|
[15:45] <TheMue> it’s also no problem to move docs to own folders as they are only virtual (like creating a symlink) |
|
[15:45] <TheMue> so it should be possible to access them via a google drive client on a phone or pc too |
|
[15:57] <katco> it looks like i might have to make some updates to some of our repositories under github.com/juju/* that are not under github.com/juju/juju... what's the workflow for that? fork/pr? |
|
[16:03] <natefinch> katco: yeah, same as juju/juju fork & pr |
|
[16:04] <katco> natefinch: k thanks |
|
[16:04] <natefinch> katco: not sure about the state of botness on those other repos, though |
|
[16:04] <katco> natefinch: this should be loads of fun. updating imports of goyaml, touches like 3 sub-repos |
|
[16:05] <natefinch> katco: at least there's no interdependence... no repo is passing an object from the yaml package to code from another repo... so they can be updated non-simultaneously |
|
[16:07] <katco> natefinch: at least there's that |
|
[16:17] <katco> ugh i have to backport all of these too |
|
[16:17] <katco> this is going to eat up my entire day :( |
|
[16:19] <katco> well... actually. maybe i should defer the switch to gopkg.in, since it looks like these libraries will just use whatever juju-core specifies in dependencies.tsv |
|
[16:19] <katco> and save that change for a non-backporting commit |
|
[16:22] <mattyw> fwereade_, are you around or busy? |
|
[16:22] <natefinch> katco: yeah, if there's nothing we need in the new yaml package for the old branches, I wouldn't bother backporting |
|
[16:23] <fwereade_> mattyw, bit of both, am I behind on your reviews? |
|
[16:23] <mattyw> fwereade_, not at all I landed the periodic worker one as I added the test you asked for |
|
[16:23] <katco> natefinch: no, i need to backport, i'm just not going to switch to gopkg.in for this commit |
|
[16:23] <mattyw> fwereade_, but my metrics one I have a question |
|
[16:23] <fwereade_> mattyw, sweet |
|
[16:23] <fwereade_> mattyw, ah go on |
|
[16:23] <katco> natefinch: that way the sub repos should keep using launchpad.net/goyaml which juju-core should drive to the correct version |
|
[16:26] <katco> errr no wait, b/c that change is not in the launchpad version is it, so i'm looking at an import change regardless |
|
[16:29] <natefinch> katco: what's the change that you need in yaml? I thought the move to gopkg.in didn't have any signficant functionality changes |
|
[16:29] <katco> natefinch: https://bugs.launchpad.net/juju-deployer/+bug/1243827 |
|
[16:29] <mup> Bug #1243827: juju is stripping underscore from options <canonical-webops> <cloud-installer> <config> <landscape> <goyaml:Fix Released by adeuring> <juju-core:In |
|
[16:29] <mup> Progress by cox-katherine-e> <juju-core 1.20:Triaged by cox-katherine-e> <juju-deployer:Invalid by hazmat> <https://launchpad.net/bugs/1243827> |
|
[16:30] <katco> natefinch: the move to gopkg.in was a side-effect of having to change the code already |
|
[16:30] <sinzui> Ladies and Gentlemen, CI has blessed Blessed: gitbranch:master:github.com/juju/juju 36fe5868 (Build #1699). Devel is regressions free after 49 days |
|
[16:30] <katco> woo! |
|
[16:31] <natefinch> woo hoo! |
|
[16:31] <natefinch> katco: ahh I see. Interesting. |
|
[16:32] <alexisb> sinzui, wow |
|
[16:32] <natefinch> wait, isn't 49 days about the length of time katco and ericsnow have been on the team.... ? ;) |
|
[16:32] <katco> lol |
|
[16:33] <ericsnow> squirrel! |
|
[16:33] <natefinch> lol |
|
[16:34] <alexisb> sinzui, now the flood gates will be opened |
|
[16:37] <sinzui> alexisb, yes, I am prepared for new kinds of hate mail from CI tomorrow |
|
=== Ursinha is now known as Ursinha-afk |
|
=== Ursinha-afk is now known as Ursinha |
|
=== Ursinha is now known as Ursinha-afk |
|
=== jcw4 is now known as jcw4|away |
|
=== Ursinha-afk is now known as Ursinha |
|
=== perrito6` is now known as perrito666 |
|
[19:01] <katco> i have PRs to juju/(cmd|utils|charm) that need reviewing. 1-line import change. blocking cut if anyone wants to have a quick look. |
|
=== Ursinha is now known as Ursinha-afk |
|
=== Ursinha-afk is now known as Ursinha |
|
[19:35] <natefinch> cmars: can you try https://github.com/juju/juju/pull/495 with the new code? It gives much nicer error messages now. |
|
=== tvansteenburgh1 is now known as tvansteenburgh |
|
[19:42] <katco> can anyone review the aforementioned changes? |
|
[19:43] <natefinch> katco: link me? |
|
[19:43] * natefinch is lazy |
|
[19:43] <katco> hey np, gladly :) |
|
[19:43] <katco> https://github.com/juju/utils/pull/21 |
|
[19:43] <katco> https://github.com/juju/charm/pull/39 |
|
[19:43] <katco> https://github.com/juju/cmd/pull/5 |
|
[19:44] <natefinch> katco: did you compile these? the package name changed from "goyaml" to "yaml" |
|
[19:45] <natefinch> katco: I know only because I just made the same change in another codebase, and realized it's not a one line change (unfortunately) |
|
[19:46] <katco> lol you are right, sorry. i did all these through scripting, so i forgot about that |
|
[19:46] <katco> sigh ok well good review haha |
|
[19:46] <natefinch> katco: heh np :) |
|
[19:48] * cmars takes another look |
|
[19:49] <natefinch> cmars: it'll only fail at the first line of differences, but it should be clear what's different, at least. |
|
[20:00] <katco> natefinch: have another look? all building now. still 1 line change :) |
|
[20:03] <natefinch> katco: ha. wondered if you'd go that route |
|
[20:05] <natefinch> katco: I vaguely disapprove of renaming the import just to avoid changing more lines of text, but I don't think it's a huge deal. |
|
[20:05] <katco> natefinch: actually, i don't think i know this: how do you utilize a package imported via gopkg? |
|
[20:05] <katco> would it be yaml.v1.Foo()? |
|
[20:07] <natefinch> katco: the import path and the package name are actually totally unrelated. by convention they are the same... but a package name cannot include punctuation (I believe the actual restriction is something like a unicode letter followed by any number of unicode letters, numbers, or underscores) |
|
[20:07] <natefinch> katco: the convention for gopkg.in is that the version is not part of the actual package name, so "yaml.v1" is package yaml |
|
[20:09] <katco> natefinch: ah so you just do import yaml "gopkg.in/yaml.v1"? |
|
[20:14] <natefinch> katco: import "gopkg.in/yaml.v1" and then use it as yaml.Foo() |
|
[20:15] <natefinch> katco: you don't have to name the import, it gets named by what "package foo" says in the code, which in this case is "package yaml" |
|
[20:15] <katco> natefinch: huh? how does that resolve? it elides the .v1? |
|
[20:15] <katco> ohhh i see |
|
[20:15] <natefinch> katco: https://github.com/go-yaml/yaml/blob/v1/yaml.go#L7 |
|
[20:17] <natefinch> katco: that's what I mean by the import path and the package name not being related. You can put that code at https://github.com/natefinch/ABCD and import it as import "github.com/natefinch/ABCD" and you'd still refer to the package as yaml.Foo() |
|
[20:17] <katco> natefinch: gotcha, thanks |
|
[20:17] <natefinch> katco: this was actually one of the biggest complaints about the way gopkg.in does versioning - the last part of the URL is not the same as the package name |
|
[20:18] <katco> natefinch: yeah, i wonder if like gopkg.in/v1/yaml would have worked |
|
[20:19] <natefinch> katco: there's a couple problems with that - 1.) it sorts badly in the list of imports... so gopkg.in/v1/yaml might be far away from an import of gopkg.in/v2/yaml (the .v1 .v2 imports would sort to be right next to each other) |
|
[20:19] <natefinch> katco: 2.) it puts a /v2/ directory in your filesystem with a bunch of unrelated code in it, and again, the v1 code is far from the v2 code |
|
[20:19] <katco> natefinch: ah |
|
[20:20] <katco> natefinch: anyway, does this all look ok? |
|
[20:20] <natefinch> katco: sorry, tangent :) |
|
[20:20] <katco> natefinch: not a problem :) just trying to get this in for sinzui |
|
[20:23] <natefinch> katco: LGTM'd. |
|
[20:23] <katco> natefinch: thanks for your help today |
|
[20:25] <natefinch> katco: welcome |
|
=== jcw4|away is now known as jcw4 |
|
[21:00] <thumper> morning |
|
[21:07] <alexisb> morning thumper |
|
[21:07] <thumper> alexisb: morning |
|
[21:34] <katco> morning thumper |
|
[21:34] <thumper> o/ katco |
|
[21:35] <katco> (not intended just for thumper) i'm running into a strange kind of circular dependency b/c of gopkg.in. i'm trying to update v3 of juju/charm, which utilizes gopkg.in/juju/charm.v3 to reference itself. so it's referencing the wrong version of itself... if that makes sense? |
|
[21:36] <thumper> huh? |
|
[21:36] <katco> am i doing something wrong? or should i hack this to get around it |
|
[21:36] <thumper> what exactly are you doing? |
|
[21:36] <katco> alright, so i'm working with github.com/juju/charm |
|
[21:36] <katco> and all i'm trying to do is update some imports |
|
[21:36] <thumper> AFAICT, if you have packages that use gopkg.in, then you need to be in that dir |
|
[21:37] <thumper> yeah... |
|
[21:37] <thumper> so work in the dir gopkg.in/juju/charm.v3 |
|
[21:37] <katco> so i should be making these changes w/in gopkg.in on my machine? |
|
[21:37] * thumper nods |
|
[21:37] <thumper> I think so |
|
[21:37] <katco> that's what i was doing wrong then |
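The fix thumper suggests — keeping the working copy at the directory that matches its gopkg.in import path, so the package's self-imports of `gopkg.in/juju/charm.v3` resolve to your checkout rather than a separately fetched copy — can be sketched as follows. Paths are illustrative and assume a classic GOPATH layout; the clone command is shown commented out and stubbed with `mkdir` so the sketch is self-contained:

```shell
# Sketch: check the repo out under its gopkg.in import path so that
# imports of gopkg.in/juju/charm.v3 inside the repo point at this copy.
GOPATH=$(mktemp -d)
mkdir -p "$GOPATH/src/gopkg.in/juju"
# git clone https://github.com/juju/charm "$GOPATH/src/gopkg.in/juju/charm.v3"
mkdir -p "$GOPATH/src/gopkg.in/juju/charm.v3"   # stand-in for the clone above
test -d "$GOPATH/src/gopkg.in/juju/charm.v3" && echo ok
```

With the checkout in that location, `go build` resolves the self-referencing import to the edited code instead of fetching the published v3.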
|
[21:42] <fwereade_> thumper, heyhey |
|
[21:42] <thumper> hi fwereade_ |
|
[21:43] <fwereade_> thumper, how's the time difference? |
|
[21:43] <thumper> terrible |
|
[21:43] <thumper> you mean now? |
|
[21:43] <thumper> or from germany? |
|
[21:43] <fwereade_> thumper, from germany |
|
[21:44] <fwereade_> thumper, I have a notion that we may disagree on the yuckiness of the Factory varargs |
|
[21:44] <thumper> heh, yeah |
|
[21:45] <fwereade_> thumper, I'm interested in counterarguments |
|
[21:46] <thumper> I'd rather have ickyness in one place, the factory, than at every call site |
|
[21:46] <thumper> I agree it is a little icky |
|
[21:46] <thumper> but working around golang limitations |
|
[21:46] <thumper> it was dave's idea |
|
[21:46] <thumper> originally I had two methods |
|
[21:46] <fwereade_> thumper, I think it was the *repeated* ickiness inside Factory that really put me off |
|
[21:46] <thumper> for each type |
|
[21:47] <thumper> but the ickiness there is limited in scope, and contained |
|
[21:47] <thumper> vs. spreading it around all the places the factory is used |
|
[21:47] <fwereade_> thumper, just to be clear, it's the nil that's yucky? |
|
[21:48] <thumper> mostly, and the c |
|
[21:48] <thumper> what I *want* is: factory.MakeUnit() |
|
[21:48] <thumper> however |
|
[21:48] <thumper> due to bugs in gocheck |
|
[21:48] <thumper> we need the c |
|
[21:48] <fwereade_> thumper, indicating "I don't care" in place of a set of explicit instructions |
|
[21:48] <thumper> yes |
|
[21:49] <thumper> I had earlier... |
|
[21:49] <thumper> factory.makeAnyUser() |
|
[21:49] <thumper> and factory.MakeUser() |
|
[21:49] <thumper> damn capitals |
|
[21:49] <thumper> we joined those methods together |
|
[21:49] <thumper> to avoid the nil |
|
[21:49] <thumper> having: factory.MakeUser(c, nil) isn't obvious |
|
[21:50] <thumper> factory.MakeUser(c) is slightly more so IMO |
|
[21:50] * thumper misses python |
|
[21:51] <fwereade_> thumper, I know the feeling |
|
[21:52] <fwereade_> thumper, but I'm not sure that even python's varargs aren't more trouble than they're worth |
|
[21:52] <fwereade_> thumper, explicit is better than implicit |
|
[21:53] <thumper> python has the advantage of explicit default args |
|
[21:53] <thumper> fwereade_: IMO, nil isn't explicitly stating what you want, you have to go look up what nil is |
|
[21:53] <thumper> whereas not having nil is being explicit :) |
|
[21:54] <fwereade_> thumper, (python has default args with some really entertaining behaviour, but anyway) |
|
[21:54] <thumper> sure... |
|
[21:54] <fwereade_> thumper, I can read it just as easily as nil=>no preference, and that as a stronger statement than no statement at all |
|
[21:54] <thumper> I have a gut reaction to blind params, especially with nil |
|
[21:55] <thumper> I don't care strongly enough to fight for long |
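The trade-off thumper and fwereade_ are debating — an explicit `nil` params argument versus a variadic parameter that lets "don't care" callers omit it — can be sketched like this. The names (`UserParams`, `MakeUser`) are loosely modelled on the factory under discussion but are illustrative, not juju's actual API, and the gocheck `c` argument is omitted for brevity:

```go
package main

import "fmt"

// UserParams is a hypothetical options struct for the factory sketch.
type UserParams struct {
	Name string
}

// MakeUserPtr takes a pointer: callers with no preference must
// write the non-obvious MakeUserPtr(nil).
func MakeUserPtr(p *UserParams) string {
	if p == nil {
		p = &UserParams{Name: "default"}
	}
	return p.Name
}

// MakeUser is the variadic form: "don't care" callers just omit the
// argument, at the cost of accepting (and ignoring) extra values.
func MakeUser(params ...UserParams) string {
	p := UserParams{Name: "default"}
	if len(params) > 0 {
		p = params[0]
	}
	return p.Name
}

func main() {
	fmt.Println(MakeUserPtr(nil))                  // prints "default"
	fmt.Println(MakeUser())                        // prints "default"
	fmt.Println(MakeUser(UserParams{Name: "bob"})) // prints "bob"
}
```

The variadic version keeps the ickiness inside the factory (the `len(params)` check) rather than at every call site, which is thumper's argument; fwereade_'s counterpoint is that an explicit `nil` at least states "no preference" rather than saying nothing.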
|
[21:55] <thumper> speaking of which, |
|
[21:56] <thumper> the whole user sub(super)command is in question in the spec |
|
[21:56] <thumper> which I'm losing the will to fight as well |
|
[21:56] <thumper> suppose I should write something up |
|
[21:56] <fwereade_> thumper, oh really? I do think that we do ourselves no favours by polluting the command namespace |
|
* thumper nods, I'll CC you on the email about it |
|
[21:57] <fwereade_> thumper, cheers |
|
[22:03] <katco> these are always funny. just received a panic from: "// can't happen, these values have been validated a number of times" |
|
[22:03] <katco> panic(err) |
|
[22:05] <thumper> katco: for some value of funny (i.e. not funny) |
|
[22:05] <thumper> :) |
|
[22:05] <katco> thumper: haha |
|
[22:31] <wallyworld> katco: hi, how'd you get on with the deps update? |
|
[22:32] <katco> wallyworld: still working... ran into a bunch of panics using the head of the yaml lib |
|
[22:32] <wallyworld> oh :-( |
|
[22:32] <katco> i had to update 3 sub repos to use the latest version of goyaml |
|
[22:32] <katco> that's what took the longest |
|
[22:32] <wallyworld> np, sounds like you poked a hornet's nest |
|
[22:32] <katco> i think i'm going to try and sit on the commit that fixed the reported issue and see what that does |
|
[22:34] <katco> wallyworld: i did get my ~20 day old change landed :) |
|
[22:34] <katco> that made katco very happy |
|
[22:35] <katco> this run seems to be going better with commit 1b9791953ba4027efaeb728c7355e542a203be5e |
|
[22:41] <katco> yeah almost done. i'm going to stick with this one and submit a PR after tests have passed |
|
[22:52] <ericsnow> fwereade_: (on the off chance you're in a reasonable timezone for now) you still around? |
|
[22:58] <davecheney> moin |
|
[23:02] <katco> wallyworld: well, now i have test failures because of goamz. i'm guessing it's because we're mocking environments and not specifying a signer. can this wait until tomorrow? |
|
[23:02] <wallyworld> katco: be with you soon, just in a meeting |
|
[23:02] <waigani> thumper: standup? |
|
[23:02] <thumper> sorry, on my way |
|
[23:03] <katco> wallyworld: ok |
|
[23:12] <ericsnow> davecheney: you one of the reviewers today? |
|
[23:14] <wallyworld> katco: yeah, it can wait. sorry that it turned out to be more problematic than first thought |
|
[23:14] <katco> wallyworld: no worries... we're almost there |
|
[23:14] <wallyworld> yep :-) |
|
[23:14] <katco> wallyworld: just have to find out where these mocked regions are |
|
[23:15] <wallyworld> ok |
|
[23:15] <katco> wallyworld: alright, way past my EOD. going to spend some time with my daughter before she has to go to bed :) |
|
[23:15] <katco> talk to you tomorrow! |
|
[23:15] <wallyworld> will do, thanks for taking the extra time :-) |
|
[23:22] <davecheney> ericsnow: ok |
|
[23:22] <davecheney> i have calls for the next 2 hours |
|
[23:22] <davecheney> i'll take a look after that |
|
[23:28] <ericsnow> davecheney: cool, thanks |
|
[23:29] <ericsnow> davecheney: https://github.com/juju/utils/pull/19 https://github.com/juju/juju/pull/462 https://github.com/juju/juju/pull/453 |
|
|