UbuntuIRC / 2020/05/22/#juju.txt
=== wgrant_ is now known as wgrant
[01:06] <hpidcock> thumper: tlm: you both have interest in this https://github.com/juju/juju/pull/11616
[01:10] <tlm> roger
[01:12] <tlm> looks good hpidcock, can you update pki/testing for the test authority to use it ?
[01:13] <hpidcock> tlm: it should already
[01:13] <hpidcock> only the pki package tests with secure keys
[01:14] <tlm> a test would have to bring in testing for it to apply ?
[01:15] <hpidcock> tlm: true, but that is pretty much all test packages
[01:16] <tlm> can we introduce it into pki/testing to guarantee it ?
[01:16] <tlm> that would be my only feedback
[01:16] <hpidcock> sure
[01:18] <tlm> any idea what the performance is like with rsa 512 vs ecdsa 224 ?
[01:19] <hpidcock> probably negligible difference
[01:20] <hpidcock> I can have a look
[01:20] <tlm> all good was just wondering if you knew
[01:20] <tlm> sort out mongo in k8s and we can do the swap for 2.9 maybe
[01:21] <tlm> just did a check and it looks like most dns root zones have swapped over to ecdsa now
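(Editorial aside, not part of the log: one rough way to answer tlm's rsa-512-vs-ecdsa-224 performance question is a Go micro-benchmark of key generation, as in the hypothetical sketch below. The package name and scope are assumptions, it is not part of PR 11616, and it only measures key generation, not signing.)

    // Hypothetical micro-benchmark comparing generation of the two key sizes
    // discussed above; run with `go test -bench .`.
    package pki_test

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/rsa"
        "testing"
    )

    // BenchmarkRSA512 generates 512-bit RSA keys (an insecure, test-only size).
    func BenchmarkRSA512(b *testing.B) {
        for i := 0; i < b.N; i++ {
            if _, err := rsa.GenerateKey(rand.Reader, 512); err != nil {
                b.Fatal(err)
            }
        }
    }

    // BenchmarkECDSAP224 generates ECDSA keys on the NIST P-224 curve.
    func BenchmarkECDSAP224(b *testing.B) {
        for i := 0; i < b.N; i++ {
            if _, err := ecdsa.GenerateKey(elliptic.P224(), rand.Reader); err != nil {
                b.Fatal(err)
            }
        }
    }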
[01:24] <tlm> smallish PR if anyone has time https://github.com/juju/juju/pull/11612 (not urgent)
[03:21] <wallyworld> thumper: progress of sorts, i left a comment on the bug. the mechanics of the issue i can see but not yet the root cause
[04:02] <babbageclunk> ugh, is there any way I can handle the case where I try to restore the snapshot on the controller nodes (ie, move the snapshot dir back) but only some of them succeed?
[04:06] <wallyworld> what's the cause of one of them failing?
[04:17] <tlm> is lxd hanging for anyone else when bootstrapping from 2.8-rc ?
[04:22] <wallyworld> not last time i tried
[05:04] <babbageclunk> wallyworld: oops, missed your response - not sure, I'm trying to handle the situation where we need to restore the database snapshots (because the juju-restore process has failed for some reason) but then restoring the snapshots has failed on some number of nodes. I guess at that point we ask the operator to restore them.
[05:05] <wallyworld> i think in some cases it's ok (to start with) to inform what went wrong and suggest a manual fix, possibly followed by running restore again after the user intervention
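(Editorial aside, not part of the log: the partial-failure handling being discussed could look roughly like the sketch below. All names are invented; this is not the actual juju-restore code, just the "collect per-node errors and ask the operator to finish the job manually" idea.)

    // Hypothetical sketch of restoring snapshot dirs on every controller node
    // and reporting the ones that fail so the operator can fix them by hand.
    package restore

    import (
        "fmt"
        "strings"
    )

    // node is an invented interface standing in for a controller node.
    type node interface {
        Name() string
        RestoreSnapshotDir() error // e.g. move the snapshot dir back into place
    }

    func restoreSnapshots(nodes []node) error {
        var failed []string
        for _, n := range nodes {
            if err := n.RestoreSnapshotDir(); err != nil {
                failed = append(failed, fmt.Sprintf("%s: %v", n.Name(), err))
            }
        }
        if len(failed) == 0 {
            return nil
        }
        // Don't stop at the first failure: report every node that still needs
        // manual intervention, so the operator can fix them and re-run restore.
        return fmt.Errorf("snapshot restore failed on %d node(s):\n%s",
            len(failed), strings.Join(failed, "\n"))
    }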
[05:16] <thumper> https://github.com/juju/juju/pull/11617 if someone is feeling bored
[05:16] <thumper> wallyworld: re bug above, I imagine that is a little frustrating
[05:18] <wallyworld> thumper: there's weird stuff happening, success vs failure depends on if the state watcher picks up all changes in one event or two (in the latter case, the worker doesn't seem to see what it needs and the event gets lost), and we seem to be incorrectly watching relation unit changes multiple times
[05:23] <thumper> ugh...
[05:27] * thumper EODs
[05:27] <thumper> time for cleaning the kitchen and a glass of wine I think
[05:27] <thumper> night all
[08:38] <stickupkid> manadart, updated my PR https://github.com/juju/juju/pull/11611
[08:39] <manadart> stickupkid: Yep. Will look.
[08:53] <manadart> stickupkid: Can further simplify like this: https://pastebin.canonical.com/p/5ZfmWmK9kn/
[08:55] <stickupkid> ooo, like it
[08:55] <stickupkid> let me try it
[09:08] <stickupkid> manadart, done, just did some renaming of stuff and then applied the changes.
[09:22] <manadart> stickupkid: I just approved it, but you can do a QA step on OpenStack to check that a charm with multiple bindings causes a machine to be provisioned with multiple NICs.
[09:22] <manadart> Also need the LP bug in the description.
[09:22] <stickupkid> manadart, fun fun fun
[09:22] <stickupkid> manadart, yeah, this PR grew, I'll sort that out now
[09:23] <stickupkid> manadart, I'll test that we didn't regress AWS as well
[10:07] <stickupkid> manadart, you got microstack installing recently?
[10:07] <stickupkid> mine is just hanging installing rabbitMQ
[10:08] <manadart> stickupkid: Using `--devmode --edge`?
[10:09] <stickupkid> manadart, let me try again
[10:10] <stickupkid> devmode
[10:10] <stickupkid> manadart, thank-you again
[10:13] <stickupkid> manadart, mind if I edit your post for now
[10:13] <manadart> stickupkid: Sure.
[13:14] <stickupkid> achilleasa, https://github.com/juju/juju/pull/11618#pullrequestreview-416899148
[13:34] <achilleasa> stickupkid: it's all spaces now ^^
[13:34] <stickupkid> FIIIIIIIIIIIIIIIGHT
[13:34] <stickupkid> achilleasa, approved
[13:34] <achilleasa> (and 0-width UTF8 chars :-) )
[13:35] <stickupkid> ha, yes!
[13:39] <achilleasa> stickupkid: manadart: unless you remove the systemd service entries when you clean up, you will still get the "machine is already provisioned" error. I guess I should change my PR to clean them
[13:43] <manadart> achilleasa: Yeah, that's probably from when we removed them from /lib/systemd...
[13:43] <manadart> We should delete them.
[14:04] <achilleasa> stickupkid: updated PR to delete the systemd services. ok to merge?
[14:05] <stickupkid> sure
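(Editorial aside, not part of the log: the systemd cleanup agreed above could look roughly like this sketch. The directory and glob pattern are guesses for illustration, not what the PR actually does.)

    // Hypothetical cleanup sketch: remove leftover jujud systemd unit files so
    // a cleaned machine isn't later reported as "already provisioned".
    package cleanup

    import (
        "os"
        "path/filepath"
    )

    func removeJujuUnitFiles(systemdDir string) error {
        // e.g. /etc/systemd/system/jujud-machine-0.service
        matches, err := filepath.Glob(filepath.Join(systemdDir, "jujud-*.service"))
        if err != nil {
            return err
        }
        for _, m := range matches {
            if err := os.Remove(m); err != nil && !os.IsNotExist(err) {
                return err
            }
        }
        return nil
    }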
[14:21] <stickupkid> manadart, https://github.com/juju/juju/pull/11621
[14:22] <stickupkid> manadart, I couldn't update the base branch, so I made a new PR
[14:25] <stickupkid> manadart, just realised this needs to land first https://github.com/juju/juju/pull/11622
[14:28] <achilleasa> manadart: my changes broke the manual add-machine. I accidentally filtered out the ovs bridge (the only active NIC in my setup) from the list of usable addresses...
[14:29] <achilleasa> interestingly, the agent did connect briefly to the controller and set its password; then it exploded :D
[14:29] <achilleasa> (the call to network.FilterBridgeAddresses happens later)
[14:33] <manadart> achilleasa: I see.
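(Editorial aside, not part of the log: the kind of filtering discussed above looks roughly like the sketch below. It is an invented stand-in for network.FilterBridgeAddresses, not the real implementation; it just illustrates how an over-broad bridge list can drop an OVS bridge that happens to be the machine's only usable NIC.)

    // Hypothetical illustration of filtering bridge-owned addresses out of a
    // machine's usable-address list. Names and data shapes are invented.
    package addresses

    import "strings"

    // filterBridgeAddrs drops addresses belonging to devices whose names match
    // one of the known bridge prefixes (e.g. "lxdbr", "virbr").
    func filterBridgeAddrs(addrsByDevice map[string][]string, bridgePrefixes []string) []string {
        var usable []string
        for dev, addrs := range addrsByDevice {
            bridge := false
            for _, p := range bridgePrefixes {
                if strings.HasPrefix(dev, p) {
                    bridge = true
                    break
                }
            }
            if bridge {
                continue // skip addresses on bridge devices
            }
            usable = append(usable, addrs...)
        }
        return usable
    }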
[14:33] <manadart> stickupkid: Just doing a quick check on that patch. Had to update Go on my notebook.
[14:33] <stickupkid> sure sure
[14:33] <stickupkid> no rush
[14:41] <manadart> stickupkid: Approved 11622.