
[00:00] <tychicus> did you initialize vault?

[00:01] <tychicus> I ran into this when testing the new 20.05 stable bundle

[00:01] <tychicus> there are 2 sets of instructions that I needed to get things moving forward

[00:02] <tychicus> https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-vault.html

[00:02] <tychicus> first initialize and install vault

[00:02] <tychicus> https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-certificate-management.html

[00:02] <tychicus> second retrieve and sign the csr
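
The two pages tychicus links boil down to roughly this sequence. This is a sketch only: the action names come from the linked guides for that era of the vault charm, the addresses, keys, and token are placeholders, and parameters may differ by charm revision.

```shell
# 1. Initialise and unseal vault (app-vault.html)
export VAULT_ADDR="http://<vault-unit-ip>:8200"
vault operator init -key-shares=5 -key-threshold=3   # record the unseal keys and root token
vault operator unseal <unseal-key-1>                 # repeat until the threshold (3) is met
vault operator unseal <unseal-key-2>
vault operator unseal <unseal-key-3>

# Give the charm a short-lived token so it can configure vault itself
vault token create -ttl=10m
juju run-action --wait vault/leader authorize-charm token=<token-from-previous-step>

# 2. Certificates (app-certificate-management.html): either have vault
# self-sign a root CA ...
juju run-action --wait vault/leader generate-root-ca
# ... or retrieve the CSR, sign it with your own CA out of band, and upload it back
juju run-action --wait vault/leader get-csr
juju run-action --wait vault/leader upload-signed-csr \
    root-ca="$(base64 -w0 root-ca.pem)" pem="$(base64 -w0 signed-intermediate.pem)"
```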

[01:00] <wallyworld> thumper: i give up, 2.7 landing is broken, here's a forward port of the 2.7 PR directly to 2.8-rc3 (with legacy lease stuff removed) https://github.com/juju/juju/pull/11641

[01:06] <wallyworld> or hpidcock or kelvinliu ^^^^^

[01:06] <wallyworld> just got to get 2.8-rc unblocked

[01:07] <kelvinliu> looking

[01:10] <kelvinliu> approved since the original one has been approved already wallyworld

[01:10] <wallyworld> ty

[01:11] <kelvinliu> nws

[01:58] <timClicks> I'm receiving a "no matching agent binaries available" message when deploying a focal workload to AWS

[02:04] <wallyworld> juju version?

[02:05] <timClicks> this is JAAS (2.6.10), client is Juju 2.8-rc2

[02:05] <thumper> timClicks: jaas doesn't know about focal

[02:05] <thumper> that is 2.7

[02:05] <thumper> wallyworld: where are we at with the landings?

[02:06] <wallyworld> gave up on 2.7 and forward ported the last PR directly to 2.8-rc, waiting for CI now

[02:06] <thumper> wallyworld: can I try to merge my 2.8 branch?

[02:07] <wallyworld> sure, i'm just landing a 2.8rc->2.8 port and then i was going to merge yours

[02:07] <thumper> all good, if you want to do the mergy thing then that's all good

[02:08] <wallyworld> i can

[02:08] <thumper> looks like I'll be helping with a customer issue

[02:09] <wallyworld> what juju version?

[02:09] <thumper> don't know yet

[03:02] <hpidcock> forward port to develop https://github.com/juju/juju/pull/11642

[03:03] <suke> hi Guys,

[03:05] <wallyworld> hpidcock: did you pick up the just landed pr from tim?

[03:06] <hpidcock> wallyworld: let me check

[03:07] <wallyworld> hpidcock: also, GetByHardwareAddress isn't used

[03:07] <wallyworld> i suspect it's been replaced by Filter which is where the conflict was

[03:10] <hpidcock> GetByHardwareAddress was only added 10 days ago

[03:14] <wallyworld> hpidcock: ah, i suspect it's for the work to filter out machine local addresses, and there's a PR which will use it

[03:15] <hpidcock> wallyworld: yep, can always delete it later. I'd rather not break manadart's WIP if he has any using it

[03:15] <wallyworld> yup

[03:18] <wallyworld> hpidcock: the changes look as expected to me based on working with the other branches of late

[03:19] <hpidcock> wallyworld: awesome thanks, just waiting for a green run then I'll merge

[03:19] <wallyworld> i was 1/2 way through doing it myself then i saw your pr :-)

[03:19] <hpidcock> wallyworld: ahah

[03:20] <wallyworld> was looking at the nic conflict

[03:20] <hpidcock> sorry

[03:20] <wallyworld> all good, you saved me a lot of typing

[04:17] <thumper> https://github.com/juju/juju/pull/11644

[04:18] <thumper> the last of the uniter package level loggers

[04:18] <thumper> phew

[04:25] <manadart> wallyworld, hpidcock: GetByHardwareAddress was indeed a utility added for a patch in progress.

[04:26] <manadart> wallyworld: This is the fix for network_get and local-machine addresses: https://github.com/juju/juju/pull/11638

[04:26] <manadart> Just have to put together some QA.

[04:27] <wallyworld> gr8 ty

[04:29] <Eryn_1983_FL> tychicus: he got some handful of keys

[04:29] <Eryn_1983_FL> i think he did

[05:30] * thumper sighs

[05:30] <thumper> more problems

[05:30] <thumper> but yay, users??

[05:35] <wallyworld> thumper: anything serious?

[05:35] <thumper> wallyworld: nothing blocking

[05:35] <thumper> but enough that we'll want to include a patch in 2.7.7

[05:35] <wallyworld> got a bug?

[05:36] <thumper> https://bugs.launchpad.net/juju/+bug/1881242

[05:36] <mup> Bug #1881242: Missing error check results in panic - apiserver <apiserver> <panic> <juju:Triaged> <https://launchpad.net/bugs/1881242>

[05:36] <thumper> the fix for this bug is trivial... add an error check

[05:36] <thumper> the other problem, for which we are also getting a bug, is how it got into this state

[05:37] <wallyworld> sigh

[05:39] <thumper> wallyworld: did you see my PR above?

[05:40] <thumper> wallyworld: nm, hpidcock reviewed it for me

[05:40] <thumper> thanks hpidcock

[05:40] <wallyworld> that's why i didn't look :-)

[05:40] <wallyworld> already +1

[05:41] <thumper> hey... https://jenkins.juju.canonical.com/job/github-make-check-juju/5933/consoleText

[05:42] <thumper> look for "oops"

[05:42] <thumper> api/uniter fails

[05:42] <thumper> but I don't see a failure in the output?

[05:45] <hpidcock> thumper: might be teardown failure?

[05:46] <thumper> succeeds locally

[05:46] <thumper> and I only see 118 passed and 1 skipped

[05:46] <thumper> hpidcock: could be

[05:50] <thumper> wallyworld: I see the email you just forwarded to me

[05:50] <thumper> but I don't see the original

[05:50] <wallyworld> thumper: yeah, go figure, maybe the email lists are stuck

[05:50] <thumper> could be

[06:34] <thumper> wallyworld: your message to the list has come through now

[06:35] <wallyworld> so it has

[09:00] <Laney> hey, quick bit of help needed hopefully. i've got a controller running on lxd locally, but juju has somehow got the wrong idea about what its ip addresses are (maybe the container got restarted?). how can I get it to learn the right ones?

[09:15] <Laney> (made a topic on discourse)

[09:17] <achilleasa> Laney: you can point the cli to the right IP by editing the endpoint values in ~/.local/share/juju/controllers.yaml for your controller (as always backup first before editing ;-)
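
For reference, the endpoint values achilleasa means live under the controller's entry in that file. A minimal excerpt with made-up controller name and addresses:

```yaml
# ~/.local/share/juju/controllers.yaml (back it up before editing)
controllers:
  lxd-local:                                # hypothetical controller name
    api-endpoints: ['10.210.51.10:17070']   # the addresses the cli will dial
    ca-cert: |
      -----BEGIN CERTIFICATE-----
      ...
    cloud: localhost
    region: localhost
```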

[09:18] <Laney> achilleasa: wtf, that has the right addresses in it

[09:18] <Laney> uh

[09:18] <Laney> there's some duplication here, how did that happen

[09:20] <achilleasa> Laney: you mean you have the same IP twice in the endpoint list? Not sure how that could happen but it should not have any impact on connection attempts

[09:20] <Laney> no I had the same controller in there twice, once right and once wrong

[09:20] <Laney> but deleted the wrong one now and 'juju status' works again

[09:21] <Laney> 'juju ssh' doesn't though, it seems to be trying to connect to my public IP instead of the container's IP ...

[09:24] <Laney> works with --proxy but not without
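
Laney's workaround, spelled out: `juju ssh --proxy` tunnels the SSH session through the controller instead of dialling the machine's recorded address directly, which sidesteps a stale or unreachable address. The unit name below is a placeholder.

```shell
# Direct: the client dials the machine address juju has on record (may be stale)
juju ssh ubuntu/0

# Proxied: traffic goes via the controller, so only the controller
# address needs to be correct
juju ssh --proxy ubuntu/0
```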

[11:39] <achilleasa> manadart: can you take a look at https://github.com/juju/juju/pull/11645?

[11:45] <manadart> achilleasa: Approved it.

[12:04] <achilleasa> manadart: do we rewrite the agent config files when upgrading?

[12:05] <manadart> achilleasa: Yes, we have to in order to set the version.

[12:14] <Eryn_1983_FL> hi

[19:14] <tychicus> with the nova-cloud-controller charm, is there a bit of additional config that needs to be performed to get console-access-protocol to use TLS?

[19:15] <tychicus> I ensured that the vault relation is there: juju add-relation nova-cloud-controller:certificates vault:certificates

[19:17] <tychicus> but openssl s_client -connect shows that TLS is not enabled
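
The sort of check tychicus describes, for anyone following along. 6080 is the usual novnc console proxy port; the address is a placeholder.

```shell
# With TLS enabled this prints the certificate's subject and validity dates;
# without it, s_client fails the handshake (e.g. "wrong version number").
echo | openssl s_client -connect <nova-cc-address>:6080 2>/dev/null \
  | openssl x509 -noout -subject -dates
```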

[19:18] <tychicus> If I switch from https to http the page renders with an error "Something went wrong, connection is closed"

[19:19] <pmatulis> tychicus, are you talking about the dashboard?

[19:20] <tychicus> yes, the console in the dashboard

[19:20] <tychicus> dashboard loads with https no problem

[19:22] <tychicus> in previous releases to enable ssl you had to supply a console-ssl-cert to nova-cloud-controller

[19:23] <tychicus> but console-ssl-cert was removed in the 19.07 charm release

[19:24] <tychicus> "Please use ssl_cert configuration option or the vault certificates relation"

[19:25] <pmatulis> tychicus, where do you see that?

[19:26] <tychicus> https://jaas.ai/nova-cloud-controller/345#charm-config-console-ssl-cert

[19:33] <pmatulis> tychicus, thanks

[19:34] <tychicus> pmatulis: np, just trying to figure out if I missed something trivial in my config

[19:35] <pmatulis> tychicus, what about the value of 'console-access-protocol'?

[19:35] <tychicus> pmatulis: juju config nova-cloud-controller console-access-protocol=novnc

[19:36] <pmatulis> tychicus, k

[19:37] <pmatulis> tychicus, well, based on what you've said you've done and based on the documentation it should work. can you open a bug on nova-cloud-controller?

[19:37] <tychicus> sure

[20:31] <pmatulis> tychicus, did you try logging out and back in to the dashboard after having made that config change (to 'novnc')?

[20:32] <tychicus> I did

[20:33] <pmatulis> k

[20:34] <pmatulis> well, like i recommend: https://bugs.launchpad.net/charm-nova-cloud-controller/+filebug

[20:34] <tychicus> I'm working on the bug report now, but I need to do a little more digging

[20:35] <tychicus> it looks like novnc is not proxied by apache, all of the stuff proxied by apache is fine
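
One way to confirm tychicus's observation about what apache does and doesn't proxy on the nova-cloud-controller unit. The unit name and config path are the usual defaults and may differ in your deployment.

```shell
# List the ProxyPass rules apache is actually serving on the unit
juju ssh nova-cloud-controller/0 "sudo grep -r ProxyPass /etc/apache2/sites-enabled/"

# And check what is listening on the novnc port (6080) directly
juju ssh nova-cloud-controller/0 "sudo ss -tlnp | grep 6080"
```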

[21:13] <tychicus> looks like there is an issue with nova.console.websocketproxy