UbuntuIRC / 2020/03/30 / #juju.txt
[02:59] <hpidcock> wallyworld: k8s charms don't have an install hook correct?
[03:00] <hpidcock> considering dropping the noop upgrade op for the standard deploy op, but instead of firing the "install" hook it fires the "start" hook
[03:01] <hpidcock> then the caasoperator has a caching downloader
[03:01] <hpidcock> and we use the deployer as normal inside the caas uniter
[03:01] <hpidcock> then caas upgrade/install has a post install op that copies files to the pod
[03:02] <hpidcock> that pod init op is also run on container init
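A rough sketch of the flow hpidcock is floating above, with entirely hypothetical names (the real resolver/op types in worker/uniter differ); the ops just print what each step would do:

    package main

    import "fmt"

    // op mirrors the shape of a resolver operation.
    type op interface{ Run() error }

    // deployOp stands in for the standard deploy op that would replace
    // the no-op upgrade op on the k8s side: the caasoperator fetches the
    // charm through a caching downloader, the normal deployer unpacks it
    // inside the caas uniter, and "start" fires instead of "install".
    type deployOp struct{}

    func (deployOp) Run() error {
        fmt.Println("download charm (cached) and deploy")
        fmt.Println("run hook: start")
        return nil
    }

    // podInitOp stands in for the post install/upgrade op that copies
    // the charm files into the pod; it also runs on container init.
    type podInitOp struct{}

    func (podInitOp) Run() error {
        fmt.Println("copy charm files to pod")
        return nil
    }

    func main() {
        for _, o := range []op{deployOp{}, podInitOp{}} {
            if err := o.Run(); err != nil {
                panic(err)
            }
        }
    }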
[03:07] <wallyworld> hpidcock: they do have one now
[03:07] <wallyworld> it was added just before the sprint
[03:08] <wallyworld> here's that action enablement PR https://github.com/juju/juju/pull/11374
[03:08] <wallyworld> we could use the "normal" deployer if we rejig things i suspect
[03:09] <wallyworld> or we could serialise the deployer and actions etc
[03:09] <wallyworld> so that we don't unpack a new charm if there's an action running. we may even do that, would need to check
[03:10] <hpidcock> the resolver is serialised and only runs one op at a time
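Illustrative only, not Juju's actual resolver: the point of serialising is that an action run and a charm-upgrade unpack can never interleave, which covers the "don't unpack a new charm while an action is running" concern above.

    package main

    import (
        "fmt"
        "sync"
    )

    // executor runs at most one op at a time, in the spirit of the
    // serialised uniter resolver hpidcock describes.
    type executor struct{ mu sync.Mutex }

    func (e *executor) run(name string, op func()) {
        e.mu.Lock() // serialise: one op in flight at any moment
        defer e.mu.Unlock()
        fmt.Println("op start:", name)
        op()
        fmt.Println("op done: ", name)
    }

    func main() {
        var (
            e  executor
            wg sync.WaitGroup
        )
        for _, name := range []string{"run-action", "upgrade-charm"} {
            wg.Add(1)
            go func(n string) {
                defer wg.Done()
                e.run(n, func() {})
            }(name)
        }
        wg.Wait() // the two ops run back to back, never overlapping
    }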
[03:34] <pmatulis> how does JUJU_AVAILABILITY_ZONE information, set from MAAS, get propagated to a Juju model? and is this strictly used by the nova-compute charm?
[04:17] <timClicks> latest progress report is now available: https://discourse.jujucharms.com/t/juju-progress-report-2020-w13/2842
[07:43] <zeestrat> pmatulis: regarding usage, ceph-mon can use it for crush maps if `customize-failure-domain` is enabled: https://github.com/openstack/charm-ceph-mon/blob/master/config.yaml#L172-L177
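On the propagation side: as I understand it, when the provider (MAAS included) reports a zone for a machine, Juju exposes it to hooks as the JUJU_AVAILABILITY_ZONE environment variable, so any charm can read it, not just nova-compute. A minimal illustration (in Go for consistency; real charm hooks are usually shell or Python):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Juju sets this in the hook environment when the provider
        // reports a zone for the machine (e.g. a MAAS zone).
        zone := os.Getenv("JUJU_AVAILABILITY_ZONE")
        if zone == "" {
            fmt.Println("no availability zone reported")
            return
        }
        // ceph-mon, for instance, can feed this into its crush map
        // when customize-failure-domain is enabled.
        fmt.Println("availability zone:", zone)
    }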
[07:50] <flxfoo> hi all
[07:50] <timClicks> hi flxfoo
[07:51] <flxfoo> little question about ~/.local/share/juju/ssh/juju_id_rsa... where does this one come from? If I create a new user (system and juju) I have the same ssh key... and if I try a juju-ssh or a simple ssh I get a permission denied... would I need to override that key with one generated in ~/.ssh? I am not sure about the best practice here...
[07:52] <timClicks> all inter-agent communication is protected by TLS
[07:52] <timClicks> using a self-signed CA
[07:53] <timClicks> so, it is created by the juju controller
[07:58] <hpidcock> timClicks: wouldn't be the TLS cert, probably the ssh key that I think is created when you bootstrap. Not sure.
[08:13] <flxfoo> sorry I think I am not clear at all :p ... when creating a new user (unix system and juju), should one ssh-keygen for the system user, juju register the user, import that .ssh/id_rsa.pub key, and replace .local/share/juju/ssh/ with those keys?
[08:21] <timClicks> hpidcock: good point (I was a bit too fast)
[08:22] <timClicks> flxfoo: I don't recall exactly what the steps are.. perhaps ask on https://discourse.jujucharms.com?
[10:10] <jam> achilleasa, when you work out the DEBUG level to set for the dependency engine, it would probably be good to send it to discourse to let others know that it at least exists, and they can come back if they ever want to use it.
[10:28] <achilleasa> jam: will do
[11:02] <stickupkid> manadart, whilst I'm doing some integration testing, I fixed my reload-spaces rework https://github.com/juju/juju/pull/11366
[11:40] <stickupkid> who wants a quick PR that prevents me from swearing a lot https://github.com/juju/juju/pull/11375
[11:48] <achilleasa> stickupkid: looking
[11:49] <stickupkid> haha, need to fix the linter one sec
[11:52] <achilleasa> stickupkid: can you try this on dev? cd provider/openstack; go test -check.f TestGetVolumeEndpointBadURL
[11:53] <stickupkid> achilleasa, what branch?
[11:53] <achilleasa> dev/head
[11:53] <stickupkid> achilleasa, fixed my lint issue
[11:53] <stickupkid> achilleasa, worked for me
[11:54] <achilleasa> did you pull + make dep?
[11:54] <stickupkid> probably not
[11:54] <achilleasa> on my branch I see a %q in there while the regex tests the unquoted error
[11:56] <stickupkid> PASS: cinder_test.go:914: cinderVolumeSourceSuite.TestGetVolumeEndpointBadURL 0.000s
[11:56] <achilleasa> stickupkid: not sure why but I get https://paste.ubuntu.com/p/PknhR9Zh9V/
[11:58] <achilleasa> stickupkid: getting same error on develop too
[12:00] <achilleasa> stickupkid: you wouldn't happen to use < go1.14.1 would you?
[12:04] <achilleasa> gotcha! https://github.com/golang/go/commit/64cfe9fe22113cd6bc05a2c5d0cbe872b1b57860
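The linked commit appears to be the culprit achilleasa describes: as of go1.14, net/url parse errors quote the URL with %q, so a regex written against the older unquoted message no longer matches. A quick repro (illustrative URL, not the actual one from cinder_test.go):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // "%zz" is an invalid escape, so Parse returns an error.
        _, err := url.Parse("http://cinder.test/%zz")
        fmt.Println(err)
        // go1.12: parse http://cinder.test/%zz: invalid URL escape "%zz"
        // go1.14: parse "http://cinder.test/%zz": invalid URL escape "%zz"
        // A test regex anchored on the unquoted form matches the first
        // output but not the second.
    }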
[12:10] <achilleasa> jam: I have rebased my relation-departed changes after the relation-created PR merged. It has already been reviewed but let me know if you want to take a quick look before I land it (https://github.com/juju/juju/pull/11356)
[12:46] <rick_h_> achilleasa: wallyworld has a ping out to me around https://launchpad.net/bugs/1869275 and a need for an upgrade-step around the app relation work?
[12:46] <mup> Bug #1869275: [subordinate] main unit did not get subordinate installed <juju:Triaged> <https://launchpad.net/bugs/1869275>
[12:46] <rick_h_> achilleasa: if you have a sec can you read that and let me know what you think?
[12:46] * rick_h_ is processing email/irc ping backlogs
[12:49] <achilleasa> rick_h_: reading...
[12:53] <stickupkid> achilleasa, I use go1.12 for work ;)
=== hpidcock_ is now known as hpidcock
=== skay_ is now known as skay
[13:50] <achilleasa> stickupkid: 11375 approved with small req
[13:52] <stickupkid> achilleasa, nope ;)
[13:52] <achilleasa> all other tests pass with go1.14 ;-)
[14:08] <rick_h_> achilleasa: looks like errors merged?
[14:08] <achilleasa> rick_h_: did juju/errors need anything special?
[14:08] <achilleasa> my merge on Friday didn't do anything :D
[14:08] <rick_h_> achilleasa: I just checked the settings, I updated the password in case it wasn't up to date
[14:08] <rick_h_> achilleasa: and watch the logs to see it go by
[14:08] <achilleasa> all good then
[14:35] <achilleasa> hml: still reviewing 11339; it's gonna take a while though
[14:36] <hml> achilleasa: it’s not blocking me, i’m on storage right now, an independent piece
[16:47] <achilleasa> rick_h_: looks like we are indeed missing an upgrade step from 2.6 -> 2.7 (https://github.com/juju/juju/blob/2.7/worker/uniter/hook/hook.go#L36 vs https://github.com/juju/juju/blob/2.6/worker/uniter/hook/hook.go#L32)
[16:48] <achilleasa> I might be able to add a small patch that attempts to recover the application name from the remote which means we won't need an upgrade step
[16:50] <achilleasa> or I can just add an upgrade step but if that won't ship with 2.7.5 and we won't have a 2.7.6 things might get interesting...
[16:53] <achilleasa> any preference?
[16:54] <rick_h_> achilleasa: ok, on the phone atm. Preference would be the safest path for existing users. We don't/can't set a gateway release where "you have to upgrade to X before you upgrade to Y"
[16:55] <achilleasa> rick_h_: I guess the manual workaround is un-relate and then relate?
[16:57] <achilleasa> rick_h_: so this seems to affect 2.6 -> 2.7 upgrades where a unit's state indicates a pending hook of type RelationChange
[16:58] <achilleasa> rick_h_: maybe just having an upgrade step should be enough; you still need to run the steps if you go from 2.6 -> 2.8 right?
[17:01] <pmatulis> how does JUJU_AVAILABILITY_ZONE information, in a MAAS context, get propagated to a Juju model?
[17:04] <rick_h_> achilleasa: sorry, off the phone now, processing
[17:06] <rick_h_> achilleasa: let's sync up in the morning. You're EOD and I want to read this again. I mean we can add an upgrade step to 2.8 that's fine.
[17:07] <rick_h_> achilleasa: but I'm nervous about current steps for existing users. So anyone on 2.7 will hit this and we've got a lot of stuff that's going to be upgraded from 2.6 to 2.7 with prodstack
[17:07] <rick_h_> achilleasa: not everything can be unrelated/rerelated
[17:08] <achilleasa> rick_h_: AFAICT it's upgrade 2.6->2.(6+x) where any of the units have a pending relation{Changed, Departed} hook (both check for non-empty RemoteApplication)
[17:09] <rick_h_> achilleasa: oh, so the thought is this is missing within the 2.6 series vs 2.6 to 2.7?
[17:09] <achilleasa> so it's not everyone but can probably (?) happen with enough units
[17:09] <rick_h_> achilleasa: yea... ugh
[17:09] <achilleasa> that's my understanding
[17:10] <rick_h_> achilleasa: ok...thinking. I don't think there's a magic trick to this though...ugh
[17:10] <achilleasa> so we can add an upgrade step to the 2.7 line
[17:10] <rick_h_> achilleasa: right, but but but I'm going to start crying lol
[17:10] <rick_h_> achilleasa: please drop your notes into the bug before you EOD and then go enjoy the evening
[17:11] <achilleasa> well another option would be to offer a juju-unfck tool to attempt to fix the state files
[17:11] <rick_h_> achilleasa: :/ not making me feel better lol
[17:11] <achilleasa> but we probably need the upgrade step anyway
[17:11] <rick_h_> yea, definitely need that. ...can the upgrade step check the hook state before running?
[17:12] <achilleasa> patch 2.7.0?
[17:12] <rick_h_> e.g. can we promise we won't hit it?
[17:12] <rick_h_> achilleasa: maybe. For tomorrow you can start off 2.7 with the idea of forward port and I'll try to find out tonight about what we're thinking with 2.7.
[17:12] <achilleasa> ok. will drop my notes in the bug
[17:12] <rick_h_> I really wish I'd been firm with the "release what we've got" because we're 5 shas in now...this would be 6...
[17:26] <achilleasa> rick_h_: added a comment but skipped the proposal to patch 2.7.0 onwards as I am not sure if it's even feasible with snaps and whatnot
[17:43] <rick_h_> achilleasa: ok, ty
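A minimal sketch of the recovery idea achilleasa floats above (recovering the application name from the remote unit, so pending 2.6-era hook state gains the RemoteApplication field that 2.7 expects). The struct and helper are hypothetical stand-ins for the real hook.Info handling, relying only on Juju's app/N unit-name convention:

    package main

    import (
        "fmt"
        "strings"
    )

    // hookInfo stands in for the uniter's persisted hook state. Entries
    // written by a 2.6 agent lack the RemoteApplication field that 2.7's
    // relation hooks check for.
    type hookInfo struct {
        Kind              string // e.g. "relation-changed"
        RemoteUnit        string // e.g. "mysql/0"
        RemoteApplication string // empty when written by 2.6
    }

    // backfillRemoteApplication recovers the application name from the
    // remote unit name.
    func backfillRemoteApplication(h *hookInfo) error {
        if h.RemoteApplication != "" || h.RemoteUnit == "" {
            return nil // already set, or nothing to recover from
        }
        i := strings.LastIndex(h.RemoteUnit, "/")
        if i <= 0 {
            return fmt.Errorf("cannot infer application from remote unit %q", h.RemoteUnit)
        }
        h.RemoteApplication = h.RemoteUnit[:i]
        return nil
    }

    func main() {
        h := hookInfo{Kind: "relation-changed", RemoteUnit: "mysql/0"}
        if err := backfillRemoteApplication(&h); err != nil {
            panic(err)
        }
        fmt.Println(h.RemoteApplication) // mysql
    }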
[22:35] <babbageclunk> have we introduced a go 1.13 dependency in develop?
[22:36] <babbageclunk> I'm getting an error building k8s.io/apimachinery/pkg/util/errors
[22:38] <tlm> babbageclunk: good chance that was me as I started using that package
[22:39] <tlm> but more than likely it was our upgrade to the latest k8s client that triggered it
[22:39] <babbageclunk> yeah, sounds likely
[22:39] <babbageclunk> is it a problem?
[22:39] <tlm> sounds like it might be when we go to make 2.8
[22:39] <tlm> ?
[22:40] <babbageclunk> I'm not sure where we are with getting off go 1.10 - might just upgrade to 1.14 locally for now
[22:41] <tlm> what is the error so I can take a look?
[22:42] <babbageclunk> tlm: https://paste.ubuntu.com/p/r2zBVRVjhm/
[22:43] <babbageclunk> it's weird though - building juju works fine (with go 1.12), this only happens when I try to do an upgrade-controller --build-agent
[22:45] <tlm> kubernetes 1.17 is built with go 1.13.8
[22:45] <babbageclunk> hmm, upgrading to 1.14 didn't help :/
[22:46] <babbageclunk> oh no - I think it's because I was in the juju-restore directory, so it was trying to use modules!
[22:46] <babbageclunk> sorry
[22:46] <babbageclunk> that's really annoying
[22:46] <tlm> ?
[22:47] <tlm> still raises a good point, we are very lucky that the upgrade hasn't given us more problems
[22:47] <babbageclunk> I'm going to try downgrading back to 1.12 and then run the upgrade from outside the juju-restore directory
[22:47] <tlm> it should fail as they are using the new errors stuff
[22:48] <babbageclunk> yeah that totally worked fine, somehow
[22:48] <tlm> magic
[22:48] * babbageclunk shrugs!
[22:49] <tlm> all I can think of is we are not using that code so it's being compiled out
[23:07] <babbageclunk> tlm: yeah, that might be it
[23:08] <babbageclunk> a bit weird that just being in a go mod directory (for a different project) is enough to completely change how juju builds though
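For the record, with GO111MODULE unset ("auto") the go tool switches into module mode when the working directory or any parent contains a go.mod (and, on go1.12, sits outside GOPATH), which would explain the build behaving differently from inside the juju-restore checkout. A toy version of that lookup:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // inModuleDir mimics the go tool's "auto" check: module mode kicks
    // in when the working directory or any parent holds a go.mod.
    func inModuleDir(dir string) bool {
        for {
            if _, err := os.Stat(filepath.Join(dir, "go.mod")); err == nil {
                return true
            }
            parent := filepath.Dir(dir)
            if parent == dir {
                return false // hit the filesystem root
            }
            dir = parent
        }
    }

    func main() {
        wd, _ := os.Getwd()
        // Run from inside a checkout like juju-restore and this reports
        // true; one directory up (with no go.mod) it reports false.
        fmt.Println("module mode:", inModuleDir(wd))
    }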
[23:08] <tlm> hpidcock: is the plan to get off 1.10 before 2.8 release ?
[23:08] <tlm> what is your GO111MODULE env set to?
[23:13] <hpidcock> tlm: `go env` says empty string, so auto
[23:13] <hpidcock> the plan is to get 1.14 in snap building at least
[23:14] <babbageclunk> tlm: yeah it's unset in that shell
[23:20] <hpidcock> also babbageclunk the github actions stuff builds with 1.10
[23:20] <hpidcock> so people can't land breaking changes
[23:21] <babbageclunk> seems like it was a weird interaction between modules and unused deps?
[23:21] <babbageclunk> it's fine for me now anyway, thanks guys!