UbuntuIRC / 2020/02/07 / #juju.txt
[01:49] <thumper> babbageclunk: do you have a few minutes for a python question?
[01:55] <thumper> or anyone else really who has done testing recently in pylibjuju
[01:58] <tlm> I did a little bit thumper but hardly an expert
[01:58] <thumper> I'm just having issues with the python path
[01:59] <thumper> it is grabbing a version in .local/lib
[01:59] <thumper> when I set a python path
[01:59] <thumper> it fails on an import for toposort
[01:59] <thumper> and I'm not sure why
[02:15] <tlm> this is what I do
[02:16] <tlm> python3 -m venv .venv; source .venv/bin/activate; python setup.py egg_info && pip install -r juju.egg-info/requires.txt; pip install asynctest ipdb mock pytest pytest-asyncio pytest-xdist Twine git+https://github.com/johnsca/websockets@bug/client-redirects#egg=websockets && pip install urllib3==1.25.7 && pip install pylxd
[02:16] <tlm> harry gave me that when I worked on it. That covers all dependencies from memory
[02:24] <wallyworld> babbageclunk: if you had time before eod, a look at https://github.com/juju/juju/pull/11189 would be gr8
[02:34] <babbageclunk> wallyworld: looking
[02:36] <tlm> what version of go mock are we supposed to use wallyworld? See a lot of changes when I generate due to version bump
[02:37] <wallyworld> yeah, that can happen. it's hard because builds use go 1.10, CI uses 1.12 I think. i normally just ignore gomock changes when doing a review
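(For context, juju regenerates its mocks with mockgen via go:generate directives along these lines; the target package and interface below are illustrative, not a specific juju file. Rerunning with a different mockgen or Go version rewrites the generated files wholesale, which is the churn being discussed.)

    //go:generate mockgen -package mocks -destination mocks/caller_mock.go github.com/juju/juju/api/base APICaller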
[02:41] <tlm> wallyworld, hpidcock: i did a tidy up for basic structure. Have a look at https://github.com/juju/juju/pull/11190
[02:42] <wallyworld> ok
[02:45] <hpidcock> tlm: might be good if the rbac mapper follows the worker interface
[02:45] <tlm> it's going to be used inside of a worker, that's why I opted out and stuck to kube convention
[02:45] <tlm> but easy change
[02:46] <tlm> can do
[02:46] <wallyworld> why getRBACLabels2 ?
[02:46] <hpidcock> I dunno, maybe wallyworld has stronger opinions on it
[02:46] <tlm> I need to break that function off of the struct but haven't got around to it. Just to illustrate
[02:46] <tlm> not staying
[02:46] <wallyworld> ah no worries, ta
[02:47] <wallyworld> yeah, so typically we do prefer workers in juju to encapsulate the management of go routines
[02:48] <tlm> np, it's an easy fix. Functionally it will remain the same
[02:49] <wallyworld> all juju go routines/workers are typically started via the dependency engine or as a child worker of another worker, managed by the runner abstraction
[02:49] <wallyworld> yup
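(For readers outside the codebase, the convention wallyworld describes rests on juju's small worker contract, roughly the juju/worker interface: a worker owns any goroutines it starts and exposes only lifecycle control.)

    // Approximately the juju/worker contract.
    type Worker interface {
        Kill()       // ask the worker to stop; safe to call repeatedly
        Wait() error // block until stopped; returns the terminal error
    }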
[02:54] <wallyworld> tlm: so we could use a watcher abstraction over the informer event stream (like we did in the other pr) and use a standard worker loop to process those watch events
[02:55] <tlm> not really, our current watcher implementations are not adequate for this type of problem
[02:56] <babbageclunk> wallyworld: why have Enqueue and EnqueueV2 on the same API? Shouldn't everything just use EnqueueV2 (if that's what's on the latest API)?
[02:56] <wallyworld> babbageclunk: older clients still need enqueue
[02:56] <babbageclunk> Right, but wouldn't they talk to the old facade?
[02:57] <babbageclunk> (that's why we have them)
[02:57] <wallyworld> usually but there's still legacy actions CLI not behind the feature flag that uses enqueue
[02:57] <wallyworld> it's only the CLI behind the feature flag that uses v2
[02:58] <wallyworld> and 2.8 beta1 will have both CLIs
[02:58] <wallyworld> so we need to support both until juju v3
[02:58] <babbageclunk> Oh, so those commands that use the old method will still be available on the next-released juju CLI?
[02:58] <wallyworld> yeah :-(
[02:58] <wallyworld> as we can't change CLI until v3
[02:59] <wallyworld> and new stuff is opt in till then
[02:59] <babbageclunk> still seems like a weird way to do it - we can change the API internally while keeping the cli the same
[03:00] <wallyworld> how? they return different structs
[03:00] <wallyworld> and the new struct has less info
[03:00] <wallyworld> we also have AddMachineV2 (from memory) for similar reasons
[03:01] <wallyworld> maybe it's gone now
[03:03] <wallyworld> babbageclunk: or maybe i'm on crack and am missing something?
[03:04] <wallyworld> tlm: what calls enqueueServiceAccount ?
[03:05] <babbageclunk> wallyworld: I think really I just don't like that the new method is called EnqueueV2 - it's the one that we're going to be stuck with when the legacy method is removed once the feature flag goes away.
[03:06] <wallyworld> we can rename it then
[03:06] <tlm> the event handlers, I have just fixed that up
[03:06] <tlm> my cleanup had the old code in it
[03:06] <babbageclunk> Might make more sense to rename the old one to LegacyEnqueue and have this one as Enqueue in the most recent facade version.
[03:06] <wallyworld> babbageclunk: i could call it EnqueueOperation
[03:06] <babbageclunk> yup, that would work too
[03:07] <babbageclunk> I'll put that in a comment
[03:07] <wallyworld> ok
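(A hedged sketch of where that landed, with invented type names rather than juju's actual facade plumbing: the legacy method stays on the old facade version for older clients, and the newest facade version carries the renamed method.)

    package sketch

    type Actions struct{ /* action specs */ }
    type ActionResults struct{ /* full legacy result */ }
    type EnqueuedActions struct{ /* slimmer result for the new flow */ }

    type ActionAPIv6 struct{}

    // Enqueue is the legacy call; the pre-flag actions CLI keeps using it
    // until it can be dropped in juju v3.
    func (a *ActionAPIv6) Enqueue(args Actions) (ActionResults, error) {
        return ActionResults{}, nil
    }

    type ActionAPIv7 struct{}

    // EnqueueOperation (the rename agreed above) is what the CLI behind
    // the feature flag calls; it returns a different, smaller struct.
    func (a *ActionAPIv7) EnqueueOperation(args Actions) (EnqueuedActions, error) {
        return EnqueuedActions{}, nil
    }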
[03:11] <wallyworld> tlm: modulo seeing the revised code, i still think we can wrap the informer event handler as a notify watcher, and use a std juju worker to update the uuid->app mapping, but i could be wrong also
[03:13] <tlm> perhaps, I can explain it over HO when you are free if you would like
[03:23] <wallyworld> tlm: coffee time, give me a few minutes
[03:24] <tlm> no rush
[03:36] <wallyworld> tlm: free now?
[03:37] <tlm> roger
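(A minimal sketch of the shape wallyworld is proposing above, assuming client-go circa 2020 and standing in for juju's real watcher/worker types with invented ones: the informer callbacks collapse into a notify channel, and a single worker goroutine owns the uuid->app map.)

    package sketch

    import (
        "sync"

        "k8s.io/client-go/tools/cache"
    )

    // mappingWorker owns the uuid->app mapping; only its loop goroutine
    // drives updates, per the juju worker convention discussed above.
    type mappingWorker struct {
        informer  cache.SharedIndexInformer
        changes   chan struct{} // buffered with cap 1 so signals coalesce
        done      chan struct{}
        mu        sync.Mutex
        uuidToApp map[string]string
    }

    func (w *mappingWorker) loop() {
        // Collapse every informer event into a bare notification, the
        // same shape as a juju NotifyWatcher channel.
        w.informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc:    func(obj interface{}) { w.notify() },
            UpdateFunc: func(oldObj, newObj interface{}) { w.notify() },
            DeleteFunc: func(obj interface{}) { w.notify() },
        })
        for {
            select {
            case <-w.done:
                return
            case <-w.changes:
                w.resync() // rebuild the uuid->app mapping from the store
            }
        }
    }

    // notify never blocks: with a cap-1 channel, one pending signal is
    // enough to trigger a resync, so bursts of events are coalesced.
    func (w *mappingWorker) notify() {
        select {
        case w.changes <- struct{}{}:
        default:
        }
    }

    func (w *mappingWorker) resync() {
        w.mu.Lock()
        defer w.mu.Unlock()
        // read w.informer.GetStore() and update w.uuidToApp here
    }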
[03:54] <wallyworld> thumper: can you remind me - the isResponsible flag - does that ensure we only have one of those workers it wraps per controller?
[03:54] <wallyworld> or babbageclunk?
[04:07] <babbageclunk> wallyworld: ask again?
[04:07] <wallyworld> babbageclunk: ty for review btw :-)
[04:07] <wallyworld> just checking
[04:07] <babbageclunk> hang on, reminding myself...
[04:08] <wallyworld> i think if we pass controller agent tag (or machine tag of controller) as the claimant we just get one of that worker that is wrapped
[04:08] <wallyworld> just want to confirm
[04:09] <babbageclunk> yeah, that's right - it tries to claim the lease for that entity and if it fails the flag is false, so any downstream workers won't run
[04:10] <wallyworld> babbageclunk: awesome ty
[04:10] <wallyworld> we want one k8s client cache per controller
[04:10] <wallyworld> the worker maintains it and different model workers use it
[04:11] <wallyworld> the isresponsible worker
[04:11] <babbageclunk> yeah, sounds like the isResponsible decorator is what you want
[04:12] <tlm> thanks, will try that for wiring this up
[04:14] <babbageclunk> wallyworld: oh, hang on - isResponsible is for 1 worker per model across n controllers, the machine-agent-level flag is the ifPrimaryController one
[04:15] <wallyworld> we want one per controller
[04:15] <babbageclunk> but the model level workers might be running in a controller that isn't the primary controller agent, not sure whether that's a problem
[04:15] <wallyworld> i think we can set the claimant flag to the controller agent though right?
[04:16] <wallyworld> i think isresponsible is what we want, can always test to be sure
[04:16] <babbageclunk> jump in a hangout? I think we might be talking at cross-purposes
[04:16] <wallyworld> ok
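(A rough sketch of the flag mechanism babbageclunk describes, with invented names, since the real code sits behind juju's dependency engine: the worker claims a lease for the claimant entity, and the success of that claim becomes a boolean flag gating downstream workers.)

    package sketch

    import "time"

    // Claimer stands in for juju's lease manager.
    type Claimer interface {
        Claim(leaseName, holderName string, duration time.Duration) error
    }

    // flagWorker claims a singular lease on behalf of a claimant such as
    // the controller agent tag. Check reports whether the claim held;
    // manifolds gated on a false flag are not started, which is what
    // guarantees at most one live copy of the wrapped worker.
    type flagWorker struct {
        valid bool
    }

    func newFlagWorker(c Claimer, lease, claimant string) *flagWorker {
        err := c.Claim(lease, claimant, time.Minute)
        // A failed claim (someone else holds the lease) means "not
        // responsible", not a fatal error.
        return &flagWorker{valid: err == nil}
    }

    func (f *flagWorker) Check() bool { return f.valid }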
=== parlos_afk is now known as parlos
[07:50] <wallyworld> stickupkid: hey, 2 things, i'd love a review on a totally mechanical PR to move some unused/deprecated code out of the way https://github.com/juju/juju/pull/11191
[07:50] <wallyworld> also, https://bugs.launchpad.net/juju/+bug/1856832
[07:50] <mup> Bug #1856832: neutron-openvswitch charm lxc profile not getting set with correct configuration <juju:Triaged> <https://launchpad.net/bugs/1856832>
[07:51] <wallyworld> i ran a test of a single unit and it worked but then the reporter said it happens when deploying a bundle
[07:51] <wallyworld> so seems like it could be some sort of race
[07:51] <wallyworld> but win o'clock here so ran out of time to dig further
[07:51] <wallyworld> *wine
[07:51] <stickupkid> haha
[07:51] <stickupkid> win sounds better
[07:52] <wallyworld> no, wine :-)
[07:52] <stickupkid> i hate wine
[07:52] <stickupkid> i'll have a look once I'm up and running, taking dogs for a walk first
[07:55] <wallyworld> no worries, i got a dinner guest here so got to go afk
[09:19] <stickupkid> manadart, I'm picking up a bug today, I'll catch up on the openstack cidr issue after that
[09:20] <manadart> stickupkid: Ack. I am working on the consuming side of my last patch. I might look at yours with a mind to testing if I get through it today.
[09:20] <stickupkid> manadart, yeah, go for it
[09:28] <flxfoo> Hi all
[09:29] <stickupkid> flxfoo, hi
[09:29] <flxfoo> I have a little issue with `mysql-shared` on a percona-cluster (x3)... a webserver charm is not doing what it is supposed to do
[09:29] <flxfoo> the status on mysql-shared is joining
[09:30] <flxfoo> the webserver is connecting to the percona charm and sends data, but then waits for an answer
[09:30] <flxfoo> not sure where to look
[09:30] <flxfoo> ideas are welcome
[09:31] <stickupkid> flxfoo, this might help https://discourse.jujucharms.com/t/debugging-charm-hooks/1116
[09:31] <stickupkid> flxfoo, failing that, I would log on to the charm machine/container and ensure it can access the outside
[09:32] <stickupkid> flxfoo, or check if the relation is up correctly
[09:32] <flxfoo> stickupkid: thanks, will give it a read
[09:32] <flxfoo> stickupkid: thing is we use the exact same charm on the dev platform, which is working fine
[09:32] <flxfoo> on production the difference is that there's more than 1 cluster member :)
[09:33] <flxfoo> and they communicate through the private network without issue
[09:33] <flxfoo> everybody is happy except the charm relation :)
[09:40] <flxfoo> found that in the debug log: host is not in access-network xxxx ignoring
[09:41] <flxfoo> the ips are on different subnets
[09:44] <flxfoo> stickupkid: do you know how I would change the interface they communicate over for that relation? (if you happen to know)
[09:44] <stickupkid> flxfoo, I don't unfortunately
[09:50] <stickupkid> flxfoo, if you want better exposure maybe drop a new topic on https://discourse.jujucharms.com/
[09:54] <flxfoo> stickupkid: ok thanks
[09:55] <flxfoo> little quick one: how can I see which relations are available in a charm?
[09:55] <flxfoo> stickupkid: I think my issue is that the two leaders are not communicating on the same subnet...
[09:56] <stickupkid> flxfoo, juju status --relations
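(For anyone who hits the same "host is not in access-network ... ignoring" message later: it appears to come from the percona-cluster charm's access-network config option, so a plausible fix, untested here, is to point that option at the subnet the clients actually share, e.g. `juju config percona-cluster access-network=<cidr>`.)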
[10:26] <achilleasa> is it possible to access a unit name from its tag?
[10:27] <achilleasa> .Id() seems to apply a transformation
[10:31] <stickupkid> i thought Id didn't do that
[10:40] <achilleasa> stickupkid: for units it replaces last '-' with '/'
[10:40] <achilleasa> no prob; just have to pass the name as an extra arg
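(A tiny sketch of the transformation achilleasa means, using the juju names package; the import path varies by version.)

    package main

    import (
        "fmt"

        "gopkg.in/juju/names.v3"
    )

    func main() {
        tag := names.NewUnitTag("mysql/0")
        fmt.Println(tag.String()) // "unit-mysql-0" (the serialized tag form)
        fmt.Println(tag.Id())     // "mysql/0" (Id() restores the '/')
    }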
[10:51] <nammn_de> manadart: rick_h opened a bug where one should be able to use the spaceID in `show-space`. While at it I wasn't sure what's the best way to solve this from an API param sense.
[10:51] <nammn_de> On the cmd part I could decide between sending an int or a tag. On the apiserver part I could use that information to either search by id or by name. Wdyt?
[10:54] <nammn_de> Or we can just stick with the entity Tag, and in case the tag does not work out on the apiserver, try to search by id
[10:59] <achilleasa> stickupkid: turns out that for the api call I am interested in 'Unit' means 'Tag' :D
[11:03] <stickupkid> why doesn't this create three containers in lxd? it only does one: juju deploy cs:~juju-qa/bionic/lxd-profile-without-devices-5 --to lxd -n 3
[11:12] <stickupkid> this does the same thing... juju add-unit lxd-profile -n 3 --to lxd
[11:12] <stickupkid> annoying
[11:29] <nammn_de> manadart: I thought something along this line https://github.com/juju/juju/pull/11195/files
[11:37] <manadart> nammn_de: A space ID is actually a valid space name, so you can drop the extra DTOs and keep it as `params.Entities`.
[11:38] <nammn_de> manadart: ah, yes. But I would need to parse it on the apiserver to check whether I call byName or byID
[11:39] <manadart> nammn_de: This also means that testing for an integer is not enough. We'll just have to query both ID and name. Unique=return, Multiple=error with instructions to disambiguate, Nothing=not found.
[11:40] <nammn_de> manadart: rgr
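(A sketch of the rule manadart states above, with invented helper signatures: since any space ID is also a syntactically valid space name, try the argument both ways, return a unique hit, error on ambiguity, and report not-found otherwise.)

    package sketch

    import "github.com/juju/errors"

    type Space struct{ ID, Name string }

    func lookupSpace(arg string, byID, byName func(string) (*Space, error)) (*Space, error) {
        var matches []*Space
        if sp, err := byID(arg); err == nil {
            matches = append(matches, sp)
        }
        // Skip the name match if it resolved to the same space as the ID match.
        if sp, err := byName(arg); err == nil && (len(matches) == 0 || matches[0].ID != sp.ID) {
            matches = append(matches, sp)
        }
        switch len(matches) {
        case 1:
            return matches[0], nil // unique: return it
        case 0:
            return nil, errors.NotFoundf("space %q", arg)
        default:
            return nil, errors.Errorf("%q matches both a space ID and a space name; specify which", arg)
        }
    }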
[11:45] <achilleasa> looking for someone to pair with me in deciphering the way that errors are handled when flushing the hook ctx
[11:50] <stickupkid> wallyworld, I'm struggling to get a reproducer for 1856832
[13:07] <manadart> stickupkid: https://github.com/juju/juju/pull/11194
[13:07] <rick_h> morning
[13:08] <manadart> Morning rick_h.
[13:09] <stickupkid> manadart, i'll look into this
[13:12] <rick_h> stickupkid: what about if you did --to lxd,lxd,lxd ?
[13:12] <rick_h> stickupkid: I think the thing is that it reads the list of --to thinking it'll be like --to 0,1,2
[13:12] <rick_h> so it treats each target as one at a time
[13:12] <stickupkid> rick_h, it may well be, but it's very strange
[13:13] <rick_h> stickupkid: I understand it is in that case, but with -n3 --to=0 you wouldn't want three of them on the same machine either
[13:13] <stickupkid> rick_h, it's not intuitive
[13:13] <rick_h> yea, for the lxd case it's not, agreed
[14:28] <stickupkid> hml, this is interesting esp. because it's on the openvswitch 3, which is causing the problems https://paste.ubuntu.com/p/qhbzKTNNzz/
[15:05] <nammn_de> rick_h: in for a cr on column ordering? https://github.com/juju/juju/pull/11193
[15:05] <rick_h> nammn_de: rgr, will do
[15:05] <nammn_de> manadart: cr on ID/Name search for show-space? https://github.com/juju/juju/pull/11195
[15:06] <nammn_de> stickupkid: opened the LP bug https://bugs.launchpad.net/juju/+bug/1862376
[15:06] <mup> Bug #1862376: tests: status output branches output -> race condition <juju:New> <https://launchpad.net/bugs/1862376>
[15:30] <stickupkid> nammn_de, nice
[15:36] <hml> stickupkid: looking at the pastebin now.
[15:39] <nammn_de> manadart: I remember now why I didn't export ConstraintsBySpaceName and used tag conversion instead.
[15:39] <nammn_de> The reason was that the DTO `constraintsWithID` is not exported. Should I just export that type instead?
[15:39] <hml> stickupkid: ln 28 looks like a side effect of the neutron-openvswitch unit not coming up correctly with the profile changes
[15:41] <manadart> nammn_de: Let me take a look.
[15:42] <manadart> nammn_de: BTW, can you get a fix up for this? https://pastebin.canonical.com/p/qxsDM3hwZ9/
[15:47] <stickupkid> hml, exactly
[15:47] <stickupkid> i'll report back
[15:48] <hml> stickupkid: there are other errors too in there, not just the 1 unit
[15:49] <hml> stickupkid: is this a MAAS setup?
[15:50] <stickupkid> hml, quick ho?
[15:50] <hml> stickupkid: sure
[16:04] <nammn_de> manadart: will do!
[16:14] <nammn_de> manadart: fixed the test by ensuring ordering with sorting. I think that was the problem
[16:15] <nammn_de> Did you take a look at the constraints thing? I should have applied the rest. Not 100% sure about your last comment though, added a comment.
[16:45] <stickupkid> manadart, what's wrong with #11194? it's not building with github correctly
[16:45] <mup> Bug #11194: auto-fsck after 30 boots can be problematic on a laptop <laptop-mode (Ubuntu):Fix Released by thombot> <https://launchpad.net/bugs/11194>
[16:46] <stickupkid> bot fail!
[16:47] <manadart> Not sure. Will look Monday.
[16:47] <stickupkid> just close it and reopen the PR
=== arif-ali_ is now known as arif-ali