UbuntuIRC / 2020/09/16 / #juju.txt
[01:36] <thumper> Congratulations on the point release
[01:49] <pmatulis> o/ thumper
[01:55] <jam> hi thumper, good to see you around
[02:29] <thumper> I'm still lurking
[02:29] <thumper> pmatulis: been dragged back to juju?
[08:48] <flxfoo> hi wallyworld
[08:49] <flxfoo> to let you know, the RPC problem with pylibjuju (2.8.0) is linked to max_frame_size; the default value is 4194304, and doubling it yesterday removed the issue.
[08:50] <flxfoo> but the issue is back this morning; to remove it again I needed to add 496960 more.
[08:50] <flxfoo> I am trying to see how fast that is growing...
[08:50] <flxfoo> any idea what would grow?
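
For context, a minimal sketch of the workaround flxfoo describes, passing a larger max_frame_size when connecting with python-libjuju; the model name is a placeholder, and it assumes a libjuju release where Model.connect() accepts the max_frame_size keyword:

    # Sketch: raise the websocket frame limit above the 4194304-byte default,
    # as a workaround when the initial model payload exceeds that size.
    import asyncio
    from juju.model import Model

    async def main():
        model = Model()
        # "mymodel" is a placeholder model name; max_frame_size is assumed to be
        # passed through to the underlying websocket connection.
        await model.connect(model_name="mymodel", max_frame_size=2 * 4194304)
        try:
            print("connected; applications:", list(model.applications))
        finally:
            await model.disconnect()

    asyncio.run(main())
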
[09:10] <flxfoo> petevg: hi there, I did open this RPC post/ticket
[09:11] <flxfoo> petevg: as ctrl is 2.8.0, would you suggest migrating to 2.8.2 or newer?
[09:12] <flxfoo> wallyworld: could the model-related fixes that you pointed out yesterday be involved in this RPC "growing" frame? if yes, it sounds like I need to move to v2.8.2 or newer?
[09:41] <wallyworld> flxfoo: from what little i've seen of your issue, it does appear something is bloating the model, whether due to incomplete cleanup or something else. without a lot more info, it will be almost impossible to fully diagnose. but > 4M for get_model() seems extreme. migrating to a 2.8.2 controller might be a good option to try, but if there are orphaned entries or issues with the current model it may fail. but this is all a guess as
[09:41] <wallyworld> there's not enough to go on. i haven't got the pylibjuju code in front of me to see what get_model() is doing internally and exactly what it is querying. it's well into my evening so i won't get to look tonight
[09:59] <flxfoo> wallyworld: no worries, I do not recall all the details of what happened with this model; for sure add/remove model (with the same name) and then apps and units (sometimes with the same name for apps).
[10:00] <flxfoo> It sounds like that model is not in good shape, and it might be beneficial to create something fresh if possible.
[10:00] <flxfoo> in any case it might be beneficial to find out why this is happening...
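
A rough way to check whether the model really looks bloated is to compare the entity counts the controller reports against what is expected; a minimal python-libjuju sketch using get_status(), assuming the client can still connect to the affected model at all:

    # Sketch: count apps/units/machines from full status; numbers far above the
    # expected counts would point at leftover or orphaned entries.
    import asyncio
    from juju.model import Model

    async def main():
        model = Model()
        await model.connect()  # current model
        try:
            status = await model.get_status()
            units = sum(len(app.units or {}) for app in status.applications.values())
            print("apps:", len(status.applications),
                  "units:", units,
                  "machines:", len(status.machines))
        finally:
            await model.disconnect()

    asyncio.run(main())
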
[10:00] <stickupkid> manadart, I've updated the pylibjuju PR to add a test
[10:01] <manadart> stickupkid: Nice. Looks good.
[10:03] <flxfoo> a little info I gathered today: the first error related to mgo-txn-resumer started at the end of june... after, I think, removing and recreating the model.
[10:03] <flxfoo> bb in a few minutes
[10:03] <stickupkid> flxfoo, are you able to share what type of model you have, machine/unit/app count?
[10:04] <flxfoo> stickupkid: what type of data would you like?
[10:04] <flxfoo> charms?
[10:04] <stickupkid> flxfoo, well, I've just been testing with pylibjuju and not hit this, so I'd need some more steps to try and recreate it if possible
[10:05] <stickupkid> flxfoo, any sort of repro steps would be great
[10:05] <flxfoo> stickupkid: there are 11 machines
[10:05] <flxfoo> 13 units
[10:05] <flxfoo> 7 apps
[10:05] <flxfoo> app/charms are
[10:05] <flxfoo> memcached
[10:05] <flxfoo> percona cluster
[10:05] <flxfoo> openjdk
[10:06] <flxfoo> apache-solr
[10:06] <flxfoo> and a "homemade" charm, which is more or less an nginx
[10:06] <flxfoo> sorry need to move
[10:06] <flxfoo> bb in a few minutes
[10:06] <stickupkid> sure, nps
[10:06] <stickupkid> flxfoo, might be worth a bug so we can track it
[10:07] <stickupkid> https://bugs.launchpad.net/juju/+bugs
[10:07] <Chipaca> what could cause config-changed to run over and over?
[10:10] <stickupkid> flxfoo, essentially when we connect, we get all the data via an all watcher (it provides model information into a pylibjuju cache) and the model info. It doesn't look like model info has changed in some time, but we may have changed how much we send for the all watcher...
[10:12] <stickupkid> let me see what deploying kubernetes does, always a good test
[10:15] <stickupkid> so after `juju deploy cs:bundle/canonical-kubernetes-954` it seems ok to connect; I'll leave it running to see if that is the cause
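
Since the bulk of the initial payload comes from the all watcher stickupkid describes above, one way to see what is actually streamed is to read the first batch of deltas directly; a sketch based on the AllWatcherFacade pattern from the python-libjuju examples, assuming the delta objects expose an entity attribute:

    # Sketch: histogram the first AllWatcher delta batch by entity kind, to see
    # what dominates the initial payload for a suspect model.
    import asyncio
    from collections import Counter
    from juju.client import client
    from juju.model import Model

    async def main():
        model = Model()
        await model.connect()  # current model
        try:
            allwatcher = client.AllWatcherFacade.from_connection(model.connection())
            change = await allwatcher.Next()  # first batch of deltas
            print(Counter(delta.entity for delta in change.deltas))
        finally:
            await model.disconnect()

    asyncio.run(main())
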
[10:32] <flxfoo> stickupkid: something I do remember is that I needed to test some charm changes, so I might have done a few upgrade-charm runs with --force-units for sure
[10:32] <flxfoo> and some changes were causing breakage that would be fixed in a next iteration... etc...
[12:55] <jam> Chipaca, other than the hook exiting with error causing juju to retry config-changed?
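
One way to confirm that the repeats come from juju retrying a failed hook, as jam suggests, is to switch off automatic hook retries while debugging so the unit stays in error state; a sketch via python-libjuju's set_config, assuming the standard automatically-retry-hooks model key:

    # Sketch: disable automatic hook retries so a failing config-changed hook
    # surfaces as an error instead of being re-run repeatedly.
    import asyncio
    from juju.model import Model

    async def main():
        model = Model()
        await model.connect()  # current model
        try:
            await model.set_config({"automatically-retry-hooks": "false"})
        finally:
            await model.disconnect()

    asyncio.run(main())
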
[12:58] <stickupkid> anyone seen this failure before? FAIL: apiaddressupdater_test.go:107: APIAddressUpdaterSuite.TestAddressChange
[14:06] <stickupkid> hml, https://github.com/juju/juju/pull/12003
[14:06] <stickupkid> hml, handle the error list correctly for info responses
[14:11] <hml> stickupkid: looking
[14:13] <achilleasa> stickupkid: you bumped the application facade to v13 (add CharmOrigin) on develop, right?
[14:14] <stickupkid> achilleasa, yeap
[14:14] <achilleasa> nice :-) less work for me then!
[14:19] <stickupkid> hml, I'm still getting empty metadata though
[14:20] <stickupkid> `{"channel-map":[],"default-release":{},"id":"K64RpNGzMfoSYHLhbovbizXDwueZzQFZ","name":"verterok-apache2","result":{},"type":"charm"}`
[14:23] <hml> stickupkid: added a comment on the PR - checking a few things out.
[14:57] <stickupkid> hml, https://github.com/juju/juju/pull/12004
[14:57] <hml> stickupkid: will look at it after lunch
[14:58] <stickupkid> hml, sure, I'm unsure if we should obliterate "upgrade-charm" from the repo and use "refresh" instead
[14:58] <stickupkid> hml, I'm open to options
[14:59] <hml> k
[15:37] <achilleasa> hml: can you take a look at https://github.com/juju/juju/pull/12005 ?
[15:38] <hml> achilleasa: ack, will add to my queue
[15:38] <hml> for today
[15:38] <achilleasa> hml: not in a hurry
[15:48] <hml> stickupkid: found where Update() is used… and it shouldn’t be. there are 2 different api calls to update a charm config depending on how the change is done. :-(. https://github.com/juju/juju/blob/9a321b67d6413169caffb445399ff8a3a50f3ec8/cmd/juju/application/config.go#L416
[15:49] <hml> should be fixable
[16:04] <stickupkid> hml, fixed https://github.com/juju/juju/pull/12003
[16:05] <hml> stickupkid: HO?
[16:05] <stickupkid> sure
[16:16] <achilleasa> jam: the output of 'juju diff-bundle' looks suspiciously like a bundle even though the cli uses its own internal structures for marshaling the diff into yaml. Is the intent to be able to 'juju deploy' the diff and have it work?
[16:19] <jam> achilleasa, I don't think so (I haven't heard of requests to do something along those lines). bundles are meant to be self-consistent (eg, not refer to applications that they aren't deploying)
[16:20] <achilleasa> great, that means I don't need to mess with overlays for adding the exposed endpoints to diff-bundle (I guess there is always the export-bundle command if you want to capture the model state in a deployable way)
[22:53] <hpidcock> wallyworld: https://github.com/juju/worker/pull/14
[22:53] <wallyworld> looking
[22:58] <wallyworld> hpidcock: lgtm with a question
[22:58] <ec0> @jam, we had discussed this many moon ago I think, I'm not sure if it ever made it to the roadmap, but being able to have a bundle that creates relations to services not in the bundle allows for easy deployment of things like logging & monitoring tools, for example
[22:58] <ec0> s/moon/moons/
[22:59] <hpidcock> wallyworld: added a response to your question
[23:00] <ec0> for example, if you deploy OpenStack with a supported bundle and then want to deploy the LMA stack (Nagios, Graylog, Grafana, etc), a lot of those services rely on relations to various OpenStack services. not having to duplicate those application definitions in the bundle would be nice, but given the way bundles are handled now, you thankfully can usually just duplicate the application names and have the
[23:00] <ec0> relations work, and the existing deployed apps aren't touched beyond relating to them
=== ec0[m] is now known as ec0