UbuntuIRC / 2020/03/26 / #juju.txt
[01:30] <wallyworld> hpidcock: https://github.com/juju/juju/pull/11360
[01:39] <wallyworld> babbageclunk: lgtm, ty
[01:39] <babbageclunk> wallyworld: thanks!
[02:45] <hpidcock> wallyworld: oops sorry, was lying down with a headache
[02:45] <wallyworld> all good, hope you're ok
[03:45] <tlm> does anyone know why we start certificates 7 days in the past and not just 5 minutes?
[03:45] * wallyworld shrugs
[03:46] <tlm> ¯\_(ツ)_/¯
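(For context: backdating a certificate's NotBefore is a common way to tolerate clock skew between the machine that issues the certificate and the machines that validate it; a validator whose clock runs slightly behind would otherwise reject a just-issued certificate as "not yet valid". A minimal Go sketch of the idea; the 7-day window mirrors the value tlm mentions, but everything else here is illustrative, not Juju's actual code:)

    package certs

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "time"
    )

    // newTemplate backdates NotBefore so that a validator with a
    // slightly slow clock still sees the certificate as already valid.
    func newTemplate(now time.Time) *x509.Certificate {
        return &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "juju-generated"},
            NotBefore:    now.Add(-7 * 24 * time.Hour), // generous skew allowance
            NotAfter:     now.Add(365 * 24 * time.Hour),
        }
    }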
[06:36] <kelvinliu> wallyworld: these two PRs add storage support for non-stateful apps: https://github.com/juju/charm/pull/308 https://github.com/juju/juju/pull/11362 could you take a look?
[07:44] <parlos> Good Morning Juju!
[09:05] <stickupkid> manadart, got this for CR https://github.com/juju/juju/pull/11355
[09:05] <achilleasa> stickupkid: can you do a quick sanity check on a 2.7 -> dev merge PR https://github.com/juju/juju/pull/11363?
[09:05] <manadart> stickupkid: Yep; will look.
[09:06] <stickupkid> achilleasa, done
=== parlos is now known as parlos_afk
[09:14] <manadart> stickupkid: https://github.com/juju/juju/pull/11364
=== parlos_afk is now known as parlos
[09:34] <stickupkid> manadart, https://media.giphy.com/media/OCgTKYSVnf7iM/giphy.gif
[09:35] <stickupkid> manadart, i have no idea what i was trying to say "Subnet defines the nes" - nes what? nintendo nes? what was I doing?
[09:44] <stickupkid> manadart, quick ho?
[09:44] <manadart> stickupkid: OMW.
=== parlos is now known as parlos_afk
[10:23] <achilleasa> jam: I have replied to your comments in 11341 and pushed two additional commits (the second deals with that odd c-tor for the relation resolvers); can you take another look?
=== parlos_afk is now known as parlos
[10:37] <stickupkid> manadart, this looks wrong to me, I would expect it to just check for != 1, since you want to know why it wasn't removed https://github.com/juju/juju/blob/develop/api/spaces/spaces.go#L196-L198
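(A sketch of the stricter check stickupkid is describing, with hypothetical types rather than the real api/spaces client: a bulk call made for exactly one space should yield exactly one result, and any other count should be surfaced as an error instead of being ignored:)

    package spaces

    import "fmt"

    type errorResult struct {
        err error
    }

    // oneError returns the single result's error, complaining if the
    // server returned anything other than exactly one result.
    func oneError(results []errorResult) error {
        if len(results) != 1 {
            return fmt.Errorf("expected 1 result, got %d", len(results))
        }
        return results[0].err
    }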
[10:37] <flxfoo> hi all
[10:37] <flxfoo> It is me again
[10:37] <flxfoo> :)
[10:38] <manadart> stickupkid: Yes.
[10:38] <stickupkid> flxfoo, hi o/
[10:39] <flxfoo> So while trying `juju add-machine` (with constraints; that part works fine)... two things: first, juju shows `11 pending pending bionic` and then `instance "31c020be-c4b9-440d-aaf0-d2c1b33ea218" has status BUILD, wait 10 seconds before retry, attempt 8`, and it just loops until I do a `juju remove-machine X --force`
[10:40] <flxfoo> then after a little delay the list of instances is empty... on the Rackspace side, though, I have a list of 11 instances with the same name (different IDs), all listed as building (90% at most); after some time the instances start to appear as Ready
[10:41] <flxfoo> So I suspect something is wrong with juju; maybe it does not receive the right "return", so it keeps calling for server instantiation...
[10:42] <flxfoo> stickupkid: hi :)
[10:43] <stickupkid> flxfoo, so this comes from https://github.com/juju/juju/blob/develop/provider/openstack/provider.go#L1240
[10:44] <stickupkid> flxfoo, we're trying to provision a machine, but we're not getting back one that meets the constraints OR the provider doesn't have enough resources for the machine to be created
[10:45] <stickupkid> flxfoo, I've seen this locally when testing with multipass+microstack and it was the latter case for me
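(The behaviour flxfoo is hitting, in outline: a bounded poll on instance status. This is a sketch with a hypothetical client interface; the real logic lives around the line of provider/openstack/provider.go linked above:)

    package provision

    import (
        "fmt"
        "time"
    )

    type serverClient interface {
        ServerStatus(id string) (string, error)
    }

    // waitForActive polls until the instance leaves BUILD, giving up
    // after a fixed number of attempts instead of looping forever.
    func waitForActive(c serverClient, id string, attempts int, delay time.Duration) error {
        for i := 1; i <= attempts; i++ {
            status, err := c.ServerStatus(id)
            if err != nil {
                return err
            }
            if status == "ACTIVE" {
                return nil
            }
            // Still BUILD (or another transient state): wait and retry.
            time.Sleep(delay)
        }
        return fmt.Errorf("instance %q still not active after %d attempts", id, attempts)
    }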
[10:49] <flxfoo> stickupkid: this is the only place right?
[10:49] <stickupkid> flxfoo, let me check
[10:51] <flxfoo> stickupkid: on the Rackspace side (as I said) each instance has the same name but a different ID (which makes sense), but juju always retries with the same ID... why do I have several instances created on Rackspace though?
[10:51] <stickupkid> flxfoo, that's correct https://paste.ubuntu.com/p/pk76srRk37/
[10:52] <stickupkid> flxfoo, that I don't know
[10:53] <stickupkid> flxfoo, if you think you've got a reproducer then I would create a bug - https://bugs.launchpad.net/juju/+bugs
=== parlos is now known as parlos_afk
[10:53] <flxfoo> stickupkid: no idea yet, just trying to put things together
[10:55] <flxfoo> stickupkid: do you know if explicitly passing `-n 1` would make a difference?
=== parlos_afk is now known as parlos
[10:56] <stickupkid> flxfoo, tbh, I wouldn't know personally
[11:04] <flxfoo> stickupkid: Would you know how a single `add-machine` could end up in an endless server-creation loop?
[11:04] <stickupkid> flxfoo, same two cases: either we're not getting back a machine that meets the constraints, OR the provider doesn't have enough resources for the machine to be created
=== parlos is now known as parlos_afk
=== parlos_afk is now known as parlos
[11:50] <stickupkid> manadart, haha, there are so many issues in remove space, trying to work it out now
[11:50] <stickupkid> in the cmd/remove.go
=== parlos is now known as parlos_afk
[12:13] <stickupkid> anyone know how to test that something was written to a cmd/ctx log?
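(One answer to stickupkid's question, sketched with the standard library; juju's own tests do the same thing through helper packages. Give the command buffers for its output streams, run it, then assert on what was captured. The command and message below are hypothetical:)

    package removecmd

    import (
        "bytes"
        "testing"
    )

    // runCommand stands in for any command that writes progress
    // messages to its context's stderr stream.
    func runCommand(stdout, stderr *bytes.Buffer) error {
        stderr.WriteString("removing space foo\n")
        return nil
    }

    func TestWritesToLog(t *testing.T) {
        var stdout, stderr bytes.Buffer
        if err := runCommand(&stdout, &stderr); err != nil {
            t.Fatal(err)
        }
        if got := stderr.String(); got != "removing space foo\n" {
            t.Fatalf("unexpected stderr: %q", got)
        }
    }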
[12:40] <stickupkid> manadart, that was a pig
[12:40] <stickupkid> https://github.com/juju/juju/pull/11365
=== parlos_afk is now known as parlos
=== parlos is now known as parlos_afk
[14:23] <manadart> achilleasa: 11356 looks good here, but is still marked as a draft.
[14:51] <achilleasa> manadart: marked it as a draft because it needs a rebase/force-push once the relation-created one lands
[14:51] <manadart> achilleasa: OK. I've approved it.
[14:52] <achilleasa> manadart: tyvm
=== parlos_afk is now known as parlos
=== parlos is now known as parlos_afk
=== parlos_afk is now known as parlos
[15:58] <achilleasa> is there a juju-idiomatic way to access controller config options inside state? I was thinking of having the facade fetch them and pass them as arguments to the state method I am working on, but I am wondering whether we use a different pattern
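(The pattern achilleasa describes, sketched with hypothetical names: the facade is the layer that reads controller config, and the state method receives only the values it needs as plain arguments, so state itself stays free of config lookups:)

    package facade

    type controllerConfig struct {
        maxCharmStateSize int
    }

    type backend interface {
        ControllerConfig() (controllerConfig, error)
        SetCharmState(data map[string]string, maxSize int) error
    }

    type Facade struct {
        backend backend
    }

    // SetCharmState fetches controller config at the facade layer and
    // hands the state method just the limit it needs as an argument.
    func (f *Facade) SetCharmState(data map[string]string) error {
        cfg, err := f.backend.ControllerConfig()
        if err != nil {
            return err
        }
        return f.backend.SetCharmState(data, cfg.maxCharmStateSize)
    }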
=== parlos is now known as parlos_afk
=== parlos_afk is now known as parlos
=== parlos is now known as parlos_afk
=== parlos_afk is now known as parlos
[17:39] <achilleasa> rick_h_: do you think 640k for charm data and 512k for uniter data are reasonable defaults for the quota limits? 640k ought to be enough for everyone, right? ;-)
[17:40] <achilleasa> note that the operator will still be able to set the limits to 0 and bypass the quota checks if they are feeling yolo
[17:41] <rick_h_> achilleasa: hmmm, ideally folks will never know/hit this. I'd prefer to start with bigger defaults like 512k for juju and 1 or 2M for units?
[17:44] <achilleasa> rick_h_: OK, though 2M seems too generous to me. The absolute max (combined) is 16M
[17:45] <rick_h_> achilleasa: right, personally I'd almost default to the max. I mean we store charms with 1GB of resources/etc
[17:45] <rick_h_> achilleasa: it seems silly to make folks ever open the faucet more, and only close it if they hit issues/care
[17:46] <achilleasa> rick_h_: but those end up in blob store, right?
[17:46] <rick_h_> achilleasa: but I can go with some limit to start, I guess. We know most controllers in the wild don't have > 3 models, and most models are 3-5 applications
[17:46] <rick_h_> achilleasa: yes, this is true
[17:46] <rick_h_> achilleasa: but it's on the same disk as the rest of mongodb
[17:47] <achilleasa> rick_h_: we can also do a 14M/1.5M split as the default and allow operators to fine-tune it if they need to
[17:48] <achilleasa> my concern is basically that if we set it too high, charm authors will end up abusing it
[17:48] <achilleasa> to store binary blobs or something
[17:48] <achilleasa> (logs)
[17:48] <rick_h_> achilleasa: yea, that's why I'm ok with going roomy, but not all the way. I feel like 512k + 2M means we're talking 2.5MB per unit on the machine MAX. Realistically not all your charms will be abusers.
[17:49] <rick_h_> achilleasa: but we definitely need a metric about size of this collection so we can add it to graphana and if there's a disk usage rise track it directly to this change
[17:49] <rick_h_> grafana doh
=== parlos is now known as parlos_afk
=== parlos_afk is now known as parlos
[19:37] * babbageclunk waves
[19:38] <ventura> are you using IRC for project development and discord for app usage? or migrating everything to discourse?
[19:45] <rick_h_> ventura: irc for real time chat but discourse for async/news/published details
[19:49] <rick_h_> the simplest charm is ubuntu, heh; it just gets a machine and sets up the charm environment
[19:55] <ventura> rick_h_: turning feature flags off/on in the backend for mobile clients
[19:58] <ventura> if it were possible to keep a git repo with the config changes, juju would let you easily git-revert bad flags, with the benefit of always keeping track of changes
[20:00] <flxfoo> hi again, :)
[20:00] <rick_h_> ventura: that does some work but as Juju is more than configuration management there's a lot more to "changes" than flipping config flags.
[20:00] <rick_h_> ventura: there's resources, or binaries provided to run/use. There's actions, that trigger administrative functions like adding users, backing up db, etc. There's relations, that instruct application to pass details about themselves back and forth.
[20:01] <rick_h_> ventura: a lot more moving parts and "live system" than can be easily git commit/rollback
[20:02] <flxfoo> so I can confirm that when a machine is created (juju add-unit / add-machine) the process on the provider finishes (it takes time, but it finishes)... except that there is more than one machine created with the same name (different IDs)...
[20:03] <flxfoo> I had to do a `juju remove-machine XX --force`, because after 11 instances... it is too much :)
[20:03] <flxfoo> I think there is something here as well, between Rackspace needing time to allocate resources and the frequency at which a new server gets spawned...
[20:04] <ventura> rick_h_: i mean "a simple charm that shows something to my manager so he allows using juju" :-)
[20:04] <ventura> TL;DR: we lost all machine configs during Black Friday due to the Bolsonaro Bug (i.e. the crazy daylight-saving-time changes in Brazil)
[20:04] <flxfoo> I don't know if I could set a delay like 30 minutes instead of 10s
[21:27] <flxfoo> Anyone know why calling a single `juju add-unit`/`add-machine` would end up creating multiple instances with the same name (different IDs)?
[21:29] <thumper> flxfoo: I'm not sure I understand what you mean
[21:29] <thumper> wallyworld: https://github.com/juju/lumberjack/pull/1
[21:33] <flxfoo> thumper: when I perform a `juju add-unit`, juju loops saying the instance is in state BUILD, retrying... probably due to Rackspace lagging... but then a few minutes later another instance is created (with the same name, a different ID) and juju reports the same message (with the different ID)
[21:33] <flxfoo> until I `juju remove-machine --force`, it just goes on
[21:34] <thumper> flxfoo: that definitely sounds like a bug
[21:34] <flxfoo> after removing on the juju side... all the instances go from building state to ready state
[21:34] <thumper> perhaps due to the slow nature in rackspace
[21:34] <thumper> we don't see it on our other openstacks
[21:35] <flxfoo> yeah that sounds very much something linked to rackspace..
[21:35] <flxfoo> a few weeks ago it was not doing that, for sure
[23:20] <wallyworld> thumper: lgtm
[23:22] <babbageclunk> thumper: if the password we use to connect to mongo for juju-restore is always oldpassword from controller machine 0, what happens if machine 0 has gone away? ie, the controller has machines 1, 2, 3?
[23:23] <thumper> what does the juju-db plugin do?
[23:23] <thumper> we should use that
[23:23] <babbageclunk> should we be connecting as a different user, so we can use the oldpassword from the machine we're on?
[23:23] <babbageclunk> ooh good call - looking
[23:25] <babbageclunk> ah, ok - it uses statepassword and the tag
[23:25] <babbageclunk> I think that might have been the problem with how we were doing it before - trying with tag but oldpassword, not statepassword
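(The distinction babbageclunk lands on, sketched with the mgo driver; all values are illustrative: authenticate as the local machine's own agent, with its tag and statepassword, the way the juju-db plugin does, rather than relying on machine 0's oldpassword:)

    package restore

    import mgo "gopkg.in/mgo.v2"

    // dialAsLocalAgent logs in as this controller machine's own agent
    // user, so restore still works when machine 0 is gone.
    func dialAsLocalAgent(tag, statePassword string) (*mgo.Session, error) {
        return mgo.DialWithInfo(&mgo.DialInfo{
            Addrs:    []string{"localhost:37017"}, // Juju's controller mongo port
            Database: "admin",
            Username: tag,           // e.g. "machine-2", not machine-0's user
            Password: statePassword, // statepassword, not oldpassword
        })
    }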