UbuntuIRC / 2020/03/18 / #juju.txt
=== narindergupta is now known as narinderguptamac
[00:32] <TimM[m]> Hi all, would anyone be interested in collaborating to charm up Jitsi and BigBlueButton?
[02:29] <thumper> wallyworld: ping
[02:29] <wallyworld> yo
[02:30] <thumper> I think you have a bug in the 2.8 upgrade step for the tasks sequence
[02:30] <thumper> shouldn't the tasks sequence be bumped for every model?
[02:30] <wallyworld> ok, will fix, what's the issue?
[02:30] <wallyworld> probs, i'll need to check
[02:31] <thumper> do we create the task id from the sequence value +1 ?
[02:31] <thumper> is that why we're bumping the sequence id?
[02:32] <wallyworld> the sequence starting value and increment logic changed
[02:32] <wallyworld> so to avoid doubling up when using 2.8 on a 2.7 db, the sequence was incremented
[02:32] <wallyworld> otherwise the latest sequence number could have been used twice
[02:32] <thumper> that didn't answer my question
[02:33] <thumper> it doesn't make sense to me why we actually have this upgrade step
[02:34] <wallyworld> because we went from calling sequence() to sequenceWithMin()
[02:35] <wallyworld> so starting id is 1 not 0
[02:36] <wallyworld> and if you compare the old logic vs new logic, running 2.8 would have reused an existing task id
[02:36] <wallyworld> i haven't got the exact code handy
[02:37] * thumper is looking at it now
[02:37] <wallyworld> the code has moved as well
[02:38] <wallyworld> rather, additional use of the task sequence was added
[02:39] <thumper> ok
[02:39] <wallyworld> action id used to add 1 to sequence()
[02:39] <thumper> I do think that the upgrade step needs to be run across all models
[02:39] <wallyworld> which started from 0
[02:39] <thumper> right now it only does the controller model
[02:39] <wallyworld> yeah, it does
[02:39] <wallyworld> which is a bug :-(
[02:39] * thumper is writing an upgrade step now for unit machine id denormalisation
[02:40] <wallyworld> drive by? :-)
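A minimal sketch of checking the per-model task sequence counters this upgrade step is meant to bump. The `sequence` collection name and the `_id`/`counter` layout are assumptions about Juju's schema, not something confirmed in the conversation; the juju-db login itself is walked through near the end of this log.
```
# Assumes a mongo session against the controller's "juju" database, using the
# juju-db credentials described at the end of this log. Collection and field
# names here are assumptions, not confirmed in the conversation.
mongo --ssl --sslAllowInvalidHostnames --sslAllowInvalidCertificates \
      --authenticationDatabase admin -u "$user" -p "$password" \
      localhost:37017/juju \
      --eval 'db.sequence.find({_id: /:tasks$/}).forEach(printjson)'
# Every model should show its own bumped "tasks" counter; the bug discussed
# above is that the 2.8 upgrade step only bumped the controller model's.
```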
[03:50] <TimM[m]> 1st pass at a "juju personas" document https://discourse.jujucharms.com/t/juju-user-personas/2808
[07:56] <jam> manadart: ping
[07:57] <jam> or maybe stickupkid
=== balloons6 is now known as balloons
[10:24] <sdhd-sascha> Hi, i use juju with an lxd cluster. Now i removed one lxd-server from the cluster. Where can i find the configuration for the ip address?
[10:24] <sdhd-sascha> I searched and changed all occurrences at `~/.local/share/juju`
[10:24] <sdhd-sascha> But i still get this error when i try to remove or add units, or when i try to call `juju storage`:
[10:24] <sdhd-sascha> `ERROR getting state: getting storage provider registry: Get https://...:8443/1.0: Unable to connect to: ...:8443`
[10:26] <stickupkid> sdhd-sascha: is this the same one you signed up to?
[10:56] <sdhd-sascha> stickupkid: it was the initial lxd-server from the cluster, where i later bootstrapped juju.
[10:58] <stickupkid> sdhd-sascha, yeah, so I believe the issue is that we only know about that ip. To change the ip of that will require some mongo surgery. I think it's not unreasonable to make a bug for this.
[10:59] <stickupkid> manadart, achilleasa_ that's correct ^
[10:59] <stickupkid> ?
[10:59] <sdhd-sascha> stickupkid: where can i find the mongodb?
[10:59] <manadart> stickupkid: Yep.
[11:00] <stickupkid> sdhd-sascha, https://discourse.jujucharms.com/t/login-into-mongodb/309
[11:00] <sdhd-sascha> stickupkid: super. Thank you :-)
[11:00] <stickupkid> sdhd-sascha, would you mind taking the time to file a bug? https://bugs.launchpad.net/juju/+bugs
[11:01] <sdhd-sascha> stickupkid: yes, i will. Maybe i have time to create a patch, too
[11:02] <stickupkid> sdhd-sascha, I'm unsure what's the best way to solve this, maybe the lxd provider/environ should help the instancepoller
[11:02] <sdhd-sascha> Ah, ok. I will see
[11:03] <sdhd-sascha> Thank you
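A short sketch of the two places the removed server's address lives: the client-side files already edited above, and the controller's own record, which is the part that needs the mongo surgery (`<old-lxd-ip>` is a placeholder).
```
# Client side: everything the CLI knows is under ~/.local/share/juju;
# controllers.yaml holds the API endpoints, clouds.yaml/credentials.yaml the
# cloud definition and its credentials.
grep -rn '<old-lxd-ip>' ~/.local/share/juju/

# Server side: the controller keeps its own copy of the LXD cloud endpoint in
# its MongoDB, so editing the client files alone is not enough; hence the
# suggestion to file a bug and, meanwhile, do the surgery via the juju-db
# login linked above.
```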
[11:39] <stickupkid> manadart, jam have you seen this stack trace before? https://paste.ubuntu.com/p/djJKY7dszN/
[11:40] <jam> 2020-03-18 11:38:45 INFO juju.cmd supercommand.go:83 running jujud [2.7.5.1 9825e246a9ec70e6551744d033802f19d78cabce gc go1.14]
[11:40] <jam> runtime: mlock of signal stack failed: 12
[11:40] <jam> runtime: increase the mlock limit (ulimit -l) or
[11:40] <jam> runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
[11:40] <jam> fatal error: mlock failed
[11:40] <jam> stickupkid: ^ I have not seen that before.
[11:40] <jam> sounds like a bug in focal if I had to guess
[11:40] <stickupkid> jam, neither have i
[11:40] <manadart> stickupkid: Nope.
[11:41] <stickupkid> jam, just wrapping up my focal PR, I'll see if it happens again
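The trace is the mlock failure the Go 1.14 runtime message itself describes. A hedged sketch of confirming and working around it on the affected machine; the systemd override is an assumption, not a fix confirmed here.
```
# Confirm whether the machine is affected, per the runtime message above.
uname -r       # wants 5.3.15+, 5.4.2+, or 5.5+
ulimit -l      # the locked-memory limit the runtime asks to have raised

# One possible workaround (assumption): raise LimitMEMLOCK for the agent's
# systemd unit and restart it; the jujud-machine-0 unit name follows the
# pattern mentioned later in this log.
sudo systemctl edit jujud-machine-0.service
#   [Service]
#   LimitMEMLOCK=infinity
sudo systemctl daemon-reload
sudo systemctl restart jujud-machine-0.service
```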
[12:18] <stickupkid> manadart, CR my changes since thumper last reviewed - https://github.com/juju/juju/pull/11332
[12:18] <stickupkid> manadart, also I need help testing if possible
[12:19] <manadart> stickupkid: Yep. Gimme a few.
[12:22] <hml> stickupkid: 1 line review pls? https://github.com/CanonicalLtd/juju-qa-jenkins/pull/402
[13:00] <stickupkid> manadart, whilst you're there - https://github.com/juju/juju/pull/11333/files#diff-e8543713fc0c30ea33131d41edf815e7R16
[13:33] <manadart> rick_h_: Were we going to hang on and talk test?
[13:35] <rick_h_> manadart: yep my bad
[13:35] * rick_h_ got distracted
[15:42] <hml> anyone noticed something wrong with changes to “juju model-config logging-config”
[15:43] <hml> i updated the logging-config value and it didn’t take effect in my model, though it’s shown in the model config.
[15:43] <hml> had to bounce jujud for machine and unit agents
[15:44] <hml> in develop
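A minimal sketch of the sequence described above; the logging-config value is just an example, and bouncing the agents is the workaround hml used, not the intended behaviour.
```
# Change and verify the model's logging config.
juju model-config logging-config="<root>=INFO;juju.worker=DEBUG"
juju model-config logging-config

# Workaround when running agents don't pick the change up: restart the machine
# agent (and the jujud-unit-* services; exact names depend on the deployment).
juju ssh 0 'sudo systemctl restart jujud-machine-0'
```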
[15:58] <danboid> Where is the juju user data stored eg credential/login stuff?
[15:59] <danboid> or in other words, how do I move a juju account?
[15:59] <rick_h_> danboid: so it's in .local/share/juju
[16:00] <rick_h_> danboid: to move an account the best thing is to just juju login to the controller from a new location
[16:00] <rick_h_> danboid: but if you need credentials/etc you need the .local/share/juju/credentials.yaml and clouds.yaml
[16:01] <danboid> rick_h_, Thanks
[16:12] <danboid> rick_h_, I presume a single juju user account can be used from multiple machines then?
[16:39] <rick_h_> danboid: yes, normally if you create a juju user and give them a password you can then juju login with that user/pass as long as you know the IP of the controller
[16:39] <rick_h_> danboid: so it's pretty easy to have multiple clients against a single controller
[16:39] <danboid> Great, thanks
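A sketch of the flow rick_h_ describes, with placeholder user and controller values; copying the YAML files is only needed when the second machine must hold the cloud credentials themselves.
```
# On a machine that already administers the controller: create the user. This
# prints a one-time "juju register ..." string to hand to them.
juju add-user danboid

# On the second machine (only needs network access to the controller API):
juju register <string-from-add-user>
# or, if the user already has a password set:
juju login <controller-ip>:17070 -u danboid

# If the second machine also needs the raw cloud credentials, copy
# ~/.local/share/juju/credentials.yaml and clouds.yaml as mentioned above.
```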
[17:54] <stickupkid> rick_h_, my PR landed in 2.7
[17:54] <stickupkid> rick_h_, and so has manadart's
[17:57] <rick_h_> stickupkid: woot woot, will watch for a ci run with that sha then ty!
[17:57] <stickupkid> nps
[19:51] <sdhd-sascha> hey, hi,
[19:51] <sdhd-sascha> i tried to connect to the mongodb, but cannot find the correct password, as described here:
[19:51] <sdhd-sascha> https://discourse.jujucharms.com/t/login-into-mongodb/309/5?u=sdhd
[20:00] <sdhd-sascha> Hmm, i'm just inside a juju controller, but can't find the `init.d` or `systemd` startup of the `mongod`. Who launches this daemon?
[20:01] <rick_h_> sdhd-sascha: those are started via systemd as long as you're on ubuntu >= xenial
[20:02] <sdhd-sascha> rick_h_: thank you. I'm on 20.04 on the host. And the container is `bionic`
[20:02] <rick_h_> sdhd-sascha: this is juju bootstrapped on localhost?
[20:02] <sdhd-sascha> yes
[20:03] <rick_h_> sdhd-sascha: so to access the controller you do "juju switch controller; juju ssh 0"
[20:03] <rick_h_> sdhd-sascha: that puts you on the controller machine which has mongodb and jujud running
[20:03] <sdhd-sascha> inside the controller, this command gives no output ... `# systemctl | grep mongo`
[20:03] <sdhd-sascha> But, `pstree -p` shows that mongod is running...
[20:03] <sdhd-sascha> Nope, if i try `juju ssh`, i have this:
[20:03] <sdhd-sascha> ```
[20:03] <sdhd-sascha> $ juju ssh 0
[20:03] <sdhd-sascha> ERROR opening environment: Get https://10.0.0.8:8443/1.0: Unable to connect to: 10.0.0.8:8443
[20:03] <sdhd-sascha> ```
[20:04] <rick_h_> sdhd-sascha: what version of Juju?
[20:04] <sdhd-sascha> rick_h_: that's because i deleted `10.0.0.8` from the cluster before...
[20:04] <sdhd-sascha> wait...
[20:04] <sdhd-sascha> 3.22 (13840) on every server
[20:04] <timClicks> I believe the service is called juju-db
[20:05] <rick_h_> sdhd-sascha: what is the Juju version though? what does the version in juju status show you?
[20:05] <rick_h_> timClicks: +1
[20:05] <sdhd-sascha> sorry
[20:05] <sdhd-sascha> ```
[20:05] <sdhd-sascha> juju 2.7.4 10906 latest/stable canonical✓ classic
[20:05] <sdhd-sascha> lxd 3.22 13840 latest/stable canonical✓ -
[20:05] <sdhd-sascha> ```
[20:06] <rick_h_> sdhd-sascha: is your controller up and running? can you do juju status successfully?
[20:06] <sdhd-sascha> rick_h_: yes
[20:06] <sdhd-sascha> ```
[20:06] <sdhd-sascha> | juju-b1a552-0 | RUNNING | 10.0.2.92 (eth0) | | CONTAINER | 0 | mars |
[20:06] <sdhd-sascha> +----------------+---------+------------------------+------+-----------+-----------+----------+
[20:06] <sdhd-sascha> | juju-b1a552-1 | RUNNING | 10.0.2.94 (eth0) | | CONTAINER | 0 | merkur |
[20:06] <sdhd-sascha> +----------------+---------+------------------------+------+-----------+-----------+----------+
[20:06] <sdhd-sascha> | juju-b1a552-2 | RUNNING | 10.0.2.93 (eth0) | | CONTAINER | 0 | mars |
[20:06] <sdhd-sascha> ```
[20:06] <rick_h_> sdhd-sascha: is it something you can pastebin? https://paste.ubuntu.com/
[20:07] <rick_h_> sdhd-sascha: so that's lxc list which is showing you the containers you have running
[20:07] <rick_h_> looks like you've got a 3 machine HA cluster going?
[20:07] <sdhd-sascha> rick_h_: what exactly, should i print ?
[20:07] <sdhd-sascha> rick_h_: yes, 3 HA
[20:07] <rick_h_> sdhd-sascha: `juju status`
[20:08] <sdhd-sascha> https://www.irccloud.com/pastebin/qGvYj9Bz/
[20:09] <rick_h_> sdhd-sascha: hmm, ok can you try that ssh command again with --debug `juju ssh 0 --debug`?
[20:09] <sdhd-sascha> https://www.irccloud.com/pastebin/ahwq5OaF/
[20:10] <rick_h_> sdhd-sascha: does that make any sense to you? It's confusing to me as the lxd machines are all 10.0.2.xx and I'm not sure what a tomcat port has to do with anything?
[20:11] <sdhd-sascha> rick_h_: today, your colleague said i should send a bug report about the failure with the deleted lxd-server.
[20:11] <sdhd-sascha> I will do tomorrow.
[20:11] <sdhd-sascha> But for now, i only want to access the mongodb. But i don't have the password...
[20:12] <rick_h_> sdhd-sascha: ok, the services are run via /etc/systemd/system/juju-db.service and /etc/systemd/system/jujud-machine-0.service
[20:12] <sdhd-sascha> rick_h_: I bootstrapped the lxd-cluster from 10.0.0.8. Then i bootstrapped juju from the same ip. Then i figured out that the machine didn't have enough RAM. So i deleted it from the cluster... ;-) ...
[20:13] <sdhd-sascha> rick_h_: Ah, thank you... i searched for `mongo`
[20:13] <sdhd-sascha> :-)
[20:13] <rick_h_> sdhd-sascha: the db password will be on the machine in the /var/lib/juju/agents/machine-0/agent.conf
[20:14] <rick_h_> sdhd-sascha: that post is the right one: https://discourse.jujucharms.com/t/login-into-mongodb/309 but since your juju ssh is wonky you'll have to pull the script apart manually
[20:14] <rick_h_> sdhd-sascha: you can skip the juju ssh to the machine since it's lxd and just `lxc exec` bash on the right instance
[20:14] <sdhd-sascha> rick_h_: my agent.confs didn't have a `statepassword` value, like i said before
[20:14] <rick_h_> juju-b1a552-0 from the status output
[20:15] <rick_h_> sdhd-sascha: it should be there.
[20:15] <sdhd-sascha> rick_h_: ah, thank you :-) i found it ;-)
[20:15] <rick_h_> sdhd-sascha: coolio
[20:15] <sdhd-sascha> :-)
[20:17] <sdhd-sascha> login works. great +1
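The steps behind the linked discourse post, pulled apart for this setup where juju ssh is broken and the controller container is reached with lxc exec instead; a sketch, with the container and agent names taken from the output above.
```
# Reach the controller container directly (name from the lxc list output above).
lxc exec juju-b1a552-0 -- bash

# Inside the container: the mongo user and password live in the machine agent's
# config, under the "tag" and "statepassword" keys.
agent_conf=/var/lib/juju/agents/machine-0/agent.conf
user=$(awk '/^tag:/ {print $2}' "$agent_conf")
password=$(awk '/^statepassword:/ {print $2}' "$agent_conf")

# juju-db (the mongod started via systemd) listens on 37017 with TLS.
mongo --ssl --sslAllowInvalidHostnames --sslAllowInvalidCertificates \
      --authenticationDatabase admin -u "$user" -p "$password" \
      localhost:37017/juju
```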