UbuntuIRC / 2014/06/16 / #juju.txt
=== 20WAAFCU3 is now known as wallyworld
=== CyberJacob|Away is now known as CyberJacob
=== vladk|offline is now known as vladk
=== Guest79566 is now known as wallyworld
=== CyberJacob is now known as CyberJacob|Away
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== isviridov|away is now known as isviridov
[08:43] <gnuoy> wallyworld, wow, thanks for the speedy fix to Bug#1329805
[08:43] <_mup_> Bug #1329805: juju search for image does not find item if endpoint and region are inherited from the top level <juju-core:Fix Committed by wallyworld> <simplestreams:Fix Released> <https://launchpad.net/bugs/1329805>
[10:38] <wallyworld> gnuoy: no problem. we didn't expect that the very top level would contain region/endpoint so didn't cater for that. we do now :-)
=== SIGILL_ is now known as SIGILL
=== Ursinha is now known as Ursinha-afk
=== mbruzek changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: mbruzek / tvansteenburgh || News and stuff: http://reddit.com/r/juju
[13:58] <jamespage> hazmat, is there a reason that juju-deployer does not support "to: " with arbitrary machine numbers other than 0?
=== Ursinha-afk is now known as Ursinha
[14:30] <rick_h_> jamespage: because bundles can be deployed over an existing environment and the machine numbers are not guaranteed
[14:30] <rick_h_> jamespage: is my understanding
[14:30] <jamespage> rick_h_, right - makes sense
[14:31] <jamespage> rick_h_, I figured it out with my manual provider usage - deploy the two services and then target everything else at those.
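
A sketch of the workaround jamespage describes, assuming juju-deployer's service-name placement for "to:" (the charm names here are illustrative, not from the log):

    # Deploy one service, then target the other at it by service name
    # rather than by a raw machine number:
    cat > bundle.yaml <<'EOF'
    envExport:
      services:
        mysql:
          charm: cs:trusty/mysql
          num_units: 1
        wordpress:
          charm: cs:trusty/wordpress
          num_units: 1
          to: mysql    # co-locate with the mysql unit, wherever it lands
    EOF
    juju-deployer -c bundle.yaml envExport
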
[14:38] <ali1234> does anyone know why my "juju status" has stopped working: http://paste.ubuntu.com/7643749/
[14:39] <ali1234> it worked for a bit, then it stopped
[14:42] <ali1234> the container is still running
[14:42] <ali1234> one of them anyway, the other one never finished setting up
[14:45] <ali1234> it tries to open a socket to the orchestration container and fails
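
Rough places to look when the local provider's state server stops answering, as in the paste above (the log path follows juju-core 1.x's /var/log/juju-<user>-local convention and is an assumption here):

    ps -ef | grep jujud                                 # is the machine-0 agent still alive?
    sudo tail -n 50 /var/log/juju-*-local/machine-0.log # recent agent errors
    juju status -v                                      # verbose output shows the API address it tries
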
[14:51] <hazmat> jamespage, it's not reproducible
[14:51] <hazmat> jamespage, per what rick_h_ said.. i've come around though - there should be an unsafe mode for it, as it's useful for a lot of folks
[14:53] <jamespage> hazmat, ack
[14:53] <jamespage> I worked around it
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
[15:53] <jamespage> gnuoy, so I brought up a fresh two-node cinder cluster, and then added another unit to it OK
[16:06] <gnuoy> jamespage, let me see if I can reproduce with cinder
[16:15] <gnuoy> jamespage, I seem to see the problem with cinder as well. Deploy cinder on trusty with HA. Kick pacemaker and corosync and everything is fine. Add a new unrelated service and cinder pacemaker refuses to stop.
[16:16] <jamespage> gnuoy, I don't understand why you are adding a new unrelated service?
[16:16] <gnuoy> jamespage, because that's what triggers the breakage
[16:17] <jamespage> gnuoy, oh - one second
[16:17] <jamespage> it might be crapping itself out
[16:18] <jamespage> the new units will get the existing configuration - however they won't have any of the right bits installed to actually run them
[16:18] <jamespage> so maybe that's what's causing the problem?
[16:18] <gnuoy> jamespage, the service I add is truly unrelated
[16:19] <jamespage> gnuoy, if you add another neutron-api unit does that work OK?
[16:19] <jamespage> gnuoy, one second - do you have just one instance of the hacluster charm deployed?
[16:19] <jamespage> so you are 'add-relation' to two different services?
[16:20] <gnuoy> I don't do anything to the unrelated service. I don't add the ha charm at all
[16:20] <jamespage> gnuoy, you just deploy it?
[16:20] <jamespage> so it's not running hacluster or anything?
[16:21] <gnuoy> yep, let me work up an example with cinder
[16:21] <jamespage> gnuoy, please do
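
For context, the HA shape being tested looks roughly like this with the OpenStack charms (the vip value is a placeholder; check the cinder and hacluster charm options for your release):

    juju deploy -n 2 cinder
    juju set cinder vip=10.5.100.1          # virtual IP the cluster answers on
    juju deploy hacluster cinder-hacluster  # subordinate that drives corosync/pacemaker
    juju add-relation cinder cinder-hacluster
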
=== Ursinha is now known as Ursinha-afk
[16:37] <gnuoy> jamespage, http://paste.ubuntu.com/7653811/
[16:42] <jamespage> gnuoy, I really don't understand this - how can turning on three arbitrary new servers foobar your cluster?
[16:42] <gnuoy> I have no inkling of a clue
=== Ursinha-afk is now known as Ursinha
[16:45] <jamespage> gnuoy, can you check your serverstack hosts file on your bastion pls
[16:45] <jamespage> make sure there are no dupes
[16:45] <gnuoy> jamespage, http://paste.ubuntu.com/7653847/ dupefree
[16:46] <jamespage> gnuoy, /etc/serverstack-dns/tenant_hosts
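
A quick way to confirm a hosts file is dupe-free, assuming the usual "IP hostname" layout (the path is taken from the log):

    # prints any hostname that appears more than once; clean files print nothing
    awk '{print $2}' /etc/serverstack-dns/tenant_hosts | sort | uniq -d
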
[16:46] <jamespage> gnuoy, I've just pretty much tried your steps and I'm not seeing the same problem
[16:47] <gnuoy> http://paste.ubuntu.com/7653850/
[16:47] <gnuoy> jamespage, that is interesting
[16:47] <jamespage> gnuoy, one second
[16:49] <jamespage> gnuoy, sorry - what is interesting?
[16:49] <gnuoy> jamespage, that you don't see the issue. I can reproduce it every time
[16:50] <lazyPower> dpb1: ping
[16:51] <jamespage> gnuoy, this really has me scratching my head
[16:52] <kentb> Hi juju folks. Would this be the proper way to include the open-source components for my charm as well as a EULA for the Dell-specific ones?: http://bazaar.launchpad.net/~kentb/charms/trusty/openmanage/trunk/view/head:/copyright
[16:53] <kentb> OMSA = OpenManage Server Administrator
[16:56] <jamespage> gnuoy, your other servers are not using the same multicast address are they?
[16:56] <gnuoy> jamespage, I don't believe so I'm just redeploying to double check
[16:58] <jamespage> gnuoy, following your guide 100%
[17:01] <AskUbuntu> Machines required Juju bootstrap | http://askubuntu.com/q/484166
[17:02] <lazyPower> dpb1: i canceled that sync we had. I'm going to retarget @ the charm maintainer
[17:02] <gnuoy> jamespage, having terminated the other instances the problem has gone. I'm sorry to have messed you about but I'm not convinced my corosync woes are completely fixed. I'll try and work up another test case in the next few days
[17:04] <gnuoy> jamespage, I have a theory. When I was doing the ha testing before, it was when other clusters were present (cinder and nova-cc). I wonder if that's the problem.
[17:04] <jamespage> gnuoy, if you configured HA with the same multicast address it probably would be
[17:04] <gnuoy> different multicast addresses, but maybe they're clashing somehow
[17:04] <jamespage> gnuoy, pick a non-conflicting default :-)
[17:04] <gnuoy> I did
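
The hacluster charm exposes the corosync multicast address and port as configuration, so two clusters on one network can be kept apart explicitly (the option names match the 2014-era hacluster charm; verify with "juju get" before relying on them):

    juju set cinder-hacluster corosync_mcastaddr=226.94.1.2 corosync_mcastport=4000
    juju set nova-cc-hacluster corosync_mcastaddr=226.94.1.3 corosync_mcastport=4001
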
[17:06] <jamespage> gnuoy, hmm - now I see the same issue
[17:06] <gnuoy> jamespage, how have you reproduced ?
[17:07] <jamespage> gnuoy, I walked through your reproducer step-by-step
[17:08] <gnuoy> jamespage, do you have another cluster running in the same env ?
[17:08] <jamespage> gnuoy, noodles775
[17:08] <jamespage> gnuoy, no
[17:08] <jamespage> noodles775, sorry
[17:09] <gnuoy> jamespage, I need to EOD, thanks for the additional eyes
[17:09] <jamespage> gnuoy, ditto
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== roadmr is now known as roadmr_afk
[20:26] <ali1234> http://paste.ubuntu.com/7654841/ <- what does this mean?
[20:27] <ali1234> this is the point where it switched from saying "no tools available" to "too many open files"
[20:27] <ali1234> it was printing that "no tools available" message every 10 seconds for about 10 hours
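
Some quick checks for the "too many open files" state before tearing anything down (jujud is the juju 1.x machine agent; the commands are a sketch):

    ulimit -n                                     # fd ceiling for processes started from this shell
    for p in $(pgrep jujud); do
        echo "$p: $(sudo ls /proc/$p/fd | wc -l) open fds"   # fds held by each agent
    done
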
[20:32] <mbruzek> Hello ali1234
[20:32] <ali1234> hi
[20:32] <mbruzek> Has juju worked before or is this a new environment
[20:32] <ali1234> new environment
[20:32] <ali1234> remember the other day, when i crashed it?
[20:33] <mbruzek> yes
[20:33] <ali1234> well we fixed that one. the problem was i ran bootstrap with sudo
[20:33] <ali1234> that created root owned files in ~/.juju
[20:33] <mbruzek> You seem to have the touch when it comes to breaking Juju
[20:33] <ali1234> so i cleaned all that stuff out
[20:33] <mbruzek> how?
[20:33] <mbruzek> apt-get remove -p ?
[20:33] <ali1234> juju destroy-environment --force
[20:33] <ali1234> rm -rf ~/.juju
[20:33] <mbruzek> Ok
[20:34] <ali1234> then i bootstrapped again without root
[20:34] <ali1234> that worked okay
[20:34] <ali1234> then i deployed an elasticsearch unit and it worked
[20:34] <mbruzek> But you said this was a new environment.
[20:34] <ali1234> yes
[20:34] <ali1234> in fact that unit is still working now, i can reach the container
[20:35] <mbruzek> So this is on a different machine ?
[20:35] <ali1234> no, this is on the same machine
[20:35] <mbruzek> So Juju was working and now it is broken?
[20:35] <ali1234> i guess you could say that
[20:36] <ali1234> when i tried to deploy a second machine it broke
[20:36] <ali1234> that machine never finished deploying
[20:36] <mbruzek> OK, sorry for the problems. Let's troubleshoot what you are seeing now.
[20:36] <mbruzek> ali1234, would you mind destroying everything and starting "fresh" ?
[20:37] <ali1234> that's fine
[20:37] <ali1234> however currently juju commands do not work
[20:37] <ali1234> because i left it in an error state overnight and now it's exceeded the maximum open files somehow
[20:38] <mbruzek> sudo lxc-ls --fancy | pastebinit
[20:38] <ali1234> http://paste.ubuntu.com/7654902/
[20:38] <ali1234> okay i recognise one of those - machine-1 is the elasticsearch i deployed, which currently works correctly
[20:40] <mbruzek> sudo lxc-stop -n al-local-machine-1
[20:40] <ali1234> okay, it stopped
[20:40] <mbruzek> sudo lxc-destroy -n al-local-machine-1
[20:40] <ali1234> okay, it's gone
[20:41] <mbruzek> juju destroy-environment -y local --force
[20:41] <ali1234> okay
[20:42] <mbruzek> ps -ef | grep mongo
[20:42] <ali1234> not running
[20:42] <mbruzek> excellent
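
The teardown mbruzek just walked through, gathered in one place (container names follow the local provider's <user>-local-machine-N pattern):

    sudo lxc-stop -n al-local-machine-1        # stop the stray container
    sudo lxc-destroy -n al-local-machine-1     # and remove its rootfs
    juju destroy-environment -y local --force  # tear down the environment itself
    ps -ef | grep mongo                        # the local state server is mongod; nothing should match
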
[20:43] <ali1234> should i always use the juju ppa?
[20:43] <ali1234> (seems like now would be a good time to install it if so)
[20:43] <mbruzek> Yes, there is a stable and a devel branch if I am not mistaken
[20:43] <mbruzek> stable would be the one I would suggest
[20:43] <mbruzek> let's do this first
[20:44] <mbruzek> sudo apt-get purge juju-local
[20:44] <pmatulis> on ubuntu 14.04 i deployed wordpress/mysql and the charm for wordpress is from precise (charm: cs:precise/wordpress-22). normal?
[20:45] <ali1234> that just means the container machine will be running precise, doesn't it?
[20:46] <ali1234> purged
[20:46] <mbruzek> pmatulis, I get wordpress-22 when I deploy as well
[20:46] <mbruzek> sudo add-apt-repository -y ppa:juju/stable
[20:46] <mbruzek> ali1234, ^
[20:47] <ali1234> juju and juju-core are being updated...
[20:47] <mbruzek> ali1234, just in case something else is broken, also purge juju-core
[20:47] <mbruzek> before installing?
[20:48] <mbruzek> ali1234, then sudo apt-get install juju-core juju-local
[20:48] <ali1234> okay, done
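
The reinstall sequence from the stable PPA, with the apt-get update the conversation skips (on 14.04, add-apt-repository does not refresh the package lists itself):

    sudo apt-get purge juju-local juju-core
    sudo add-apt-repository -y ppa:juju/stable
    sudo apt-get update
    sudo apt-get install juju-core juju-local
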
[20:48] <mbruzek> juju init
[20:49] <ali1234> ERROR A juju environment configuration already exists.
[20:49] <mbruzek> hrmm...
[20:49] <ali1234> delete ~/.juju?
[20:49] <mbruzek> back up your .juju/environments.yaml file and then delete the .juju directory.
[20:50] <ali1234> any reason to back it up?
[20:50] <mbruzek> ali1234, Only if you have other clouds defined other than local
[20:50] <mbruzek> Otherwise rm -rf
[20:50] <ali1234> okay, done
[20:50] <mbruzek> juju init
[20:50] <ali1234> done
[20:51] <pmatulis> mbruzek: so normal?
[20:51] <mbruzek> pmatulis, From what I saw it looks normal? What are you concerned about?
[20:52] <mbruzek> ali1234, can you pastebin your environments.yaml file?
[20:52] <pmatulis> mbruzek: i just expected everything to be on trusty is all
[20:52] <ali1234> the old one or the new one?
[20:52] <mbruzek> ali1234, new one
[20:52] <ali1234> http://paste.ubuntu.com/7654978/
[20:53] <mbruzek> pmatulis, Oh. No, we are not auto-promulgating charms. They must have tests and be tested on Trusty before they advance.
[20:53] <mbruzek> so pmatulis most of the charms are still on precise.
[20:53] <pmatulis> mbruzek: ok, fair enough, cheers
[20:56] <mbruzek> ali1234, you only need http://paste.ubuntu.com/7654994/
[20:57] <mbruzek> ali1234, you can keep the other stuff in there commented out
[20:57] <ali1234> i'm supposed to add that stuff?
[20:58] <ali1234> last time i didn't edit the file at all
[20:59] <mbruzek> default-series must be set
[21:00] <ali1234> what happens if it isn't?
[21:00] <mbruzek> https://juju.ubuntu.com/docs/config-LXC.html
[21:00] <mbruzek> problems
[21:01] <ali1234> that page doesn't say anything about default-series...
[21:01] <mbruzek> it should
[21:01] <mbruzek> Sorry I will fix that
[21:03] <ali1234> admin-secret is any string?
[21:04] <mbruzek> it is, that is just any string to log into juju-gui
[21:04] <ali1234> okay i used your paste and i get ERROR couldn't read the environment when i try to juju switch
[21:06] <mbruzek> can you give me uname -a ?
[21:06] <ali1234> Linux al-desktop 3.13.0-8-generic #28-Ubuntu SMP Tue Feb 11 17:55:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[21:08] <mbruzek> ali1234, let me see error
[21:08] <ali1234> "ERROR couldn't read the environment" is the full output
[21:09] <ali1234> it works if i just add those lines into the generated config
[21:09] <mbruzek> ok
[21:11] <ali1234> if i do juju generate-config -f while your version is in place it says: "ERROR cannot parse "/home/al/.juju/environments.yaml": YAML error: line 1: mapping values are not allowed in this context"
=== isviridov is now known as isviridov|away
[21:11] <ali1234> okay, the problem is that "environments:" should not be indented
[21:12] <mbruzek> OK don't use my paste bin then just edit your own file
[21:12] <ali1234> so now it works
[21:12] <ali1234> right, fixed
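
A minimal local-provider environments.yaml reflecting the fix: "environments:" sits at column zero with everything nested beneath it (the admin-secret value is a placeholder):

    cat > ~/.juju/environments.yaml <<'EOF'
    default: local
    environments:
      local:
        type: local
        default-series: trusty
        admin-secret: any-string-you-like
    EOF
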
[21:12] <mbruzek> ok
[21:12] <mbruzek> juju deploy ubuntu
[21:12] <ali1234> ERROR environment is not bootstrapped
[21:13] <mbruzek> juju bootstrap -e local
[21:13] <ali1234> uploading tools for series [trusty precise]
[21:13] <ali1234> that's different
[21:13] <ali1234> last time i did this it only said trusty
[21:14] <ali1234> and then it failed hard when i tried to deploy something on precise
=== roadmr_afk is now known as roadmr
[21:14] <ali1234> machine-1 was a trusty instance, machine-2 was precise and gave that "no tools" error
[21:14] <ali1234> okay it's bootstrapped
[21:15] <ali1234> deployed and pending
[21:15] <mbruzek> ok
[21:17] <mbruzek> ali1234, let me know if that works
[21:17] <ali1234> i expect it will work now
[21:18] <ali1234> tail: inotify resources exhausted
[21:18] <ali1234> tail: inotify cannot be used, reverting to polling
[21:18] <ali1234> but that's because of the previous "fun" i expect
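
tail falls back to polling when the kernel's inotify instance limit is exhausted; the ceiling can be inspected and bumped for the running session (256 is just an example value):

    cat /proc/sys/fs/inotify/max_user_instances    # current per-user limit
    sudo sysctl fs.inotify.max_user_instances=256  # raise it until next reboot
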
[21:19] <ali1234> i think the AskUbuntu answer needs updating again
[21:19] <jose> kentb: I *think* it's good now in terms of the license, up to the ~charmers now :)
[21:19] <ali1234> because it doesn't specify to set default-series
[21:19] <jose> mbruzek: taking a look at chamilo and how can I approach it now!
[21:19] <kentb> jose, ok thanks!
[21:20] <mbruzek> ali1234, did you get any errors
[21:21] <ali1234> just that one about inotify on the all-machines.log
[21:22] <ali1234> the instance hasn't come up yet
[21:22] <ali1234> the inotify error is the last thing on the log too
[21:23] <ali1234> and there is no cpu usage or network usage... it doesn't appear to be doing anything
[21:23] <ali1234> oh hang on
[21:24] <ali1234> http://paste.ubuntu.com/7655115/ <- machine-1.log
[21:25] <mbruzek> ali1234, I also have the kvm support message
[21:25] <ali1234> okay it's running now
[21:26] <ali1234> i'm ssh'd into the machine-1
[21:26] <ali1234> lxc-ls --fancy doesn't list any machines though
[21:28] <mbruzek> you will not see any lxc on machine-1
[21:28] <ali1234> no i mean on the host
[21:32] <mbruzek> ali1234, Are you running or not? I see that you can ssh to machine 1
[21:33] <ali1234> the machine is running
[21:33] <ali1234> it just doesn't show on lxc-ls
[21:34] <ali1234> http://paste.ubuntu.com/7655165/
[21:34] <ali1234> oh okay, it's cos i didn't sudo it
[21:35] <ali1234> so that appears to be working fine
[21:38] <ali1234> so now i'm going to attempt to "juju deploy solr" which is what broke it all last time
[21:38] <ali1234> and it failed
[21:38] <ali1234> agent-state-info: '(error: hook failed: "install")'
[21:38] <ali1234> but at least it didn't completely ruin the whole environment this time
[21:39] <ali1234> oh, it just failed to download the right source tarball (404 error)
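
With juju 1.x, a failed install hook can be inspected on the unit and then retried, roughly like this (the unit name assumes the first solr unit):

    juju ssh solr/0 'sudo tail -n 50 /var/log/juju/unit-solr-0.log'  # why the hook failed
    juju resolved --retry solr/0                                     # re-run the failed hook
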
[21:43] <mbruzek> Right ali1234 it looks like your juju environment is "normal" now
[21:44] <ali1234> yeah seems that way
[21:44] <ali1234> unfortunately there's no working charm for solr
[21:44] <ali1234> they all point to a 404 URL
[21:44] <mbruzek> ali1234, your lxc-ls did not return any containers because you need sudo before it
[21:44] <ali1234> yeah i figured that out already :)
[21:45] <mbruzek> ali1234, you can open a bug on solr if the urls are incorrect
[21:45] <ali1234> already has an open bug
[21:45] <ali1234> https://bugs.launchpad.net/charms/+source/solr/+bug/1324641
[21:45] <_mup_> Bug #1324641: install hook fails (download link inexistant) <solr (Juju Charms Collection):New> <https://launchpad.net/bugs/1324641>
[21:45] <mbruzek> ali1234, OK
[21:45] <ali1234> confirmed it :)
[21:46] <ali1234> i did "juju destroy-service solr" and now it says it is dying... will it go away eventually?
[21:56] <ali1234> i've bug-reported this experience: https://bugs.launchpad.net/juju-core/+bug/1330719
[21:56] <_mup_> Bug #1330719: juju-local exceeded open file ulimit <juju-core:New> <https://launchpad.net/bugs/1330719>
[21:58] <ali1234> mbruzek: thanks for the help
[21:59] <mbruzek> ali1234, You are welcome, glad to get you working.
[22:00] <mbruzek> ali1234, Sorry for all the trouble.
=== CyberJacob is now known as CyberJacob|Away
[22:43] <AskUbuntu> Openstack Neutron - Cannot Access Tenant Router Gateway | http://askubuntu.com/q/484293