=== uru_ is now known as urulama
=== menn0_ is now known as menn0
=== Guest6594 is now known as ackk`
=== ackk` is now known as ackk
=== CyberJacob|Away is now known as CyberJacob
[07:08] https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit#
[07:08] thumper, ^^^
[07:09] bac, fwereade: https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit#
[07:38] thumper: https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit#
[07:38] thumper: no, https://wiki.canonical.com/InformationInfrastructure/IS/Mojo
=== CyberJacob is now known as CyberJacob|Away
=== mjs0 is now known as menn0
=== jameinel is now known as j-a-meinel
=== jameinel is now known as jam
=== psivaa is now known as psivaa-reboot
[11:57] Tribaal: I think the non-corosync leader election stuff is still racy, in that you can have two or more units that think they are the leader running hooks at the same time.
[11:58] stub: interesting, but how can that work?
[11:58] stub: seems like "I am the unit with the smallest unit number" should be relatively easy to determine?
[11:59] stub: or do you mean it races with the peer list fetching?
[11:59] A three-unit cluster: units 2 and 3 have joined the peer relationship and are happily running hooks. Unit 1 is finally provisioned and joins the peer relation.
[11:59] ah
[11:59] smartass units :)
[11:59] Last I checked, it is impossible to elect a leader reliably if you create a service with more than 2 units.
[12:00] * stub looks for the bug number
[12:00] yeah, seems very dodgy to do so. I guess the documentation should reflect that, but the comments are still valid
[12:00] stub: can we query the juju state server for the list of peers?
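The failure mode stub describes above (each unit deciding from its own, possibly stale, view of the peer relation) can be reproduced outside juju entirely. The following is a toy simulation, not charm code; the `thinks_it_is_leader` helper and the `views` mapping are invented for illustration:

```python
# Toy simulation of the naive "lowest unit number wins" leader election.
# Each unit decides using only its own view of the peer relation, which is
# how two units can simultaneously conclude that they are the leader.

def thinks_it_is_leader(my_id, peer_view):
    """A unit believes it leads if no peer it knows about has a lower id."""
    return all(my_id <= peer for peer in peer_view)

# Three-unit service: units 2 and 3 have seen each other, but unit 1 was
# provisioned late and has not shown up in their peer views yet.
views = {
    1: [2, 3],   # unit 1 joined last and sees everyone
    2: [3],      # unit 2 has not seen unit 1 join yet
    3: [2],      # neither has unit 3
}

leaders = [u for u, peers in views.items() if thinks_it_is_leader(u, peers)]
print(leaders)  # -> [1, 2]: two units both claim leadership
```

Until every unit's peer view converges, the "smallest unit number" rule gives no single answer, which is why stub falls back to the 'create 2 units, wait, then add more' workaround mentioned below.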
[12:00] :)
[12:01] Tribaal: I haven't looked into unsupported mechanisms :)
[12:01] stub: hehe
[12:01] Tribaal: I'm just sticking with the 'create 2 units, wait, then add more' as a documented limitation until juju gives us leader election
[12:01] https://bugs.launchpad.net/juju-core/+bug/1258485
[12:02] * Tribaal looks into how complex a corosync setup is
[12:02] Let me know, that might solve my issues too...
[12:03] stub: seems like it would be generally useful, yes. seems like a job zookeeper would have handled well though
[12:03] sorry if I'm breaking a taboo :)
[12:04] I think juju has the information we need, it just needs to be exposed to the charms ;)
[12:07] stub: yeah
[12:07] stub: ohh
[12:08] stub: I think I have an idea :)
[12:09] stub: I'll give it a spin when I'm on the beach this week and see if it can work
[12:10] Tribaal: I've proven to myself that it is impossible, and nobody has yet corrected me, but you are more than welcome to prove me wrong :)
[12:10] My test suite seems guaranteed to trigger the race conditions :)
[12:10] stub: sweet!
[12:10] stub: a reproducible race is half he battle already
[12:11] s/he/the/
[12:49] so, corosync uses multicast it seems
[12:49] that comes with its own set of problems
[14:08] jacekn: hi, i'm working the charm review queue this week, do you have any updates for https://code.launchpad.net/~jacekn/charms/precise/rabbitmq-server/queue-monitoring/+merge/218580 ?
[14:11] tvansteenburgh: sorry, no, another team took over this project
[14:11] tvansteenburgh: I will let them know
[14:11] jacekn: ok thanks
[14:13] I am having an issue with the juju mongodb filling up my 8gb micro sd card -- is there a way I can periodically flush this db?
[14:59] dimitern, http://paste.ubuntu.com/7961799/
[15:00] dimitern, http://paste.ubuntu.com/7961802/
[15:06] i'm creating a new charm my-nova-compute and it has to be installed on top of nova-compute.
This means my-nova-compute has to be installed after installing nova-compute on the same machine. What kind of relationship can I use to achieve this?
[15:30] sinzui: did you sort that source tarball for me, please?
[15:31] sinzui: I was having connectivity issues, so I don't know if I missed a URL.
=== med_ is now known as Guest26240
[15:34] rbasak, I am so sorry. I forgot. http://juju-ci.vapour.ws:8080/job/build-revision/1666/
[15:35] sinzui: no problem. Only getting to it now, as I wait on some very slow mysql tests :-/
[15:53] sinzui: are you free in eight minutes? The TB meeting has had some questions about Juju upstream QA for the exception request.
[15:53] sinzui: looks like it's dragged on for a while. If you could answer their questions, that might speed things up.
[15:53] sinzui: #ubuntu-meeting-2
[15:54] rbasak, I don't have time, sorry. I am sprinting and debating at this moment
[15:54] sinzui: OK, I'll try and do what I can.
[16:39] anyone know why I would get this error when trying to bootstrap using local?
[16:39] WARNING ignoring environments.yaml: using bootstrap config in file "/home/vagrant/.juju/environments/local.jenv"
[16:39] 1.20.1-saucy-amd64
[16:40] hatch: I believe that's just a warning letting you know it's using the local.jenv instead of the environments.yaml
[16:41] hatch: if the local.jenv doesn't exist, juju will create it the first time using environments.yaml as the template
[16:41] hatch: but after the local.jenv has been created, any changes in that section of the environments.yaml won't get picked up
=== viperZ28__ is now known as viperZ28
[16:42] ohh ok, it subsequently fails with:
[16:42] ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
[16:42] so I thought that might have been the problem
[16:43] hatch: hmm, that seems like an unrelated error.
Not sure what that one is
[16:44] here is the full output https://gist.github.com/hatched/5849510b38afac01b6cf
[16:45] not sure if that helps at all heh
[16:46] hatch: interesting. The WARNING unknown config field "shared-storage-port" bit is interesting
[16:46] hatch: but I'm not sure it's related either
[16:46] hatch: I'm suspecting lxc issues maybe
[16:47] hatch: can you 'juju destroy-environment local' and 'juju bootstrap' again?
[16:47] yeah, I have to use --force though because it seems to have created a 'partial' env
[16:47] the same issue happens
[16:47] hatch: hmm
[16:47] yeah, I'm at a loss at how to debug this heh
[16:48] hatch: I'm afraid I don't know much more than that. What does 'sudo lxc-ls --fancy' show?
[16:48] * jcw4 grasping at straws
[16:48] a fancy empty table :)
[16:48] hmm; that's interesting. I would expect at least one row
[16:48] after destroying?
[16:49] jamespage, are you available for a question?
[16:49] jcw4 well thanks for the help, I'll keep poking around
[16:49] hatch: yeah, I think the 'juju-*-template' would stay around
[16:50] hatch: yw... good luck :)
[16:50] thanks - I'll need it haha
[16:51] hatch: lazyPower or marcoceppi or someone else may know better, if they're available right now
[16:51] * lazyPower reads scrollback
[16:52] hatch: do you have the juju-plugins repository added?
[16:52] there's a plugin to help clean this up and get you to a known good state - fresh from the cloud. juju-clean
[16:52] lazyPower not sure....
[16:52] unrecognized command
[16:52] so probably not
[16:52] https://github.com/juju/plugins
[16:53] install instructions are in the README. just clone and add to $PATH
[16:53] oh ok, will try
[16:54] how do I view a unit log on an amazon instance?
[16:55] themonk: either jujud ebug-log, or cat/tail/less it in /var/log/juju/unit-service-#.log
[16:55] *juju debug-log
[16:55] ok thanks :)
[16:57] lazyPower I don't want to jinx it but it appears to be working now....
[16:57] woo
[16:57] so... was that caused by the upgrade path or something?
[16:57] any idea why it was broken?
[16:57] hard to say
[16:57] local provider can be picky
[16:58] is this plugins stuff in the docs? I couldn't find it, it definitely should be :)
[16:58] nope
[16:58] its very unofficial atm
[16:59] lazyPower, it's not there; I have /var/log/juju-themonk-local but it has only the local unit log, and I want the amazon instance unit log
[16:59] themonk: you need to juju ssh to the unit, then look for it in /var/log/juju
[17:00] bbiaf, lunch
[17:01] ok, got it
[17:18] man, memtest is not fast
=== psivaa is now known as psivaa-holiday
[17:51] natefinch you're sure having bad luck lately :)
[17:56] probably same problem as before... I just thought it wasn't hardware, since the live disk worked, but maybe it's something specific to booting
=== roadmr is now known as roadmr_afk
[18:22] natefinch: no sir
[18:22] memtest is slooowwwww especially when you have quite a bit of it.
[18:23] heh, reminds me of the first time using it on a machine with 16 gigs.. "oh haha look how long this is going to take! *wait five minutes* oh. this is annoying."
[18:24] haha, seems about right
=== CyberJacob|Away is now known as CyberJacob
=== scuttlemonkey is now known as scuttle|afk
[18:40] took an hour... no errors in the first pass though
[18:57] Can anyone help me with my quantum configuration for openstack using maas and juju?
=== uru_ is now known as urulama
=== roadmr_afk is now known as roadmr
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[20:29] Hello all. Does anybody have experience using the hacluster charm? We received agent-state-info: 'hook failed: "ha-relation-changed/joined"' on most subordinates.
[20:48] jamespage, do you have a minute to help me with my quantum issue?
[20:54] I am not getting anything after hitting the amazon public-address, and I can't ping it either!
[20:55] the amazon dashboard shows me that the instances are running
[20:57] themonk: did you expose it?
[20:57] lazyPower, yes
=== natefinch__ is now known as natefinch
[20:57] did you validate your security groups were modified to actually open the ports?
[20:57] and it's not some hiccup on the AWS API side of things?
[20:58] 30 min ago it was ok
[20:58] did your unit's public address change on you?
[20:58] i just redeployed my charm
[20:59] i use --to 2, so the public address should not change
[20:59] and it remains the same
[21:00] i just exposed another service and i can't access it now either
=== CyberJacob is now known as CyberJacob|Away
=== menn0_ is now known as menn0
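lazyPower's security-group question above can be answered mechanically: check whether the port you expect is covered by any ingress rule of the instance's group. The sketch below is only the comparison logic; the rule dictionaries loosely mirror the shape of EC2 ingress permissions, and `port_is_open` is a hypothetical helper invented here, not a juju or AWS API:

```python
# Check whether a port falls inside any ingress rule of a security group.
# In practice you would fetch the rules from the EC2 console or API; this
# sketch only shows the containment check itself.

def port_is_open(rules, port, protocol="tcp"):
    """Return True if some rule for `protocol` covers `port`."""
    for rule in rules:
        if rule["protocol"] != protocol:
            continue
        if rule["from_port"] <= port <= rule["to_port"]:
            return True
    return False

# Example: the group opens ssh plus the exposed web port, nothing else.
rules = [
    {"protocol": "tcp", "from_port": 22, "to_port": 22},
    {"protocol": "tcp", "from_port": 80, "to_port": 80},
]
print(port_is_open(rules, 80))    # True: the exposed port is reachable
print(port_is_open(rules, 8080))  # False: would explain a dead connection
```

If the expected port is missing from the group, re-running `juju expose` (or checking that the charm actually called open-port) is the usual next step.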