UbuntuIRC / 2012/02/21 / #juju.txt
[10:23] <TeTeT> hi, I'm following http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage and I run into a problem at bootstrap already. the virtual net cannot be set up
[10:23] <TeTeT> http://pastebin.ubuntu.com/851168/ http://pastebin.ubuntu.com/851170/
[10:23] <TeTeT> I try this in a vm, maybe the network setup does not work with the vm's network emulation?
=== TheMue_ is now known as TheMue
[11:05] <koolhead17> TeTeT: i tried it on a physical machine :P
[11:07] <TeTeT> koolhead17: hmm
[15:03] <SpamapS> TeTeT: what release of juju? (apt-cache policy juju)
[15:05] <TeTeT> SpamapS: Installed: 0.5+bzr457-0ubuntu1 from precise
[15:06] <TeTeT> SpamapS: is it supposed to work on a vm managed through libvirt and using the default virbr0 connected interface? Or does it need a real bridge to the ethernet card of the system?
[15:07] <SpamapS> TeTeT: it should work fine inside a VM yes
[15:10] <TeTeT> SpamapS: sudo virsh net-start default just does not work, maybe because of vm inside of a vm constraint?
[15:14] <SpamapS> TeTeT: no, the networking part of virsh is pretty simple
[15:14] <SpamapS> TeTeT: those "virbr" bridges are just bridges with no physical components.
[15:15] <TeTeT> SpamapS: ok. any idea how to get this working?
[15:16] <SpamapS> TeTeT: whats the problem with virsh net-start default?
[15:16] <TeTeT> SpamapS: error: internal error Network is already in use by interface eth0
[15:17] <benji> I'm having a problem running on EC2; the bootstrap appears to work then I run juju status, lie to ssh about verifying the fingerprint and then juju status hangs, never to return
[15:19] <SpamapS> TeTeT: sounds like you have it configured to be a "real" bridge instead of a virtual one.
[15:19] <SpamapS> benji: does 'juju ssh 0' work?
[15:20] * benji tries
[15:20] <benji> SpamapS: nope, I get "2012-02-21 10:20:05,876 INFO Connecting to environment..." and then a hang
[15:20] <TeTeT> SpamapS: doubt it, the vm has the normal 192.168.122.x address for virbr0 on the host
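For context, the "Network is already in use by interface eth0" error usually means the guest's eth0 already sits on the 192.168.122.0/24 subnet that libvirt's default network wants to claim, which matches the nested-VM setup described above. A rough sketch of moving the inner default network onto a non-conflicting subnet (the 192.168.123.x range here is only an illustrative choice):

    sudo virsh net-dumpxml default   # inspect the current definition
    sudo virsh net-edit default      # change the <ip> block, e.g.:
    #   <ip address='192.168.123.1' netmask='255.255.255.0'>
    #     <dhcp><range start='192.168.123.2' end='192.168.123.254'/></dhcp>
    #   </ip>
    sudo virsh net-start default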
[15:22] <SpamapS> benji: interesting...
[15:23] <benji> SpamapS: this strace output looks like it might mean something to someone that knows what's going on: http://paste.ubuntu.com/851476/
[15:24] <SpamapS> benji: hmmm.. you are running with default-series: precise ?
[15:24] <benji> SpamapS: yep
[15:24] <SpamapS> benji: have to go afk for a bit
[15:24] <SpamapS> benji: try oneiric.. if that works.. we have a bug in precise
[15:24] <benji> will do
[15:34] <benji> SpamapS: it works fine with oneiric
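For reference, switching the target series was a one-line change in ~/.juju/environments.yaml at the time; a sketch, with the environment name and credentials left as placeholders:

    environments:
      sample:                     # hypothetical environment name
        type: ec2
        default-series: oneiric   # was: precise
        # access-key, secret-key, control-bucket, etc. unchanged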
[15:35] <SpamapS> benji: ok, do you want to open a bug report against juju? I can do it too if you don't want to.
[15:36] <benji> SpamapS: I can
[15:38] <SpamapS> benji: *thank you*!
[15:39] * hazmat upgrades to precise
[15:41] <hazmat> TeTeT, did you already have a bridge setup? it looks like it's just the default libvirt bridge there
[15:41] <TeTeT> hazmat: nah, I didn't setup an extra bridge.
[15:41] <TeTeT> hazmat: oh, the reboot didn't change anything for me
[15:42] <TeTeT> hazmat: which is mentioned in the askubuntu article
[15:57] <_mup_> Bug #937889 was filed: Hang on "juju status" with EC2 and Precise <juju:New> < https://launchpad.net/bugs/937889 >
[15:57] <m_3> benji SpamapS: I had problems with precise juju instances at the end of last week too... it couldn't install packages b/c of dep problems (txzookeeper iirc)
[16:00] <m_3> didn't get a chance to really debug it tho, but it's easily reproducible with just a bootstrap
[16:00] <m_3> (ec2)
[16:03] <jcastro> SpamapS: m_3: did you guys get a mail from wordpress for the server blog?
[16:04] <benji> SpamapS: https://bugs.launchpad.net/juju/+bug/937889
[16:04] <_mup_> Bug #937889: Hang on "juju status" with EC2 and Precise <juju:New> < https://launchpad.net/bugs/937889 >
[16:04] <m_3> jcastro: looking now
[16:04] <m_3> benji: thanks, I'll +1 it
[16:05] <jcastro> m_3: oh so it worked? awesome, didn't know if it was set up to send mail, nice work!
[16:05] <m_3> jcastro: don't see it
[16:06] <m_3> easiest is probably postfix/gmail to send... I'd bet any direct sends'll be already blacklisted
[16:06] * jcastro nods
[16:06] <jcastro> our first issue, mail!
[16:06] <m_3> jcastro: I've got that config snapshotted somewhere if you want me to dig
[16:06] <jcastro> this will be fun
[16:07] <m_3> jcastro: yup!
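The postfix-through-gmail setup mentioned above is roughly the standard SASL relayhost configuration; a sketch, with the account details as placeholders:

    # /etc/postfix/main.cf (additions)
    relayhost = [smtp.gmail.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_use_tls = yes

    # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
    [smtp.gmail.com]:587 someone@gmail.com:app-password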
[16:08] <jcastro> m_3: hey so, you think you can charm up lp:summit this week?
[16:08] <jcastro> it'd be a nice win
[16:10] <m_3> jcastro: lemme look at it... I'm buried in stuff atm, but it wasn't too bad iirc
[16:11] <SpamapS> jcastro: still powering through my email
[16:11] <jcastro> SpamapS: it's ok it probably didn't send
[16:20] <robbiew> jcastro: you got blog access now
[16:20] <jcastro> yep, thanks
[16:20] <jcastro> man, wordpress needs FTP to import blogs
[16:20] <jcastro> m_3: every time we run into a problem I'm going to say "octo"
[16:33] <m_3> jcastro: and I'll be happy to "+1" that
[16:34] <m_3> jcastro: zk charm reviewed and ready to promulgate
[16:34] * m_3 rings the promulgate bell
[16:36] <jcastro> oooh cute!
[16:37] * jcastro will blog that one
[16:37] <jcastro> anything special I should care about, or is it in the readme?
[16:40] <m_3> jcastro: readme's on the way still... it's almost eod for jamespage too so prob won't happen until tomorrow
[16:40] <jcastro> ok no worries
[16:41] <jamespage> \o/
[16:41] * jamespage is going to charm hbase tomorrow
[16:43] <jamespage> m_3: I was thinking of using dotdee to manage the config for hbase - have you seen any use of it in charms so far?
[16:44] <m_3> jamespage: have not
[16:44] <m_3> I think that's a great idea
[16:45] <jamespage> m_3: it would mean that you could generate the part of the config file associated with the event without having knowledge of the rest of the configuration
[16:45] <m_3> some charms have used a combo of config snippets and sed... but dotdee would be a little less manual
[16:46] <m_3> I've been experimenting with calling cheetah from the command line... (exporting vars into the env)... but it's... eh
[16:47] <SpamapS> sed is a fail IMO
[16:47] <SpamapS> we should be building config files from scratch or using dotdee
[16:49] <m_3> sed can actually be a simpler/cleaner solution for small config changes
[16:49] <m_3> and easier to maintain b/c it's just pure diffs
[16:49] <m_3> but yes, for large-scale or complex config... it sucks
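A rough illustration of the two approaches being weighed here, an in-place sed tweak versus assembling the file from snippets (which is essentially what dotdee automates); the file and option names are hypothetical:

    # small, targeted change: one sed against the packaged config
    sed -i 's/^#\?max_connections.*/max_connections = 500/' /etc/myservice/myservice.conf

    # snippet assembly: each hook writes only its own fragment,
    # then the whole file is rebuilt from the .d directory
    echo 'max_connections = 500' > /etc/myservice/conf.d/50-connections
    cat /etc/myservice/conf.d/* > /etc/myservice/myservice.conf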
[16:51] <jcastro> m_3: hey the IP for that blog instance won't change will it?
[16:52] <m_3> yeah, it totally will... lemme attach an elastic IP real quick... hang on
[16:52] <jcastro> thanks
[16:56] <jamespage> m_3: I did some cheetah stuff for tomcat7 I think
[16:57] <m_3> jcastro: ok, 23.21.249.196... can you point the server blog url directly to that IP addr? I can add a url like xxx.markmims.com in front of it if we need to
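For the record, attaching an Elastic IP with the ec2-api-tools of that era looked roughly like this (the instance id is a placeholder):

    ec2-allocate-address                           # returns e.g. 23.21.249.196
    ec2-associate-address -i i-0123abcd 23.21.249.196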
[16:58] <jcastro> ok
[16:58] <jcastro> hey should juju ssh blog/0 work?
[16:58] <SpamapS> hazmat: is that reboot support I see landing in lp:juju ?!
[17:01] <jcastro> m_3: hey so does assigning that IP break the existing aws URL?
[17:02] <m_3> jcastro: you might need to specify environment... juju ssh -efido blog/0
[17:02] <m_3> jcastro: checking on everything now
[17:04] <hazmat> SpamapS, yup, and upstartification, and all kinds of yummy goodness
[17:04] <jcastro> m_3: I get a connection timed out error
[17:05] <m_3> jcastro: wow... it looks like aws just removed the old dns entry
[17:05] <jcastro> and the ec2 url just stopped working for me altogether
[17:05] <m_3> gave it a new one that matches the elastic ip
[17:05] <m_3> ec2-23-21-249-196.compute-1.amazonaws.com
[17:06] <jcastro> hah, awesome
[17:06] <m_3> but now it seems like juju's lost
[17:06] <jcastro> yep, I was going to ask, did we just find a new bug?
[17:07] <m_3> I've done this before without this problem... lemme poke around and see what happened
[17:07] <jcastro> And we're off to a great start!
[17:07] <m_3> in fact, it's up without this problem in another environment
[17:07] <jcastro> m_3: save your history, this will be a good post.
[17:08] <m_3> nice, now juju status reports different addresses for the service unit and the machine
[17:09] <m_3> it's trying to ssh to the old one that the service unit shows (and not getting through)
[17:09] <jcastro> yep
[17:10] <m_3> jcastro: you can still get in using the explicit machine id... 'juju ssh -efido 2'
[17:10] <jcastro> ah!
[17:20] <jcastro> jamespage: thanks for the promulgation!
[17:20] <jamespage> jcastro, no problemo!
[17:21] <_mup_> Bug #937949 was filed: juju status shows addresses that are out of sync <juju:New> < https://launchpad.net/bugs/937949 >
[17:21] <m_3> jcastro: ^^
[17:29] <mars> Hi guys, I have a question about installing from Launchpad private PPAs: is there a good charm cookbook recipe for doing this?
[17:29] <mars> This looks good for a start: http://charms.kapilt.com/~openstack-ubuntu-testing/precise/nova-volume/hooks/nova-volume-common
[17:33] <m_3> mars: https://code.launchpad.net/~james-page/charms/precise/zookeeper/trunk is another example
[17:33] <m_3> it's pretty straightforward once you have packages _in_ the ppa
[17:34] <m_3> the charm's install hook just runs add-apt-repository, then apt-get update, then apt-get install
[17:34] <m_3> getting packages into the ppa is a whole other ballgame :)
[17:35] <mars> m_3, thanks, that is a nice way to do public archives
[17:36] <mars> Private ones require a bit more work, what with the custom URL and all. AFAIK add-apt-repository doesn't handle them
[17:38] <mars> The two recipes can probably be merged to what I want: install from a private PPA, public PPA, or archive.
[17:39] <m_3> mars: ah, gotcha... sorry
[17:39] <m_3> yeah, for any ppas outside of launchpad it seems like the tough thing would be key exchange
[17:39] <m_3> but I really don't know
[17:40] <m_3> we try to make sure any downloaded payloads can be cryptographically verified... for the charms in the charm store
[17:41] <m_3> for other charms, anything goes... pulling from github for node.js, npm, gems, etc is pretty common
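A rough install-hook sketch covering both cases; the PPA name, key id, and credentials are placeholders, and the private-PPA line has to be written out by hand because add-apt-repository doesn't know about subscription URLs:

    #!/bin/bash
    set -e

    # public PPA: add-apt-repository fetches the signing key for you
    add-apt-repository -y ppa:example-team/example-ppa

    # private PPA: write the subscription URL (user and token) directly
    echo "deb https://user:TOKEN@private-ppa.launchpad.net/example-team/example-ppa/ubuntu precise main" \
        > /etc/apt/sources.list.d/example-private.list
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0xDEADBEEF

    apt-get update
    apt-get install -y example-package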
[17:51] <SpamapS> m_3: whats the txzookeeper problem exactly?
[17:52] <m_3> SpamapS: bootstrap a precise environment... ppa or distro doesn't matter
[17:53] <m_3> then ssh directly to the instance (juju ssh hangs)
[17:54] <m_3> dig through the log... packages aren't installed b/c of a dep problem (I vaguely remember it being txzk...)
[17:55] <m_3> I can reproduce and dig out the logs if you want, just lemme know
[17:57] <m_3> http://paste.ubuntu.com/851637/
[17:57] <m_3> SpamapS: ^^
[17:58] <m_3> SpamapS: added your key to ubuntu@ec2-174-129-55-132.compute-1.amazonaws.com
[18:00] <SpamapS> m_3: that looks like just plain broken images
=== grapz_afk is now known as grapz
[18:03] <m_3> SpamapS: it's strangely similar to my desktop problem atm: http://paste.ubuntu.com/851649/
[18:04] <SpamapS> m_3: yours looks like an out of sync mirror..
[18:04] <m_3> oh, nice... /me fixing that!
[18:04] <m_3> very scared I have to reinstall
[18:05] <SpamapS> m_3: definitely not
[18:05] <SpamapS> m_3: just point at us.archive.ubuntu.com
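Repointing an out-of-sync mirror is a one-liner, assuming the stock sources.list layout:

    sudo sed -i 's|http://[a-z0-9.]*archive.ubuntu.com/ubuntu|http://us.archive.ubuntu.com/ubuntu|g' /etc/apt/sources.list
    sudo apt-get update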
[18:05] <jcastro> SpamapS: m_3: incoming mail about a charm contest, please review by end o business today, so I can launch this badboy
[18:05] <m_3> jcastro: roger roger
[18:06] <m_3> SpamapS: yeah, taking my apt-cacher-ng out of the picture
[18:08] <m_3> SpamapS: no change
[18:09] <SpamapS> m_3: have you tried dist-upgrade ?
[18:10] <SpamapS> m_3: or apt-get -f install ?
[18:10] <m_3> yup, same
[18:10] <SpamapS> m_3: that does not make any sense.
[18:11] <m_3> variations of -f or --fix-xxx didn't seem to do much
[18:11] <SpamapS> m_3: you have something damaged.. what version of gnome-control-center?
[18:11] <m_3> I hadn't been using any mirrors.... other than apt-cacher-ng
[18:12] <m_3> gnome-control-center 1:3.2.2-2ubuntu8
[18:12] <SpamapS> m_3: 1:3.3.5-0ubuntu2 is the latest. It should be installing that
[18:12] <m_3> it was an upgrade from oneiric and not a fresh install
[18:12] <SpamapS> mine too
[18:12] <SpamapS> mine goes back to 10.10 :)
[18:13] <m_3> the juju one is more important though
[18:13] <SpamapS> yes I'm looking into the juju one
[18:13] <SpamapS> I think thats just broken images
[18:38] <m_3> jcastro: comments in on the charm contest
[19:36] <SpamapS> hazmat: https://code.launchpad.net/~clint-fewbar/juju/use-packages-yaml/+merge/94040
[19:37] <SpamapS> hazmat: fixes running juju w/ precise instances
[19:38] <SpamapS> hazmat: oddly enough.. I used lbox propose, but I don't see it talking to rietveld. :-P
[19:38] * SpamapS lunches
[19:40] <hazmat> SpamapS, it needs a -cr flag to make it go to reitveld
[19:40] <SpamapS> ah, next time.. its trivial anyway
[21:46] <SpamapS> benji: hey we found a resolution for the bug you reported this morning
[21:46] <SpamapS> err
[21:47] <SpamapS> I should say, a few hours ago.. might not have been morning
[21:47] <benji> It was morning somewhere.
[21:57] <SpamapS> benji: thanks for trying out precise. :) The problem was that libc6 was updated between alpha1 and now, so debconf was prompting for some questions
[21:58] <SpamapS> And the real underlying problem was that we were doing apt-get in runcmd instead of listing the package in cloud-init's 'packages' line
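The gist of the fix, as a cloud-config sketch: cloud-init's own packages list drives apt non-interactively, whereas an apt-get run from runcmd can stall on debconf prompts like the libc6 one described here:

    #cloud-config
    packages:
      - python-txzookeeper
    # instead of:
    # runcmd:
    #   - [apt-get, install, -y, python-txzookeeper]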
[21:59] <m_3> SpamapS: did you merge and kick off a ppa build?
[21:59] <benji> SpamapS: interesting; good turn-around time on the fix
[22:03] <m_3> SpamapS: nevermind... I see
[22:08] <hazmat> SpamapS, that seems a little strange only because by default it will use nightly builds on ec2, i suppose that's not the case for openstack
[22:19] <SpamapS> hazmat: no, we use released builds by default on ec2
[22:20] <SpamapS> def get_current_ami(ubuntu_release="oneiric", architecture="i386", persistent_storage=True, region="us-east-1", daily=False, desktop=False, url_fetch=None):
[22:20] <SpamapS> data["version"] = daily and "daily" or "released"
[22:20] <hazmat> ugh.
[22:20] <SpamapS> hazmat: thats the more conservative approach
[22:20] <SpamapS> hazmat: trouble is, there's no way to override it
[22:20] <SpamapS> though I suspect that will come with the full implementation of constraints
[22:22] <hazmat> SpamapS, we should be using the nightlies, we tell cloud-init to do an update/upgrade
[22:22] <hazmat> so we're just wasting bandwidth
[22:22] <hazmat> and it also means we get newer kernels
[22:22] <SpamapS> hazmat: true!
[22:22] <SpamapS> though
[22:23] <SpamapS> less repeatable deploys with that strategy
[22:23] <SpamapS> can't tell you how many times I had version skews drive me *NUTS* in tracking down problems at 3am
[22:23] <hazmat> SpamapS, is the reality any different if we're doing an upgrade on the machine prior to setting up juju?
[22:24] <SpamapS> hazmat: I'm suggesting that the upgrade is also a problem
[22:25] <SpamapS> hazmat: I think ultimately we just need ways to say "fire all install hooks" on a service so that you can re-assert versions
[22:25] * hazmat nods
[22:25] <hazmat> version skew pincer movement, wiped out entire roman legions, didn't even need the elephants at cannae
[22:25] <SpamapS> lol
[22:26] <hazmat> yeah.. i can see it both ways
[22:27] <hazmat> if you're reproducing or adding units to the service, you definitely want the same versions; if you're deploying fresh, i'd see wanting to have the latest stable-updates applied
[22:27] <hazmat> as a goal
[22:28] <hazmat> SpamapS, sounds worth some more discussion on list to poll a larger audience
[22:35] <SpamapS> hazmat: perhaps add-unit should not do the update/upgrade
[22:35] <SpamapS> hazmat: another thought is to defer updates/upgrades to charms always
[22:35] <hazmat> SpamapS, but that could be at a different version delta than the original unit
[22:36] <hazmat> SpamapS, ie. if they're off releases, and the first unit did the update/upgrade, then the second unit is stuck at the base image version.. doesn't really make sense
[22:37] <SpamapS> hazmat: right, so really, perhaps update/upgrade should be pushed off to charms.
[22:37] <hazmat> SpamapS, i think it's probably more useful to use the nightly unless it's an add-unit, in which case we use the previously used image and don't update/upgrade.. BUT.. there are lots of management tools and probably colo services that might also be doing package management
[22:37] <SpamapS> hazmat: *or* disconnected from deploy/add-unit
[22:38] <SpamapS> hazmat: I think we should just document how it works now, and think about how to improve the "update the whole service" story
[22:39] <hazmat> SpamapS, that's fair, although i think it could use some discussion on the wider list as well to help advance the story
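For concreteness, the update/upgrade behaviour being debated boils down to the flags juju writes into each instance's cloud-config, roughly:

    #cloud-config
    apt_update: true    # refresh the package index on first boot
    apt_upgrade: true   # apply whatever stable-updates have accrued since the image was built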
[22:44] <SpamapS> hazmat: I asked earlier.. but saw no response.. did I see rebooting landing in lp:juju ?
[22:51] <jimbaker> SpamapS, hazmat replied at 10:04 MST: SpamapS, yup, and upstartification, and all kinds of yummy goodness
[22:52] <jimbaker> nice to see those features land, as i mentioned in #juju-dev at the time :)
[22:52] <SpamapS> ahh ok cool!
[22:52] <SpamapS> Thats like.. huge
[22:52] <jimbaker> SpamapS, indeed!
[22:53] <SpamapS> https://bugs.launchpad.net/juju/+bug/863526
[22:53] <_mup_> Bug #863526: Juju agents do not handle reboots <production> <juju:Triaged> < https://launchpad.net/bugs/863526 >
[22:53] <SpamapS> So, thats kind of a meta-bug, but does a reboot work now?
[22:53] * SpamapS tries it
[22:53] <hazmat> SpamapS, you can kill any juju agent and it should do the right thing
[22:53] <hazmat> mad props to fwereade_ who did the heavy lifting
[22:54] <hazmat> which reminds me, i should circle back to the branches i had waiting on that
[22:54] * SpamapS bootstraps and then reboots to see what happens
[22:55] <hazmat> SpamapS, that might not work ;-)
[22:55] <hazmat> SpamapS, its more that agents can be restarted, and killed at arbitrary points and do the right thing
[22:55] * SpamapS would really like to close some 'production' bugs
[22:55] <SpamapS> hazmat: any reason that might not work?
[22:56] <hazmat> SpamapS, the session expiration stuff hasn't landed, as it was waiting on the restart work as a mechanism
[22:56] <SpamapS> so if the reboot doesn't happen in like, 3 seconds, something bad happens?
[22:57] <hazmat> SpamapS, if the zk server advances the clock and expires the session, the agents might not handle that.. i dunno, there is some support there for killing old sessions when the process comes back.. so it might work
[22:57] <hazmat> but if the agent is alive when the session expiration happens then they don't do anything for it
[22:57] <SpamapS> hazmat: well in this case, I'm rebooting zookeeper too...
[22:58] <hazmat> SpamapS, right, but zk will effectively advance the clock on all extant sessions when it comes back up... like i said it might work, i just can't guarantee it yet
[22:59] <SpamapS> hazmat: I don't understand what advance the clock means, and I don't understand why an expired session does anything. :-P
[22:59] <SpamapS> doesn't the agent just start a new session?
[23:01] <hazmat> SpamapS, it will when it starts up, but not if the session is expired while its up
[23:01] <SpamapS> Actually the agents are failing on start
[23:01] <SpamapS> http://paste.ubuntu.com/851980/
[23:01] <SpamapS> juju.errors.JujuError: No session file specified
[23:02] <hazmat> SpamapS, hmm. they should all have session files specified if the env was started with the latest trunk
[23:03] <SpamapS> JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log
[23:03] <SpamapS> --pidfile=/var/run/juju/machine-agent.pid', 'JUJU_ZOOKEEPER=localhost:2181 python
[23:04] <SpamapS> bootstrap seems to not use the upstart yet?
[23:06] <hazmat> SpamapS, quite possible, its a large set of changes that just landed on trunk.. i'll do some additional qa testing now
[23:06] <SpamapS> ./juju/providers/common/tests/data/cloud_init_branch_trunk:runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju,
[23:06] <SpamapS> DOH
[23:06] <SpamapS> missed some apt-get's
[23:07] <hazmat> SpamapS, you want the cake or egg treatment ;-)
[23:07] <hazmat> jk
[23:08] <SpamapS> hazmat: the egg goes on my face.. and the cake.. well
[23:08] <SpamapS> THE CAKE IS A LIE
[23:08] <hazmat> but so much tastier
[23:11] <SpamapS> oh
[23:11] <SpamapS> hahaha
[23:11] <SpamapS> hazmat: stand down
[23:11] <SpamapS> my local version of juju was the distro version
[23:11] <SpamapS> *DUH*
[23:15] <hazmat> cool, i do remember testing that earlier (killing machine agent), it looks ok locally.. still not 100% sure about the restart capability
[23:23] <SpamapS> hazmat: but there are branches that will solve that in flight?
[23:24] <hazmat> SpamapS, yes
[23:24] <hazmat> heading out to check out a user group meeting, g'night
[23:25] <SpamapS> The system is going down for reboot NOW!
[23:25] * SpamapS crosses fingers
[23:25] <SpamapS> hazmat: have fun
[23:26] <SpamapS> initially it looks to have worked quite nicely
[23:26] <SpamapS> Heh.. it helps that we reach runlevel 2 at 7 seconds.
[23:27] <SpamapS> 2012-02-21 23:25:19,466:2636(0xb73896c0):ZOO_INFO@zookeeper_close@2304: Closing zookeeper sessionId=0x135a235ffb30001 to [127.0.0.1:2181]
[23:28] <SpamapS> 2012-02-21 23:25:47,188:576(0xb74ed6c0):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.3
[23:28] <SpamapS> looks like it worked fine for machine/provisioning agent
[23:30] <SpamapS> sweet... and they can be restarted with 'service juju-machine-agent restart'
[23:52] <SpamapS> I just noticed
[23:52] <SpamapS> am I not being asked to verify ssh keys now?!
[23:52] <SpamapS> is that landed?
[23:52] <SpamapS> because if it is.. woot!
[23:52] <SpamapS> if I"m just dumb and fixed my .ssh/config to not be asked.. then ignore moe
[23:52] <SpamapS> ignore me too.. but really ignore moe.. that jaerk