UbuntuIRC / 2014 /08 /01 /#juju.txt
=== Beret- is now known as Beret
=== ming is now known as Guest12249
=== menn0-afk is now known as menn0
=== jcw4 is now known as jcw4_zzz
[06:01] <AskUbuntu> juju server relation hook files are not running? | http://askubuntu.com/q/505310
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
[08:27] <AskUbuntu> juju charm relation-joined hook not working? | http://askubuntu.com/q/505342
=== CyberJacob|Away is now known as CyberJacob
[09:09] <jamespage> marcoceppi, hey - do you think it would be possible to have an openstack-charmers review queue like we have for charmers on jujucharms.com?
[09:10] <jamespage> marcoceppi, I'm struggling with visibility of proposed changes right now and a central report would be useful for everyone
[09:10] <jamespage> gnuoy, ^^
=== CyberJacob is now known as CyberJacob|Away
=== vila is now known as vila-lunch
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
[11:24] <marcoceppi> jamespage: yes, I'm in the process of re-doing the review queue to be way more robust. In doing so it'll be a stand alone application that anyone can run and configure to track changes for whatever user/group/project
[11:24] <marcoceppi> as such, we could spin up an instance on canonistack to track openstack-charmers stuff
=== vila-lunch is now known as vila
[11:57] <jamespage> marcoceppi, sounds good
[11:58] <marcoceppi> jamespage: it's a little lower priority than everything else, but it's on my personal "I really care about this and want it done" list
[11:58] <jamespage> marcoceppi, if you have something in flight maybe post a branch? I'm happy to hack on this as well
[11:59] <marcoceppi> jamespage: I have early musings of some pyramid stuff, nothing connected to lp or gh yet
[11:59] <marcoceppi> just a database schema
[11:59] <marcoceppi> jamespage: https://github.com/marcoceppi/review-queue
[12:00] <marcoceppi> I hope to get more time on it this weekend, get celery and lp hooked up for initial imports
[12:48] <mfa298> I've got a Juju instance bootstrapped within Openstack using trusty. Is it possible to use that to deploy a charm that uses precise, or is that not supported? (I've got the precise image in glance but can't see a way to tell Juju where to find that image.)
[12:51] <marcoceppi> mfa298: you'll need to upload a custom image-metadata file to juju
[12:51] <marcoceppi> so it'll know where the precise images are
[12:52] <mfa298> that's presumably created with juju metadata generate-image, how do I then upload it ?
[12:52] <marcoceppi> mfa298: great question, I forget how, but I believe it's done at bootstrap time
[12:52] * marcoceppi checks
[12:53] <mfa298> so it may not be possible to upload after bootstrap
[12:54] <marcoceppi> mfa298: it doesn't appear so, but again I'm not 100% certain; it's something that's defined in the environments.yaml https://juju.ubuntu.com/docs/config-openstack.html
[12:54] <marcoceppi> However, let me check set-environment
[12:55] <marcoceppi> mfa298: you can update this after bootstrap
[12:55] <marcoceppi> with `juju set-environment image-metadata-url="url-to-generated-metadata"`
[13:00] <mfa298> hmmm, looking at the metadata that generate-metadata created for the precise image it seems to reference 14.04 rather than 12.04
[13:02] <marcoceppi> mfa298: you should be able to provide a series flag as a command-line option
[13:02] <marcoceppi> mfa298: with the -s flag
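The -s flag mentioned above is passed when generating the metadata. A sketch, assuming the juju 1.x metadata plugin; the image ID, region, and keystone endpoint below are placeholders for your own cloud:

```shell
# Generate simplestreams image metadata for a precise image already in glance.
# <PRECISE_IMAGE_ID> is the glance image UUID; region and endpoint are
# placeholders matching the cloud discussed above.
juju metadata generate-image -s precise -i <PRECISE_IMAGE_ID> \
    -r RegionOne -u http://192.168.17.17:5000/v2.0 -d ~/simplestreams
```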
[13:02] <mfa298> I'd just spotted that
[13:08] <mfa298> looks like juju still can't find the image
[13:09] <marcoceppi> mfa298: what do the logs look like when trying to deploy a precise charm?
[13:10] <mfa298> debug-log shows machine-0: 2014-08-01 13:07:20 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
[13:10] <mfa298> and the machine state in juju status shows: agent-state-info: '(error: index file has no data for cloud {RegionOne http://192.168.17.17:5000/v2.0}
[13:11] <marcoceppi> mgz_: who should I bug about juju deploying to openstack?
[13:12] <mfa298> trying juju metadata validate-images seems to suggest it's using cloud-images.ubuntu.com rather than the local images, although that could be me missing something
[13:12] <mgz_> marcoceppi: that's pretty generic
[13:12] <mgz_> what more specifically?
[13:13] <mgz_> I'm a pretty good starting point
[13:13] <marcoceppi> mgz_: mfa298 is trying to upload image-metadata post bootstrap, not sure of the process as I've never tried
[13:13] <mgz_> I see, reading log
[13:18] <mgz_> setting up image-metadata is something you do prior to bootstrap
[13:19] <mfa298> so having bootstrapped juju with it only knowing about trusty, there's not a way to add in precise other than destroying the environment and bootstrapping again?
[13:19] <mgz_> yup
[13:20] <mfa298> or is this going down the wrong route for what I was hoping to achieve.
[13:20] <mgz_> if you're setting this up yourself, you should instead make your keystone advertise the simplestreams
[13:20] <mgz_> rather than have juju supply it at run time
[13:23] <mfa298> quickly googled and that looks like it might be what I want. Is there a decent guide somewhere for setting that up ?
[13:26] <mgz_> mfa298: https://juju.ubuntu.com/docs/howto-privatecloud.html
[13:27] <marcoceppi> mgz_: whoa, where has this link been all my life
[13:28] <marcoceppi> also, wow that page is rendered wrong
[13:28] * marcoceppi goes to patch
[13:35] <mfa298> mgz_: I've done the juju metadata generate-image commands and have the metadata files which contain both the precise and trusty files.
[13:35] <mfa298> the issue seems to be getting that into juju
[13:36] <mfa298> I can run juju set-environment image-metadata-url=file://home/ubuntu but I still don't seem to be able to deploy a charm using precise
[13:37] <mfa298> or is the answer here that I need to have a web server to provide those files rather than file://
[13:37] <mgz_> no, that doesn't help
[13:38] <mgz_> you set those values in your environments.yaml at the start, and they need to be accessible from the cloud you've deployed, eg in swift
[13:38] <mgz_> file:// is no good
[13:39] <mfa298> so installing a webserver would be enough or do I also need to destroy the environment and bootstrap again with the url configured as well
[13:49] <mfa298> looks like a http server is enough. That seems to be working
[13:49] <mfa298> thanks
[13:52] <mfa298> now to work out the sets of commands that were actually needed so I can write the local documentation
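The sequence that appeared to work above might be summarized roughly as follows. Image IDs and host addresses are placeholders, and whether the URL needs an images/ suffix depends on where generate-image writes its tree, so treat this as a sketch rather than verified commands:

```shell
# Recap of the apparent working sequence; <...> values are placeholders.

# 1. Generate metadata for each series whose image is already in glance.
juju metadata generate-image -s precise -i <PRECISE_IMAGE_ID> -d ~/simplestreams
juju metadata generate-image -s trusty -i <TRUSTY_IMAGE_ID> -d ~/simplestreams

# 2. Serve the metadata over HTTP; file:// URLs are not reachable from
#    instances inside the cloud.
(cd ~/simplestreams && python -m SimpleHTTPServer 8080) &

# 3. Point the environment at the served metadata, ideally in environments.yaml
#    before bootstrap; the conversation above suggests set-environment can also
#    work after bootstrap.
juju set-environment image-metadata-url="http://<host>:8080/images"
```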
[13:56] <jcastro> sebas5384, arosales: hi
[13:57] <sebas5384> hey jcastro!
[13:57] <arosales> sounds like jose has us set up
[13:57] <jcastro> oh cool
[13:57] <jcastro> link?
[13:57] <arosales> just in case, I am going to join the old hangout in case any folks join there
[13:57] <jose> ohai
[13:57] <jose> jcastro: sec
[14:00] <sebas5384> link?
[14:00] <sebas5384> :P
[14:01] <arosales> Just a quick reminder to folks that we will be hosting on ubuntuonair.com not via the google hangout event.
[14:03] * marcoceppi tunes in
[14:03] <arosales> I confirmed no folks are in the Google Event hangout, which is good
[14:05] <jcastro> sebas5384, https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFNNnOvmc6cVeIQVTxHO4-o-wB5b0uus7JY
[14:06] <arosales> also we'll be using the following document to capture input
[14:06] <arosales> https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit
[14:06] <arosales> we'll be starting in a couple of minutes.
[14:06] <lazyPower> ah ok, we haven't started yet. I was following along on Ubuntu on Air and it's still broadcasting 'please stand by'.
[14:07] <arosales> lazyPower: correct, haven't started yet.
[14:07] <jose> :P
[14:08] <sebas5384> i'm there!
[14:08] <jose> we should have a slide saying 'Hey! We're late, but don't go!'
[14:10] <arosales> jose: looks like we are running into a perms issue
[14:11] <arosales> jose: are you in the hangout?
[14:11] <jose> arosales: I am
[14:11] <arosales> jose: jorge and I are in the hangout but don't see you.
[14:11] <jcastro> jose, can you paste the link in here?
[14:11] <jcastro> the one you PMed me isn't the one you are in apparently
[14:11] <cory_fu> Regarding the first point, I think what we really need is an extension to `juju resolved --retry` that essentially does a forced upgrade-charm before retrying the failed (or maybe even last successful) hook. `juju resolved --upgrade-and-retry`
[14:11] <sebas5384> i'm already there, but i can't hear jose
[14:11] <jose> https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFNNnOvmc6cVeIQVTxHO4-o-wB5b0uus7JY
[14:12] <zirpu> i thought the hangout wasn't being used today.
[14:12] <jose> I could go ahead and create another event
[14:13] <jose> arosales, jcastro: want me to create another event?
[14:13] <jcastro> Why don't we all just join the same hangout?
[14:13] <jcastro> I am confused why there are two?
[14:13] <sebas5384> jose: can you hear me?
[14:13] <jose> sebas5384: not at all
[14:14] <sebas5384> damn it
[14:14] <sebas5384> hangout is trolling us
[14:14] <jose> blame Google
[14:14] <jose> aaaalways happens
[14:14] <jcastro> what hangout are you in? can you paste in the URL?
[14:14] <jose> jcastro: https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFNNnOvmc6cVeIQVTxHO4-o-wB5b0uus7JY
[14:14] <sebas5384> https://plus.google.com/hangouts/_/g6mlkq4hfo6jvgjqmvksxo3inia?authuser=1&hl=pt-BR
[14:14] <jose> wait... that's another hangout link
[14:14] <jose> urgh
[14:14] <jcastro> ...
[14:14] <jose> I believe Google is playing with us
[14:15] <jcastro> ok which hangout link are we going to?
[14:15] <jose> we have different links
[14:15] <jcastro> pick one
[14:15] <jose> https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFNNnOvmc6cVeIQVTxHO4-o-wB5b0uus7JY is the one I'm in
[14:15] <sebas5384> permissions problems
[14:16] <jose> let me just quickly create another event. Google is a mess atm
[14:16] <jcastro> do you have them set to private or something?
[14:16] <jose> not at all
[14:17] <jcastro> hmmm
[14:17] <jcastro> ok fire up a new one I guess
[14:17] <jose> https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFO4tsipIQdBcOcv_43jywHwKQvrzxaa9XA=
[14:17] <zirpu> the ubuntu-on-air one was supposed to replace the hangout.
[14:17] <jose> jcastro, arosales, sebas5384: ^
[14:18] <sebas5384> now i'm in the last link you passed jose
[14:18] <arosales> this one is looking better
[14:18] <jose> zirpu: ubuntuonair uses hangouts :)
[14:18] <arosales> jcastro: I am in
[14:19] <jose> refresh ubuntuonair if you were in before
[14:20] <arosales> we got it started
[14:20] <arosales> Thanks for the patience.
[14:21] <arosales> live on ubuntuonair.com or join the hangout @ https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFO4tsipIQdBcOcv_43jywHwKQvrzxaa9XA=
[14:22] <zirpu> i switched to the ubuntuonair version. i'm just listening.
[14:35] <cory_fu> You can give debug-hooks a hook name to only have it trigger on the one hook you're interested in fixing.
[14:35] <cory_fu> I just learned this the other day. Very helpful
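The tip above in command form; the unit and hook names here are illustrative, not from the conversation:

```shell
# Drop into a debug session only when the named hook fires; all other hooks
# run normally. Unit and hook name are placeholders.
juju debug-hooks memcached/0 cache-relation-joined
```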
[14:37] <cory_fu> jcastro: If local-mapped-to-remote charm source is not an option, my preferred alternative would be `juju resolved --update-and-retry`
[14:38] <lazyPower> thats good to know cory_fu. We should make that more pronounced in the docs, which might help alleviate some of that frustration.
[14:46] <lazyPower> noodles775: you've done roles with ansible scripts in charms correct?
[14:47] <lazyPower> i'm fairly sure it's still very experimental at present, right? we haven't ironed out how it should look
[14:47] <lazyPower> jcastro: we don't have --force on service.
[14:48] <jcastro> ack
[14:48] <arosales> lazyPower: thanks.
[14:48] <cory_fu> If you --force destroy the machine out from under the service, you can then remove the service
[14:48] <zirpu> i think the ansible buffering is a function of the parallelism code. so make ansible serial and lower the polling from the default 15 seconds.
[14:49] <lazyPower> cory_fu: i think the idea is more that they want to force destroy the service, and leave the machine, so they aren't waiting for a machine spin up
[14:49] <lazyPower> jcastro: you're left with a service definition, with no units.
[14:50] <cory_fu> lazyPower: If you redeploy after removing a service, it creates a new machine. If you use --to to put it on the same machine, that's not much different than upgrade-charm. But I guess it would force it to re-run all of the hooks, at least.
[14:51] <lazyPower> well, the scary part about what's being asked is deploying to a tainted machine may yield really crazy results
[14:51] <lazyPower> but i get what you're saying cory_fu
[14:51] * cory_fu isn't arguing against adding --force to destroy-service, though.
[14:51] <cory_fu> I've tried to do that many times, even after realizing it doesn't work.
[14:51] <lazyPower> haha
[14:51] <lazyPower> it gets me too
[14:51] <lazyPower> more often than i care to admit... and i know it doesn't exist.
[14:51] <lazyPower> muscle memory i suppose
[14:52] <cory_fu> :)
[15:13] <arosales> lazyPower: is this: http://manage.jujucharms.com/~lazypower/precise/dns your latest DNS charm?
[15:13] <sebas5384> arosales: Handrus and Renato were here too :)
[15:14] <arosales> ah thanks Handrus and Renato!
[15:15] <arosales> sebas5384: take a look at http://manage.jujucharms.com/~lazypower/precise/dns - just need to confirm this is the last rev from lazyPower (re DNS)
[15:15] <lazyPower> arosales: it is. i sync'd it a few weeks ago with the latest work.
[15:16] <arosales> lazyPower: thanks for confirming.
[15:21] <jose> hey guys! question around here. I'm working on a chamilo-memcached relation, and it would allow multiple servers. when I do 'relation-get host', will it tell me just one IP address, or multiple IP addresses?
[15:22] <jose> (in the event I have multiple memcached instances)
[15:23] <benji> now
[15:23] <benji> wrong window :)
[15:25] <marcoceppi> jose: you can infer all the hosts at once if you wanted to
[15:25] <marcoceppi> jose: using relation-list
[15:25] <marcoceppi> jose: then just loop through the list
[15:26] <jose> hmm, I'm gonna check how that may work for me in a debug-hooks session
[15:26] <marcoceppi> memcached_hosts=$(relation-list); for m in $memcached_hosts; do relation-get host "$m" >> /file/to/track/hosts; done
[15:26] <marcoceppi> as an example
[15:26] <jose> cool
[15:26] <marcoceppi> the relation-get is from memory
[15:27] <marcoceppi> but there's a way to specify which unit you wish to query in a relation context
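The loop sketched above only works inside a hook context, where Juju injects the relation-list and relation-get hook tools. A self-contained bash version with those tools stubbed out for illustration (the unit names and addresses are invented):

```shell
#!/bin/bash
# Stubs standing in for Juju's hook tools, which exist only inside a hook
# context; the real tools return the related units and their settings.
relation-list() { printf 'memcached/0\nmemcached/1\n'; }
relation-get() { echo "10.0.0.${2#memcached/}"; }

hosts_file=$(mktemp)
# Unquoted expansion is intentional here: each unit name becomes one loop item.
memcached_hosts=$(relation-list)
for m in $memcached_hosts; do
    relation-get host "$m" >> "$hosts_file"
done
cat "$hosts_file"   # one stub address per related unit
```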
=== jcw4_zzz is now known as jcw4
[15:45] <jcastro> lazyPower, https://github.com/juju/docs/pull/135
[17:33] <sebas5384> lazyPower: thanks!!! could you show us how to use it? http://manage.jujucharms.com/~lazypower/precise/dns
=== roadmr is now known as roadmr_afk
=== StoneTable is now known as aisrael
[19:23] <lazyPower> sebas5384: it's not production-ready yet; there's no HA support.
[19:23] <lazyPower> if you use that, and your DNS charm server tanks, you've lost DNS
[19:23] <lazyPower> sebas5384: but i'm more than happy to talk you through the implementation details, and how it's structured / how to implement hooks.
[19:25] <lazyPower> sebas5384: take a look at https://github.com/chuckbutler/DNS-Charm - and scroll down to CHARM Integration; it talks about a programmable and programmable-multiple relationship hook. You set the proper variables, and it will build the configuration on the fly for you. The DNS charm itself spits out the public-address OTW so you can update /etc/resolv.conf with it as the primary DNS server, and your domains will then be available to each node connected to the dns charm.
[19:25] <lazyPower> sebas5384: there's more implementation logic that needs to happen with regard to updating third party providers, and/or implementing your DNS server in the global DNS tree with your registrar (if you want it to be authoritative)
[19:30] <jcastro> lazyPower, hey so btw my last PR didn't touch the precise box URLs
[19:30] <jcastro> so you might want to pull those
[19:30] <lazyPower> ack was already on it
[19:31] <lazyPower> jcastro: just in master? or do i need to touch another branch?
[19:34] <lazyPower> jcastro: https://github.com/juju/docs/pull/136
=== rektide_ is now known as rektide
[19:49] <jcastro> lazyPower, LGTM, merged
[19:50] <jcastro> lazyPower, marco told me a while back it's better to just do all the work in personal branches and then submit to master
[19:50] <jcastro> rather than under the juju namespace
[19:50] <jcastro> I was like, ok, sounds good to me
[19:50] <lazyPower> jcastro: that's what i did
[19:50] <jcastro> yeah, I saw
[19:50] <jcastro> I was just responding to your irc question
[19:50] <lazyPower> oh you mean the web editor
[19:50] <lazyPower> well i was curious which branch to target
[19:50] <lazyPower> if i needed to touch the 1.18 docs as well
[19:50] <lazyPower> not that we are still actively pointing anything at them
[20:07] <lazyPower> actually no need to sync, just riddle me this batman
[20:07] <lazyPower> https://code.launchpad.net/~asanjar/charms/trusty/hdp-hadoop/trunk <-- hortonworks?
[20:08] <lazyPower> https://code.launchpad.net/~asanjar/charms/trusty/hdp-zookeeper/trunk <-- has first traces of hortonworks charm helpers?
[20:13] <asanjar> lazyPower: yes, that is hortonworks .. but if you need to investigate bdutils.py (general big data charm helper) or hdputils.py (hortonworks distro-specific charm helpers) look at ~asanjar/charms/trusty/hdp-zookeeper
[20:15] <asanjar> lazyPower: as soon as I get a chance, I will update hortonworks hadoop with the latest helper..
[20:16] <lazyPower> ok thats all i needed
[20:16] <lazyPower> I'm wrapping up my last fringe issues this week with vagrant that jcastro just brought to me
[20:16] <lazyPower> Monday is when i start digging heavy into the apache hadoop rewrite
=== roadmr_afk is now known as roadmr
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
[23:31] <AskUbuntu> Swift Through Horizon | http://askubuntu.com/q/505650