UbuntuIRC / 2014/02/17 / #juju.txt
=== mwhudson is now known as zz_mwhudson
=== zz_mwhudson is now known as mwhudson
=== mwhudson is now known as zz_mwhudson
=== CyberJacob|Away is now known as CyberJacob
=== zz_frobware is now known as frobware
=== CyberJacob is now known as CyberJacob|Away
=== psivaa-afk is now known as psivaa
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== gary_poster|away is now known as gary_poster
[14:07] <cargill> if a charm has a database relation, how should it handle being in relation to multiple databases at once? should it reject the second join? should it ignore it?
[14:19] <hazmat> cargill, well as a client to a database, each of the db conns would have different relation names
[14:20] <hazmat> cargill, ie.. mediawiki does this.. it can have multiple mysql relations.. one for read slave and one for db.. it distinguishes the usage based on the relation name (which also means different rel hooks)
[14:20] <hazmat> cargill, client/require deps are only satisfied once per service.
[14:21] <cargill> hazmat: but if you have a database relation, the user can still join it with multiple charms, right?
[14:21] <cargill> (database charms)
[14:22] <hazmat> cargill, right.. the server/provider side can have many instances of the relation
[14:23] <hazmat> cargill, in terms of distinguishing between those, you can use relation-ids to list the different instances of that named relation on the server
[14:24] <hazmat> cargill, it's not clear what your question/use case is.. could you elaborate?
[14:24] <cargill> you say provider can have multiple instances of a relation, but the other side cannot?
[14:26] <cargill> designing a db-relation-joined/departed, I wonder if I have to handle a user setting up a relation to multiple database charms (where the application can only connect to a single database)
[14:27] <hazmat> cargill, well... technically it can, it's just not common (and certain tools like the gui don't support it)
[14:27] <hazmat> cargill, you mean like they can connect to postgres or mysql?
[14:29] <hazmat> cargill, maybe this example clarifies http://pastebin.ubuntu.com/6949028/
[14:31] <hazmat> actually that simplifies it too much.. here's a better example http://pastebin.ubuntu.com/6949032/
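Both pastebin links above are long dead. A hedged sketch of the kind of thing hazmat's example likely showed, assuming a mediawiki-style client charm that requires two differently named relations over the same mysql interface (the charm and relation names here are illustrative):

    # metadata.yaml for a hypothetical client charm: two named
    # relations, both using the mysql interface
    cat > metadata.yaml <<'EOF'
    name: mywiki
    summary: example client with two differently named mysql relations
    requires:
      db:
        interface: mysql    # primary read/write database
      slave:
        interface: mysql    # optional read-only slave
    EOF

Each relation name gets its own hooks (db-relation-joined, slave-relation-joined, and so on), which is how the charm distinguishes the two uses.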
[14:33] <cargill> so again, the question is, if someone tries to do that (add a second db relation, where one is already active), what's the right response?
[14:34] <cargill> (from the *joined/departed hooks)
[14:56] <hazmat> cargill, i'd error so it draws attention from admin
[14:56] <cargill> thanks
[14:56] <hazmat> cargill, and log an appropriate error msg
[14:56] <cargill> sure :)
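A minimal bash sketch of the behaviour agreed above, written as a db-relation-joined hook; the relation name db and the one-database constraint come from cargill's scenario, the rest is assumption:

    #!/bin/bash
    # db-relation-joined: error out on a second database relation so it
    # draws the admin's attention, logging an appropriate message first
    set -e
    # relation-ids lists every instance of the named relation
    count=$(relation-ids db | wc -l)
    if [ "$count" -gt 1 ]; then
        juju-log "ERROR: this charm supports a single db relation, found $count"
        exit 1    # a non-zero exit puts the unit into an error state
    fi
    # ... normal joined handling: relation-get the credentials, configure the app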
=== freeflying is now known as freeflying_away
=== frobware is now known as zz_frobware
[15:43] <tomixxx3> hi, is it bad if it says "instance-state: missing" after deploying a charm?
[15:43] <tomixxx3> agent-state is "started" :-)
[16:17] <marcoceppi> tomixxx3: is this on local provider?
[16:17] <tomixxx3> hi marcoceppi
[16:17] <tomixxx3> what do u mean with "local provider" ?
[16:18] <marcoceppi> tomixxx3: the instance-state: missing, what provider are you using? Local, amazon, hp cloud, etc
[16:18] <tomixxx3> openstack
[16:18] <marcoceppi> tomixxx3: interesting, does it still say missing?
[16:18] <tomixxx3> yep, btw the nodes have internet access now :-)
[16:19] <tomixxx3> i had to set "router ip" in the maas dashboard to the same ip as the MaaS server
[16:19] <tomixxx3> figured this out with jtv in #maas
[16:19] <marcoceppi> tomixxx3: ah, good to know
[16:20] <tomixxx3> marcoceppi: right now, i have deployed a bunch of charms and i'm waiting until they all have "started"
[16:20] <marcoceppi> tomixxx3: well that means it simply can't figure out if the instance is running or not. missing could mean the instance is gone or it can't get a status
[16:21] <tomixxx3> marcoceppi: oh no, sounds not good
[16:21] <tomixxx3> but let's see
[16:21] <marcoceppi> tomixxx3: could you show me your juju status?
[16:21] <marcoceppi> tomixxx3: also, in the horizon dashboard do you see instances launched?
[16:21] <tomixxx3> i mean, i have deployed multiple charms on a single node, because i don't have that many nodes
[16:22] <tomixxx3> with lxc-create if u remember
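For context, a hedged sketch of the juju side of co-locating several charms on one node via LXC containers; the machine number and service names are placeholders:

    # deploy services into LXC containers on an existing machine
    # (machine 1 stands in for whichever node MAAS allocated)
    juju deploy rabbitmq-server --to lxc:1
    juju deploy keystone --to lxc:1
    juju deploy openstack-dashboard --to lxc:1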
[16:22] <marcoceppi> so, are you using openstack or maas?
[16:23] <tomixxx3> both ? ^^
[16:23] <tomixxx3> https://help.ubuntu.com/community/UbuntuCloudInfrastructure
[16:23] <marcoceppi> tomixxx3: can you pastebin your juju status please
[16:24] <tomixxx3> one sec
[16:24] <tomixxx3> http://pastebin.ubuntu.com/6949593
[16:25] <tomixxx3> as u can see, cloud2.master is still booting
[16:25] <tomixxx3> (ok i can see the node is booting ^^)
[16:25] <marcoceppi> tomixxx3: Okay, so this is on the maas environment
[16:25] <tomixxx3> however, nova-volume failed
[16:25] <marcoceppi> instance-state missing is probably a known issue with lxc containers; the agent-state is started and that's all that matters
[16:25] <tomixxx3> yep
[16:26] <marcoceppi> cloud2.master probably needs to be power cycled depending on how long ago you commissioned it
[16:26] <marcoceppi> nova-volume is in error, so try running `juju resolved --retry nova-volume/0` and see if that helps
[16:26] <tomixxx3> cloud2.master is installing ubuntu right now
[16:26] <tomixxx3> i have it in front of me
[16:26] <marcoceppi> tomixxx3: gotchya
[16:27] <marcoceppi> tomixxx3: also, could you pastebin the log from nova-volume/0
[16:27] <tomixxx3> kk
[16:27] <marcoceppi> it'll be in /var/log/juju/unit-nova-volume-0.log
[16:28] <marcoceppi> on nova-volume/0
[16:28] <tomixxx3> i have to login on nova-volume/0 for this i guess?
[16:29] <marcoceppi> tomixxx3: if you recall, co-locating most all services to LXC /might/ work but isn't recommended. You might need to do some re-jiggering to get it to work
[16:29] <marcoceppi> tomixxx3: yes, run juju ssh nova-volume/0
[16:29] <tomixxx3> "re-jiggering" ?
[16:30] <marcoceppi> tomixxx3: you might have to massage the node a little bit to get it to setup
[16:30] <tomixxx3> at home, i have two physical nodes lying around, maybe i attach them to the cloud
[16:30] <marcoceppi> tomixxx3: it might not be needed
[16:30] <tomixxx3> kk
[16:30] <marcoceppi> it depends on why nova-volume errored out
[16:33] <tomixxx3> do u know how i can Ctrl+A the content of a file opened with vi
[16:33] <tomixxx3> ?
[16:33] <marcoceppi> tomixxx3: you can install pastebinit
[16:33] <marcoceppi> then run cat /var/log/juju/unit-nova-volume-0.log | pastebinit
[16:33] <marcoceppi> and it'll give you a pastebin url
[16:34] <tomixxx3> ah nice ^^
[16:36] <tomixxx3> here it is: http://pastebin.ubuntu.com/6949659
[16:37] <marcoceppi> tomixxx3: okay, so this is the error
[16:37] <marcoceppi> nova-volume ERROR: /dev/xvdb is not a valid block device
[16:37] <marcoceppi> nova-volume needs a block device to take over
[16:37] <marcoceppi> like ceph
[16:38] <marcoceppi> I don't know if you actually need nova-volume
[16:39] <marcoceppi> jamespage: do you actually need cinder or nova-volume to deploy openstack?
[16:39] <jamespage> marcoceppi, you can elect to not have block storage and drop it
[16:39] <marcoceppi> jamespage: cool, thanks
[16:39] <jamespage> also nova-volume is < folsom btw
[16:39] <tomixxx3> btw, all other charms are started now :-)
[16:39] <marcoceppi> jamespage: right, cinder is recommended for folsom right?
[16:40] <jamespage> and should not be carried through to 14.04
[16:40] <jamespage> marcoceppi, that's correct yes
[16:40] <marcoceppi> jamespage: cool, thanks!
[16:40] <marcoceppi> tomixxx3: what you can do, for the sake of getting your openstack demo running, is remove nova-volume and continue on with the deployment
[16:41] <tomixxx3> nova-volume needs its own machine, i guess? (i have read sth like this a few weeks ago, if i remember correctly)
[16:41] <cargill> in tests, when I've changed a config value, how do I find out when the change has been carried out so that I can test the result?
[16:41] <marcoceppi> tomixxx3: yeah, though in future deployments you'll want to use cinder instead
[16:41] <marcoceppi> cargill: are you using amulet?
[16:41] <cargill> not yet
[16:41] <marcoceppi> cargill: then there really isn't a way at the moment
[16:42] <cargill> but can do if it makes things like that possible
[16:42] <marcoceppi> cargill: well, it's not perfect, but it strives to resolve that problem by monitoring the hook queue for all the services to know when the environment is idle
[16:43] <marcoceppi> cargill: otherwise you'll just have to put a sleep or something in your test for X seconds you think it takes on average for the config-change to occur
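A crude shell sketch of that sleep-based approach, with placeholder names throughout:

    # change a config value, wait out the config-changed hook, then assert
    juju set myservice some-option=new-value
    sleep 60    # placeholder; tune to how long the hook usually takes
    # verify the change landed, e.g. by inspecting the unit over ssh
    juju ssh myservice/0 'grep -q new-value /etc/myservice.conf'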
[16:43] <tomixxx3> marcoceppi: More abstractly, later on, i want to upload sth to my cloud, process sth on my cloud and download sth from my cloud. so, is nova-volume not a kind of cloud-storage which i need?
[16:44] <tomixxx3> (for now, i will remove nova-volume)
[16:44] <cargill> marcoceppi: well, a config change can be a change in the deployed version, which means a redownload, so there's really no telling, then
[16:44] <marcoceppi> tomixxx3: you'll probably use an object store, nova-volume is for attaching drives and blocks to your servers
[16:44] <tomixxx3> kk
[16:44] <marcoceppi> whereas the object store can be used to upload stuff, have your servers process stuff, then place the results there
[16:44] <marcoceppi> swift is the object store used in OpenStack
[16:44] <marcoceppi> cargill: exactly
[16:45] <marcoceppi> cargill: that's why I started amulet, to be able to intercept relation values and validate those values and to know when an environment was idle
[16:48] <cargill> where's the docs for amulet? can't find it in the juju docs
[16:48] <tomixxx3> hmm i have executed "juju destroy-service nova-volume" but it does not disappear when i call "juju status"
[16:48] <marcoceppi> cargill: https://juju.ubuntu.com/docs/tools-amulet.html
[16:49] <marcoceppi> tomixxx3: because it's in an error state
[16:49] <marcoceppi> tomixxx3: just keep running juju resolved nova-volume/0
[16:49] <tomixxx3> kk
[16:51] <tomixxx3> if i do "juju add-relation nova-compute rabbitmq-server" i get an ambiguous relation
[16:52] <marcoceppi> tomixxx3: what's the ambiguous relation output?
[16:52] <tomixxx3> http://pastebin.ubunut.com/6949742
[16:52] <tomixxx3> sorry
[16:52] <marcoceppi> no worries
[16:52] <tomixxx3> http://pastebin.ubuntu.com/6949742
[16:53] <marcoceppi> tomixxx3: nova-compute:amqp rabbitmq-server:amqp
[16:53] <marcoceppi> tomixxx3: use `juju add-relation nova-compute:amqp rabbitmq-server:amqp`
[16:53] <tomixxx3> kk
[16:58] <tomixxx3> ok, all relations added
[16:59] <tomixxx3> (except those with nova-volume)
[16:59] <tomixxx3> now, i should point to http://node-address/horizon
[16:59] <tomixxx3> i got an "Internal Server Error" when calling 10.0.0.109/horizon
[17:00] <marcoceppi> tomixxx3: you may have to wait for a few mins
[17:00] <tomixxx3> kk
[17:00] <tomixxx3> this lightweight-container thing is quite interesting, they have their own ips ^^
[17:02] <cargill> marcoceppi: amulet is awesome, it actually allows one to look into the service unit and see whether things are ok or not
[17:02] <tomixxx3> latest juju state: http://pastebin.ubuntu.com/6949780
[17:03] <marcoceppi> cargill: glad you think so, there are still a few bugs being worked out with how subordinates function, but it's coming along quite nicely
[17:03] <cargill> where anything else would be a lot of boilerplate around ssh duplicated between charms
[17:03] <marcoceppi> tomixxx3: is the dashboard working now?
[17:04] <tomixxx3> marcoceppi: no, not yet. do i have to expose some charms?
[17:04] <tomixxx3> according to the guide: "5. Expose the services you want (optional)"
[17:04] <tomixxx3> but i have maas, right?
[17:05] <tomixxx3> guide: https://help.ubuntu.com/community/UbuntuCloudInfrastructure#Install_Juju
=== zz_frobware is now known as frobware
[17:06] <marcoceppi> tomixxx3: maas has no firewaller, so it doesn't matter
[17:06] <tomixxx3> ok, btw, http://10.0.0.109 works
[17:07] <tomixxx3> and it says, it has no content yet
[17:07] <marcoceppi> tomixxx3: what version of openstack did you deploy? folsom? grizzly?
[17:08] <tomixxx3> dunno
[17:09] <marcoceppi> tomixxx3: what does juju get openstack-dashboard show for openstack-origin?
[17:09] <tomixxx3> default: true
[17:10] <marcoceppi> tomixxx3: what does value show?
[17:10] <tomixxx3> distro
[17:11] <marcoceppi> okay, so you have folsom, which means you ran into the django bug
[17:16] <tomixxx3> ok, is this a bad bug?
[17:17] <marcoceppi> tomixxx3: well it prevents the dashboard from working
[17:17] <marcoceppi> which is kind of annoying
[17:19] <tomixxx3> k, is there a way to fix this or can i deploy another openstack version?
[17:19] <tomixxx3> i want a dashboard, i have seen already the dashboard on the usb-all-in-one-node-cloud-demo and it looked nice :-)
[17:20] <tomixxx3> gives me the feeling everything works as it should
[17:23] <xp1990> Hi! is anyone here available to help me with a problem?.
[17:23] <xp1990> I'm using juju 1.16.6
[17:24] <xp1990> and I'm getting the old "index file contains no data for cloud" error.
[17:24] <xp1990> I have generated imagemetadata.json and index.json
[17:24] <xp1990> and uploaded them, using swift, to my cloud public bucket
[17:25] <xp1990> which is named juju-<hash>/streams/v1/
[17:25] <xp1990> then the two json files are there
[17:25] <xp1990> yet I still get an error when running juju bootstrap
[17:25] <xp1990> any ideas?
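For reference, a hedged sketch of publishing that metadata with the swift client; the container name mirrors xp1990's layout, and the ACL step is an assumption about what an anonymous fetch during bootstrap needs:

    # run from the directory containing streams/v1/, so the object
    # names come out as streams/v1/index.json etc. in the container
    swift upload juju-<hash> streams/v1/index.json
    swift upload juju-<hash> streams/v1/imagemetadata.json
    # make the container world-readable so bootstrap can fetch it
    swift post -r '.r:*' juju-<hash>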
[17:26] <tomixxx3> horizon is folsom, right?
[17:27] <tomixxx3> is this a possible fix to the dashboard error: https://lists.launchpad.net/openstack/msg17255.html
[17:29] <tomixxx3> marcoceppi: ok, i have to go now! however, today we made good progress :-) ty for all your help so far!
[17:29] <marcoceppi> tomixxx3: np, I'll look for a patch for your django issue
[17:29] <tomixxx3> marcoceppi: kk, ty!
[17:29] <marcoceppi> xp1990: can you run juju bootstrap --show-log --debug and pastebin the output?
[17:47] <jamespage> marcoceppi, the dashboard is hosed with juju deployment prior to havana
[17:47] <jamespage> marcoceppi, cloud-tools contains a new version of django
[17:47] <marcoceppi> jamespage: yeah, I remember, this is just because the cloud archive has a more recent version of django, right?
[17:47] <jamespage> it should be fixed soon - I think it's committed in juju-core
[17:48] <jamespage> marcoceppi, yeah - you got it
[17:48] <marcoceppi> there should be a way to lower the apt priority and remove/reinstall django though, right?
[17:51] <roadmr> marcoceppi: I just juju ssh'd into the node and removed django 1.5. 1.3 mostly works, though it also bombs on a few pages :/
[17:52] <marcoceppi> roadmr: bummer, I guess it's best to just use havana if possible
[17:53] <roadmr> marcoceppi: that'd be ideal! I'm lazy and I just juju deployed openstack-dashboard. Is there a way to point juju to charms that use havana?
[17:53] <marcoceppi> roadmr: yeah, so you'll have to change the openstack-origin to havana for each charm, but that should trigger an upgrade
[17:54] <roadmr> marcoceppi: oh cool! so it will just upgrade my existing charms/services? (if it destroys stuff that's OK, I don't have anything important there yet)
[17:55] <marcoceppi> roadmr: well, something like openstack-origin: cloud:precise-havana/updates
[17:55] <marcoceppi> roadmr: but yeah, it'll just upgrade the services and it shouldn't break anything or lose anything in the process
[17:56] <roadmr> marcoceppi: awesome! I'll give it a try, thanks!
[18:04] <jamespage> roadmr, it will upgrade yes - but openstack upstream only officially supports serial release upgrades
[18:04] <jamespage> so you need to step
[18:04] <jamespage> cloud:precise-grizzly
[18:04] <jamespage> cloud:precise-havana
[18:04] <jamespage> some things might double jump
[18:05] <jamespage> it's an area the server team is doing some work on for icehouse
[18:05] <roadmr> jamespage: oh ok, I'll keep that in mind
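A sketch of that stepped upgrade, using the openstack-origin values quoted above; the service list is illustrative, and each hop should settle before the next:

    # first hop: folsom -> grizzly, one juju set per openstack charm
    for svc in keystone glance nova-cloud-controller nova-compute openstack-dashboard; do
        juju set "$svc" openstack-origin=cloud:precise-grizzly
    done
    # wait for the grizzly upgrade to finish, then hop to havana
    for svc in keystone glance nova-cloud-controller nova-compute openstack-dashboard; do
        juju set "$svc" openstack-origin=cloud:precise-havana/updates
    done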
=== BradCrittenden is now known as bac
=== CyberJacob|Away is now known as CyberJacob
[22:09] <med_> marcoceppi, et al: Is there a way with the local provider to pass in an lxc bind mount (or a way to edit the bindmounts and restart the container?)
[22:09] * med_ needs to attach a larger drive in an lxc/local-provider
[22:09] <med_> and/or is sherpa (ssh provider) now available?
[22:15] <marcoceppi> med_: not with lxc/local
[22:15] <marcoceppi> med_: but manual provider (previously ssh/sherpa/null) is now available
[22:15] <marcoceppi> recommended you use 1.17.2 release for manual provider as it's still relatively new
[22:15] <med_> marcoceppi, thanks
[22:15] <med_> nodz.
[22:16] <med_> marcoceppi, https://juju.ubuntu.com/docs/config-manual.html the right place to start with manual/sherpa?
[22:16] <marcoceppi> med_: yeah, except it's not called null anymore
[22:16] <med_> looks good to me.
[22:17] <med_> thanks marcoceppi, giving it a whirl.
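For anyone following along, a hedged sketch of the manual-provider stanza from the page linked above; the address and user are placeholders:

    # add under the top-level environments: key in ~/.juju/environments.yaml
    manual:
      type: manual
      bootstrap-host: 203.0.113.10    # machine juju will bootstrap onto
      bootstrap-user: ubuntu          # ssh user with passwordless sudo

    # then bootstrap against it
    juju bootstrap -e manual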
[22:17] * marcoceppi files a bug to fix docs
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== zz_mwhudson is now known as mwhudson
=== freeflying_away is now known as freeflying
[23:11] * JoshStrobl asks marcoceppi for a link to the bug so he can track it.
[23:17] <marcoceppi> JoshStrobl: which one?
[23:18] <JoshStrobl> marcoceppi: all of them! :P Well, any that are specific to fixes / improvements to Juju documentation, particularly if there is anything regarding improving documentation for local environments, promoting the use of Vagrant, etc.
[23:18] <JoshStrobl> If there aren't any bugs regarding promoting the use of the Vagrant container, I'd be more than willing to file the bug if you just point me in the right place.
[23:21] <marcoceppi> JoshStrobl: there's none about that in particular, you can file bugs here: https://bugs.launchpad.net/juju-core/+filebug make sure to target the "docs" branch of juju-core
[23:22] <marcoceppi> JoshStrobl: we're also in the process of migrating the docs to gh, so eventually I think we'll track issues there as well
=== mwhudson is now known as zz_mwhudson
[23:22] <JoshStrobl> noted!
[23:27] <JoshStrobl> Hey marcoceppi, by branch do you mean apply the "docs" tag in the tag section of the file-bug form in juju-core?
[23:28] <marcoceppi> JoshStrobl: no, there's a way to target a specific series
[23:28] <marcoceppi> the docs are a series of juju-core
[23:31] <JoshStrobl> I see it listed on the right side of https://bugs.launchpad.net/juju-core/docs/+bugs as "Series-targeted bugs", but when you click "docs" and then go to file a bug, it still shows the same form with no input area for providing the series. Is there a way to do that after filing the bug?
[23:32] * JoshStrobl thinks marcoceppi is probably face-palming right now
[23:34] <marcoceppi> JoshStrobl: you have to first submit the bug before changing it
[23:34] <marcoceppi> it's just a limitation of lp bugs
[23:35] <JoshStrobl> Well, hopefully that'll get resolved in the future. Or maybe I should file a bug (if there isn't one already) for that too :P
[23:36] <sarnold> launchpad is feeling a touch unloved, 92 critical bugs, 655 high importance bugs, https://bugs.launchpad.net/launchpad/
[23:57] <JoshStrobl> marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1281345