UbuntuIRC / 2014 /07 /11 /#juju.txt
=== thumper is now known as thumper-afk
=== thumper-afk is now known as thumper
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
[05:22] <leotr> hello
[05:22] <leotr> i added one admin address in postgresql service configuration
[05:23] <leotr> can't understand why juju doesn't add corresponding line to pg_hba.conf?
[05:23] <leotr> and when i add it manually, for some reason it restores pg_hba.conf to its original state
[05:23] <leotr> i use ubuntu 14.04
[05:23] <leotr> juju-local
[05:23] <leotr> any ideas?
[05:26] <leotr> or maybe i could do something to add my lines to pg_hba.conf or somewhere else?
[05:30] <leotr> ok, found solution
[05:30] <leotr> entered ip address in CIDR form
[05:30] <leotr> juju is so juju
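A minimal sketch of the fix leotr describes, assuming the stock postgresql charm's admin_addresses option (the service name and address below are illustrative):

```shell
# The charm expects admin addresses in CIDR form, not a bare IP.
juju set postgresql "admin_addresses=192.168.1.10/32"   # not just 192.168.1.10
# The charm then renders the matching pg_hba.conf entry itself;
# hand-edits to pg_hba.conf are overwritten on the next hook run.
```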
=== CyberJacob|Away is now known as CyberJacob
=== vladk|offline is now known as vladk
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
[09:06] <pmatulis> yep
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== urulama is now known as uru-afk
=== uru-afk is now known as urulama
=== CyberJacob is now known as CyberJacob|Away
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== psivaa is now known as psivaa-lunch
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== psivaa-lunch is now known as psivaa
[14:24] <jcastro> asanjar, lazyPower, you guys wanna talk bundles?
[14:25] <asanjar> sure
[14:26] <jcastro> can I whip one up in jujucharms.com?
[14:26] <jcastro> I was thinking of just following your readme, then bundle that as a start?
[14:27] <lazyPower> jcastro: that'll work - mbruzek is working on getting hive2 promoted into trusty for the store
[14:28] <jcastro> asanjar, where would hive fit in?
[14:31] <asanjar> hive is used as a big data warehousing platform with a SQL-like interface. It fits in on top of HDFS for data access and distribution, while using MapReduce to process the data based on HiveQL commands.
[14:32] <asanjar> so in our solution - we will have Hive connected to HDFS and YARN
[14:33] <jcastro> I assume you'll add that to the trusty/hadoop readme as well?
[14:33] <asanjar> also we have elasticsearch-hadoop.jar included in the HIVE jar path to communicate with the ES cluster for data indexing
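The jar registration asanjar mentions can be sketched as a one-off Hive invocation; the jar path here is an assumption for illustration, not the charm's actual install location:

```shell
# Illustrative: make the es-hadoop connector visible to a Hive session
hive -e "ADD JAR /usr/share/java/elasticsearch-hadoop.jar; SHOW TABLES;"
```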
=== vladk is now known as vladk|offline
[14:34] <jcastro> ok, I'll try to make it less than 9 units
[14:34] <jcastro> so it's orangeboxable
[14:34] <lazyPower> jcastro: which part particularly? the notation about Hive being the big data warehousing?
[14:35] <jcastro> ok, why don't I get a bundle up and running that is the scale out hadoop example with ES
[14:35] <jcastro> and then you guys tell me what to connect next?
=== vladk|offline is now known as vladk
[14:36] <asanjar> lazyPower: I found this last night http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/hive.html
[14:37] <asanjar> there are HIVE sample commands for ES
[14:38] <lazyPower> interesting
[14:38] <asanjar> would you try them, that would be a good demo for jcastro
[14:38] <asanjar> something as simple as this:
[14:38] <asanjar> CREATE EXTERNAL TABLE artists (...)
[14:38] <asanjar> STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
[14:38] <asanjar> TBLPROPERTIES('es.resource' = 'radio/artists',
[14:38] <asanjar> 'es.index.auto.create' = 'false') ;
[14:38] <asanjar> would show hive is talking to ES
[14:39] <asanjar> jcastro: would that be sufficient
[14:39] <lazyPower> Sure, give me a bit to wrap what i'm working on
[14:39] <lazyPower> jcastro: ship me your bundle when you get it mapped up and i'll work through this ES tutorial
[14:39] <lazyPower> in validation mode
[14:39] <jcastro> I'll push it in about 10 minutes
[14:40] <asanjar> if jcastro could just do this as part of the demo:
[14:41] <asanjar> CREATE EXTERNAL TABLE artists (
[14:41] <asanjar> id BIGINT,
[14:41] <asanjar> name STRING,
[14:41] <asanjar> links STRUCT<url:STRING, picture:STRING>)
[14:41] <asanjar> STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
[14:41] <asanjar> TBLPROPERTIES('es.resource' = 'radio/artists');
[14:41] <asanjar> -- insert data to Elasticsearch from another table called 'source'
[14:41] <asanjar> INSERT OVERWRITE TABLE artists
[14:41] <asanjar> SELECT NULL, s.name, named_struct('url', s.url, 'picture', s.picture)
[14:41] <asanjar> FROM source s;
[14:42] <jcastro> asanjar, to add elasticsearch
[14:42] <jcastro> the readme doesn't mention how to add it to the scale out configuration
=== vladk is now known as vladk|offline
[14:44] <jcastro> http://i.imgur.com/xqjY5r0.png
[14:44] <jcastro> this is what I have so far
[14:50] <jcastro> asanjar, ^^ what should I connect elasticsearch to?
[14:51] <lazyPower> jcastro: looks like we have a path for hive. export your bundle and ship it to me so i can deploy it
[14:51] <lazyPower> i'll start hacking on this ES demo
[14:52] <jcastro> ok on it
[14:59] <jcastro> lazyPower, https://code.launchpad.net/~jorge/charms/bundles/oscondemo/bundle
[15:04] <lazyPower> jcastro: got it, give me a few and i'll ping back with results.
[15:19] <asanjar> lazyPower: I am in a meeting, will join u as soon as I am done
[15:20] <lazyPower> asanjar: no rush, i'm just getting started
[15:28] <lazyPower> weird
[15:28] <lazyPower> jcastro: drag/drop of the bundle yielded deployment errors - only ES came through
[15:28] <jcastro> I only just pushed it
[15:28] <jcastro> I haven't tested it
[15:28] <lazyPower> ack - just a heads up
[15:28] <jcastro> Got pulled into a call
[15:28] <lazyPower> i'll get it working and push a patch once its deploying proper
[15:29] <jcastro> what errors did you get?
[15:29] <lazyPower> the gui didnt say, just errors on bundle
[15:29] <jcastro> I was just following the readme
[15:29] <jcastro> oh, use deployer man. :)
[15:30] <hazmat> :-)
[15:30] <lazyPower> :P
[15:30] * lazyPower pushes the easy button
[15:31] <lazyPower> same status result
[15:31] <jcastro> pastebin please
[15:31] <lazyPower> An error occurred while deploying the bundle: no further details can be provided about a minute ago
[15:32] <lazyPower> thats all i get. i'm going to nuke the relations and see if its the service definition
[15:33] <jcastro> oh I also forgot to rename envExport to something useful
[15:34] <lazyPower> yeah i took care of that
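A rough skeleton of the renamed bundle file, in the juju-deployer format of the time (charm URLs and the service list are illustrative, not the actual oscondemo contents):

```yaml
oscondemo:            # renamed from the exported default "envExport"
  series: trusty
  services:
    elasticsearch:
      charm: cs:trusty/elasticsearch
      num_units: 1
  relations: []
```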
=== vladk|offline is now known as vladk
[17:01] <asanjar> lazyPower: go back to "eco"
=== urulama is now known as uru-afk
[17:12] <jcastro> wow
[17:12] <jcastro> lazyPower, my deployer straight up crashes on that bundle
[17:13] <lazyPower> jcastro: i'm rebuilding it by hand
[17:13] <lazyPower> the deployment configuration specified is also not one you'll see very commonly
[17:14] <lazyPower> we're going to hand off a ready-made deployment with elasticsearch and hive
[17:14] <lazyPower> i'll post a bzr branch shortly for you to consume
[17:16] <jcastro> ack, I'm going to lunch
=== ev_ is now known as ev
[18:04] <l1fe> anyone experiencing 409 conflicts with MAAS+JUJU when adding machines?
=== scuttle` is now known as scuttle|afk
[18:14] <marcoceppi> l1fe: that usually means no MAAS nodes match your constraints (or you're out of available nodes)
[18:18] <l1fe> right, in this case, all nodes are ready
[18:18] <l1fe> (well aside from the bootstrapped node)
[18:18] <marcoceppi> l1fe: what architecture are they?
[18:18] <l1fe> amd64 and trusty
[18:18] <l1fe> the environment actually used to work
[18:18] <l1fe> and then after one too many destroy-environments
[18:18] <marcoceppi> l1fe: run with --debug see what juju is saying maas is complaining about
[18:19] <lazyPower> jcastro: http://i.imgur.com/DZfKZWb.png
[18:19] <marcoceppi> 409 is pretty much the MAAS API saying "I heard you, but I don't really feel like doing this because: ..."
[18:19] <lazyPower> i can ship you the bundle as is - we're still validating
[18:19] <l1fe> it's pretty generic
[18:19] <l1fe> 2014-07-11 16:58:06 ERROR juju.provisioner provisioner_task.go:421 cannot start instance for machine "1": cannot run instances: gomaasapi: got error back from server: 409 CONFLICT (No matching node is available.)
=== roadmr is now known as roadmr_afk
[18:20] <l1fe> i've even reprovisioned the entire cluster
[18:20] <marcoceppi> l1fe: yeah, so maas says you don't have any nodes available. What size ram are these?
[18:20] <l1fe> re-installed maas and juju
[18:20] <l1fe> all 16gb
[18:20] <marcoceppi> l1fe: are you on 1.20 ?
[18:20] <l1fe> 1.20.1
[18:20] * marcoceppi scratches head
[18:20] <l1fe> and it literally was working 24 hours ago
[18:21] <l1fe> until after the last fateful destroy environment
[18:21] <marcoceppi> it's always that last destroy ;)
[18:21] <l1fe> :)
[18:21] <marcoceppi> And you've tried recommissioning the nodes?
[18:21] <l1fe> yup
[18:21] <l1fe> full recommission
[18:21] <l1fe> reason why i even destroyed my environment
[18:21] <marcoceppi> it's weird
[18:22] <l1fe> was after i reported https://bugs.launchpad.net/bugs/1340261
[18:22] <_mup_> Bug #1340261: juju add-machine lxc:0 fails to start due to incorrect network name in trusty config <lxc> <network> <placement> <juju-core:Triaged> <https://launchpad.net/bugs/1340261>
[18:22] <marcoceppi> that it bootstraps
[18:22] <lazyPower> jcastro: lp:~lazypower/charms/bundles/oscondemo/bundle/
[18:22] <l1fe> i wanted to re-setup my openstack environment with lxc...and figured i'd just start from scratch
[18:22] <l1fe> yeup, bootstraps...and if i bootstrap, it can even pick up random nodes
[18:22] <l1fe> that it would otherwise NOT pick up during add-machine
[18:23] <marcoceppi> l1fe: what does juju get-constraints say?
[18:23] <l1fe> oh jesus
[18:23] <l1fe> you're kidding me
[18:23] <l1fe> as;dlkfjas;dlfkjlasdf
[18:23] <l1fe> thanks marcoceppi :)
[18:24] <marcoceppi> <3
[18:24] <marcoceppi> l1fe: was it set to 32 bit?
[18:24] <l1fe> when i was bootstrapping, i was setting a constraint for 32G because i wanted to install on a particular maas node
[18:24] <marcoceppi> ah, yeah
[18:24] <marcoceppi> that sets it globally
[18:24] <l1fe> that's good to know
[18:24] <l1fe> omg
[18:24] <l1fe> i did a full reprovision twice
[18:24] <l1fe> thanks
[18:25] <marcoceppi> l1fe: no problem, you can use juju set-constraints to put the mem limit lower
[18:25] <l1fe> yup
[18:26] <l1fe> i just did juju set-constraints ""
[18:26] <l1fe> to clear it
[18:26] <marcoceppi> cool
[18:26] <l1fe> oh my god, i thought i was going insane
[18:26] <l1fe> i went down to 1.18
[18:26] <l1fe> and 1.19 in devel
[18:26] <l1fe> and was like...this was working!!
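The diagnosis above can be recapped as juju 1.x commands (values illustrative): a constraint passed at bootstrap applies environment-wide, so every later add-machine inherits it.

```shell
juju get-constraints           # revealed an environment-wide constraint, e.g. mem=32G
juju set-constraints mem=4G    # lower the limit, or
juju set-constraints ""        # clear all environment constraints
```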
[18:39] <jrwren> lazyPower: https://bugs.launchpad.net/charms/+source/mysql/+bug/965094
[18:39] <_mup_> Bug #965094: Provide config option to specify database encoding <mysql (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/965094>
[18:40] <lazyPower> jrwren: thanks for filing the bug. The charm maintainer will see it shortly.
[18:42] <jrwren> lazyPower: i want to fix it. i wanted to ask you about possible ways to fix.
[19:15] <marcoceppi> jrwren: the "problem" with setting it in the interface, is that redefines the interface
[19:15] <marcoceppi> jrwren: it's a good idea, but the new key will have to be 100% optional
[19:15] <marcoceppi> jrwren: so, you'd want to have a default-encoding configuration option, which is used when no encoding is sent via the relation
[19:16] <jrwren> agreed. going for 100% optional
[19:16] <jrwren> exactly what I was thinking.
[19:17] <jrwren> i spiked it and tested it out and it works, i'm just charm new, and wanted to ask before i submit a merge request
[19:26] <lazyPower> jrwren: :) we're really open to code reviews as MP's
[19:26] <lazyPower> it makes life great when its of good quality and we just merge it in
[19:26] <lazyPower> if not, its a source to track the conversation as the merge evolves
[19:27] <jrwren> lazyPower: ok, thanks. i'll get something up
[19:32] <l1fe> quick question regarding juju and lxc containers...does juju provide a unified network fabric for any services exposed over LXC? as in something on LXC:0 being able to communicate with LXC:1
[19:33] <marcoceppi> l1fe: yes, something in LXC:0 will be able to talk to LXC:1
[19:33] <l1fe> hmm
[19:33] <l1fe> so, i have a mysql lxc on 1/lxc/1
[19:34] <l1fe> and nova-cloud-controller on 0/lxc/1
[19:34] <l1fe> and i get errors where it can't connect to mysql
[19:34] <l1fe> via the ip address
[19:34] <l1fe> which makes sense since those 10.0.* ip addresses are only for each machine
[19:35] <l1fe> physical machine
[19:35] <l1fe> unless i'm misunderstanding something
[19:36] <l1fe> (given the constraints thing earlier, that's entirely possible)
[19:37] <lazyPower> l1fe: are you sure the mysql service has started? its using a bridged virtual ethernet adapter to do the communication in LXC
[19:37] <lazyPower> LXCBR0 is the default virtual device for lxc connections, and all lxc containers use 10.0.3.x ip addresses by default as well
[19:37] <l1fe> yeah, it's started
[19:39] <l1fe> yeah, i raised a bug about how lxcbr0 wasn't actually being used, so i even manually went in and changed that in containers directory
[19:39] <l1fe> for lxc.cfg
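The stanza l1fe describes hand-editing in the container's config would look roughly like this on trusty (lxc 1.x keys; values illustrative):

```
lxc.network.type = veth
lxc.network.link = lxcbr0    # default bridge; containers get 10.0.3.x addresses via dnsmasq
lxc.network.flags = up
```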
[19:41] <lazyPower> strange, works on every other setup. The only time i had to change the br interface was when i wanted to bridge it with an actual ethernet device
[19:41] <lazyPower> and even then i think i configured it wrong and i had some races where the interface wouldn't come online during boot and I had to manually initialize it every time the server was reset.
=== vladk is now known as vladk|offline
[19:41] <lazyPower> but thats a story for another time....
=== vladk|offline is now known as vladk
[19:42] <lazyPower> so if you can reach the MySQL node, and you've verified the server is up - what's the log output that's stating you cannot connect to MySQL?
[19:42] <l1fe> https://bugs.launchpad.net/bugs/1340261
[19:42] <_mup_> Bug #1340261: juju add-machine lxc:0 fails to start due to incorrect network name in trusty config <lxc> <network> <placement> <juju-core:Triaged> <https://launchpad.net/bugs/1340261>
[19:42] <l1fe> for the templates not being setup properly for trusty
[19:43] <lazyPower> ahhh ok so this is not a local deployment
[19:43] <lazyPower> this is lxc, in a maas node for density based deployments
[19:43] <lazyPower> ack.
[19:44] <l1fe> unit-nova-cloud-controller-0: 2014-07-11 19:12:44 ERROR juju.worker.uniter uniter.go:486 hook failed: exit status 1 unit-cinder-0: 2014-07-11 19:12:55 INFO shared-db-relation-changed 2014-07-11 19:12:55.604 18833 CRITICAL cinder [-] OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on '10.0.3.34' (113)") None None
[19:44] <l1fe> yeah, not local
[19:44] <l1fe> sorry forgot to mention that
[19:44] <l1fe> these are different physical maas nodes
[19:44] <lazyPower> its ok, its my own brain going immediately to local when someone mentions LXC
[19:44] <l1fe> i'm just trying to test out the best ways to deploy my environment :)
[19:45] <l1fe> and experimenting with encapsulating services
[19:46] <l1fe> i guess bridging networks between lxc containers across different physical nodes would be something that's not possible right now
[19:47] <lazyPower> i know the networking can be a bit troublesome to configure when using juju deploy --to lxc:#
[19:47] <l1fe> ah
[19:47] <l1fe> i don't see any DHCP or DNS changes in MAAS (which is controlling all of that stuff right now)
=== roadmr_afk is now known as roadmr
[19:48] <lazyPower> There was a topic on this not long ago
[19:48] <l1fe> and juju is using the actual IP address vs a fqdn
[19:48] <lazyPower> there's either a script or a subordinate charm that helps this... i forget how long ago the thread was - but it's in the juju mailing list.
[19:49] <l1fe> ah, i'll have to do some searching
=== vladk is now known as vladk|offline
=== cmagina_ is now known as cmagina
=== scuttle|afk is now known as scuttlemonkey
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away