UbuntuIRC / 2020/03/12 / #juju.txt
[00:30] <narispo> hi, what can one do to deploy a recent hadoop/spark/solr/hbase stack? Apache Ambari ships Hadoop 2.7; Apache Bigtop ships Hadoop 2.8 and Solr 6.5. The Juju charms are based on Bigtop, so they also ship these unsupported/outdated versions with unfixed CVEs.
=== _thumper_ is now known as thumper
[01:03] <thumper> narispo: to be honest I'm not sure, but perhaps ask on our discourse?
[01:05] <narispo> thumper: I guess. Should probably reach out to Canonical sales/engineering directly.
[01:05] <thumper> narispo: canonical doesn't own those charms as far as I'm aware
[01:06] <narispo> bigdata-charmers is community work?
[01:06] <thumper> ah... I think it is at the moment...
[01:06] <thumper> it didn't used to be
[01:28] <rick_h> thumper: narispo yes, it's community work
[01:28] <rick_h> we've got some close folks in the community that leverage them but there's no team in canonical that maintains them
[01:32] <narispo> rick_h: okay! well I'm looking into improving Bigtop itself with version upgrades right now so that we can deploy a recent Hadoop cluster. We care about Spark, Solr, HBase, with Hadoop. So Bigtop's next release (1.5) plans to update to Solr 6.6 (unsupported, very bad, latest is 8.4.1), Spark 2.4.5 (latest, very good), HBase 1.5 (outdated, bad, latest is 2.2.3), and Hadoop 3.2.1 (latest, very good).
[01:32] <narispo> So that gives me HBase and Solr to look into and upgrade inside Bigtop.
[01:33] <rick_h> narispo: yea, I think you're finding the ones more used (up to date) and less used (out of date). It'll be awesome to have them brought up to speed. Thanks!
[01:35] <narispo> rick_h: another disadvantage is that Bigtop doesn't seem to do patch releases.. so individual components may stay vulnerable to security issues for months until the next Bigtop release is out.
[01:35] <narispo> And that includes the big, widely used ones such as Spark or Hadoop.
[03:42] <thumper> https://github.com/juju/juju/pull/11310 for anyone that wants a faster juju status
[03:49] <tlm> can take a look in a bit thumper just wrapping something up
=== jam1 is now known as jam
[04:30] <thumper> tlm: thanks
[04:44] <tlm> lgtm thumper
[04:49] <thumper> tlm: thanks
[04:50] <thumper> I'll merge it once the 2.7.4 release branch has merged in
[04:53] <babbageclunk> uh, what's happening with the 2.7 merge run?
[04:53] <babbageclunk> https://jenkins.juju.canonical.com/job/github-make-check-juju/4364/console
[04:54] <babbageclunk> looks like dep has hung?
[04:54] <thumper> babbageclunk: it has only been going two minutes
[04:55] <babbageclunk> oh duh, missed that it was new
[04:55] <babbageclunk> that would do it
[04:55] <thumper> :)
[04:57] <thumper> it's off and racing
=== Guest63738 is now known as skay
[17:08] <achilleasa> rick_h: is it even possible for a unit to depart a peer relation?
[17:15] <rick_h> achilleasa: ...thinking. The destroy process always confuses me because there is "I'm going away"
[17:16] <rick_h> achilleasa: but it's not like a relation you can choose to opt out of
[17:17] <achilleasa> yes, with the relation-created changes units will automatically be in a peer relation (even if there is only a single unit)
[17:25] <rick_h> achilleasa: right, makes sense
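(For context: a peer relation is declared under `peers:` in a charm's metadata.yaml, and with the relation-created change discussed above every unit joins it as soon as the application exists, even a lone unit. A minimal sketch, where the "cluster" endpoint and interface names are hypothetical:

    peers:
      cluster:
        interface: myapp-peer

Juju then fires a cluster-relation-created hook on each unit, so charm code can rely on the peer relation existing without waiting for a second unit to appear.)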
[18:28] <achilleasa> hml: just to confirm, the TODO in `runCharmProcessOnRemote` in 11257 will be handled with an upcoming PR, right? I remember seeing a comment in the other PR
[18:29] <hml> achilleasa: yes
[18:30] <hml> achilleasa: working on the unit tests for the pr that resolves the TODO now
[18:31] <achilleasa> hml: I think 11257 is good to go with your changes. Running the QA steps
[18:31] <hml> achilleasa: cool, ty
[18:52] <achilleasa> hml: is this passing for you? WorkerSuite.TestInvalidDeploymentChange
[18:54] <hml> achilleasa: checking
[18:54] <hml> achilleasa: yes. running with stress script
[19:00] <hml> achilleasa: 130 successful runs
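(For context: a typical way to stress a suspected-flaky gocheck test like this is to rerun it many times, e.g. `go test -count=100 -check.f 'WorkerSuite.TestInvalidDeploymentChange'` against the package in question; the exact stress script hml uses isn't shown in the log.)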
[19:15] <achilleasa> hml: 11257 approved
[19:15] <hml> achilleasa: awesome! ty
[19:16] <achilleasa> hml: if you push the other one later today assign it to me so I can review it in the morn
[19:16] <hml> achilleasa: ack
[23:10] <wallyworld> kelvinliu: tlm: bug 1867168 ... why would they be connecting to pods and not the service? I want to push back and tell them to use the service front end. do you agree it's bad k8s practice for external actors to use the pods and not the service?
[23:10] <mup> Bug #1867168: No easy way to retrieve pod fqdn <juju:New> <https://launchpad.net/bugs/1867168>
[23:11] <tlm> taking a look
[23:14] <tlm> wallyworld: want me to reply to it?
[23:14] <wallyworld> tlm: do you agree with my assertion?
[23:16] <tlm> yep, there are use cases for what they want, but there is an accepted way to do this through multiple services
[23:17] <wallyworld> tlm: gr8, if you could suggest the approach that would be good
[23:19] <tlm> wallyworld, we could implement what they want FYI
[23:20] <wallyworld> we could but would prefer not to if it's bad practice
[23:20] <wallyworld> we support valueRefs now etc, i assume we'd use that if needed
[23:21] <tlm> do we support multiple services ?
[23:21] <wallyworld> or they could use that, assuming k8s exposes such info that way
[23:21] <kelvinliu> probably they want to maintain the mongodb replica set connection string
[23:21] <tlm> the cases where I have seen this is where you have db replicas and want to talk to masters and/or slaves
[23:21] <tlm> but the solution has always been run two services
[23:21] <wallyworld> we support a core service for the app plus a headless one
[23:21] <tlm> or n services
[23:22] <tlm> want to HO this before I finish the reply?
[23:22] <wallyworld> sure
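(For context on the pattern tlm and wallyworld describe: alongside an app's main Service, a headless Service (clusterIP: None) gives each StatefulSet pod backed by it a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local, which is the standard way to get pod FQDNs without clients addressing pods directly. A minimal sketch, with hypothetical names:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-peers          # hypothetical headless service name
    spec:
      clusterIP: None            # headless: DNS resolves to individual pod IPs
      selector:
        app.kubernetes.io/name: myapp
      ports:
        - name: db
          port: 27017

The master-vs-replica case tlm mentions is handled the same way: run a second Service whose selector matches a label carried only by the master pod(s). Separately, the downward API (env valueFrom/fieldRef on metadata.name and metadata.namespace) lets a pod learn the pieces of its own FQDN, which may be what the valueRefs remark alludes to.)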
[23:48] <babbageclunk> anastasiamac: could you take another look at https://github.com/juju/juju-restore/pull/8? I've addressed your comments and also added retrieving ha-nodes from the backed-up agent.conf
[23:55] <anastasiamac> babbageclunk: looking
[23:55] <babbageclunk> thanks!