 1. Druid Case Study: Requirements Analysis
   
   1). Scenario analysis
   The data volume is large, and the business needs to run flexible ad-hoc queries over it.
   The real-time requirement is high:
   data is pushed in continuously and must be analyzed and queried with second-level latency.
   2). Data description
{
  "ts": 1607499629841,
  "orderId": "1009388",
  "userId": "807134",
  "orderStatusId": 1,
  "orderStatus": "已支付",
  "payModeId": 0,
  "payMode": "微信",
  "payment": "933.90",
  "products": [
    {
      "productId": "102163",
      "productName": "贝合xxx+粉",
      "price": 18.7,
      "productNum": 3,
      "categoryid": "10360",
      "catname1": "厨卫清洁、纸制用品",
      "catname2": "生活日用",
      "catname3": "浴室用品"
    },
    {
      "productId": "100349",
      "productName": "COxxx0C",
      "price": 877.8,
      "productNum": 1,
      "categoryid": "10302",
      "catname1": "母婴、玩具乐器",
      "catname2": "西洋弦乐器",
      "catname3": "吉他"
    }
  ]
}

   ts: transaction time (epoch milliseconds)
   orderId: order ID
   userId: user ID
   orderStatusId: order status ID
   orderStatus: order status
   0-11: unpaid, paid, shipping, shipped, shipping failed, refunded, order closed, order expired, order invalidated, product invalidated, proxy payment rejected, paying
   
   payModeId: payment method ID
   payMode: payment method
   0-6: WeChat Pay, Alipay, credit card, UnionPay, cash on delivery, cash, other
   
   payment: payment amount
   products: purchased products
   Note: one order may contain multiple products, so this is a nested structure.
   productId: product ID
   productName: product name
   price: unit price
   productNum: purchase quantity
   categoryid: product category ID
   catname1: level-1 category name
   catname2: level-2 category name
   catname3: level-3 category name
   
   Druid cannot easily handle the nested JSON format above, so the data must be pre-processed and flattened so that each record carries exactly one product. The data format after flattening:
   {"ts":1607499629841,"orderId":"1009388","userId":"807134","orderStatusId":1,"orderStatus":"已支付","payModeId":0,"payMode":"微信","payment":"933.90","product":{"productId":"102163","productName":"贝合xxx+粉","price":18.7,"productNum":3,"categoryid":"10360","catname1":"厨卫清洁、纸制用品","catname2":"生活日用","catname3":"浴室用品"}}

   {"ts":1607499629841,"orderId":"1009388","userId":"807134","orderStatusId":1,"orderStatus":"已支付","payModeId":0,"payMode":"微信","payment":"933.90","product":{"productId":"100349","productName":"COxxx0C","price":877.8,"productNum":1,"categoryid":"10302","catname1":"母婴、玩具乐器","catname2":"西洋弦乐器","catname3":"吉他"}}
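The flattening step can be sketched as a small pre-processing script. This is a minimal Python sketch (a Scala/Spark streaming job would follow the same logic); the sample record is abbreviated, and the function name `flatten_order` is illustrative:

```python
import json

def flatten_order(line):
    """Turn one nested order record into one output record per product."""
    order = json.loads(line)
    products = order.pop("products")   # remove the nested product list
    for product in products:
        flat = dict(order)             # copy the order-level fields
        flat["product"] = product      # attach a single product
        yield json.dumps(flat, ensure_ascii=False)

# Abbreviated sample record with two products
nested = ('{"ts":1607499629841,"orderId":"1009388","payment":"933.90",'
          '"products":[{"productId":"102163"},{"productId":"100349"}]}')

for record in flatten_order(nested):
    print(record)
```

Each input order with N products becomes N messages, which is what the Kafka producer below then replays line by line.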

 2. Hands-on Implementation
   
   1). Kafka producer
package cn.lagou.Streaming.kafka

import java.util.Properties

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

import scala.io.BufferedSource

object KafkaProducerForDruid {
  def main(args: Array[String]): Unit = {
    // Kafka connection parameters
    val brokers = "linux121:9092,linux122:9092,linux123:9092"
    val topic = "lagoudruid2"
    val prop = new Properties()

    prop.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers)
    prop.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])
    prop.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])

    // Send each (already flattened) JSON record as one Kafka message
    val producer = new KafkaProducer[String, String](prop)
    val source: BufferedSource = scala.io.Source.fromFile("data/lagou_orders.json")
    val iter: Iterator[String] = source.getLines()

    iter.foreach { line =>
      val msg = new ProducerRecord[String, String](topic, line)
      producer.send(msg)
      Thread.sleep(2) // throttle slightly to simulate a live stream
    }

    producer.close()
    source.close()
  }
}
   
   2). Start the services
   Start the ZooKeeper, Kafka, Druid, and HDFS services.
   -- Create the topic
kafka-topics.sh --create --zookeeper linux121:2181,linux122:2181/myKafka \
  --replication-factor 1 --partitions 3 --topic lagoudruid2
   3). Define the ingestion spec
   Notes:
	   the JSON data must be flattened
	   rollup is not needed
   Ingestion spec:
{
	"type": "kafka",
	"spec": {
		"ioConfig": {
			"type": "kafka",
			"consumerProperties": {
				"bootstrap.servers": "linux121:9092,linux122:9092,linux123:9092"
			},
			"topic": "lagoudruid2",
			"inputFormat": {
				"type": "json",
				"flattenSpec": {
					"fields": [{
						"type": "path",
						"name": "productId",
						"expr": "$.product.productId"
					}, {
						"type": "path",
						"name": "productName",
						"expr": "$.product.productName"
					}, {
						"type": "path",
						"name": "price",
						"expr": "$.product.price"
					}, {
						"type": "path",
						"name": "productNum",
						"expr": "$.product.productNum"
					}, {
						"type": "path",
						"name": "categoryid",
						"expr": "$.product.categoryid"
					}, {
						"type": "path",
						"name": "catname1",
						"expr": "$.product.catname1"
					}, {
						"type": "path",
						"name": "catname2",
						"expr": "$.product.catname2"
					}, {
						"type": "path",
						"name": "catname3",
						"expr": "$.product.catname3"
					}]
				}
			},
			"useEarliestOffset": true,
			"appendToExisting": true
		},
		"tuningConfig": {
			"type": "kafka"
		},
		"dataSchema": {
			"dataSource": "lagoudruid2",
			"granularitySpec": {
				"type": "uniform",
				"queryGranularity": "NONE",
				"segmentGranularity": "DAY",
				"rollup": false
			},
			"timestampSpec": {
				"column": "ts",
				"format": "millis"
			},
			"dimensionsSpec": {
				"dimensions": ["orderId", "userId", {
					"type": "long",
					"name": "orderStatusId"
				}, "orderStatus", {
					"type": "long",
					"name": "payModeId"
				}, "payMode", {
					"type": "double",
					"name": "payment"
				}, "productId", "productName", {
					"type": "double",
					"name": "price"
				}, {
					"type": "long",
					"name": "productNum"
				}, "categoryid", "catname1", "catname2", "catname3"]
			}
		}
	}
}
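The spec is registered with Druid by POSTing it to the Overlord's supervisor endpoint, normally reachable through the Router on port 8888. A minimal sketch, assuming the Router runs on linux121; only the request construction is shown here, the actual send being a single `urllib.request.urlopen(req)` call:

```python
import json
import urllib.request

# Assumed Router address; adjust host/port for your cluster.
DRUID_ROUTER = "http://linux121:8888"

def build_supervisor_request(spec):
    """Build the POST request that registers a Kafka ingestion supervisor."""
    return urllib.request.Request(
        DRUID_ROUTER + "/druid/indexer/v1/supervisor",
        data=json.dumps(spec).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

spec = {"type": "kafka", "spec": {}}  # abbreviated; use the full spec above
req = build_supervisor_request(spec)
print(req.get_method(), req.full_url)
```

On success the Overlord answers with the supervisor's ID, and the datasource appears in the Druid console once the first segments are built.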
   4). Queries
-- Total number of records
select count(*) as recordcount
from lagoudruid2

-- Total number of orders
select count(distinct orderId) as orderscount
from lagoudruid2

-- Number of distinct users
select count(distinct userId) as usercount
from lagoudruid2

-- Number of orders per order status
select orderStatus, count(*)
from (
select orderId, orderStatus
from lagoudruid2
group by orderId, orderStatus
)
group by orderStatus

-- Number of orders per payment method
select payMode, count(1)
from (
select orderId, payMode
from lagoudruid2
group by orderId, payMode
)
group by payMode

-- Top 10 orders by payment amount
select orderId, payment, count(1) as productcount, sum(productNum) as products
from lagoudruid2
group by orderId, payment
order by payment desc limit 10

-- Total order amount per second (in units of 10,000)
select timesec, round(sum(payment)/10000, 2)
from (
select date_trunc('second', __time) as timesec, orderId, payment
from lagoudruid2
group by date_trunc('second', __time), orderId, payment
)
group by timesec
   
   SQL reference: https://druid.apache.org/docs/0.19.0/querying/sql.html#data-types
   5). Druid case study summary
   When configuring the ingestion source, set useEarliestOffset to true so consumption starts from the beginning of the stream; otherwise the datasource may appear to contain no data.
   Druid's join capability is very limited; it is best suited to grouping- and aggregation-heavy workloads.
   Its SQL support is also quite restricted.
   Data can only be partitioned and organized along the time dimension.