 1. E-commerce analytics -- member activity: log data collection
   
   Raw log data (one startup-log record)
   Data collection flow:
   Flume is chosen as the tool for collecting the log data:
   Flume 1.6
       Neither the Spooling Directory Source nor the Exec Source satisfies the need for dynamic, real-time collection well
   Flume 1.8+
       (1). Provides the very convenient Taildir Source
       (2). With this source, multiple directories can be monitored and data newly written into them is collected in real time
   1). Taildir Source configuration
   Characteristics of the Taildir Source:
     (1). Matches file names in a directory with regular expressions
     (2). As soon as data is written to a monitored file, Flume forwards it to the configured Sink
     (3). Highly reliable; does not lose data
     (4). Leaves the tracked files untouched: it neither renames nor deletes them
     (5). Does not support Windows and cannot read binary files; it reads text files line by line
   Taildir Source configuration:
a1.sources.r1.type = TAILDIR
a1.sources.r1.positionFile =/data/lagoudw/conf/startlog_position.json
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /data/lagoudw/logs/start/.*log
   
   positionFile
   Path of the checkpoint file. The checkpoint file records, in JSON, how far each file has already been read, which makes resuming after a restart (breakpoint continuation) possible
   filegroups
   One or more file groups, separated by spaces (a Taildir Source can monitor files in several directories at the same time)
   filegroups.<filegroupName>
   Absolute path of the files in each file group; the file name part may be a regular expression
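   For reference, the checkpoint file named by positionFile is a JSON array with one entry per tracked file; a sketch of its content (the inode, offset and file name below are illustrative) looks like:

```json
[
  {"inode": 2496275, "pos": 666, "file": "/data/lagoudw/logs/start/start0802.log"}
]
```

   inode identifies the file, pos is the byte offset already consumed; on restart the Taildir Source resumes reading each file from its recorded pos.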
   2). HDFS Sink configuration
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /user/data/logs/start/%Y-%m-%d/
a1.sinks.k1.hdfs.filePrefix = startlog.
a1.sinks.k1.hdfs.fileType = DataStream

# Rolling policy: roll files by size only (32 MB)
a1.sinks.k1.hdfs.rollSize = 33554432
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.idleTimeout = 0
a1.sinks.k1.hdfs.minBlockReplicas = 1

# Number of events flushed to HDFS per batch
a1.sinks.k1.hdfs.batchSize = 100

# Use the local time for the escape sequences in the path
a1.sinks.k1.hdfs.useLocalTimeStamp = true
   
   An HDFS Sink always writes files by rolling them; the rolling strategies (defaults in parentheses) are:
     By time: hdfs.rollInterval (30 seconds)
     By file size: hdfs.rollSize (1024 bytes)
     By number of events: hdfs.rollCount (10 events)
     By idle time: hdfs.idleTimeout (0; 0 disables it)
     minBlockReplicas: defaults to the HDFS replication factor. Setting it to 1 keeps Flume from noticing HDFS block replication, so that the other rolling settings (time interval, file size, number of events) work as configured
   3). Agent configuration
   /data/lagoudw/conf/flume-log2hdfs1.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# taildir source
a1.sources.r1.type = TAILDIR
a1.sources.r1.positionFile =/data/lagoudw/conf/startlog_position.json
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /data/lagoudw/logs/start/.*log

# memory channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 100000
a1.channels.c1.transactionCapacity = 2000

# hdfs sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /user/data/logs/start/%Y-%m-%d/
a1.sinks.k1.hdfs.filePrefix = startlog.
a1.sinks.k1.hdfs.fileType = DataStream

# Rolling policy: roll files by size only (32 MB)
a1.sinks.k1.hdfs.rollSize = 33554432
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.idleTimeout = 0
a1.sinks.k1.hdfs.minBlockReplicas = 1

# Number of events flushed to HDFS per batch
a1.sinks.k1.hdfs.batchSize = 1000

# Use the local time for the escape sequences in the path
a1.sinks.k1.hdfs.useLocalTimeStamp = true

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
   4). Flume tuning
   (1). Start the agent
   flume-ng agent --conf-file /data/lagoudw/conf/flume-log2hdfs1.conf -name a1 -Dflume.root.logger=INFO,console
   (2). After log files are placed in the /data/lagoudw/logs/ directory, the agent fails with:
   java.lang.OutOfMemoryError: GC overhead limit exceeded
   By default the Flume JVM is given a maximum heap of only 20 MB. That is far too small here (the memory channel alone is configured for 100,000 events) and has to be raised.
   (3). Solution: add the following to $FLUME_HOME/conf/flume-env.sh
   export JAVA_OPTS="-Xms4000m -Xmx4000m -Dcom.sun.management.jmxremote"
   # For this file to take effect, the conf directory must also be specified on the command line
   flume-ng agent --conf /opt/lagou/servers/flume-1.9.0/conf --conf-file /data/lagoudw/conf/flume-log2hdfs1.conf -name a1 -Dflume.root.logger=INFO,console
   Flume memory settings and tuning:
     (1). Depending on the log volume, the JVM heap should usually be 4 GB or more
     (2). Set -Xms and -Xmx to the same value to avoid the performance impact of heap resizing
   5). Custom interceptor
   The Flume agent configuration above uses the local time, which can place data under the wrong date path (events should be filed by the time inside the log record, not the time they are collected).
   A custom interceptor is needed to solve this.
   Test agent for the custom interceptor: netcat source => logger sink
    /data/lagoudw/conf/flumetest1.conf

# a1 is the agent name; the source, channel and sink are named r1, c1 and k1
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# source
a1.sources.r1.type = netcat
a1.sources.r1.bind = linux122
a1.sources.r1.port = 9999
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type =cn.lagou.dw.flume.interceptor.CustomerInterceptor$Builder

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

# sink
a1.sinks.k1.type = logger

# wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
   How the custom interceptor works:
   1). A custom interceptor must implement Flume's Interceptor interface
   2). An Event consists of a header and a body (the received string)
   3). Get the header and the body
   4). Extract "time":1596382570539 from the body and convert the timestamp into the string "yyyy-MM-dd"
   5). Put the converted string into the header
   Implementation of the custom interceptor:
   1). Get the header of the event
   2). Get the body of the event
   3). Parse the body to get the JSON string
   4). Parse the JSON string to get the timestamp
   5). Convert the timestamp into the string "yyyy-MM-dd"
   6). Put the converted string into the header
   7). Return the event
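   The timestamp conversion of step 5 can be sketched on its own. This is a minimal sketch: the class and method names are ours for illustration (the project code below does the same conversion inside intercept()), and the log producer's timezone is assumed to be Asia/Shanghai.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class LogTimeSketch {
    // Convert an epoch-millis timestamp (the "time" field of the log) to "yyyy-MM-dd"
    static String toLogDate(long epochMillis) {
        return Instant.ofEpochMilli(epochMillis)
                .atZone(ZoneId.of("Asia/Shanghai")) // assumed timezone of the log source
                .format(DateTimeFormatter.ofPattern("yyyy-MM-dd"));
    }

    public static void main(String[] args) {
        // "time" value taken from the sample log line used later in this section
        System.out.println(toLogDate(1596342840284L)); // prints 2020-08-02
    }
}
```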

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <!-- flume -->
        <dependency>
            <groupId>org.apache.flume</groupId>
            <artifactId>flume-ng-core</artifactId>
            <version>1.9.0</version>
            <scope>provided</scope>
        </dependency>
        <!-- JSON parsing -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.1.23</version>
        </dependency>
        <!-- unit tests -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
   Code implementation
package cn.lagou.dw.flume.interceptor;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import org.apache.commons.compress.utils.Charsets;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.event.SimpleEvent;
import org.apache.flume.interceptor.Interceptor;
import org.junit.Test;

import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CustomerInterceptor implements Interceptor {
    @Override
    public void initialize() {

    }

    @Override
    // process events one at a time
    public Event intercept(Event event) {
        // get the body of the event
        String eventBody = new String(event.getBody(), Charsets.UTF_8);
        // get the header of the event
        Map<String, String> headersMap = event.getHeaders();
        // split the body on whitespace to isolate the JSON string
        String[] bodyArr = eventBody.split("\\s+");

        try {

            String jsonStr = bodyArr[6];
            // parse the JSON string to get the timestamp
            JSONObject jsonObject = JSON.parseObject(jsonStr);
            String timestampStr = jsonObject.getJSONObject("app_active").getString("time");

            // convert the timestamp into the string "yyyy-MM-dd"
            // first convert the string to a long
            long timestamp = Long.parseLong(timestampStr);
            DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd");

            Instant instant = Instant.ofEpochMilli(timestamp);
            LocalDateTime localDateTime = LocalDateTime.ofInstant(instant, ZoneId.systemDefault());
            String date = formatter.format(localDateTime);

            // put the converted string into the header
            headersMap.put("logtime", date);
            event.setHeaders(headersMap);
        } catch (Exception e) {
            headersMap.put("logtime", "Unknown");
            event.setHeaders(headersMap);
        }
        // return the event
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        List<Event> lstEvent = new ArrayList<>();

        for (Event event : events) {
            Event outEvent = intercept(event);
            if (outEvent != null) {
                lstEvent.add(outEvent);
            }
        }
        return lstEvent;
    }

    @Override
    public void close() {

    }

    public static class Builder implements Interceptor.Builder {

        @Override
        public Interceptor build() {
            return new CustomerInterceptor();
        }

        @Override
        public void configure(Context context) {

        }
    }

    @Test
    public void testJunit() {
        StringBuffer str = new StringBuffer();
        str.append("2020-08-02 18:19:32.959 [main] INFO");
        str.append(" com.lagou.ecommerce.AppStart - {\"app_active\":");
        str.append("{\"name\":\"app_active\",\"json\":");
        str.append("{\"entry\":\"1\",\"action\":\"0\",\"error_code\":\"0\"},\"tim");
        str.append("e\":1596342840284},\"attr\":{\"area\":\"大庆");
        str.append("\",\"uid\":\"2F10092A2\",\"app_v\":\"1.1.15\",\"event_type\":");
        str.append("\"common\",\"device_id\":\"1FB872-");
        str.append("9A1002\",\"os_type\":\"2.8\",\"channel\":\"TB\",\"language\":");
        str.append("\"chinese\",\"brand\":\"iphone-8\"}}");
        Map<String, String> map = new HashMap<>();
        // new Event
        SimpleEvent event = new SimpleEvent();
        event.setHeaders(map);
        event.setBody(str.toString().getBytes(Charsets.UTF_8));
        // run the event through the interceptor
        CustomerInterceptor customerInterceptor = new CustomerInterceptor();
        Event outEvent = customerInterceptor.intercept(event);
        // inspect the result
        Map<String, String> headersMap = outEvent.getHeaders();
        System.out.println(JSON.toJSONString(headersMap));
    }
}

   Package the program and make the jar available under flume/lib (here via a symlink):
   [root@linux122 ~]# cd /data/lagoudw/jars
   [root@linux122 jars]# rz
   cn.lagou.dw-1.0-SNAPSHOT-jar-with-dependencies.jar
   [root@linux122 jars]# ln -s /data/lagoudw/jars/cn.lagou.dw-1.0-SNAPSHOT-jar-with-dependencies.jar /opt/lagou/servers/flume-1.9.0/lib/cn.lagou.dw-1.0-SNAPSHOT-jar-with-dependencies.jar
   Start the agent and test:
   flume-ng agent --conf /opt/lagou/servers/flume-1.9.0/conf --conf-file /data/lagoudw/conf/flumetest1.conf -name a1 -Dflume.root.logger=INFO,console
   [root@linux122 ~]# telnet linux122 9999
2020-08-02 18:19:32.959 [main] INFO com.lagou.ecommerce.AppStart - {"app_active":{"name":"app_active","json":{"entry":"1","action":"0","error_code":"0"},"time":1596342840284},"attr":{"area":"大庆","uid":"2F10092A2","app_v":"1.1.15","event_type":"common","device_id":"1FB872-9A1002","os_type":"2.8","channel":"TB","language":"chinese","brand":"iphone-8"}}
   
   OK
   The logger sink shows that the header now carries the date parsed out of the log line:
   Event: {headers:{logtime=2020-08-02} 
   body: 32 30 32 30 2D 30 38 2D 30 32 20 31 38 3A 31 39 2020-08-02 18:19 }
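
   With the interceptor in place, the logtime header can replace the local-time escapes in the HDFS Sink path. A sketch of how the earlier sink configuration would then look (this exact configuration is not part of this section):

```properties
# take the date from the header set by the interceptor instead of the local clock
a1.sinks.k1.hdfs.path = /user/data/logs/start/%{logtime}/
# the local timestamp is then no longer needed
a1.sinks.k1.hdfs.useLocalTimeStamp = false
```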
   