
Flume Basics, Part 2: Development Cases

Development Cases

Official Example: Monitoring Port Data

Case Requirements

  • First, start a Flume agent that monitors port 44444 on the local machine (the server side);
  • Then use the netcat tool to send messages to port 44444 on the local machine (the client side);
  • Finally, Flume displays the received data on the console in real time.

Requirements Analysis


Implementation Steps

  1. Install the netcat tool

    [rickyin@hadoop102 software]$ sudo yum install -y nc
  2. Check whether port 44444 is already in use

    [rickyin@hadoop102 flume-telnet]$ sudo netstat -tunlp | grep 44444

Description: netstat is a very useful tool for monitoring TCP/IP networks; it can display the routing table, active network connections, and status information for every network interface device.

Basic syntax: netstat [options]
Options:
-t or --tcp: show TCP connections;
-u or --udp: show UDP connections;
-n or --numeric: show numeric IP addresses instead of resolving names through DNS;
-l or --listening: show only listening server sockets;
-p or --programs: show the PID and name of the program using each socket;
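As a quick illustration of these flags, the sketch below re-runs the check; the PID/program value mentioned in the comment is hypothetical, not taken from a real run.

    # No output means port 44444 is free; if a process is listening, the last column shows PID/program (e.g. 1234/java)
    sudo netstat -tunlp | grep 44444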
  3. Create the Flume agent configuration file flume-netcat-logger.conf

    • Create a job folder under the flume directory and enter it.

      [rickyin@hadoop102 flume]$ mkdir job
      [rickyin@hadoop102 flume]$ cd job/
    • Create the Flume agent configuration file flume-netcat-logger.conf in the job folder.

      [rickyin@hadoop102 job]$ touch flume-netcat-logger.conf
    • Open flume-netcat-logger.conf for editing.

      [rickyin@hadoop102 job]$ vim flume-netcat-logger.conf
    • Add the following content:

      # Name the components on this agent
      a1.sources = r1
      a1.sinks = k1
      a1.channels = c1

      # Describe/configure the source
      a1.sources.r1.type = netcat
      a1.sources.r1.bind = localhost
      a1.sources.r1.port = 44444

      # Describe the sink
      a1.sinks.k1.type = logger

      # Use a channel which buffers events in memory
      a1.channels.c1.type = memory
      a1.channels.c1.capacity = 1000
      a1.channels.c1.transactionCapacity = 100

      # Bind the source and sink to the channel
      a1.sources.r1.channels = c1
      a1.sinks.k1.channel = c1


  4. First, start the Flume agent so that it listens on the port

    • First form:

      [rickyin@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console
    • Second form:

      [rickyin@hadoop102 flume]$ bin/flume-ng agent -c conf/ -n a1 -f job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console
Parameter notes:
--conf conf/ : the Flume configuration files are stored in the conf/ directory.
--name a1 : names this agent a1.
--conf-file job/flume-netcat-logger.conf : the configuration file Flume reads for this run, located in the job folder.
-Dflume.root.logger=INFO,console : -D overrides the flume.root.logger property at runtime, setting the console log level to INFO. Available levels include debug, info, warn, and error.
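For longer test runs it can be convenient to keep the agent in the background; a minimal sketch, assuming a logs/ directory for the captured console output (this path is not part of the original steps):

    # Start the agent detached and keep its console output in a file (hypothetical log path)
    nohup bin/flume-ng agent -c conf/ -n a1 -f job/flume-netcat-logger.conf \
        -Dflume.root.logger=INFO,console > logs/flume-netcat.log 2>&1 &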
  5. Use netcat to send data to port 44444 on the local machine

    [rickyin@hadoop102 ~]$ nc localhost 44444
    hello
    atguigu
  6. Observe the received data in the Flume console

Case: Reading a Local File to HDFS in Real Time

Case Requirements

Monitor the Hive log in real time and upload it to HDFS.

Requirements Analysis


Implementation Steps

  1. To write data to HDFS, Flume must have the relevant Hadoop jar packages on its classpath

    Copy the following jars into the /opt/module/flume/lib folder:
    commons-configuration-1.6.jar
    hadoop-auth-2.7.2.jar
    hadoop-common-2.7.2.jar
    hadoop-hdfs-2.7.2.jar
    commons-io-2.4.jar
    htrace-core-3.1.0-incubating.jar
  2. Create the flume-file-hdfs.conf file

    • Create the file

      [rickyin@hadoop102 job]$ touch flume-file-hdfs.conf

      Note: to read a file on a Linux system, the source must follow the rules of Linux commands. Since the Hive log lives on the Linux filesystem, the source type chosen here is exec (short for execute), which reads the file by executing a Linux command.

      [rickyin@hadoop102 job]$ vim flume-file-hdfs.conf
    • Add the following content

      # Name the components on this agent
      a2.sources = r2
      a2.sinks = k2
      a2.channels = c2

      # Describe/configure the source
      a2.sources.r2.type = exec
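      # tail -F follows the log by file name, so it keeps working after Hive rotates hive.log (plain tail -f would stop at rotation)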
      a2.sources.r2.command = tail -F /opt/module/hive/logs/hive.log
      a2.sources.r2.shell = /bin/bash -c

      # Describe the sink
      a2.sinks.k2.type = hdfs
      a2.sinks.k2.hdfs.path = hdfs://hadoop101:9000/flume/%Y%m%d/%H
      #Prefix for uploaded files
      a2.sinks.k2.hdfs.filePrefix = logs-
      #Whether to roll folders based on time
      a2.sinks.k2.hdfs.round = true
      #How many time units before creating a new folder
      a2.sinks.k2.hdfs.roundValue = 1
      #The time unit used above
      a2.sinks.k2.hdfs.roundUnit = hour
      #Whether to use the local timestamp
      a2.sinks.k2.hdfs.useLocalTimeStamp = true
      #How many events to accumulate before flushing to HDFS once (should not exceed the channel's transactionCapacity)
      a2.sinks.k2.hdfs.batchSize = 100
      #File type; compression is supported
      a2.sinks.k2.hdfs.fileType = DataStream
      #How often (seconds) to roll over to a new file
      a2.sinks.k2.hdfs.rollInterval = 60
      #Roll the file once it reaches this size (roughly 128 MB)
      a2.sinks.k2.hdfs.rollSize = 134217700
      #File rolling is independent of the number of events
      a2.sinks.k2.hdfs.rollCount = 0

      # Use a channel which buffers events in memory
      a2.channels.c2.type = memory
      a2.channels.c2.capacity = 1000
      a2.channels.c2.transactionCapacity = 100

      # Bind the source and sink to the channel
      a2.sources.r2.channels = c2
      a2.sinks.k2.channel = c2

Note: for all time-related escape sequences, the event header must contain a key named "timestamp" (unless hdfs.useLocalTimeStamp is set to true, in which case the timestamp is added automatically, as a TimestampInterceptor would do).

a2.sinks.k2.hdfs.useLocalTimeStamp = true
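If you would rather not depend on the sink's local clock, the same effect can be achieved with a timestamp interceptor on the source; a minimal sketch of that alternative (not part of the original configuration):

    # Hypothetical alternative: stamp each event with a timestamp header at the source
    a2.sources.r2.interceptors = i1
    a2.sources.r2.interceptors.i1.type = timestamp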


  3. Run the agent with this configuration

    [rickyin@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/flume-file-hdfs.conf
  4. Start Hadoop and Hive, then run Hive operations to generate log output

    [rickyin@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
    [rickyin@hadoop103 hadoop-2.7.2]$ sbin/start-yarn.sh

    [rickyin@hadoop102 hive]$ bin/hive
    hive (default)>
  5. Check the files on HDFS.
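A hedged way to check from the command line (the date/hour bucket in the comment is only an example of what the %Y%m%d/%H escapes produce):

    # Recursively list what the a2 agent has written; a bucket such as /flume/20190520/22 is illustrative
    hdfs dfs -ls -R /flume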

Case: Reading Directory Files to HDFS in Real Time

Case Requirements

Use Flume to monitor all the files in a directory.

Requirements Analysis


Implementation Steps

Create the configuration file flume-dir-hdfs.conf
  1. Create the file

    [rickyin@hadoop102 job]$ touch flume-dir-hdfs.conf
  2. Open the file

    [rickyin@hadoop102 job]$ vim flume-dir-hdfs.conf
  3. Add the following content

    a3.sources = r3
    a3.sinks = k3
    a3.channels = c3

    # Describe/configure the source
    a3.sources.r3.type = spooldir
    a3.sources.r3.spoolDir = /opt/module/flume/upload
    a3.sources.r3.fileSuffix = .COMPLETED
    a3.sources.r3.fileHeader = true
    #Ignore all files ending in .tmp; do not upload them
    a3.sources.r3.ignorePattern = ([^ ]*\.tmp)

    # Describe the sink
    a3.sinks.k3.type = hdfs
    a3.sinks.k3.hdfs.path = hdfs://hadoop101:9000/flume/upload/%Y%m%d/%H
    #Prefix for uploaded files
    a3.sinks.k3.hdfs.filePrefix = upload-
    #Whether to roll folders based on time
    a3.sinks.k3.hdfs.round = true
    #How many time units before creating a new folder
    a3.sinks.k3.hdfs.roundValue = 1
    #The time unit used above
    a3.sinks.k3.hdfs.roundUnit = hour
    #Whether to use the local timestamp
    a3.sinks.k3.hdfs.useLocalTimeStamp = true
    #How many events to accumulate before flushing to HDFS once
    a3.sinks.k3.hdfs.batchSize = 100
    #File type; compression is supported
    a3.sinks.k3.hdfs.fileType = DataStream
    #How often (seconds) to roll over to a new file
    a3.sinks.k3.hdfs.rollInterval = 60
    #Roll the file once it reaches this size (roughly 128 MB)
    a3.sinks.k3.hdfs.rollSize = 134217700
    #File rolling is independent of the number of events
    a3.sinks.k3.hdfs.rollCount = 0

    # Use a channel which buffers events in memory
    a3.channels.c3.type = memory
    a3.channels.c3.capacity = 1000
    a3.channels.c3.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r3.channels = c3
    a3.sinks.k3.channel = c3


Command to start the directory-monitoring agent
[rickyin@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/flume-dir-hdfs.conf

Notes on using the Spooling Directory Source:
1) Do not create files in the monitored directory and then keep modifying them.
2) Files whose upload has completed are renamed with the .COMPLETED suffix.
3) The monitored folder is scanned for changes every 500 milliseconds.
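Because of note 1), a common pattern is to finish writing a file somewhere else and only then move it into the monitored directory; a minimal sketch, with /tmp used as a hypothetical staging location:

    # Write the file completely outside the spool directory, then move it in
    echo "finished log content" > /tmp/app.log
    mv /tmp/app.log /opt/module/flume/upload/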
Add files to the upload folder
  1. Create the upload folder under /opt/module/flume

    [rickyin@hadoop102 flume]$ mkdir upload
  2. Add files to the upload folder

    [rickyin@hadoop102 upload]$ touch atguigu.txt
    [rickyin@hadoop102 upload]$ touch atguigu.tmp
    [rickyin@hadoop102 upload]$ touch atguigu.log
Check the data on HDFS
Wait one second, then list the upload folder again
[rickyin@hadoop102 upload]$ ll
total 0
-rw-rw-r--. 1 atguigu atguigu 0 May 20 22:31 atguigu.log.COMPLETED
-rw-rw-r--. 1 atguigu atguigu 0 May 20 22:31 atguigu.tmp
-rw-rw-r--. 1 atguigu atguigu 0 May 20 22:31 atguigu.txt.COMPLETED
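A hedged check of the HDFS side for this case (the exact time bucket under /flume/upload depends on when the agent runs):

    hdfs dfs -ls -R /flume/upload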

Case: Single Source, Multiple Outputs (Selector)

This case uses a single source feeding multiple channels and sinks.


Case Requirements

Use Flume-1 to monitor file changes. Flume-1 passes the changed content to Flume-2, which stores it in HDFS; at the same time, Flume-1 passes the changed content to Flume-3, which writes it to the local file system.

Requirements Analysis


Implementation Steps

Preparation
  1. Create the group1 folder under /opt/module/flume/job

    [rickyin@hadoop102 job]$ mkdir group1
    [rickyin@hadoop102 job]$ cd group1/
  2. Create the flume3 folder under /opt/module/datas/

    [rickyin@hadoop102 datas]$ mkdir flume3
Create flume-file-flume.conf
  1. Configure one source that reads the log file, plus two channels and two sinks that feed flume-flume-hdfs and flume-flume-dir respectively.
  2. Create and open the configuration file

    [rickyin@hadoop102 group1]$ touch flume-file-flume.conf
    [rickyin@hadoop102 group1]$ vim flume-file-flume.conf
  3. Add the following content

    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1 k2
    a1.channels = c1 c2
    # Replicate the data flow to every channel
    a1.sources.r1.selector.type = replicating
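    # (replicating is the default selector; a multiplexing selector would instead route events by a header value)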

    # Describe/configure the source
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
    a1.sources.r1.shell = /bin/bash -c

    # Describe the sink
    # On the sink side, avro acts as a data sender
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = hadoop102
    a1.sinks.k1.port = 4141

    a1.sinks.k2.type = avro
    a1.sinks.k2.hostname = hadoop102
    a1.sinks.k2.port = 4142

    # Describe the channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    a1.channels.c2.type = memory
    a1.channels.c2.capacity = 1000
    a1.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1 c2
    a1.sinks.k1.channel = c1
    a1.sinks.k2.channel = c2

Note: Avro is a language-neutral data serialization and RPC framework created by Doug Cutting, the founder of Hadoop.

Note: RPC (Remote Procedure Call) is a protocol for requesting a service from a program on a remote computer over a network, without needing to understand the underlying network technology.
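Once a downstream agent with an avro source is running, Flume's built-in avro-client is a quick way to confirm the source is reachable; a minimal sketch, where /tmp/test.txt is a hypothetical input file:

    # Send each line of the (hypothetical) local file to the avro source listening on hadoop102:4141
    bin/flume-ng avro-client -H hadoop102 -p 4141 -F /tmp/test.txt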

Create flume-flume-hdfs.conf
  1. Configure a source that receives the output of the upstream Flume, with a sink that writes to HDFS. Create and open the configuration file

    [rickyin@hadoop102 group1]$ touch flume-flume-hdfs.conf
    [rickyin@hadoop102 group1]$ vim flume-flume-hdfs.conf
  2. Add the following content

    # Name the components on this agent
    a2.sources = r1
    a2.sinks = k1
    a2.channels = c1

    # Describe/configure the source
    # On the source side, avro acts as a data-receiving service
    a2.sources.r1.type = avro
    a2.sources.r1.bind = hadoop102
    a2.sources.r1.port = 4141

    # Describe the sink
    a2.sinks.k1.type = hdfs
    a2.sinks.k1.hdfs.path = hdfs://hadoop101:9000/flume2/%Y%m%d/%H
    #Prefix for uploaded files
    a2.sinks.k1.hdfs.filePrefix = flume2-
    #Whether to roll folders based on time
    a2.sinks.k1.hdfs.round = true
    #How many time units before creating a new folder
    a2.sinks.k1.hdfs.roundValue = 1
    #The time unit used above
    a2.sinks.k1.hdfs.roundUnit = hour
    #Whether to use the local timestamp
    a2.sinks.k1.hdfs.useLocalTimeStamp = true
    #How many events to accumulate before flushing to HDFS once
    a2.sinks.k1.hdfs.batchSize = 100
    #File type; compression is supported
    a2.sinks.k1.hdfs.fileType = DataStream
    #How often (seconds) to roll over to a new file
    a2.sinks.k1.hdfs.rollInterval = 600
    #Roll the file once it reaches this size (roughly 128 MB)
    a2.sinks.k1.hdfs.rollSize = 134217700
    #File rolling is independent of the number of events
    a2.sinks.k1.hdfs.rollCount = 0

    # Describe the channel
    a2.channels.c1.type = memory
    a2.channels.c1.capacity = 1000
    a2.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r1.channels = c1
    a2.sinks.k1.channel = c1
Create flume-flume-dir.conf
  1. Configure a source that receives the output of the upstream Flume, with a sink that writes to a local directory.
    Create and open the configuration file

    [rickyin@hadoop102 group1]$ touch flume-flume-dir.conf
    [rickyin@hadoop102 group1]$ vim flume-flume-dir.conf
  2. Add the following content

    # Name the components on this agent
    a3.sources = r1
    a3.sinks = k1
    a3.channels = c2

    # Describe/configure the source
    a3.sources.r1.type = avro
    a3.sources.r1.bind = hadoop102
    a3.sources.r1.port = 4142

    # Describe the sink
    a3.sinks.k1.type = file_roll
    a3.sinks.k1.sink.directory = /opt/module/datas/flume3
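    # (by default the file_roll sink rolls to a new output file every 30 seconds; sink.rollInterval controls this)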

    # Describe the channel
    a3.channels.c2.type = memory
    a3.channels.c2.capacity = 1000
    a3.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r1.channels = c2
    a3.sinks.k1.channel = c2

Tip: the local output directory must already exist; if it does not, the file_roll sink will not create it.

Run the configurations

Start the agents for the configuration files in this order: flume-flume-dir, flume-flume-hdfs, flume-file-flume.

[rickyin@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group1/flume-flume-dir.conf

[rickyin@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group1/flume-flume-hdfs.conf

[rickyin@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group1/flume-file-flume.conf

Start Hadoop and Hive
[rickyin@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
[rickyin@hadoop103 hadoop-2.7.2]$ sbin/start-yarn.sh

[rickyin@hadoop102 hive]$ bin/hive
hive (default)>
Check the data on HDFS
Check the data in the /opt/module/datas/flume3 directory
[rickyin@hadoop102 flume3]$ ll
total 8
-rw-rw-r--. 1 rickyin rickyin 5942 May 22 00:09 1526918887550-3

Case: Single Source, Multiple Outputs (Sink Group)

This case uses a single source and a single channel feeding multiple sinks (load balancing).


Case Requirements

Use Flume-1 to listen for netcat data on a port. Flume-1 distributes the events across Flume-2 and Flume-3 through a load-balancing sink group; Flume-2 and Flume-3 each print the events they receive to the console.

Requirements Analysis


Implementation Steps

Preparation
  • Create the group2 folder under /opt/module/flume/job
    [rickyin@hadoop102 job]$ mkdir group2
    [rickyin@hadoop102 job]$ cd group2/
Create flume-netcat-flume.conf
  • Configure one source, one channel, and two sinks that feed flume-flume-console1 and flume-flume-console2 respectively. Create and open the configuration file

    [rickyin@hadoop102 group2]$ touch flume-netcat-flume.conf
    [rickyin@hadoop102 group2]$ vim flume-netcat-flume.conf
  • Add the following content

    # Name the components on this agent
    a1.sources = r1
    a1.channels = c1
    a1.sinkgroups = g1
    a1.sinks = k1 k2

    # Describe/configure the source
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444

    # Sink group processor: load-balance events across the group's sinks in round-robin order;
    # with backoff enabled, a failing sink is temporarily blacklisted for up to maxTimeOut milliseconds
    a1.sinkgroups.g1.processor.type = load_balance
    a1.sinkgroups.g1.processor.backoff = true
    a1.sinkgroups.g1.processor.selector = round_robin
    a1.sinkgroups.g1.processor.selector.maxTimeOut = 10000

    # Describe the sink
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = hadoop102
    a1.sinks.k1.port = 4141

    a1.sinks.k2.type = avro
    a1.sinks.k2.hostname = hadoop102
    a1.sinks.k2.port = 4142

    # Describe the channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinkgroups.g1.sinks = k1 k2
    a1.sinks.k1.channel = c1
    a1.sinks.k2.channel = c1


Create flume-flume-console1.conf
  • Configure a source that receives the output of the upstream Flume, with a sink that prints to the local console. Create and open the configuration file

    [rickyin@hadoop102 group2]$ touch flume-flume-console1.conf
    [rickyin@hadoop102 group2]$ vim flume-flume-console1.conf
  • Add the following content

    # Name the components on this agent
    a2.sources = r1
    a2.sinks = k1
    a2.channels = c1

    # Describe/configure the source
    a2.sources.r1.type = avro
    a2.sources.r1.bind = hadoop102
    a2.sources.r1.port = 4141

    # Describe the sink
    a2.sinks.k1.type = logger

    # Describe the channel
    a2.channels.c1.type = memory
    a2.channels.c1.capacity = 1000
    a2.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r1.channels = c1
    a2.sinks.k1.channel = c1
Create flume-flume-console2.conf
  • Configure a source that receives the output of the upstream Flume, with a sink that prints to the local console. Create and open the configuration file

    [rickyin@hadoop102 group2]$ touch flume-flume-console2.conf
    [rickyin@hadoop102 group2]$ vim flume-flume-console2.conf
  • Add the following content

    # Name the components on this agent
    a3.sources = r1
    a3.sinks = k1
    a3.channels = c2

    # Describe/configure the source
    a3.sources.r1.type = avro
    a3.sources.r1.bind = hadoop102
    a3.sources.r1.port = 4142

    # Describe the sink
    a3.sinks.k1.type = logger

    # Describe the channel
    a3.channels.c2.type = memory
    a3.channels.c2.capacity = 1000
    a3.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r1.channels = c2
    a3.sinks.k1.channel = c2
Run the configurations
  • Start the agents for the configuration files in this order: flume-flume-console2, flume-flume-console1, flume-netcat-flume.
    [rickyin@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group2/flume-flume-console2.conf -Dflume.root.logger=INFO,console

    [rickyin@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group2/flume-flume-console1.conf -Dflume.root.logger=INFO,console

    [rickyin@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group2/flume-netcat-flume.conf
Use netcat to send data to port 44444 on the local machine
$ nc localhost 44444
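Each line sent here should show up in the console of either the a2 or the a3 agent; with the round-robin load balancer, successive events are distributed roughly alternately between the two.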