
Linux ELK Installation (Server Setup)

funnyZhang / 2746 reads


Introduction to ELK

ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth tool has since joined them: FileBeat, a lightweight log collection agent with a small resource footprint, well suited to gathering logs on each server and shipping them to Logstash; it is also the officially recommended shipper.

Elasticsearch is an open-source distributed search engine providing three core capabilities: data collection, analysis, and storage. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is a tool for collecting, parsing, and filtering logs, and supports a large number of data input methods. It typically runs in a client/server architecture: the client side is installed on the hosts whose logs need collecting, while the server side filters and transforms the logs received from each node before forwarding them to Elasticsearch.

Kibana is likewise open-source and free. It provides a friendly web UI for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

Filebeat belongs to the Beats family, which currently includes four tools:

Packetbeat (collects network traffic data)

Topbeat (collects system-, process-, and filesystem-level metrics such as CPU and memory usage)

Filebeat (collects log file data)

Winlogbeat (collects Windows event log data)

In general, ELK is used to aggregate large volumes of scattered data and extract and analyze the information in it. It is very capable at log aggregation in distributed systems, big-data analysis, fast retrieval of business data, and monitoring the health of every server in a cluster.

Take log collection in a distributed system as an example. With the rise of microservices and distributed deployments, the old practice of writing log files to a fixed location on a specific server no longer meets the need: there are more and more servers, back-end service clusters span several of them, and logs become increasingly scattered. Pinpointing a log entry in development, testing, or production gets harder and harder, with ops and developers digging through machine after machine to find the useful information. This is where ELK comes in: it collects and aggregates the logs from the whole service cluster and indexes them, so that when a problem occurs, locating it is as efficient and simple as using a search engine like Google.

Installation

A single machine is enough for an installation, but to stay closer to real-world usage I split this introductory ELK deployment across three machines.

The layout is as follows:

Host        IP              OS       Service
thinkvmc01  192.168.50.207  CentOS7  ElasticSearch
thinkvmc02  192.168.50.19   CentOS7  Logstash
thinkvmc03  192.168.50.54   CentOS7  Kibana
Prerequisites

ELK requires Java, and Java 8 is recommended. I won't belabor the JDK setup here.

# Check the JDK environment first
[thinktik@thinkvmc01 ~]$ java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
Installing

Installing ELK is not hard; just follow the official documentation, linked below:

Open Source Search & Analytics · Elasticsearch

We install Elasticsearch first. Download the generic Linux tarball elasticsearch-6.7.1.tar.gz. For simplicity you could instead download a prebuilt package for your specific Linux distribution; that installs more easily, but is less flexible.

Install Elasticsearch on thinkvmc01
# Download
[thinktik@thinkvmc01 thinktik]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.1.tar.gz
--2019-04-08 22:51:05--  https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.1.tar.gz
Resolving artifacts.elastic.co (artifacts.elastic.co)... 151.101.230.222, 2a04:4e42:1a::734
Connecting to artifacts.elastic.co (artifacts.elastic.co)|151.101.230.222|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 148542786 (142M) [application/x-gzip]
Saving to: ‘elasticsearch-6.7.1.tar.gz.1’

 2% [>                                                                ] 3,699,945   42.4KB/s  eta 25m 36s

....

# Download complete; extract
[thinktik@thinkvmc01 thinktik]# ls
elasticsearch-6.7.1.tar.gz  java8  jdk-8u201-linux-x64.tar.gz
[thinktik@thinkvmc01 thinktik]# tar -zxvf elasticsearch-6.7.1.tar.gz 
elasticsearch-6.7.1/
elasticsearch-6.7.1/lib/

....

elasticsearch-6.7.1/logs/
elasticsearch-6.7.1/plugins/

# Enter the installation directory
[thinktik@thinkvmc01 thinktik]# cd elasticsearch-6.7.1
[thinktik@thinkvmc01 elasticsearch-6.7.1]# ls
bin  config  lib  LICENSE.txt  logs  modules  NOTICE.txt  plugins  README.textile
[thinktik@thinkvmc01 elasticsearch-6.7.1]# cd config/
[thinktik@thinkvmc01 config]# ls
elasticsearch.yml  jvm.options  log4j2.properties  role_mapping.yml  roles.yml  users  users_roles
# Edit the config to bind our network interface. The default is 127.0.0.1, which would leave this ES instance unreachable from Logstash and Kibana on the other machines
[thinktik@thinkvmc01 config]# vim elasticsearch.yml 

# Changes as follows
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# bind IP address
network.host: 192.168.50.207
#
# Set a custom port for HTTP:
# port, default 9200
http.port: 9200

# Start
[thinktik@thinkvmc01 bin]$ ./elasticsearch
warning: Falling back to java on path. This behavior is deprecated. Specify JAVA_HOME
[2019-04-08T23:11:44,120][INFO ][o.e.e.NodeEnvironment    ] [ZVfIMzv] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [26.7gb], net total_space [28.9gb], types [rootfs]
[2019-04-08T23:11:44,126][INFO ][o.e.e.NodeEnvironment    ] [ZVfIMzv] heap size [1015.6mb], compressed 

....

# Startup fails here; the errors speak for themselves
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-04-08T23:12:06,558][INFO ][o.e.n.Node               ] [ZVfIMzv] stopping ...
[2019-04-08T23:12:06,636][INFO ][o.e.n.Node               ] [ZVfIMzv] stopped
[2019-04-08T23:12:06,637][INFO ][o.e.n.Node               ] [ZVfIMzv] closing ...
[2019-04-08T23:12:06,673][INFO ][o.e.n.Node               ] [ZVfIMzv] closed

# Adjust the system configuration as the errors suggest
[thinktik@thinkvmc01 bin]$ vim /etc/security/limits.conf
[thinktik@thinkvmc01 bin]$ su
Password: 

# Add the following lines
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
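The limits.conf change only applies to new login sessions, so it is worth confirming the limits actually took effect before retrying; a quick check (a sketch; the exact values depend on your limits.conf entries):

```shell
# log out and back in first, then inspect the per-process file descriptor limits
ulimit -Sn   # soft nofile limit; should now report 65536
ulimit -Hn   # hard nofile limit; should now report 131072
```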

# Start again
[thinktik@thinkvmc01 bin]$ ./elasticsearch
# Still an error; keep fixing
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-04-08T23:19:38,825][INFO ][o.e.n.Node               ] [ZVfIMzv] stopping ...
[2019-04-08T23:19:38,844][INFO ][o.e.n.Node               ] [ZVfIMzv] stopped
[2019-04-08T23:19:38,845][INFO ][o.e.n.Node               ] [ZVfIMzv] closing ...
[2019-04-08T23:19:38,887][INFO ][o.e.n.Node               ] [ZVfIMzv] closed
[2019-04-08T23:19:38,889][INFO ][o.e.x.m.p.NativeController] [ZVfIMzv] Native controller process has stopped - no new native processes can be started

# Fix the remaining check
[thinktik@thinkvmc01 bin]$ su
Password: 
[root@thinkvmc01 bin]# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144
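Note that `sysctl -w` only lasts until the next reboot. To make the setting permanent, a common approach is to persist it in /etc/sysctl.conf (a sketch; run as root):

```shell
# persist the mmap-count limit across reboots
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
# reload settings from /etc/sysctl.conf and verify
sysctl -p
sysctl vm.max_map_count
```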
# Start again
[thinktik@thinkvmc01 bin]$ ./elasticsearch
[2019-04-08T23:22:37,612][INFO ][o.e.c.s.ClusterApplierService] [ZVfIMzv] new_master {ZVfIMzv}{ZVfIMzviR5ie4WVCaO9CZA}{B3vTE3wKSriPc-LwHC8J-A}{192.168.50.207}{192.168.50.207:9300}{ml.machine_memory=1927471104, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {ZVfIMzv}{ZVfIMzviR5ie4WVCaO9CZA}{B3vTE3wKSriPc-LwHC8J-A}{192.168.50.207}{192.168.50.207:9300}{ml.machine_memory=1927471104, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-04-08T23:22:37,792][INFO ][o.e.h.n.Netty4HttpServerTransport] [ZVfIMzv] publish_address {192.168.50.207:9200}, bound_addresses {192.168.50.207:9200}
[2019-04-08T23:22:37,792][INFO ][o.e.n.Node               ] [ZVfIMzv] started
[2019-04-08T23:22:38,740][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [ZVfIMzv] Failed to clear cache for realms [[]]
[2019-04-08T23:22:38,850][INFO ][o.e.l.LicenseService     ] [ZVfIMzv] license [41e1ad3d-893b-48c6-98b1-71e02ab1a367] mode [basic] - valid
[2019-04-08T23:22:38,873][INFO ][o.e.g.GatewayService     ] [ZVfIMzv] recovered [0] indices into cluster_state

# Success

Verification

[thinktik@thinkvmc01 ~]$ netstat -nlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -                   
# ES is listening on 9200 and 9300
tcp6       0      0 192.168.50.207:9200     :::*                    LISTEN      12829/java          
tcp6       0      0 192.168.50.207:9300     :::*                    LISTEN      12829/java          
tcp6       0      0 :::22                   :::*                    LISTEN      -                   
tcp6       0      0 ::1:25                  :::*                    LISTEN      -                   
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -                   
raw6       0      0 :::58                   :::*                    7           -                   
raw6       0      0 :::58                   :::*                    7           -    

# Open the ports in the firewall
[root@thinkvmc01 thinktik]# firewall-cmd --zone=public --add-port=9200/tcp --permanent
success
[root@thinkvmc01 thinktik]# firewall-cmd --zone=public --add-port=9300/tcp --permanent
success
[root@thinkvmc01 thinktik]# firewall-cmd --reload
success
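To confirm the rules survived the reload, you can list the ports currently open in the zone:

```shell
# list open ports in the public zone; the output should include 9200/tcp and 9300/tcp
firewall-cmd --zone=public --list-ports
```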


# Verify from thinkvmc02 that ES on thinkvmc01 works. You can also open the address below in a browser
[thinktik@thinkvmc02 ~]$ curl -i http://192.168.50.207:9200/
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 493

{
  "name" : "ZVfIMzv",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "mhuFY2EcRl6Bt9xqKiyY7Q",
  "version" : {
    "number" : "6.7.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "2f32220",
    "build_date" : "2019-04-02T15:59:27.961366Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}


Elasticsearch is now installed.

Install Logstash on thinkvmc02
# Verify Java
[thinktik@thinkvmc02 java8]$ java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)

# Download
[thinktik@thinkvmc02 java8]$ wget https://artifacts.elastic.co/downloads/logstash/logstash-6.7.1.tar.gz
--2019-04-08 23:32:55--  https://artifacts.elastic.co/downloads/logstash/logstash-6.7.1.tar.gz
Resolving artifacts.elastic.co (artifacts.elastic.co)... 151.101.110.222, 2a04:4e42:36::734
Connecting to artifacts.elastic.co (artifacts.elastic.co)|151.101.110.222|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 175824421 (168M) [application/x-gzip]
Saving to: ‘logstash-6.7.1.tar.gz’

 6% [==>                                                              ] 10,605,295   113KB/s  eta 9m 57s
 
...

[thinktik@thinkvmc02 ~]$ ls
java8  jdk-8u201-linux-x64.tar.gz  logstash-6.7.1.tar.gz
[thinktik@thinkvmc02 ~]$ tar -zxvf logstash-6.7.1.tar.gz 
...
logstash-6.7.1/x-pack/src/test/java/org
logstash-6.7.1/x-pack/src/test/java/org/logstash
logstash-6.7.1/x-pack/src/test/java/org/logstash/xpack
logstash-6.7.1/x-pack/src/test/java/org/logstash/xpack/test
logstash-6.7.1/x-pack/src/test/java/org/logstash/xpack/test/RSpecIntegrationTests.java
logstash-6.7.1/x-pack/src/test/java/org/logstash/xpack/test/RSpecTests.java
logstash-6.7.1/LICENSE.txt
logstash-6.7.1/logstash-core/lib/logstash/build.rb


[thinktik@thinkvmc02 ~]$ cd logstash-6.7.1
[thinktik@thinkvmc02 logstash-6.7.1]$ ls
bin     CONTRIBUTORS  Gemfile       lib          logstash-core             modules     tools   x-pack
config  data          Gemfile.lock  LICENSE.txt  logstash-core-plugin-api  NOTICE.TXT  vendor
[thinktik@thinkvmc02 logstash-6.7.1]$ cd config/
[thinktik@thinkvmc02 config]$ ls
jvm.options  log4j2.properties  logstash-sample.conf  logstash.yml  pipelines.yml  startup.options
[thinktik@thinkvmc02 config]$ cp logstash-sample.conf logstash.conf 
[thinktik@thinkvmc02 config]$ vim logstash.conf 
# Just make sure the ES address here points at the right host
    input {
      beats {
        port => 5044
      }
    }
    
    output {
      elasticsearch {
        hosts => ["http://192.168.50.207:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        #user => "elastic"
        #password => "changeme"
      }
    }
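Before launching the pipeline, Logstash can validate a config file without starting it, via the `-t` (`--config.test_and_exit`) flag; a quick sanity check:

```shell
# parse-check logstash.conf and exit; logs "Configuration OK" when the file is valid
./logstash -f ../config/logstash.conf -t
```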

[thinktik@thinkvmc02 config]$ vim logstash.yml 
# Set this machine's own IP here
    # ------------ Metrics Settings --------------
    #
    # Bind address for the metrics REST endpoint
    #
    http.host: "192.168.50.19"


# Start
[thinktik@thinkvmc02 bin]$ ./logstash -f ../config/logstash.conf 
Sending Logstash logs to /home/thinktik/logstash-6.7.1/logs which is now configured via log4j2.properties
[2019-04-08T23:47:53,295][WARN ][logstash.config.source.multilocal] Ignoring the "pipelines.yml" file because modules or command line options are specified
[2019-04-08T23:47:53,324][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.7.1"}
[2019-04-08T23:48:08,245][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-08T23:48:09,323][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.50.207:9200/]}}
# The log shows the ES address is correct
[2019-04-08T23:48:09,919][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.50.207:9200/"}
[2019-04-08T23:48:10,080][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-08T23:48:10,096][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-04-08T23:48:10,174][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.50.207:9200"]}
[2019-04-08T23:48:10,250][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-04-08T23:48:10,318][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-04-08T23:48:11,308][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-04-08T23:48:11,360][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#"}
[2019-04-08T23:48:11,499][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
# The log shows 5044 and 9600 are being listened on
[2019-04-08T23:48:11,589][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-04-08T23:48:12,194][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
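The API endpoint on 9600 can be queried for node and pipeline statistics, which is a handy way to confirm events are flowing later on (using the IP we set in logstash.yml):

```shell
# basic node info from the Logstash monitoring API
curl http://192.168.50.19:9600/?pretty
# per-pipeline event counters (in / filtered / out)
curl http://192.168.50.19:9600/_node/stats/pipelines?pretty
```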

# Check the listening ports
[thinktik@thinkvmc02 ~]$ netstat -nlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::5044                 :::*                    LISTEN      27467/java          
tcp6       0      0 :::22                   :::*                    LISTEN      -                   
tcp6       0      0 ::1:25                  :::*                    LISTEN      -                   
tcp6       0      0 192.168.50.19:9600      :::*                    LISTEN      27467/java          
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -                   
raw6       0      0 :::58                   :::*                    7           -                   
raw6       0      0 :::58                   :::*                    7           -       
# Open the firewall ports
[root@thinkvmc02 thinktik]# firewall-cmd --zone=public --add-port=9600/tcp --permanent
success
[root@thinkvmc02 thinktik]# firewall-cmd --zone=public --add-port=5044/tcp --permanent
success
[root@thinkvmc02 thinktik]# firewall-cmd --reload
success

Logstash is now installed.

Install Kibana on thinkvmc03
[thinktik@thinkvmc03 ~]$ java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)

# Edit the config
[thinktik@thinkvmc03 config]$ pwd
/home/thinktik/kibana-6.7.1-linux-x86_64/config
[thinktik@thinkvmc03 config]$ vim kibana.yml
# Set this to the machine's own IP; the default port is 5601
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.50.54"
# Point this at the Elasticsearch instance (on thinkvmc01)
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.50.207:9200"]

# Start
[thinktik@thinkvmc03 bin]$ ./kibana

  log   [16:04:24.455] [info][status][plugin:kibana@6.7.1] Status changed from uninitialized to green - Ready
  log   [16:04:24.507] [info][status][plugin:elasticsearch@6.7.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [16:04:24.510] [info][status][plugin:xpack_main@6.7.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [16:04:24.523] [info][status][plugin:graph@6.7.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch


# Check
[thinktik@thinkvmc03 config]$ netstat -nlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -                   
# Listening as expected
tcp        0      0 192.168.50.54:5601      0.0.0.0:*               LISTEN      27474/./../node/bin 
tcp6       0      0 :::22                   :::*                    LISTEN      -                   
tcp6       0      0 ::1:25                  :::*                    LISTEN      -                   
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -                   
raw6       0      0 :::58                   :::*                    7           -                   
raw6       0      0 :::58                   :::*                    7           -      
# Open the port in the firewall
[root@thinkvmc03 config]# firewall-cmd --zone=public --add-port=5601/tcp --permanent
success
[root@thinkvmc03 config]# firewall-cmd --reload
success

Kibana in the browser (screenshot).

This completes the basic ELK setup.

Install Filebeat on thinkvmc03

Next we install Filebeat and use the ELKF architecture to collect log4j logs.

For convenience, Filebeat is installed on thinkvmc03 so that, together with Logstash on thinkvmc02, it forms a distributed layout that simulates collecting and shipping log data across machines.

The official installation steps are simple as well; this is routine by now.

[thinktik@thinkvmc03 ~]$ tar -zxvf filebeat-6.7.1-linux-x86_64.tar.gz 
filebeat-6.7.1-linux-x86_64/.build_hash.txt
filebeat-6.7.1-linux-x86_64/fields.yml
filebeat-6.7.1-linux-x86_64/LICENSE.txt
filebeat-6.7.1-linux-x86_64/NOTICE.txt
filebeat-6.7.1-linux-x86_64/kibana/
filebeat-6.7.1-linux-x86_64/kibana/5/
filebeat-6.7.1-linux-x86_64/

...
filebeat-6.7.1-linux-x86_64/module/traefik/access/machine_learning/visitor_rate.json
filebeat-6.7.1-linux-x86_64/module/traefik/access/manifest.yml
filebeat-6.7.1-linux-x86_64/module/traefik/module.yml
filebeat-6.7.1-linux-x86_64/filebeat.reference.yml
filebeat-6.7.1-linux-x86_64/filebeat

# Edit the config to point Filebeat at our output
[thinktik@thinkvmc03 filebeat-6.7.1-linux-x86_64]$ vim filebeat.yml 



    #=========================== Filebeat inputs =============================
# Have Filebeat read the log /home/thinktik/ELKF_TEST.log
    filebeat.inputs:
    
    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input specific configurations.
    
    - type: log
    
      # Change to true to enable this input configuration.
# Set this to true to enable the input
      enabled: true
    
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /home/thinktik/ELKF_TEST.log
        #- /var/log/*.log
        #- c:\programdata\elasticsearch\logs\*

    #-------------------------- Elasticsearch output ------------------------------
# Output directly to Elasticsearch; direct output is not recommended here
    #output.elasticsearch:
      # Array of hosts to connect to.
      # hosts: ["192.168.50.207:9200"]
    
      # Enabled ilm (beta) to use index lifecycle management instead daily indices.
      #ilm.enabled: false
    
      # Optional protocol and basic auth credentials.
      #protocol: "https"
      #username: "elastic"
      #password: "changeme"
    
    #----------------------------- Logstash output --------------------------------
# This is the Logstash section; output to Logstash instead, which we do recommend. Just get the address right
    output.logstash:
      # The Logstash hosts
      hosts: ["192.168.50.19:5044"]
    
      # Optional SSL. By default is off.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
    
      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"
    
      # Client Certificate Key

# Save the config, then start
[thinktik@thinkvmc03 filebeat-6.7.1-linux-x86_64]$ ./filebeat 
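Filebeat also ships with built-in self-test subcommands that are useful for troubleshooting: `test config` checks that the YAML parses, and `test output` attempts a connection to the configured Logstash endpoint:

```shell
# validate filebeat.yml; prints "Config OK" when it parses
./filebeat test config
# try to reach the configured output (Logstash on 192.168.50.19:5044)
./filebeat test output
```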

# Now append some data to /home/thinktik/ELKF_TEST.log and wait for it to appear in Kibana
# To recap, the flow is: Filebeat -> Logstash -> ES -> Kibana
# If everything is fine, we check the result in Kibana
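To exercise the whole Filebeat -> Logstash -> ES path from the shell, you can append a few lines to the watched file and then ask Elasticsearch whether a filebeat index has appeared (the log text here is arbitrary; the index name follows the pattern set in logstash.conf):

```shell
# generate a test log line on thinkvmc03
echo "hello ELKF $(date)" >> /home/thinktik/ELKF_TEST.log
# after a few seconds, a filebeat-6.7.1-yyyy.MM.dd index should show up on ES
curl 'http://192.168.50.207:9200/_cat/indices?v'
# and the shipped documents can be searched directly
curl 'http://192.168.50.207:9200/filebeat-*/_search?q=hello&pretty'
```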

Checking the ELKF result

Here we can see the logs being read correctly (screenshot).

Next we fine-tune a few settings (screenshots).

Let's run some searches.

First, search for logs with host.name=thinkvmc03.

Then search for logs with host.name=thinkvmc03 whose source file is /home/thinktik/ELKF_TEST.log.

The matches are correct.

We'll go on to collect other kinds of logs; let's give log4j a try.
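As a preview, here is a minimal, hypothetical log4j 1.x properties file that writes to the same file Filebeat is already tailing; the path and pattern are assumptions for illustration, not part of the original setup:

```properties
# hypothetical file appender; point filebeat.inputs at the same path
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=/home/thinktik/ELKF_TEST.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%t] %-5p %c - %m%n
```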

