ELK + Filebeat + Kafka Cluster Deployment

Published: 2024-08-08

 

Host layout: 2 ES nodes, 1 Kibana/Logstash node, 3 Kafka nodes, and 1 Nginx + Filebeat node:

192.168.124.10  es
192.168.124.20  es
192.168.124.30  kibana + logstash
192.168.124.50  kafka
192.168.124.51  kafka
192.168.124.60  kafka
192.168.124.40  nginx + filebeat

Install the Nginx service.
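A minimal sketch of a source build that matches the /usr/local/nginx paths used below; the nginx version and the yum package names are assumptions:

yum install -y gcc make pcre-devel zlib-devel wget
wget http://nginx.org/download/nginx-1.24.0.tar.gz        # version is an assumption
tar xf nginx-1.24.0.tar.gz && cd nginx-1.24.0
./configure --prefix=/usr/local/nginx
make && make install
/usr/local/nginx/sbin/nginx                               # start nginx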

vim /usr/local/nginx/html/index.html

Put a line such as "this is nginx" into the page, then open http://192.168.124.40 in a browser to confirm the page is reachable.
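The same check can be done from the shell, which also writes a first entry into the access log (a sketch; the IP is taken from the host list above):

echo "this is nginx" > /usr/local/nginx/html/index.html
curl http://192.168.124.40                                # should return "this is nginx"
tail -n 1 /usr/local/nginx/logs/access.log                # confirm the request was logged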

Edit the Filebeat configuration file:

vim filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  tags: ["access"]
  fields:
    service_name: 192.168.124.40_nginx
    log_type: nginx
    from: 192.168.124.40
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  tags: ["error"]
  fields:
    service_name: 192.168.124.40_nginx
    log_type: nginx
    from: 192.168.124.40

output.kafka:
  enabled: true
  hosts: ["192.168.124.50:9092","192.168.124.51:9092","192.168.124.60:9092"]
  topic: "nginx"
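Before starting Filebeat, the configuration and the Kafka output can be sanity-checked from the Filebeat install directory:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml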

Run Filebeat:

nohup ./filebeat -e -c filebeat.yml > filebeat.out 2>&1 &
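To confirm that events actually reach the topic, consume it on one of the Kafka nodes; a sketch, assuming Kafka is installed under /usr/local/kafka:

/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.124.50:9092 --topic nginx --from-beginning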

On host 192.168.124.30, create the Logstash configuration file:

cd /opt/log
vim kafka.conf

input {
  kafka {
    bootstrap_servers => "192.168.124.50:9092,192.168.124.51:9092,192.168.124.60:9092"
    topics => "nginx"
    type => "nginx_kafka"
    codec => "json"
    # parse the JSON-formatted messages
    auto_offset_reset => "earliest"
    # start from the earliest offset; the alternative is "latest"
    decorate_events => true
    # include Kafka metadata with the events passed on to the ES instances
  }
}
output {
  if "access" in [tags] {
    elasticsearch {
      hosts => ["192.168.124.10:9200","192.168.124.20:9200"]
      index => "nginx_access-%{+YYYY.MM.dd}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts => ["192.168.124.10:9200","192.168.124.20:9200"]
      index => "nginx_error-%{+YYYY.MM.dd}"
    }
  }
}
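Validate and start the pipeline; a sketch, assuming Logstash is installed under /usr/share/logstash (adjust the path for your install):

/usr/share/logstash/bin/logstash -f /opt/log/kafka.conf --config.test_and_exit   # syntax check only
/usr/share/logstash/bin/logstash -f /opt/log/kafka.conf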

 

Open http://192.168.124.30:5601 in a browser and log in to Kibana.
Click "Create Index Pattern", add the index patterns "nginx_access-*" and "nginx_error-*", and click "Create".
Click "Discover" to view the charts and the log entries.
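If Kibana shows no data, first confirm that the indices exist on the ES nodes:

curl -s "http://192.168.124.10:9200/_cat/indices?v" | grep nginx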

