Integrating Spring Boot with ELK for Log Management

Preparing the Environment

The entire ELK stack is deployed on Docker. The system and software versions used are:

  • OS: macOS Sonoma 14.5
  • Docker Engine: 24.0.6
  • Docker Compose: v2.22.0-desktop.2
  • Elasticsearch: 8.14.1
  • Kibana: 8.14.1
  • Logstash: 8.14.1
  • Spring Boot: 2.5.15

This setup uses elasticsearch, kibana, and logstash; pull the images with Docker:

docker pull elasticsearch:8.14.1

docker pull kibana:8.14.1

docker pull logstash:8.14.1
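
Once the pulls finish, a quick check confirms all three images are present locally:

docker images | grep -E "elasticsearch|kibana|logstash"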

Preparing the Configuration Files

├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml # ES config file
│   ├── data    # ES data files
│   └── plugins # ES plugins
├── kibana
│   ├── config
│   │   └── kibana.yml    # Kibana config file
└── logstash
    ├── config
    │   └── logstash.yml  # Logstash settings
    ├── data
    └── pipeline
        └── logstash.conf # Logstash pipeline config
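
A minimal sketch for creating this skeleton up front. The Elasticsearch container runs as UID 1000, so on a Linux host the bind-mounted data and plugins directories must be writable by that user (Docker Desktop on macOS usually handles this for you):

mkdir -p elasticsearch/{config,data,plugins} kibana/config logstash/{config,data,pipeline}
# Linux hosts: let the in-container user (UID 1000) write to the ES directories
sudo chown -R 1000:1000 elasticsearch/data elasticsearch/plugins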

The configuration files are as follows:

# elasticsearch.yml

# Auto-generated defaults below; change as needed
cluster.name: "docker-cluster"
network.host: 0.0.0.0
# kibana.yml

server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
monitoring.ui.container.elasticsearch.enabled: true

# Point Kibana at Elasticsearch over the Docker network
elasticsearch.hosts: [ "http://elasticsearch:9200" ]

# Display the Kibana UI in Chinese
i18n.locale: zh-CN
# logstash.yml

http.host: "0.0.0.0"

# Point monitoring at Elasticsearch over the Docker network
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
# logstash.conf
input {
	tcp {
		# Accept logs from any host
		host => "0.0.0.0"
		# Port Logstash listens on
		port => 5044
		# LogstashEncoder ships newline-delimited JSON; parse it into fields
		codec => json_lines
	}
}
filter { }
output {
	elasticsearch {
		# Which server(s) to ship logs to (add more for a cluster)
		hosts => ["http://elasticsearch:9200"]
		# Index name (used to find the logs in Kibana)
		index => "application-logs-%{+YYYY-MM}"
	}
}

Configuration notes:

input: the source of the logs; port 5044 is exposed to receive logs from Spring Boot
filter: log filtering; left empty for now
output: where the logs are sent; they are stored in elasticsearch, and the index option sets the index name to application-logs-%{+YYYY-MM}

Running the Docker Images
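
All three containers join a user-defined bridge network named es-net so they can reach each other by container name; create it once before starting anything:

docker network create es-net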

Run Elasticsearch:

docker run -d \
--name elasticsearch \
-e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
-e "discovery.type=single-node" \
-e "xpack.security.enabled=false" \
-v $(pwd)/elasticsearch/data:/usr/share/elasticsearch/data \
-v $(pwd)/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-v $(pwd)/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
--privileged \
--network es-net \
-p 9200:9200 \
-p 9300:9300 \
elasticsearch:8.14.1
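
With security disabled, Elasticsearch answers unauthenticated requests; a quick check that it is up:

# should return a JSON document with the cluster name and version
curl http://localhost:9200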

Run Kibana:

docker run -d \
--name kibana \
-v $(pwd)/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml \
--network es-net \
-p 5601:5601 \
kibana:8.14.1

Run Logstash:

docker run -d \
--name logstash \
-v $(pwd)/logstash/data/:/usr/share/logstash/data \
-v $(pwd)/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml \
-v $(pwd)/logstash/pipeline/:/usr/share/logstash/pipeline \
--privileged \
--network es-net \
-p 5044:5044 \
-p 9600:9600 \
logstash:8.14.1
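
Logstash exposes a monitoring API on port 9600; you can confirm the pipeline loaded and, optionally, push a test event through the TCP input (the nc flags below assume the BSD netcat that ships with macOS):

# confirm the pipeline from logstash.conf is running
curl http://localhost:9600/_node/pipelines?pretty

# send one newline-delimited JSON event into the tcp input
echo '{"message": "pipeline smoke test"}' | nc -w 1 localhost 5044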

Flag reference

  • -d: run the container in the background (detached mode).
  • --name: give the container a name, here elasticsearch.
  • -e: set environment variables.
    • ES_JAVA_OPTS=-Xms512m -Xmx512m: set the JVM heap size.
    • "discovery.type=single-node": run Elasticsearch in single-node mode.
    • "xpack.security.enabled=false": disable X-Pack security.
  • -v: bind-mount a host directory into the container; the host path comes first and must be absolute.
  • --privileged: run the container in privileged mode, granting it extra permissions (not recommended in production).
  • --network: attach the container to a named Docker network.
  • -p: map ports; the host port comes first.

Running with Docker Compose

# docker-compose.yaml

version: "3"

services:
  kibana:
    image: kibana:8.14.1
    restart: on-failure:3
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      es-net:
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5601"]
      interval: 30s
      timeout: 5s
      retries: 5
      start_period: 30s  
  logstash:
    image: logstash:8.14.1
    restart: on-failure:3
    networks:
      es-net:
    volumes:
      - ./logstash/data/:/usr/share/logstash/data
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline/:/usr/share/logstash/pipeline
    ports:
      - "5044:5044"
      - "9600:9600"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9600"]
      interval: 10s
      timeout: 5s
      retries: 5
  elasticsearch:
    image: elasticsearch:8.14.1
    restart: on-failure:3
    networks:
      es-net:
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9200"]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
      - xpack.security.enabled=false
networks:
  es-net:

Run it from the directory containing docker-compose.yaml:

docker-compose up -d
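
The health checks take a little while on first start; watch them with:

# all three services should eventually report a healthy status
docker compose ps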

Configuring Spring Boot

[!info] This walkthrough builds on the open-source ruoyi-vue project, adding logstash support on top of it for log shipping

Add dependency management to the project's parent pom.xml:

<properties>  
    <logstash.logback.version>7.3</logstash.logback.version>  
</properties>

<!-- Dependency declarations -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>${logstash.logback.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>

Add the dependency to the common module's pom.xml:

<dependencies>
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
    </dependency>
</dependencies>
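
If you want to confirm the version was picked up from the parent, the dependency tree can be filtered (run from the common module):

mvn dependency:tree -Dincludes=net.logstash.logback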

Configure the logback.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <!-- Ship logs to Logstash -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Address of the Logstash TCP input -->
        <destination>192.168.0.120:5044</destination>
        <!-- LogstashEncoder emits JSON (UTF-8 by default) -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <root level="info">
        <appender-ref ref="logstash" />
    </root>
</configuration>
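
LogstashTcpSocketAppender drops events it cannot deliver, so a wrong address fails quietly. Before starting the app, it is worth checking that the Logstash port is reachable from the machine running Spring Boot (the IP matches the destination above):

nc -vz 192.168.0.120 5044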

For a quick test, open CaptchaController (reachable without logging in) and print a couple of log lines:

@RestController
public class CaptchaController
{
    private static final Logger log = LoggerFactory.getLogger(CaptchaController.class);

    @GetMapping("/captchaImage")
    public AjaxResult getCode(HttpServletResponse response) throws IOException
    {
        log.info("Generating captcha");
        log.error("Generating captcha");
        return AjaxResult.success();
    }
}

Now run the project. Once it has started, hit the captcha endpoint; a few requests are fine:

http://localhost:8080/captchaImage
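
Instead of a browser, a shell loop works just as well for generating a handful of log entries (assuming the app's default port 8080):

for i in 1 2 3 4 5; do
  curl -s http://localhost:8080/captchaImage > /dev/null
done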

(Screenshot: CleanShot_20240718161731.png)

As the screenshot shows, there are both INFO and ERROR entries, matching the two log statements in the code.

Viewing the Logs in ELK

First, open Kibana, which is available locally at http://localhost:5601/app/home#/ by default. The remaining steps are shown in the screenshots below.
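
Before creating a data view in Kibana, you can confirm that Logstash has actually created the index:

curl "http://localhost:9200/_cat/indices/application-logs-*?v"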

(Screenshots: kibana-1.png through kibana-5.png)

That completes the Spring Boot + ELK integration!