Notes on a Druid + log4j2 pitfall, no solution found yet
Right after each deployment the info log is written normally. The next day, once the old log has been rolled over and compressed and a new info log file has been created, the application's own log output no longer shows up in it.
The only thing that keeps being written is the Druid exception below; everything else in the application works fine.
The Druid exception: the URL (172.28.186.170) is my local machine. I had already commented it out and replaced it with the address of the test environment, yet the error still appears every 45 seconds. The day after deployment the application's API calls all work normally, but the logs for those calls are never written.
22:54:47.762 [Druid-ConnectionPool-Create-1889457907] ERROR com.alibaba.druid.pool.DruidDataSource - create connection error, url: jdbc:sqlserver://172.28.186.170;DatabaseName=testsqlserver, errorCode 0, state 08S01
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host 172.28.186.170, port 1433 has failed. Error: "connect timed out. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:234) ~[mssql-jdbc-7.4.1.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerException.ConvertConnectExceptionToSQLServerException(SQLServerException.java:285) ~[mssql-jdbc-7.4.1.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SocketFinder.findSocket(IOBuffer.java:2431) ~[mssql-jdbc-7.4.1.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.TDSChannel.open(IOBuffer.java:656) ~[mssql-jdbc-7.4.1.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2472) ~[mssql-jdbc-7.4.1.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:2142) ~[mssql-jdbc-7.4.1.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1993) ~[mssql-jdbc-7.4.1.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:1164) ~[mssql-jdbc-7.4.1.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:760) ~[mssql-jdbc-7.4.1.jre8.jar:?]
at com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1408) ~[druid-1.0.18.jar:1.0.18]
at com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1464) ~[druid-1.0.18.jar:1.0.18]
at com.alibaba.druid.pool.DruidDataSource$CreateConnectionThread.run(DruidDataSource.java:1969) [druid-1.0.18.jar:1.0.18]
22:55:32.764 [Druid-ConnectionPool-Create-1889457907] ERROR com.alibaba.druid.pool.DruidDataSource - create connection error, url: jdbc:sqlserver://172.28.186.170;DatabaseName=testsqlserver, errorCode 0, state 08S01
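For context, these repeated errors come from Druid's background CreateConnectionThread, which keeps trying to open a physical connection to the unreachable host. Note also that the trace shows druid-1.0.18.jar on the classpath even though the POM below declares druid-spring-boot-starter 1.1.10, so another dependency is probably pulling in the older Druid core (mvn dependency:tree would confirm). Below is a minimal sketch of the retry-related knobs, written as plain DruidDataSource setters rather than this project's actual wiring (with the starter they would normally be bound from spring.datasource.druid.* properties; verify the setter names against the Druid version actually in use):

import com.alibaba.druid.pool.DruidDataSource;

// Illustrative sketch only, not the project's real configuration.
public class DruidRetrySketch {

    public static DruidDataSource dataSource() {
        DruidDataSource ds = new DruidDataSource();
        // <db-host> is a placeholder; the original post already replaced the real address.
        ds.setUrl("jdbc:sqlserver://<db-host>;DatabaseName=testsqlserver");

        // If a physical connection cannot be created, give up after a fixed number
        // of attempts instead of letting CreateConnectionThread retry indefinitely.
        ds.setBreakAfterAcquireFailure(true);
        ds.setConnectionErrorRetryAttempts(3);

        // Pause between failed create attempts, in milliseconds.
        ds.setTimeBetweenConnectErrorMillis(60_000);

        return ds;
    }
}

This only affects how aggressively the pool retries the stale data source; it does not explain why the application's own info output stops after the nightly rollover.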
Related dependencies
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid-spring-boot-starter</artifactId>
<version>1.1.10</version>
</dependency>
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<exclusions><!-- exclude Spring Boot's default logging -->
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
Configuration file
log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- The status attribute on Configuration controls log4j2's own internal status output. It is optional; set it to trace to see detailed log4j2-internal output -->
<!-- monitorInterval: log4j2 can detect changes to this file and reconfigure itself automatically; the value is the check interval in seconds -->
<configuration monitorInterval="5">
<!-- Log levels in priority order: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
<!-- Property definitions -->
<Properties>
<!-- Output format: %date is the date, %thread the thread name, %-5level pads the level to 5 characters, %msg is the log message, %n is a newline -->
<!-- %logger{36} truncates the logger name to at most 36 characters -->
<property name="LOG_PATTERN" value="%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" />
<!-- Log file location -->
<property name="FILE_PATH" value="log/logs" />
<property name="FILE_NAME" value="CtripBusiness" />
<property name="FILE_SIZE" value="50 MB" />
</Properties>
<appenders>
<console name="Console" target="SYSTEM_OUT">
<!-- Output pattern -->
<PatternLayout pattern="${LOG_PATTERN}"/>
<!-- The console only outputs events at this level and above (onMatch); everything else is rejected (onMismatch) -->
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
</console>
<!-- This File appender would log everything; with append="false" the file is cleared on every run, so it is only suitable for temporary testing -->
<!--<File name="Filelog" fileName="${FILE_PATH}/test.log" append="false">-->
<!--<PatternLayout pattern="${LOG_PATTERN}"/>-->
<!--</File>-->
<!-- Logs everything at info level and above; when the file exceeds ${FILE_SIZE} it is rolled over, compressed (.gz) and kept as an archive according to filePattern -->
<RollingFile name="RollingFileInfo"
fileName="${FILE_PATH}/info.log"
filePattern="${FILE_PATH}/${FILE_NAME}-INFO-%d{yyyy-MM-dd}_%i.log.gz">
<!-- Only accept events at this level and above (onMatch); reject everything else (onMismatch) -->
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!-- interval specifies how many time units elapse between rollovers; the unit comes from the smallest date unit in filePattern (daily here) and the default is 1 -->
<TimeBasedTriggeringPolicy interval="1"/>
<SizeBasedTriggeringPolicy size="${FILE_SIZE}"/>
</Policies>
<!-- If DefaultRolloverStrategy is not set, at most 7 archive files are kept per rollover period before older ones are overwritten -->
<DefaultRolloverStrategy max="15"/>
</RollingFile>
<!-- Logs everything at warn level and above; when the file exceeds ${FILE_SIZE} it is rolled over, compressed (.gz) and kept as an archive according to filePattern -->
<RollingFile name="RollingFileWarn" fileName="${FILE_PATH}/warn.log" filePattern="${FILE_PATH}/${FILE_NAME}-WARN-%d{yyyy-MM-dd}_%i.log.gz">
<!-- Only accept events at this level and above (onMatch); reject everything else (onMismatch) -->
<ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!-- interval specifies how many time units elapse between rollovers; the unit comes from the smallest date unit in filePattern (daily here) and the default is 1 -->
<TimeBasedTriggeringPolicy interval="1"/>
<SizeBasedTriggeringPolicy size="${FILE_SIZE}"/>
</Policies>
<!-- If DefaultRolloverStrategy is not set, at most 7 archive files are kept per rollover period before older ones are overwritten -->
<DefaultRolloverStrategy max="15"/>
</RollingFile>
<!-- Would log everything at error level and above; rolled over and compressed once it exceeds ${FILE_SIZE} -->
<!--<RollingFile name="RollingFileError" fileName="${FILE_PATH}/error.log" filePattern="${FILE_PATH}/${FILE_NAME}-ERROR-%d{yyyy-MM-dd}_%i.log.gz">-->
<!--<!– Only accept events at this level and above (onMatch); reject everything else (onMismatch) –>-->
<!--<ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/>-->
<!--<PatternLayout pattern="${LOG_PATTERN}"/>-->
<!--<Policies>-->
<!--<!– interval specifies how many time units elapse between rollovers; the default is 1 –>-->
<!--<TimeBasedTriggeringPolicy interval="1"/>-->
<!--<SizeBasedTriggeringPolicy size="${FILE_SIZE}"/>-->
<!--</Policies>-->
<!--<!– If DefaultRolloverStrategy is not set, at most 7 archive files are kept before older ones are overwritten –>-->
<!--<DefaultRolloverStrategy max="15"/>-->
<!--</RollingFile>-->
<!-- Appender for Druid SQL logging -->
<RollingFile name="druidSqlRollingFile" fileName="${FILE_PATH}/druid-sql.log"
filePattern="logs/$${date:yyyy-MM}/api-%d{yyyy-MM-dd}-%i.log.gz">
<PatternLayout pattern="[%d{yyyy-MM-dd HH:mm:ss}] %-5level %L %M - %msg%xEx%n"/>
<Policies>
<SizeBasedTriggeringPolicy size="500 MB"/>
<TimeBasedTriggeringPolicy/>
</Policies>
</RollingFile>
</appenders>
<!-- Logger elements configure logging for specific packages/classes, e.g. to give them their own level -->
<!-- An appender only takes effect once it is referenced from a logger (or the root logger) -->
<loggers>
<!-- Enable SQL statement logging (java.sql) -->
<logger name="java.sql" level="debug" additivity="false">
<appender-ref ref="Console"/>
</logger>
<!-- Filter out noisy DEBUG output from Spring and MyBatis -->
<logger name="org.mybatis" level="info" additivity="false">
<AppenderRef ref="Console"/>
</logger>
<!-- Framework logging -->
<!-- With additivity="false", a child logger writes only to its own appenders and does not propagate to the parent logger's appenders -->
<Logger name="org.springframework" level="info" additivity="false">
<AppenderRef ref="Console"/>
</Logger>
<root level="info">
<appender-ref ref="Console"/>
<!--<appender-ref ref="Filelog"/>-->
<appender-ref ref="RollingFileInfo" level="info"/>
<appender-ref ref="RollingFileWarn" level="warn"/>
<!--<appender-ref ref="RollingFileError"/>-->
</root>
<!-- Druid SQL statement logging -->
<logger name="druid.sql.Statement" level="warn" additivity="false">
<appender-ref ref="druidSqlRollingFile"/>
</logger>
<logger name="druid.sql.Statement" level="warn" additivity="false">
<appender-ref ref="druidSqlRollingFile"/>
</logger>
<!-- Spring logging (already configured above; a second Logger with the same name would be reported as a duplicate) -->
<!--<Logger name="org.springframework" level="info"/>-->
<!-- Druid data source logging -->
<!--<Logger name="druid.sql.Statement" level="warn"/>-->
<!-- MyBatis logging -->
<Logger name="com.mybatis" level="warn"/>
<!--<Logger name="org.hibernate" level="warn"/>-->
<!--<Logger name="com.zaxxer.hikari" level="info"/>-->
<Logger name="org.quartz" level="info"/>
<!--<Logger name="com.andya.demo" level="debug"/>-->
</loggers>
</configuration>
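Since the symptom is that the info file stops receiving application output only after the nightly rollover, one thing worth checking is whether the RollingFileInfo appender is still attached to the root logger, and still started, once the new info.log has been created. Here is a minimal diagnostic sketch using the Log4j2 core API; the class name and the place it gets called from (for example a temporary debug endpoint invoked before and after midnight) are illustrative and not part of the original project:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;

// Prints which appenders are currently attached to the root logger and whether they are started.
public class Log4j2AppenderCheck {

    public static void dumpRootAppenders() {
        LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
        Configuration cfg = ctx.getConfiguration();
        for (Appender appender : cfg.getRootLogger().getAppenders().values()) {
            System.out.println(appender.getName()
                    + " -> " + appender.getClass().getSimpleName()
                    + ", started=" + appender.isStarted());
        }
    }
}

If RollingFileInfo is missing or stopped after the rollover while Console is still present, the problem lies in the appender or the rollover itself; if it is still attached and started, the events are more likely being filtered out or the file is being touched by something outside Log4j2.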
In the end my workaround was to add a separate warn log file and route the project's log output to both the info and the warn files. The warn log now behaves normally, but the info log still shows the same problem as before.