Implementing Log Rolling for Flink on YARN

In Flink on YARN mode, a logback.xml/log4j.properties bundled inside the application itself is not loaded; by default, Flink uses the logback.xml/log4j.properties under $FLINK_HOME/conf as the unified logger definition file. The logback.xml/log4j.properties shipped with Flink only configures the rootLogger, so without modification the logs of every job running on the cluster are written to the single location the rootLogger points at. Over time the log volume grows very large, loading the logs becomes extremely slow, and the log page in the web UI can even fail outright.
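For reference, the stock $FLINK_HOME/conf/logback.xml looks roughly like the sketch below (details vary across Flink versions, and the per-library loggers are omitted here): a plain, non-rolling FileAppender bound to the rootLogger, which is exactly why the file grows without bound.

<configuration>
    <!-- Stock behavior (approximate): one non-rolling file, no size or time limits -->
    <appender name="file" class="ch.qos.logback.core.FileAppender">
        <file>${log.file}</file>
        <append>false</append>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{60} %X{sourceThread} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="file"/>
    </root>
</configuration>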

Logging can therefore be customized by modifying the $FLINK_HOME/conf/logback.xml file (logback.xml is the officially recommended option).

I. Implementation Steps

1. Modify $FLINK_HOME/conf/logback.xml

Set the appender to RollingFileAppender and configure whatever rolling policy you want (the following is only an example; tailor the settings to your needs).

<!--
 ~ Licensed to the Apache Software Foundation (ASF) under one
 ~ or more contributor license agreements. See the NOTICE file
 ~ distributed with this work for additional information
 ~ regarding copyright ownership. The ASF licenses this file
 ~ to you under the Apache License, Version 2.0 (the
 ~ "License"); you may not use this file except in compliance
 ~ with the License. You may obtain a copy of the License at
 ~
 ~ http://www.apache.org/licenses/LICENSE-2.0
 ~
 ~ Unless required by applicable law or agreed to in writing, software
 ~ distributed under the License is distributed on an "AS IS" BASIS,
 ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 ~ See the License for the specific language governing permissions and
 ~ limitations under the License.
 -->

<configuration>
    <!-- Generate log files with daily rolling -->
    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.file}</file>
        <append>false</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- Roll the log daily -->
            <fileNamePattern>${log.file}_%d{yyyy-MM-dd}.%i</fileNamePattern>
            <!-- At most 50MB per file, keep 7 days of history, cap the total size at 2GB -->
            <maxFileSize>50MB</maxFileSize>
            <maxHistory>7</maxHistory>
            <totalSizeCap>2GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{60} %X{sourceThread} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This affects logging for both user code and Flink -->
    <root level="INFO">
        <appender-ref ref="file"/>
    </root>

    <!-- Uncomment this if you want to only change Flink's logging -->
    <!--<logger name="org.apache.flink" level="INFO">-->
        <!--<appender-ref ref="file"/>-->
    <!--</logger>-->

    <!-- The following lines keep the log level of common libraries/connectors on
         log level INFO. The root logger does not override this. You have to manually
         change the log levels here. -->
    <logger name="akka" level="INFO">
        <appender-ref ref="file"/>
    </logger>
    <logger name="org.apache.kafka" level="INFO">
        <appender-ref ref="file"/>
    </logger>
    <logger name="org.apache.hadoop" level="INFO">
        <appender-ref ref="file"/>
    </logger>
    <logger name="org.apache.zookeeper" level="INFO">
        <appender-ref ref="file"/>
    </logger>

    <!-- Suppress the irrelevant (wrong) warnings from the Netty channel handler -->
    <logger name="org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline" level="ERROR">
        <appender-ref ref="file"/>
    </logger>
</configuration>
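Note that ${log.file} is not defined in this file: Flink supplies it at runtime via the -Dlog.file system property, pointing at the container's log file inside the YARN container log directory, so the rolled files land next to the active log. With the fileNamePattern above you would see files such as the following (the date is purely illustrative; %i increments each time a 50MB chunk fills up within the same day):

    taskmanager.log                 <- active file, written by the appender
    taskmanager.log_2024-01-15.0    <- first chunk rolled on that day
    taskmanager.log_2024-01-15.1    <- second chunk, after the first reached 50MB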

2. Add the jars that logback.xml depends on (make sure the versions match what you actually use)

    Copy log4j-over-slf4j-1.7.25.jar, logback-classic-1.2.3.jar, and logback-core-1.2.3.jar into $FLINK_HOME/lib.
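If these jars are not already at hand, they can be taken from the application's dependency cache or downloaded from Maven Central. For reference, the coordinates below correspond to the three file names above (the versions shown are the ones named in this article; align them with your project):

    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>log4j-over-slf4j</artifactId>
        <version>1.7.25</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>1.2.3</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-core</artifactId>
        <version>1.2.3</version>
    </dependency>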

At this point the configuration is complete. Start the Flink job as you normally would.

二、注意事项

    1. This approach is only suitable for the single-job-per-cluster (per-job) deployment mode, because $FLINK_HOME/conf/logback.xml is shared by every job launched from the same Flink installation.

    2. The logback-related jars under $FLINK_HOME/lib/ must be the same version as the logback jars used in your application project; otherwise the two can conflict at runtime.
