Implementing Log Rolling for Flink on YARN

In Flink on YARN mode, a logback.xml/log4j.properties bundled inside the application is not loaded; Flink instead uses the files under its installation directory, $FLINK_HOME/conf/logback.xml or $FLINK_HOME/conf/log4j.properties, as the cluster-wide logger definitions.
The logback.xml/log4j.properties shipped with Flink only configure the rootLogger, so without modification the logs of every job running on the cluster are written to the destination the rootLogger points to. Over time that log file grows very large, making it slow to load in the web UI and sometimes crashing the page outright.

Logging can therefore be customized by editing the $FLINK_HOME/conf/logback.xml file (the official documentation recommends using logback.xml).

I. Implementation steps

1. Edit $FLINK_HOME/conf/logback.xml

Set the appender class to RollingFileAppender and configure the rolling policy you want (the configuration below is only an example; tailor it to your own requirements).

<!--
 ~ Licensed to the Apache Software Foundation (ASF) under one
 ~ or more contributor license agreements. See the NOTICE file
 ~ distributed with this work for additional information
 ~ regarding copyright ownership. The ASF licenses this file
 ~ to you under the Apache License, Version 2.0 (the
 ~ "License"); you may not use this file except in compliance
 ~ with the License. You may obtain a copy of the License at
 ~
 ~ http://www.apache.org/licenses/LICENSE-2.0
 ~
 ~ Unless required by applicable law or agreed to in writing, software
 ~ distributed under the License is distributed on an "AS IS" BASIS,
 ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 ~ See the License for the specific language governing permissions and
 ~ limitations under the License.
 -->

<configuration>
    <!-- Roll the log file daily (and by size) -->
    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.file}</file>
        <append>false</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- Roll over once per day -->
            <fileNamePattern>${log.file}_%d{yyyy-MM-dd}.%i</fileNamePattern>
            <!-- At most 50 MB per file, keep 7 days of history, cap the total at 2 GB -->
            <maxFileSize>50MB</maxFileSize>
            <maxHistory>7</maxHistory>
            <totalSizeCap>2GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{60} %X{sourceThread} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This affects logging for both user code and Flink -->
    <root level="INFO">
        <appender-ref ref="file"/>
    </root>

    <!-- Uncomment this if you want to only change Flink's logging -->
    <!--<logger name="org.apache.flink" level="INFO">-->
        <!--<appender-ref ref="file"/>-->
    <!--</logger>-->

    <!-- The following lines keep the log level of common libraries/connectors on
         log level INFO. The root logger does not override this. You have to manually
         change the log levels here. -->
    <logger name="akka" level="INFO">
        <appender-ref ref="file"/>
    </logger>
    <logger name="org.apache.kafka" level="INFO">
        <appender-ref ref="file"/>
    </logger>
    <logger name="org.apache.hadoop" level="INFO">
        <appender-ref ref="file"/>
    </logger>
    <logger name="org.apache.zookeeper" level="INFO">
        <appender-ref ref="file"/>
    </logger>

    <!-- Suppress the irrelevant (wrong) warnings from the Netty channel handler -->
    <logger name="org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline" level="ERROR">
        <appender-ref ref="file"/>
    </logger>
</configuration>
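With the configuration above, the root logger sends everything to a single rolling file. If you additionally want the logs of one application package in a file of its own, standard logback mechanics can be applied inside the same `<configuration>` element; a hypothetical sketch, where the appender name `jobFile` and the package `com.example.myjob` are placeholders to replace with your own:

```xml
<!-- Hypothetical: route one application package to its own rolling file,
     alongside the "file" appender defined above. -->
<appender name="jobFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${log.file}.myjob</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>${log.file}.myjob_%d{yyyy-MM-dd}.%i</fileNamePattern>
        <maxFileSize>50MB</maxFileSize>
        <maxHistory>7</maxHistory>
        <totalSizeCap>2GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
        <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{60} - %msg%n</pattern>
    </encoder>
</appender>
<!-- additivity="false" stops these events from also reaching the root logger's file -->
<logger name="com.example.myjob" level="INFO" additivity="false">
    <appender-ref ref="jobFile"/>
</logger>
```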

2. Add the jars that logback.xml depends on (make sure the version numbers match what you actually use)

    Copy log4j-over-slf4j-1.7.25.jar, logback-classic-1.2.3.jar, and logback-core-1.2.3.jar into $FLINK_HOME/lib.
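These three jars correspond to the following Maven coordinates (the versions shown are the ones used in this article; adjust them to match your environment), which can be used to fetch the artifacts from a Maven repository:

```xml
<!-- Bridges log4j 1.x API calls onto SLF4J -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>log4j-over-slf4j</artifactId>
    <version>1.7.25</version>
</dependency>
<!-- The logback implementation -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.2.3</version>
</dependency>
```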

At this point the configuration is complete; start your Flink job as usual.

II. Caveats

    1. This approach only suits the one-job-per-cluster (per-job) deployment mode.

    2. The logback jars under $FLINK_HOME/lib/ must be the same version as the logback jars used by the business project.
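One way to keep those versions aligned, assuming the business project is built with Maven, is to declare the exact version shipped in $FLINK_HOME/lib and scope it `provided`, so that only the cluster's copy ends up on the runtime classpath:

```xml
<!-- In the business project's pom.xml: pin the version that sits in $FLINK_HOME/lib -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>provided</scope>
</dependency>
```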
