OOM error description:

2018-09-18 14:46:54.338 [http-nio-8099-exec-8] ERROR o.a.c.c.C.[.[.[.[dispatcherServlet] - Servlet.service() for servlet [dispatcherServlet] in context with path [/party-data-center] threw exception [Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: GC overhead limit exceeded] with root cause

java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.bson.io.ByteBufferBsonInput.readString(ByteBufferBsonInput.java:154)
    at org.bson.io.ByteBufferBsonInput.readString(ByteBufferBsonInput.java:126)
    at org.bson.BsonBinaryReader.doReadString(BsonBinaryReader.java:245)
    at org.bson.AbstractBsonReader.readString(AbstractBsonReader.java:461)
    at org.bson.codecs.BsonStringCodec.decode(BsonStringCodec.java:31)
    at org.bson.codecs.BsonStringCodec.decode(BsonStringCodec.java:28)
    at org.bson.codecs.BsonArrayCodec.readValue(BsonArrayCodec.java:102)
    at org.bson.codecs.BsonArrayCodec.decode(BsonArrayCodec.java:67)
    at org.bson.codecs.BsonArrayCodec.decode(BsonArrayCodec.java:37)
    at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
    at org.bson.codecs.configuration.LazyCodec.decode(LazyCodec.java:47)
    at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
    at org.bson.codecs.configuration.LazyCodec.decode(LazyCodec.java:47)
    at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
    at org.bson.codecs.configuration.LazyCodec.decode(LazyCodec.java:47)
    at org.bson.codecs.BsonArrayCodec.readValue(BsonArrayCodec.java:102)
    at org.bson.codecs.BsonArrayCodec.decode(BsonArrayCodec.java:67)
    at org.bson.codecs.BsonArrayCodec.decode(BsonArrayCodec.java:37)
    at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
    at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
    at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
    at com.mongodb.connection.ReplyMessage.<init>(ReplyMessage.java:51)
    at com.mongodb.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:301)
From the information above, it looks like a MongoDB query loaded too much data into memory, which caused the OOM.
Export a heap dump and analyze it. After opening the file in MAT, there is a "Problem Suspect 1" (the component most likely responsible for the memory overflow):
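The dump itself is usually taken with `jmap -dump:live,format=b,file=heap.hprof <pid>`, or by starting the JVM with `-XX:+HeapDumpOnOutOfMemoryError`. As a minimal self-contained sketch (assuming a HotSpot JVM; the output path here is illustrative), the same dump can also be triggered programmatically through the HotSpot diagnostic MBean:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Target file must not already exist; newer JDKs also require a .hprof suffix.
        Path out = Files.createTempDirectory("oom-analysis").resolve("heap.hprof");

        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        // live = true dumps only reachable objects, matching jmap's "live" option.
        diag.dumpHeap(out.toString(), true);

        System.out.println(Files.size(out) > 0);
    }
}
```

The resulting `heap.hprof` is what gets loaded into MAT, whose Leak Suspects report then produces the "Problem Suspect" summary shown below.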
The thread org.apache.tomcat.util.threads.TaskThread @ 0xf9b19fa0 http-nio-8099-exec-8 keeps local variables with total size 58,255,056 (60.49%) bytes.

The memory is accumulated in one instance of "java.lang.Object[]" loaded by "<system class loader>".

The stacktrace of this Thread is available. See stacktrace.

Keywords
java.lang.Object[]

Details »
Click "See stacktrace".
There is still a lot of information here, so work through it patiently. We find:
    at com.mongodb.DB.command(Lcom/mongodb/DBObject;Lcom/mongodb/ReadPreference;Lcom/mongodb/DBEncoder;)Lcom/mongodb/CommandResult; (DB.java:496)
    at com.mongodb.DB.command(Lcom/mongodb/DBObject;Lcom/mongodb/ReadPreference;)Lcom/mongodb/CommandResult; (DB.java:512)
    at com.mongodb.DB.command(Lcom/mongodb/DBObject;)Lcom/mongodb/CommandResult; (DB.java:467)
So the error happens while executing a Mongo command, and the return type is CommandResult — isn't that the result set of a Mongo query? Could the returned result set be too large? Very likely! Keep reading down the stack...
    at com.fosung.data.party.dao.DetailDao.detailQuery(Lcom/fosung/data/party/dto/PartyItemDto;)Lcom/fosung/data/party/vo/OutDetailCountVo; (DetailDao.java:314)
    at com.fosung.data.party.dao.DetailDao$$FastClassBySpringCGLIB$$caf49f16.invoke(ILjava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; (Unknown Source)
Here we finally reach our own business code, so this method is very likely what caused the OOM. After carefully analyzing the business method, we found the root cause: when computing the total count, the query had no paging conditions (skip and limit), so it fetched every matching record (more than 60,000 of them) and loaded them all into memory, which triggered the OOM.
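The failure mode can be illustrated with a small stand-alone sketch (the cursor and the 60,000-record count are simulated here; the post does not show DetailDao's actual query code). Counting by materializing every matching record holds all of them in memory at once, while counting as a stream holds only one at a time — which is in effect what pushing the count into the database achieves:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class CountSketch {
    // Stand-in for a cursor over the ~60,000 records matching the query.
    static Stream<String> matchingRecords() {
        return IntStream.range(0, 60_000).mapToObj(i -> "record-" + i);
    }

    // Problematic pattern: load every record into a list just to read its size.
    static long countByLoadingAll() {
        List<String> all = matchingRecords().collect(Collectors.toList()); // all 60k held in memory
        return all.size();
    }

    // Counting without materializing: one record at a time, constant memory.
    static long countAsStream() {
        return matchingRecords().count();
    }

    public static void main(String[] args) {
        System.out.println(countByLoadingAll()); // 60000
        System.out.println(countAsStream());     // 60000
    }
}
```

Both return the same number; the difference is only the peak memory footprint, which is exactly what matters under GC-overhead OOM.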
Solution: use a MongoDB aggregation pipeline so the server itself returns the total count of matching documents.
db.getCollection('user_order').aggregate([
    { "$match" : { "code" : "100002255842358" } },
    { "$project" : { "code" : 1, "yearInfo" : 1, "personInfo" : 1 } },
    { "$unwind" : "$yearInfo.counts" },
    { "$unwind" : "$yearInfo.counts.code" },
    { "$match" : { "yearInfo.counts.code" : { "$in" : [ "1" ] } } },
    { "$sort" : { "code" : 1, "yearInfo.counts.sort" : 1 } },
    { "$lookup" : { "from" : "user_info", "localField" : "yearInfo.counts.detail", "foreignField" : "_id", "as" : "personInfo" } },
    { "$unwind" : "$personInfo" },
    { "$group" : { "_id" : null, "totalCount" : { "$sum" : 1 } } },
    { "$project" : { "totalCount" : "$totalCount", "_id" : 0 } }
])
This way we no longer fetch every record just to count them — the pipeline returns only the count. (On MongoDB 3.4+, the final $group/$project pair can be replaced by a single { "$count" : "totalCount" } stage.)
After the fix, the tests passed without issue.
Reposted from: https://my.oschina.net/u/2477500/blog/2054701