Closed
Description
I am using jetty-9.4.14 to send requests. When the response contains some big data, I get an OutOfMemoryError. I have seen the issue about GZIPContentDecoder, but I have not found how to solve the problem. Could you help me? The stack trace is below:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[?:1.8.0_191]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[?:1.8.0_191]
at org.eclipse.jetty.util.BufferUtil.allocate(BufferUtil.java:116) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.ByteBufferPool.newByteBuffer(ByteBufferPool.java:61) ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.MappedByteBufferPool.acquire(MappedByteBufferPool.java:65) ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.http.GZIPContentDecoder.acquire(GZIPContentDecoder.java:408) ~[jetty-http-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.http.GZIPContentDecoder.decodedChunk(GZIPContentDecoder.java:108) ~[jetty-http-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.http.GZIPContentDecoder.decodeChunks(GZIPContentDecoder.java:189) ~[jetty-http-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.http.GZIPContentDecoder.decode(GZIPContentDecoder.java:71) ~[jetty-http-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.client.HttpReceiver.responseContent(HttpReceiver.java:347) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.client.http.HttpReceiverOverHTTP.content(HttpReceiverOverHTTP.java:283) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.http.HttpParser.parseContent(HttpParser.java:1787) ~[jetty-http-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1517) ~[jetty-http-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.client.http.HttpReceiverOverHTTP.parse(HttpReceiverOverHTTP.java:172) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.client.http.HttpReceiverOverHTTP.process(HttpReceiverOverHTTP.java:135) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.client.http.HttpReceiverOverHTTP.receive(HttpReceiverOverHTTP.java:73) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.client.http.HttpChannelOverHTTP.receive(HttpChannelOverHTTP.java:133) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.onFillable(HttpConnectionOverHTTP.java:155) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305) ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:411) ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:305) ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159) ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
Activity
olamy commented on Feb 20, 2019
What is the possible size of this "some big data to response"? You are probably too close to your maximum heap size, hence the java.lang.OutOfMemoryError: Java heap space.
sbordet commented on Feb 20, 2019
I concur with @olamy: from the information you gave you probably have a too small heap and you are unzipping a zip bomb or similar.
ltp217 commented on Feb 23, 2019
@olamy @sbordet First, I checked the heap and the response: the heap is 256 MB and the response is about 1 MB, so I have ruled out a zip bomb. I have seen the same problem in three of my services. After checking the environment, I took a heap dump when the OOM happened and found 478 entries of java.util.concurrent.ConcurrentHashMap$Node. Each Node holds response bytes, and the bytes are all the same and incomplete. I captured this information with Memory Analyzer; one of the 478 entries is shown below. In the last line the bytes are response data, but not the complete response. All 478 entries are identical, with the same byte values.
I still think there may be a problem in how Jetty handles the chunks, but I cannot work out the exact scenario. My guess is that the pool may not clean up the previous chunk data, so it accumulates. For example, if each chunk is about 2 KB, the pool holds 2 KB, then 4 KB, then 6 KB, and so on (2+4+6+...), and eventually it runs out of memory. By the way, I also sent the request with Python and received the response successfully.
one of the 478 entries:
ltp217 commented on Feb 23, 2019
@olamy @sbordet I got the source code and added some logging. In thread HttpClient@2424a36f-68 I can see the size getting larger and larger, and there are about 482 buckets. The logging I added is shown below:
In GZIPContentDecoder.java
In MappedByteBufferPool.java
The log:
sbordet commented on Feb 25, 2019
What is happening is that you are generating content that is always greater than the previous content.
From your logs, you can see that the b value is always increasing, and b is a key in a Map to a Bucket, which has a queue of ByteBuffers.
Because of this, the MappedByteBufferPool holds one Bucket with one ByteBuffer for each different size, and eventually your small heap is exhausted.
Is this a test case of some kind, or a real world scenario?
You can use ArrayByteBufferPool, which allows you to pool ByteBuffers only up to a certain size, and that will constrain the overall memory used by the pool.
This issue is a duplicate of #1861.
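For reference, here is a minimal sketch (not from this thread) of plugging an ArrayByteBufferPool into HttpClient. The three-argument constructor (min capacity, factor, max capacity), the chosen sizes, and the target URL are assumptions for 9.4.x, not a prescription:

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.io.ArrayByteBufferPool;

public class PooledClientExample
{
    public static void main(String[] args) throws Exception
    {
        // Pool buffers from 1 KiB up to 64 KiB in 1 KiB steps; buffers above
        // the max are allocated on demand and (as I understand it) not pooled.
        ArrayByteBufferPool pool = new ArrayByteBufferPool(1024, 1024, 64 * 1024);

        HttpClient client = new HttpClient();
        client.setByteBufferPool(pool);
        client.start();
        try
        {
            ContentResponse response = client.GET("http://localhost:8080/big");
            System.out.println(response.getContentAsString().length());
        }
        finally
        {
            client.stop();
        }
    }
}
```

The idea is that buffers larger than the configured maximum are not retained by the pool, so an ever-growing aggregation buffer can no longer pin memory for every distinct size.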
ltp217 commented on Feb 25, 2019
@sbordet The log above is from a real world scenario. I have something new: I added some logging in GZIPContentDecoder.java, and the generated content is not greater than the previous content; the response buffer size is always 2048 every time.
In GZIPContentDecoder.decodeChunks(), in case DATA:
And in GZIPContentDecoder.decodedChunk():
I get the log below:
In this log, in branch TEST0, the buffer size is 2048; it does not increase every time. I found that branch TEST3 does run release(_inflated), but _inflated.remaining(), _inflated.capacity() and _inflated.limit() increase by 2048 every time, and only the position is 0. Is this a normal phenomenon? In my opinion, when the ByteBuffer is released its limit should be 0 too. In BufferUtil.java the position and limit really are set to 0, so maybe the problem is in ByteBufferPool.Bucket.release(): this method does not really release the _inflated buffer. If _inflated.remaining() is not 0 but the previous size, then the next time, in GZIPContentDecoder.decodedChunk():
int size = _inflated.remaining() + chunk.remaining();
the size grows by chunk.remaining(), which in my environment is 2048. This matches my heap dump.
For example: if the response is 10240 bytes and each chunk is 2048 bytes, it takes 2048+4096+6144+8192+10240 bytes to decode it, and if the response is bigger an OOM may happen.
I look forward to your reply. Thank you very much.
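To make the arithmetic above concrete, here is a standalone simulation (plain Java, not Jetty code; all names are illustrative) of a pool that keeps one buffer for every distinct aggregation size while the aggregated buffer grows by one 2 KiB chunk per pass:

```java
import java.util.HashMap;
import java.util.Map;

public class AccumulationEstimate
{
    public static void main(String[] args)
    {
        int chunkSize = 2048;            // chunk size observed in the logs above
        int contentSize = 1024 * 1024;   // assume 1 MiB of inflated content
        Map<Integer, byte[]> pool = new HashMap<>();

        // Each pass the aggregation buffer grows by one chunk, and the previous
        // buffer is "released" into a pool bucket keyed by its size, so one
        // buffer per distinct size stays resident.
        for (int aggregated = chunkSize; aggregated < contentSize; aggregated += chunkSize)
            pool.put(aggregated, new byte[aggregated]);

        long retained = pool.values().stream().mapToLong(b -> b.length).sum();
        System.out.printf("pooled buffers: %d, retained: ~%d MiB%n",
            pool.size(), retained / (1024 * 1024));
    }
}
```

For 1 MiB of content and 2 KiB chunks this reports roughly 511 pooled buffers retaining about 255 MiB, the same order of magnitude as the 267 MiB measured later in this thread.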
sbordet commented on Feb 25, 2019
Okay, so this is a combination of #1861 and GZIPContentDecoder, which by default accumulates all decoded chunks into a single ByteBuffer.
The previous implementation (in jetty-client, before it was moved to jetty-http) used a byte[], so there was no interaction with the buffer pool.
@gregw I'm inclined to not use a pooled buffer for _inflated; your thoughts?
; your thoughts?gregw commentedon Feb 25, 2019
@sbordet I tend to agree that a simple pool of buffers for _inflated is not going to work very well if the size of the inflated content varies greatly.
However, I'm not keen to give up on pooling altogether, nor do I like the multiple copies we need to do as we discover the inflated size of the content.
So my proposal is that we have a chunk size (that should probably be a lot bigger than 2048... let's say 32768). We should inflate into a chunk buffer... if that is the complete content, then great, we just use that.
If it is not complete, then we simply get another chunk buffer from the pool and we build a chain of chunk buffers. If possible we can try to use this chain of buffers directly, but more likely, once we need the content, we will need to allocate a new non-pooled buffer/array of the exact size and copy all the chunks into it. This means that the buffer pool will be filled only with buffers of chunk size. When content is larger than a chunk we will allocate one and only one buffer of exactly the right size. We do need double the memory of the content size... but I think any approach that results in a single buffer approaches that requirement.
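A rough sketch of this chain-of-chunks idea (illustrative class and method names, not Jetty's actual implementation): decoded output is appended as fixed-size chunks, and only when the complete content is needed is a single exact-size, non-pooled buffer allocated and filled.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

public class ChunkChain
{
    private static final int CHUNK_SIZE = 32 * 1024; // proposed chunk size
    private final Deque<ByteBuffer> chunks = new ArrayDeque<>();

    // Called with each decoded chunk; the buffers are assumed to come from a
    // pool that only ever hands out CHUNK_SIZE buffers, so the pool holds a
    // single bucket regardless of content size.
    public void append(ByteBuffer chunk)
    {
        chunks.offer(chunk);
    }

    // One final copy into a non-pooled buffer of exactly the right size.
    public ByteBuffer coalesce()
    {
        int total = chunks.stream().mapToInt(ByteBuffer::remaining).sum();
        ByteBuffer result = ByteBuffer.allocate(total);
        for (ByteBuffer chunk : chunks)
            result.put(chunk); // here each chunk would go back to the pool
        result.flip();
        return result;
    }
}
```

As noted above, the one-time copy means peak memory is about twice the content size, but the pool itself only ever holds CHUNK_SIZE buffers.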
sbordet commented on Feb 26, 2019
I wrote a test case, and for 1 MiB of content, which is then gzipped by the server, the client ByteBufferPool retains 267 MiB 😮
sbordet commented on Feb 26, 2019
@gregw after the fixes, the client ByteBufferPool retains only 8 KiB, and I made the client use the gzip decoder asynchronously, so there is no need for aggregation.
This means that both the client and the server interceptor now subclass GZIPContentDecoder to be fully async, so the aggregation code is never triggered.
I have implemented your suggestion of a chain of buffers for the aggregation - it's never triggered now, but if it ever is, at least we won't blow up the buffer pool.
The PR with these changes depends on #3391.
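For illustration, a hedged sketch of the fully async approach: a subclass overrides decodedChunk() to consume each inflated chunk as it is produced, so nothing is aggregated. The constructor used and the return-value semantics (false = keep decoding) reflect my reading of the 9.4.x GZIPContentDecoder code, not documented API, and the consumer here is just a placeholder.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.eclipse.jetty.http.GZIPContentDecoder;
import org.eclipse.jetty.io.MappedByteBufferPool;

public class StreamingGzipDecoder extends GZIPContentDecoder
{
    public StreamingGzipDecoder()
    {
        super(new MappedByteBufferPool(), 2048);
    }

    @Override
    protected boolean decodedChunk(ByteBuffer chunk)
    {
        // Hand the chunk to the application instead of accumulating it
        // into a single ever-growing buffer.
        System.out.println(StandardCharsets.UTF_8.decode(chunk));
        return false; // keep decoding the remaining compressed input
    }

    public void consume(ByteBuffer compressed)
    {
        // decodeChunks() invokes decodedChunk() for every inflated chunk.
        decodeChunks(compressed);
    }
}
```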
ltp217 commented on Feb 26, 2019
@sbordet Yeah, it's similar in my services: in one of them the max heap size is 256 MB, and 2 MB of content caused an OOM. We have changed -Xmx to 1024 MB to avoid it for now, but it would still be good to have another way to reduce the memory use.
sbordet commented on Feb 26, 2019
@ltp217 can you please try branch jetty-9.4.x-3373-gzip_oom and see if it fixes your problem?
ltp217 commented on Feb 27, 2019
@sbordet I have tried the branch jetty-9.4.x-3373-gzip_oom and it seems to solve the problem, thank you very much! BTW, when will the official version be released, and what will the version number be?
sbordet commented on Feb 27, 2019
It's going to be in 9.4.16, I don't know when yet.
Note that the issue is still under review and the fix may change.
Thanks for confirming that the issue is solved with the initial fix!