OutOfMemoryError: Java heap space in GZIPContentDecoder #3373
What is the possible size of this …?
@olamy @sbordet First, I have checked the heap and the response stream: the heap is 256 MB and the response is about 1 MB, so I have excluded the zip-bomb explanation. I have found the same problem in three of my services. After checking the environment, I made the request and captured a heap dump when the OOM happened. I found 478 entries of java.util.concurrent.ConcurrentHashMap$Node, and each of these Nodes holds response bytes; the bytes are all identical and incomplete. I gathered this information with Memory Analyzer; one of the 478 entries is shown below. In its last line, the bytes are the response data, but incomplete. All 478 entries are the same, with the same byte values. I still think there may be a problem in how Jetty handles the chunks, but I cannot pin down the exact scenario. My guess is that the pool may not clean up the previous chunk data, so it accumulates. For example, if each chunk is about 2 KiB, the aggregate grows to 2 KiB, then 4 KiB, then 6 KiB, and the pool ends up holding 2 + 4 + 6 + … KiB, which eventually leads to OOM. By the way, I also made the same request with Python and got the response data successfully. One of the 478 entries:
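The accumulation guess above can be checked with simple arithmetic (a hypothetical model, not Jetty code; the class and method names are made up for illustration): if the aggregate buffer is re-acquired at a new, larger size for every chunk and each outgrown buffer stays reachable from the pool, the retained bytes are the sum 2 KiB + 4 KiB + 6 KiB + …, which grows quadratically with the response size.

```java
// Hypothetical back-of-the-envelope model (not Jetty code): if the decoder
// grows its aggregate by one fixed-size chunk at a time and every intermediate
// buffer stays pooled, the retained bytes grow quadratically in the response size.
public class PooledGrowthEstimate {
    /** Sum of all intermediate aggregate sizes: chunk + 2*chunk + ... + response. */
    public static long retainedBytes(long chunkSize, long responseSize) {
        long retained = 0;
        for (long size = chunkSize; size <= responseSize; size += chunkSize)
            retained += size;
        return retained;
    }

    public static void main(String[] args) {
        // 10 KiB response decoded in 2 KiB chunks: 2k + 4k + 6k + 8k + 10k bytes retained
        System.out.println(retainedBytes(2048, 10240)); // prints 30720
    }
}
```

So a 1 MB response decoded in 2 KiB steps would, under this assumption, retain on the order of 256 MB of pooled buffers, which matches the 256 MB heap being exhausted.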
@olamy @sbordet I have got the source code and added some logging. In thread HttpClient@2424a36f-68 I see the size getting larger and larger, and the bucket count is about 482. The logging I added in GZIPContentDecoder.java, in `decodedChunk`, is below:

```java
protected boolean decodedChunk(ByteBuffer chunk)
{
    if (_inflated == null)
    {
        _inflated = chunk;
    }
    else
    {
        int size = _inflated.remaining() + chunk.remaining();
        LOG.warn("[" + Thread.currentThread().getName() + "]-------TEST------infremain:" + _inflated.remaining()
            + ",chunkremain:" + chunk.remaining() + ",size:" + size);
        if (size <= _inflated.capacity())
        {
            BufferUtil.append(_inflated, chunk);
            release(chunk);
        }
        else
        {
            ByteBuffer bigger = acquire(size);
            int pos = BufferUtil.flipToFill(bigger);
            BufferUtil.put(_inflated, bigger);
            BufferUtil.put(chunk, bigger);
            BufferUtil.flipToFlush(bigger, pos);
            release(_inflated);
            release(chunk);
            _inflated = bigger;
        }
    }
    return false;
}
```

I also added logging in MappedByteBufferPool.java.
The log:
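The append-or-grow logic quoted above can be sketched with plain java.nio (Jetty's BufferUtil.flipToFill/flipToFlush helpers are replaced here by ordinary flip/compact handling; the class name and structure are illustrative, not Jetty's implementation):

```java
import java.nio.ByteBuffer;

// Plain-NIO approximation of the append-or-grow aggregation quoted above.
// The aggregate is kept in "flush" (readable) mode; when a new chunk does not
// fit, a bigger buffer is allocated and both buffers are copied into it.
public class Aggregator {
    private ByteBuffer inflated; // readable: position..limit holds the data

    public void append(ByteBuffer chunk) {
        if (inflated == null) {
            inflated = chunk;
            return;
        }
        int size = inflated.remaining() + chunk.remaining();
        if (size <= inflated.capacity()) {
            inflated.compact();  // switch to fill mode, keeping existing data
            inflated.put(chunk);
            inflated.flip();     // back to flush (readable) mode
        } else {
            ByteBuffer bigger = ByteBuffer.allocate(size);
            bigger.put(inflated).put(chunk).flip();
            inflated = bigger;   // the outgrown buffer becomes garbage here
        }
    }

    public ByteBuffer content() { return inflated; }
}
```

Note that in this plain sketch the outgrown buffer simply becomes garbage; in Jetty it is released back into a pool instead, which is exactly where the discussion in this issue picks up.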
What is happening is that you are generating content that is always greater than the previous content. Is this a test case of some kind, or a real-world scenario? You can use … This issue is a duplicate of #1861.
@sbordet The log above is from a real-world scenario, and I have something new. I added some logging in GZIPContentDecoder.java, and the generated content is not greater than the previous content: I found that the response buffer size is always 2048. In GZIPContentDecoder.decodeChunks(), in case DATA:

```java
case DATA:
{
    while (true)
    {
        if (buffer == null)
            buffer = acquire(_bufferSize);
        try
        {
            byte[] bytes = buffer.array();
            int length = _inflater.inflate(bytes, buffer.arrayOffset(), buffer.capacity());
            buffer.limit(length);
            LOG.warn("[" + Thread.currentThread().getName() + "]-------TEST0------bufferlength:" + new String(bytes).length());
        }
        // ... (rest of the method elided)
```

And in GZIPContentDecoder.decodedChunk():

```java
protected boolean decodedChunk(ByteBuffer chunk)
{
    if (_inflated == null)
    {
        _inflated = chunk;
        LOG.warn("[" + Thread.currentThread().getName() + "]-------TEST1------_inflated is null");
    }
    else
    {
        int size = _inflated.remaining() + chunk.remaining();
        LOG.warn("[" + Thread.currentThread().getName() + "]-------TEST2------chunkremain:" + chunk.remaining()
            + ",infremain:" + _inflated.remaining() + ",infcap:" + _inflated.capacity() + ",size:" + size + ",TOF:" + (size <= _inflated.capacity()));
        if (size <= _inflated.capacity())
        {
            BufferUtil.append(_inflated, chunk);
            release(chunk);
        }
        else
        {
            ByteBuffer bigger = acquire(size);
            int pos = BufferUtil.flipToFill(bigger);
            BufferUtil.put(_inflated, bigger);
            BufferUtil.put(chunk, bigger);
            BufferUtil.flipToFlush(bigger, pos);
            release(_inflated);
            release(chunk);
            _inflated = bigger;
        }
        LOG.warn("[" + Thread.currentThread().getName() + "]-------TEST3------infremain:" + _inflated.remaining()
            + ",infcap:" + _inflated.capacity() + ",inflimit:" + _inflated.limit() + ",infpos:" + _inflated.position());
    }
    return false;
}
```

I get the log below:

In this log, in branch TEST0 the buffer size is 2048; it is not the case that it increases every time. I found that in branch TEST3 it has run release():

```java
public void release(ByteBuffer buffer)
{
    BufferUtil.clear(buffer);
    if (_space == null)
        queueOffer(buffer);
    else if (_space.decrementAndGet() >= 0)
        queueOffer(buffer);
    else
        _space.incrementAndGet();
}
```

This method does not really release the buffer: it only clears it and returns it to the pool. For example, if the response is 10240 bytes and each chunk is 2048 bytes, it will need 2048 + 4096 + 6144 + 8192 + 10240 bytes in total, and if the response is bigger, an OOM may happen. Looking forward to your reply. Thank you very much.
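To see why release() alone cannot shrink memory use, here is a minimal model of a bucketed pool (hypothetical; class and method names are illustrative, not Jetty's MappedByteBufferPool): every distinct capacity gets its own bucket, and releasing a buffer clears it but keeps it strongly reachable from the map, which fits the many ConcurrentHashMap$Node entries seen in the heap dump.

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal bucketed pool model (hypothetical, not Jetty's MappedByteBufferPool).
// Each distinct buffer size gets its own bucket; release() clears and re-queues
// the buffer, so the pool's retained memory only ever grows.
public class BucketedPool {
    private final Map<Integer, Queue<ByteBuffer>> buckets = new ConcurrentHashMap<>();

    public ByteBuffer acquire(int size) {
        ByteBuffer pooled = buckets
            .computeIfAbsent(size, k -> new ConcurrentLinkedQueue<>())
            .poll();
        return pooled != null ? pooled : ByteBuffer.allocate(size);
    }

    public void release(ByteBuffer buffer) {
        buffer.clear(); // cleared, but NOT freed: still reachable from the pool
        buckets.computeIfAbsent(buffer.capacity(), k -> new ConcurrentLinkedQueue<>())
               .add(buffer);
    }

    /** Bytes held by idle pooled buffers (never shrinks in this model). */
    public long retainedBytes() {
        return buckets.values().stream()
                .flatMap(Queue::stream)
                .mapToLong(ByteBuffer::capacity)
                .sum();
    }
}
```

Releasing the 2 KiB, 4 KiB, 6 KiB, … aggregates from the grow-and-copy loop above leaves every one of them pooled, because each new aggregate size lands in a fresh bucket that no later acquire ever matches.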
Okay, so this is a combination of #1861 and … The previous implementation (in …) … @gregw I'm inclined to not use a pooled buffer for …
@sbordet I tend to agree that a simple pool of buffers for … So my proposal is that we have a chunk size (that should probably be a lot bigger than 2048... let's say 32768). We should inflate into a chunk buffer... if that is the complete content, then great, we just use that.
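That proposal could be sketched as follows (an illustrative assumption, not the actual Jetty patch: the class name ChunkCollector and the 32 KiB constant are made up for the example): inflate into fixed-size chunks, keep them in a list, and copy each byte exactly once when assembling the final buffer, so total copying is O(n) instead of the O(n²) of repeated grow-and-copy.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposal above (not the actual Jetty patch): collect inflated
// chunks in a list and do a single copy at the end, so each byte is copied once.
public class ChunkCollector {
    public static final int CHUNK_SIZE = 32 * 1024; // proposed larger chunk size

    private final List<ByteBuffer> chunks = new ArrayList<>();

    public void add(ByteBuffer chunk) {
        chunks.add(chunk);
    }

    /** Copies all chunks into one readable buffer; each byte is copied exactly once. */
    public ByteBuffer assemble() {
        int total = chunks.stream().mapToInt(ByteBuffer::remaining).sum();
        ByteBuffer result = ByteBuffer.allocate(total);
        for (ByteBuffer c : chunks)
            result.put(c);
        result.flip();
        return result;
    }
}
```

If the first chunk turns out to hold the complete content, it can be handed over directly with no copy at all, which is the "if that is the complete content, then great, we just use that" case above.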
I wrote a test case, and for 1 MiB of content, which is then gzipped by the server, the client …
@gregw After the fixes, the client … This means that now both the client and the server interceptor subclass … The PR with these changes depends on #3391.
@sbordet Yeah, it's similar to my service. In one of my services the max heap size is 256 MB, and 2 MB of content caused an OOM. We have changed -Xmx to 1024 MB to avoid this for now, but there should be another way to reduce this.
Modified jetty-client content decoding to be fully non-blocking; this allows for a better backpressure and less usage of the buffer pool. Modified GZIPContentDecoder to aggregate decoded ByteBuffers in a smarter way that avoids too many data copies and pollution of the buffer pool with intermediate size buffers. Removed duplicate test GZIPContentDecoderTest. Signed-off-by: Simone Bordet <simone.bordet@gmail.com>
@ltp217 Can you please try branch …?
@sbordet I have tried the branch …
It's going to be in 9.4.16, I don't know when yet.
Modified jetty-client content decoding to be fully non-blocking; this allows for a better backpressure and less usage of the buffer pool. Modified GZIPContentDecoder to aggregate decoded ByteBuffers in a smarter way that avoids too many data copies and pollution of the buffer pool with intermediate size buffers. Removed duplicate test GZIPContentDecoderTest. Improved javadocs and improved AsyncMiddleManServlet to release buffers used by the GZIPContentDecoder. Signed-off-by: Simone Bordet <simone.bordet@gmail.com>
I'm using jetty-9.4.14 to send requests. When there is some big data in the response, there is an OutOfMemory exception. I have seen the issue about GZIPContentDecoder, but I have not found out how I could solve the problem. Could you help me? The stack is below: