Introduce ByteBuf.maxFastWritableBytes() method #9086
Conversation
Can one of the admins verify this patch?
This would be useful for #8931, which currently incorrectly assumes that growing up to maxCapacity is always cheap. I can add some unit tests if folks agree with the addition.
FYI I force-pushed a minor rework to this, including some update to the commit message. I originally forgot that
LGTM
A unit test or two would be nice.
Same as @carl-mastrangelo: code LGTM
I wasn't aware that the buffer may be copied/reallocated to reach its maxCapacity, and I hadn't realized that writing may increase the capacity. It sort of seems like the allocator should just be guaranteed to return a ByteBuf whose capacity is at least what you request, so that capacity matches the chunk size. Or something... but those sorts of changes are impossible. Given the current methods:

```java
// I'd be prone to doing this myself
ByteBuf buf = alloc.buffer(4 * 1024);
buf.capacity(buf.maxFastCapacity());
```

Although if you have a handler that is given a buffer via a write() and you want to append to it, then I guess you still would want to use
@njhill please add some unit tests.
@ejona86 I agree the current semantics w.r.t. capacity and writability are unintuitive. What would make this cleaner, and I think is also what you are alluding to (netty 5?), is to have precise capacity be determined by the buffer itself; anywhere a capacity is currently specified would instead be a min capacity (i.e. in allocator args). I was actually already going back and forth on the idea of
Motivation

ByteBuf capacity is automatically increased as needed up to maxCapacity when writing beyond the buffer's current capacity. However, there's no way to tell in general whether such an increase will result in a relatively costly internal buffer reallocation. For unpooled buffers it always does; in pooled cases it depends on the size of the associated chunk of allocated memory, which I don't think is currently exposed in any way.

It would sometimes be useful to know where this limit is when making external decisions about whether to reuse or preemptively reallocate. It would also be advantageous to take this limit into account when auto-increasing the capacity during writes, to defer such reallocation until really necessary.

Modifications

Introduce a new AbstractByteBuf.maxFastWritableBytes() method which will return a value >= writableBytes() and <= maxWritableBytes(). Make use of the new method in the sizing decision made by the AbstractByteBuf.ensureWritable(...) methods.

Result

Less reallocation/copying.
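To make the invariant concrete, here is a minimal toy sketch in plain Java. This is not netty code: `ToyBuf`, `chunkLength`, and the field layout are invented for illustration, with `chunkLength` playing the role the pooled chunk size plays in the description above.

```java
// Toy model of the proposed semantics; NOT netty's implementation.
// Assumption: the buffer is backed by a chunk of chunkLength bytes, so growing
// capacity within the chunk is cheap, while growing past it requires
// allocating a new buffer and copying the contents over.
final class ToyBuf {
    private final int chunkLength;  // size of the backing chunk (hypothetical)
    private final int maxCapacity;  // hard upper bound on capacity
    private int capacity;
    private int writerIndex;        // next write position

    ToyBuf(int capacity, int chunkLength, int maxCapacity) {
        this.capacity = capacity;
        this.chunkLength = chunkLength;
        this.maxCapacity = maxCapacity;
    }

    int writableBytes()    { return capacity - writerIndex; }
    int maxWritableBytes() { return maxCapacity - writerIndex; }

    // Bytes writable without a costly internal reallocation:
    // always >= writableBytes() and <= maxWritableBytes().
    int maxFastWritableBytes() {
        return Math.min(chunkLength, maxCapacity) - writerIndex;
    }
}
```

For example, a buffer of capacity 16 sitting in a 64-byte chunk with maxCapacity 1024 would report writableBytes() == 16, maxFastWritableBytes() == 64, and maxWritableBytes() == 1024. Note the min with maxCapacity: as the conversation above hints, the chunk can be larger than maxCapacity, in which case the fast limit is simply maxWritableBytes().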
@normanmaurer @carl-mastrangelo unit tests now added. |
@netty-bot test this please |
@normanmaurer I just noticed the failed tests - I hadn't noticed locally because I wasn't running with paranoid leak detection, have pushed a commit to fix. |
@netty-bot test this please |
Motivation

ByteBuf capacity is automatically increased as needed up to maxCapacity when writing beyond the buffer's current capacity. However, there's no way to tell in general whether such an increase will result in a relatively costly internal buffer reallocation. For unpooled buffers it always does; in pooled cases it depends on the size of the associated chunk of allocated memory (PooledByteBuf.maxLength), which I don't think is currently exposed in any way.

It would sometimes be useful to know where this limit is when making external decisions about whether to reuse or preemptively reallocate. It would also be advantageous to take this limit into account when auto-increasing the capacity during writes, to defer such reallocation until really necessary.

Modifications

Introduce a new AbstractByteBuf.maxFastWritableBytes() method which will return a value >= writableBytes() and <= maxWritableBytes(). Make use of the new method in the sizing decision made by the AbstractByteBuf.ensureWritable(...) implementations.

Result

Less reallocation/copying when working with ByteBufs.
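The sizing decision described under Modifications can be sketched roughly as follows. This is a simplified, hypothetical standalone method, not netty's actual ensureWritable(...) code; the power-of-two rounding stands in for the allocator's real capacity-normalization logic.

```java
// Simplified sketch of the growth policy: if the requested bytes fit within
// the fast-writable region, grow only to the chunk boundary (cheap, no copy);
// otherwise fall back to a normalized (power-of-two) capacity, capped at
// maxCapacity, which implies a reallocation.
final class GrowthPolicy {
    static int newCapacity(int writerIndex, int minWritableBytes,
                           int maxFastWritableBytes, int maxCapacity) {
        if (minWritableBytes <= maxFastWritableBytes) {
            // Cheap path: stay within the already-allocated chunk.
            return writerIndex + maxFastWritableBytes;
        }
        // Costly path: round the needed capacity up to the next power of two,
        // without exceeding maxCapacity. (Netty's real policy is more nuanced.)
        int needed = writerIndex + minWritableBytes;
        int normalized = Integer.highestOneBit(needed - 1) << 1;
        return Math.min(Math.max(normalized, needed), maxCapacity);
    }
}
```

The point of the cheap branch is that it returns a capacity the buffer can adopt without copying, deferring the expensive reallocation until a write genuinely exceeds the backing chunk.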