
Introduce ByteBuf.maxFastWritableBytes() method #9086


Merged
merged 3 commits into netty:4.1 from fast-capacity
May 22, 2019

Conversation

njhill
Member

@njhill njhill commented Apr 23, 2019

Motivation

ByteBuf capacity is automatically increased as needed up to maxCapacity when writing beyond the buffer's current capacity. However there's no way to tell in general whether such an increase will result in a relatively costly internal buffer re-allocation.

For unpooled buffers it always does; in pooled cases it depends on the size of the associated chunk of allocated memory (PooledByteBuf.maxLength), which I don't think is currently exposed in any way.

It would sometimes be useful to know where this limit is when making external decisions about whether to reuse or preemptively reallocate.

It would also be advantageous to take this limit into account when auto-increasing the capacity during writes, to defer such reallocation until really necessary.

Modifications

Introduce a new AbstractByteBuf.maxFastWritableBytes() method which will return a value >= writableBytes() and <= maxWritableBytes().

Make use of the new method in the sizing decision made by the AbstractByteBuf.ensureWritable(...) implementations.

Result

Less reallocation/copying when working with ByteBufs.
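The sizing decision described under Modifications can be sketched with a small standalone model. This is an illustrative simplification, not netty's actual code: the method name, growth policy, and the `fastCapacity` parameter (standing in for the capacity reachable within the backing pooled chunk) are assumptions for the sake of the example.

```java
// Simplified, self-contained model of the ensureWritable(...) sizing decision
// described above. NOT netty's implementation; "fastCapacity" stands in for
// the capacity the buffer can reach without an internal reallocation.
public class EnsureWritableSketch {
    static int newCapacity(int writerIndex, int minWritableBytes,
                           int capacity, int fastCapacity, int maxCapacity) {
        int targetCapacity = writerIndex + minWritableBytes;
        if (targetCapacity <= capacity) {
            return capacity; // already fits, no growth needed
        }
        // The change this PR describes: if the backing chunk can absorb the
        // growth, expand only up to the fast limit and defer any reallocation.
        if (targetCapacity <= fastCapacity) {
            return fastCapacity;
        }
        // Otherwise a reallocation is unavoidable; grow to the next power of
        // two, capped at maxCapacity (a simplified growth policy).
        int pow2 = Integer.highestOneBit(Math.max(1, targetCapacity - 1)) << 1;
        return Math.min(maxCapacity, pow2);
    }

    public static void main(String[] args) {
        // Pooled buffer: capacity 16, chunk allows up to 64, max 256.
        System.out.println(newCapacity(10, 4, 16, 64, 256));   // 16: already fits
        System.out.println(newCapacity(10, 20, 16, 64, 256));  // 64: fast growth
        System.out.println(newCapacity(10, 100, 16, 64, 256)); // 128: reallocation
    }
}
```

The middle case is the interesting one: before this change, a request for 20 more writable bytes would have triggered the power-of-two path and a copy even though the chunk already had room.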

@netty-bot

Can one of the admins verify this patch?

@njhill
Member Author

njhill commented Apr 23, 2019

This would be useful for #8931, which currently incorrectly assumes that growing up to maxCapacity is always cheap.

I can add some unit tests if folks agree with the addition.

@njhill njhill changed the title Add maxFastWritableBytes() ByteBuf method Introduce ByteBuf.maxFastWritableBytes() method Apr 26, 2019
@njhill
Member Author

njhill commented Apr 26, 2019

FYI I force-pushed a minor rework to this, including an update to the commit message. I originally forgot that ByteBuf was not an interface :) so the new method can go there with its default impl. I can't think of a problem with this, but I know it would be a first and would appear a little inconsistent next to all the others with default impls in AbstractByteBuf.

Member

@carl-mastrangelo carl-mastrangelo left a comment


LGTM

@carl-mastrangelo
Member

A unit test or two would be nice.

Contributor

@franz1981 franz1981 left a comment


Same as @carl-mastrangelo: code LGTM

@normanmaurer
Member

@ejona86 @trustin any concerns?

@ejona86
Member

ejona86 commented May 13, 2019

I wasn't aware that the buffer may be copied/reallocated to reach its maxCapacity. And I hadn't realized that writing may increase the capacity... which means isWritable() is stricter than actual writes and writableBytes() is misleading. I will say I have wanted an API that would let me expand a ByteBuf to fill its allocated chunk.

It sort of seems like the allocator should just be guaranteed to return a ByteBuf whose capacity is at least what you request, so that capacity matches the chunk size. Or something... But those sorts of changes are impossible.

Given the current methods writableBytes() and maxWritableBytes(), maxFastWritableBytes() seems fair. Although it sort of raises the question of "what's the point in the current capacity?" I would consider a maxFastCapacity(), and let someone manually change the capacity. That would keep capacity() relevant.

```java
// I'd be prone to doing this myself
ByteBuf buf = alloc.buffer(4 * 1024);
buf.capacity(buf.maxFastCapacity());
```

Although if you have a handler that is given a buffer via a write() and you want to append to it, then I guess you still would want to use maxFastWritableBytes(), since the capacity will be "wrong." Very well. capacity() is useless because the allocators aren't given any freedom when setting it.

@normanmaurer
Member

@njhill please add some unit tests.

@njhill
Member Author

njhill commented May 13, 2019

@ejona86 I agree the current semantics w.r.t. capacity and writability are unintuitive. What would make this cleaner, and what I think you are also alluding to (netty 5?), is to have the precise capacity be determined by the buffer itself; anywhere a capacity is currently specified would instead be a minimum capacity (i.e. in allocator args, and capacity(int) would be replaced with ensureCapacity(int), where the resulting capacity would be >= the requested value).

I was actually already going back and forth on the idea of maxFastCapacity() instead and would prefer it apart from the fact that the semantics are kind of awkward in the unpooled case. I would read it to mean "capacity can be fast-changed to anything <= this value" but unpooled buffers incur the cost for any nonzero capacity change, even a reduction. maxFastWritableBytes() nicely avoids that.
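The distinction can be made concrete with a toy standalone model (illustrative only, not netty code; `chunkLength` is an assumed stand-in for the backing chunk size, PooledByteBuf.maxLength in the pooled case). It shows the invariant writableBytes() <= maxFastWritableBytes() <= maxWritableBytes(), and how a caller could still recover the "expand to fill the chunk" pattern without a separate maxFastCapacity() method:

```java
// Toy model (NOT netty code) of the relationship between the three
// writability methods and the backing memory chunk.
public class FastCapacityModel {
    final int writerIndex, capacity, chunkLength, maxCapacity;

    // chunkLength: pooled = size of the backing chunk the buffer sits in;
    //              unpooled = same as capacity (no spare room to grow into).
    FastCapacityModel(int writerIndex, int capacity, int chunkLength, int maxCapacity) {
        this.writerIndex = writerIndex;
        this.capacity = capacity;
        this.chunkLength = chunkLength;
        this.maxCapacity = maxCapacity;
    }

    int writableBytes()        { return capacity - writerIndex; }
    int maxWritableBytes()     { return maxCapacity - writerIndex; }
    // Bytes writable without a reallocation: up to the end of the chunk,
    // never beyond maxCapacity.
    int maxFastWritableBytes() { return Math.min(chunkLength, maxCapacity) - writerIndex; }

    // The "expand to fill the allocated chunk" pattern, derived from
    // maxFastWritableBytes() instead of a maxFastCapacity() method.
    int fastCapacity()         { return writerIndex + maxFastWritableBytes(); }

    public static void main(String[] args) {
        FastCapacityModel pooled   = new FastCapacityModel(16, 64, 512, 1024);
        FastCapacityModel unpooled = new FastCapacityModel(16, 64, 64, 1024);
        System.out.println(pooled.maxFastWritableBytes());   // 496: room left in the chunk
        System.out.println(pooled.fastCapacity());           // 512: reachable for free
        System.out.println(unpooled.maxFastWritableBytes()); // 48 == writableBytes()
    }
}
```

In the unpooled case maxFastWritableBytes() collapses to writableBytes(), which is exactly the graceful degradation described above: any capacity change costs a copy, so nothing beyond the current capacity is "fast".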

@njhill
Member Author

njhill commented May 13, 2019

@normanmaurer @carl-mastrangelo unit tests now added.

@normanmaurer
Member

@netty-bot test this please

@njhill
Member Author

njhill commented May 21, 2019

@normanmaurer I just noticed the failed tests. I hadn't caught them locally because I wasn't running with paranoid leak detection; I've pushed a commit to fix them.

@normanmaurer
Member

@netty-bot test this please

@normanmaurer normanmaurer merged commit 128403b into netty:4.1 May 22, 2019
normanmaurer pushed a commit that referenced this pull request May 22, 2019
@normanmaurer normanmaurer added this to the 4.1.37.Final milestone May 22, 2019
@njhill njhill deleted the fast-capacity branch May 22, 2019 19:05