Allow to automatically trim the PoolThreadCache in a timely interval #8941
Conversation
executor.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        cache.trim();
try catch Exception to avoid suppressing subsequent executions?
@johnou this should never throw; if it does, there is something seriously wrong
@normanmaurer but unless the app developer has hooked into java.lang.Thread#setDefaultUncaughtExceptionHandler they are not going to know and the exception will be swallowed..
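Since the periodic trim here goes through ScheduledExecutorService.scheduleAtFixedRate, an uncaught exception from one run suppresses all subsequent runs, which is the failure mode being raised. A minimal sketch of the defensive wrapping being suggested (the helper class, its names and the printStackTrace logging are illustrative, not part of the actual patch):

import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class TrimTaskSketch {
    // Sketch only: "trimTarget" stands in for whatever performs cache.trim().
    static void scheduleTrim(final ScheduledExecutorService executor,
                             final Runnable trimTarget, long intervalMillis) {
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    trimTarget.run();
                } catch (Throwable t) {
                    // scheduleAtFixedRate suppresses every later execution once a task
                    // throws, so catch (and ideally log) instead of letting it escape.
                    t.printStackTrace();
                }
            }
        }, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS);
    }
}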
left a few suggestions
awesome @normanmaurer
}

@Test
public void testDecorateTheadFactory() throws InterruptedException {
nit: typo, missing 'r'
Very useful!
LGTM
there is a little typo in a test
Force-pushed from adad8d0 to 880cf66
Looks good to me! Thanks for the quick turnaround.
@normanmaurer nice change! One comment inline.
Also not really part of this change, but was wondering if there's any reason why GlobalEventExecutor doesn't implement OrderedEventExecutor?
@@ -59,6 +60,7 @@ public UnorderedThreadPoolEventExecutor(int corePoolSize) {
     */
    public UnorderedThreadPoolEventExecutor(int corePoolSize, ThreadFactory threadFactory) {
        super(corePoolSize, threadFactory);
        this.setThreadFactory(ThreadExecutorMap.apply(getThreadFactory(), this));
Is this safe? I think the scheduled repeating trimming task could be run on an arbitrary thread, i.e. not necessarily the one from which it was scheduled that owns the referenced PoolThreadCache instance.
Maybe for this one the scheduled task could just call PoolThreadCache.trimCurrentThreadCache()?
you are right... let me fix it
@njhill also with this in mind I wonder if we should maybe restrict all of this to OrderedEventExecutor
@njhill done that ....
Thanks @normanmaurer generally LGTM, but I'm not sure about the OrderedEventExecutor check since my understanding is that although SingleThreadEventExecutor and GlobalEventExecutor are ok, that interface doesn't necessarily imply same-thread semantics, only serialized tasks. So having the check there might be misleading... it's necessary but not sufficient. Maybe we could have a static scheduledTasksRunOnSameThread(EventExecutor) helper method which for now returns true only in the case of STEE and GEE?
(but I still like the change to make GEE implement OEE!)
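For concreteness, the helper described above might look roughly like this; the method is hypothetical (it is not in Netty, and the name comes straight from the comment), with the instanceof checks standing in for "STEE and GEE only":

import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.GlobalEventExecutor;
import io.netty.util.concurrent.SingleThreadEventExecutor;

final class ExecutorChecks {
    private ExecutorChecks() { }

    // Hypothetical helper, not part of the actual patch: true only for executors
    // known to run their scheduled tasks on the single thread that owns them.
    static boolean scheduledTasksRunOnSameThread(EventExecutor executor) {
        return executor instanceof SingleThreadEventExecutor
                || executor instanceof GlobalEventExecutor;
    }
}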
@njhill maybe we should just enable it only for SingleThreadEventExecutor for now. Let me do this.
@normanmaurer that's probably fine but technically there's still a chance that someone has subclassed it with different scheduleAtFixedRate behaviour that breaks the assertion, since the method is non-final (though this wouldn't really make much sense to do).
@njhill true... I guess we could also just return directly if the Thread is not the same one we expect. WDYT?
@njhill alternatively I just noticed we could just call:
PooledByteBufAllocator.this.trimCurrentThreadCache();
This should be safe in any case.
@normanmaurer yes, that would be safe for any of the cases (worst case it would be a repeating no-op)... that's actually what I was meaning to suggest above but obviously got the detail wrong since that method is on the BBAllocator not PoolThreadCache!
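Putting that conclusion into a standalone sketch (assuming PooledByteBufAllocator.trimCurrentThreadCache() from this PR and an arbitrary DefaultEventExecutor; the interval is an illustrative value): whichever thread ends up running the scheduled task, it only ever trims that thread's own cache, and the call is a cheap no-op if the thread has none.

import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.concurrent.DefaultEventExecutor;
import io.netty.util.concurrent.EventExecutor;
import java.util.concurrent.TimeUnit;

public final class PeriodicTrimSketch {
    public static void main(String[] args) {
        final PooledByteBufAllocator allocator = PooledByteBufAllocator.DEFAULT;
        EventExecutor executor = new DefaultEventExecutor();

        // Safe regardless of which thread the executor uses: the task only touches
        // the cache owned by the thread that happens to run it.
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                allocator.trimCurrentThreadCache();
            }
        }, 1, 1, TimeUnit.SECONDS);
    }
}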
@@ -113,6 +118,9 @@
        DEFAULT_CACHE_TRIM_INTERVAL = SystemPropertyUtil.getInt(
                "io.netty.allocator.cacheTrimInterval", 8192);

        DEFAULT_CACHE_TRIM_RATE = SystemPropertyUtil.getLong(
                "io.netty.allocator.cacheTrimRateMs", 0);
How about io.netty.allocation.cacheTrimIntervalMillis? To avoid confusion, we could deprecate cacheTrimInterval in favor of cacheTrimIntervalCount. The current name rate is misleading because it's not really a rate.
@trustin I think I would like to still preserve the old one as well. Let me rename it tho.
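For reference, once renamed as discussed, enabling the time-based trim is purely a system-property toggle; the commit message below calls it io.netty.allocator.cacheTrimIntervalMillis, and the diff above reads it during static initialization with a default of 0 (which presumably leaves the periodic trim off). A hedged usage sketch, with 5000 as an arbitrary example value:

public final class EnableCacheTrim {
    public static void main(String[] args) {
        // Must be set before PooledByteBufAllocator is first loaded; in practice it is
        // usually passed on the command line instead, e.g.
        // -Dio.netty.allocator.cacheTrimIntervalMillis=5000
        System.setProperty("io.netty.allocator.cacheTrimIntervalMillis", "5000");
    }
}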
common/src/main/java/io/netty/util/concurrent/GlobalEventExecutor.java (outdated; resolved)
public final class ThreadExecutorMap {

    private static final FastThreadLocal<EventExecutor> mappings =
            new FastThreadLocal<EventExecutor>();
nit: could be merged into a single line.
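To make the role of this class a bit more concrete, here is a rough sketch of the wrapping idea (class and method names here are illustrative, not the actual implementation): the mapping records which EventExecutor is driving the current thread for the duration of a task, which is what later lets the allocator find the executor to schedule trimming on.

import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.FastThreadLocal;

final class ThreadExecutorMapSketch {
    private static final FastThreadLocal<EventExecutor> mappings = new FastThreadLocal<EventExecutor>();

    private ThreadExecutorMapSketch() { }

    // Which executor (if any) is running a task on the calling thread right now.
    static EventExecutor currentExecutor() {
        return mappings.get();
    }

    // Wrap a task so the mapping is set while it runs and cleared afterwards.
    static Runnable apply(final Runnable command, final EventExecutor eventExecutor) {
        return new Runnable() {
            @Override
            public void run() {
                mappings.set(eventExecutor);
                try {
                    command.run();
                } finally {
                    mappings.remove();
                }
            }
        };
    }
}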
Force-pushed from 7852e46 to 8dead9a
Motivation:
PooledByteBufAllocator uses a PoolThreadCache per Thread that allocates / deallocates to minimize the performance overhead. This PoolThreadCache is trimmed after X allocations to free up buffers that are not allocated for a long time. This works out quite well when the app continues to allocate but fails if the app stops allocating frequently (for whatever reason), in which case a lot of memory is wasted and not given back to the arena / freed.

Modifications:
- Add a ThreadExecutorMap that offers multiple methods that wrap Runnable / ThreadFactory / Executor and allow calling ThreadExecutorMap.currentEventExecutor() to get the EventExecutor currently executing on the calling Thread.
- Use these methods in the constructors of our EventExecutor implementations (which also covers the EventLoop implementations).
- Add the io.netty.allocator.cacheTrimIntervalMillis system property, which can be used to specify a fixed interval at which we should try to trim the PoolThreadCache of an EventExecutor that allocates.
- Add PooledByteBufAllocator.trimCurrentThreadCache() to allow the user to trim the cache of the calling thread manually.
- Add testcases.
- Introduce FastThreadLocal.getIfExists().

Result:
Allow to better / more frequently trim PoolThreadCache and so give back memory to the arena / system.
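As a small illustration of the last bullet: as I understand the new method, getIfExists() returns the value only if one has already been set for the calling thread and, unlike get(), does not trigger lazy initialization via initialValue(). A sketch under that assumption:

import io.netty.util.concurrent.FastThreadLocal;

public final class GetIfExistsDemo {
    private static final FastThreadLocal<String> NAME = new FastThreadLocal<String>() {
        @Override
        protected String initialValue() {
            return "default";
        }
    };

    public static void main(String[] args) {
        // Nothing set yet and getIfExists() does not run initialValue().
        System.out.println(NAME.getIfExists());   // null
        // get() lazily initializes the value for this thread...
        System.out.println(NAME.get());           // default
        // ...after which getIfExists() sees it as well.
        System.out.println(NAME.getIfExists());   // default
    }
}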
Force-pushed from 8dead9a to d78225d
Just rebased it.