Failed to submit a listener notification task. Event loop shut down? #27226
Comments
This is almost certainly because you are creating too many shards on your cluster. Why do you have so many shards, and why are you trying to create 4000 at one time? The cause of this error is probably the thread pool and queue filling up, causing rejected executions. Without any steps to reproduce there is not much we can do to diagnose this, but regardless, the solution here is to not overload your cluster with too many shards, and to not try to create thousands of shards in one go on such a small cluster. If you can reproduce this on a more reasonable cluster size with a more reasonable number of shards, please post the steps to reproduce. For now I will close this issue. |
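The "too many shards" point above can be sanity-checked with a back-of-the-envelope shard budget. This is only a sketch: it assumes the common rule of thumb of at most roughly 20 shards per GB of heap per node, the `heap_gb` value is a hypothetical placeholder (check your `jvm.options`), and the node and shard counts are the ones reported in this issue.

```shell
# Rough shard-budget check (rule of thumb: ~20 shards per GB heap per node)
nodes=2        # from this issue
heap_gb=8      # hypothetical heap per node; check jvm.options on your nodes
shards=9000    # from this issue

limit=$((nodes * heap_gb * 20))
echo "rough shard budget: $limit"
echo "actual shards:      $shards"
if [ "$shards" -gt "$limit" ]; then
  echo "over budget: reduce shard count or add nodes"
fi
```

For a two-node cluster, 9000+ shards blows far past any such budget, which is consistent with thread pool queues filling up. `GET _cat/shards?v` and `GET _cat/allocation?v` show how shards are actually distributed across your nodes.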
From your log messages:
and 236ms later:
Your node was in the middle of shutting down, this is expected behavior. |
In case anyone (like me) lands here with the same error: check that you're not trying to run multiple instances of Logstash on the same port! |
@cawoodm How do you check for multiple running instances of Logstash?
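One way to check (a sketch; 5044 is the default Beats input port, so substitute whatever port your pipeline actually binds):

```shell
# List any running Logstash processes; the [l] bracket trick keeps grep
# from matching its own command line
ps aux 2>/dev/null | grep '[l]ogstash' || echo "no logstash process found"

# Check whether anything is already listening on the input port
netstat -tlnp 2>/dev/null | grep ':5044 ' || echo "port 5044 is free"
```

If the port is held by a second Logstash (or anything else), the new instance will fail to bind it at startup.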
Same error here. I found there were two logstash.conf files: I had backed up the old conf by renaming it to logstash.conf.bak and then created a new logstash.conf, so there were two files in logstash/pipeline:
|
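For anyone hitting the same mistake: when `path.config` points at a directory, Logstash concatenates every file in that directory, including renamed backups, so a `.bak` left alongside the live config gets loaded too. A small simulation of the problem and the fix (the paths here are examples; the real pipeline directory is often `/usr/share/logstash/pipeline`):

```shell
# Simulate a pipeline directory holding a live config plus a stale backup
pipeline=$(mktemp -d)
touch "$pipeline/logstash.conf" "$pipeline/logstash.conf.bak"

echo "files Logstash would concatenate:"
ls "$pipeline"

# Fix: move the backup outside the pipeline directory entirely,
# rather than renaming it in place
backup_dir=$(mktemp -d)
mv "$pipeline/logstash.conf.bak" "$backup_dir/"

echo "after the fix:"
ls "$pipeline"
```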
Elasticsearch version (bin/elasticsearch --version): 5.6.1
Plugins installed: [xpack, analysis-icu, analysis-smartcn, analysis-ik]
JVM version (java -version): 1.8.0.45
OS version (uname -a if on a Unix-like system): Linux ubuntu-100 4.4.0-31-generic #50~14.04.1-Ubuntu SMP Wed Jul 13 01:07:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
Steps to reproduce:
I am not sure how to reproduce it; it only happened once in my cluster. Maybe it is because there are too many shards in my cluster? (9000+ shards, two nodes; when this error happened, we were creating around 4000 shards.)
Provide logs (if relevant):
log file