Description
We are using Spring Boot and Spring Cloud to realize a microservice architecture. During load tests we are facing lots of errors such as "EntityManager closed" and others if we shut the microservice down during load.
We are wondering whether there is an option to configure the embedded container so that it shuts its connector down and waits for an empty request queue before shutting down the complete application context.
If there is no such option (I did not find any), it would be great to extend Spring's shutdown hook to respect such requirements.
Activity
wilkinsona commented on Dec 2, 2015
We currently stop the application context and then stop the container. We did try to reverse this ordering but it had some unwanted side effects so we're stuck with it for now at least. That's the bad news.
The good news is that you can actually get a graceful shutdown yourself if you're happy to get your hands a bit dirty. The gist is that you need to pause Tomcat's connector and then wait for its thread pool to shut down before allowing the destruction of the application context to proceed. It looks something like this:
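A sketch of that approach, assuming a 30-second grace period and current (Spring Boot 2.x+) package names; on the 1.x line this comment was written against, TomcatConnectorCustomizer lives in a different package, and on recent Tomcat versions the connector's executor is Tomcat's own ThreadPoolExecutor rather than java.util.concurrent's, so the instanceof check would need adjusting:

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.apache.catalina.connector.Connector;
import org.springframework.boot.web.embedded.tomcat.TomcatConnectorCustomizer;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextClosedEvent;

// Declare this as a @Bean and register it with the Tomcat factory via
// factory.addConnectorCustomizers(gracefulShutdown) so it receives the Connector.
public class GracefulShutdown implements TomcatConnectorCustomizer, ApplicationListener<ContextClosedEvent> {

    private volatile Connector connector;

    @Override
    public void customize(Connector connector) {
        this.connector = connector;
    }

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        this.connector.pause(); // stop accepting new requests
        Executor executor = this.connector.getProtocolHandler().getExecutor();
        if (executor instanceof ThreadPoolExecutor) {
            try {
                ThreadPoolExecutor threadPoolExecutor = (ThreadPoolExecutor) executor;
                threadPoolExecutor.shutdown();
                // wait (here, up to 30 seconds) for in-flight requests to complete
                threadPoolExecutor.awaitTermination(30, TimeUnit.SECONDS);
            }
            catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        }
    }

}
```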
I think it makes sense to provide behaviour like this out of the box, or at least an option to enable it. However, that'll require a bit more work as it'll need to work with all of the embedded containers that we support (Jetty, Tomcat, and Undertow), cope with multiple connectors (HTTP and HTTPS, for example), and we'll need to think about the configuration options, if any, that we'd want to offer: switching it on or off, configuring the period that we wait for the thread pool to shut down, etc.
Issue title changed from "graceful shutdown of embedded container should be possible" to "Shut down embedded servlet container gracefully"
LutzStrobel commented on Dec 4, 2015
Thank you for your advice.
We currently simply shut down the embedded container and then wait a short time before closing the application context.
What do you think, will it be possible in the future to shut down a Spring Boot application more gracefully?
wilkinsona commented on Dec 4, 2015
Yes. As I said above "I think it makes sense to provide behaviour like this out of the box, or at least an option to enable it".
I'm going to re-open this issue as we'll use it to track the possible enhancement.
tkvangorder commented on Jan 19, 2016
+1 on this request. We ran into a similar problem when load testing and dropping a node from the test. @wilkinsona in your example, I was thinking of using an implementation of SmartLifecycle so I can ensure the connector is shut down first. You said you ran into issues shutting down Tomcat first?
bohrqiu commented on Jan 19, 2016
I think the right order is:
1. pause the IO endpoint: the web container just pauses (denies new incoming requests), and the RPC framework needs to notify its clients not to send new requests and to wait for the current requests to be processed
2. wait for the in-flight requests to be processed and the responses to be sent to the clients
3. close the IO endpoint
4. close the Spring services
5. close the logging system
so we do it as follows:
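A rough, hypothetical sketch of that ordering (not necessarily the commenter's actual implementation; registryClient and requestTracker below are assumed abstractions, not real APIs, and the pause-and-drain step itself is the same technique as in the Tomcat sketch above):

```java
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextClosedEvent;

// Assumed abstractions for illustration only
interface RegistryClient { void deregister(); }               // tells clients/load balancer to stop sending requests
interface InFlightRequestTracker { void awaitQuiescence(); }  // blocks until in-flight requests are done

class OrderedShutdownListener implements ApplicationListener<ContextClosedEvent> {

    private final RegistryClient registryClient;
    private final InFlightRequestTracker requestTracker;

    OrderedShutdownListener(RegistryClient registryClient, InFlightRequestTracker requestTracker) {
        this.registryClient = registryClient;
        this.requestTracker = requestTracker;
    }

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        registryClient.deregister();      // 1. pause the endpoint: no new traffic is routed here
        requestTracker.awaitQuiescence(); // 2. wait for current requests to be processed and answered
        // 3-5. closing the endpoint, the Spring beans and, last, the logging system is left to the
        // normal context close processing and Spring Boot's logging shutdown hook.
    }

}
```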
tkvangorder commented on Jan 19, 2016
This issue is actually a bit more complicated.
I know that SmartLifecycle can be used to set the order in which beans are notified of lifecycle events. However, there is no generalized way for a service to know it should start up/shut down before another service.
Consider the following:
A Spring Boot application is running an embedded servlet container and is also producing/consuming JMS messages.
On the close event, the servlet container really needs to pause the connector first and process its remaining work (any connections that have already been established).
We need to ensure this is the FIRST thing that happens, prior to the JMS infrastructure shutting down, because the work done inside Tomcat may rely on JMS.
The JMS infrastructure has a similar requirement: it must stop listening for messages and chew through any remaining messages it has already accepted.
I can certainly implement a SmartLifecycle class that sets the phase "very high", and I could even create two instances, one for the embedded servlet container and one for JMS, and ensure the correct order.
But in the spirit of Spring Boot, if I add a Tomcat container, it's my expectation that when the application shuts down, it will gracefully stop accepting connections, process remaining work, and exit.
It would be helpful if there was a mechanism to allow ordering to be expressed relative to other services: "JMS must start before Tomcat", "Tomcat must close before JMS". This would be similar to the AutoConfigureBefore/AutoConfigureAfter annotations that are used in Spring Boot.
One way to approach this might be to create an enum for the generalized services (this is not ideal, but I can't think of another way without introducing artificial, compile-time dependencies):
EMBEDDED_CONTAINER
JMS
DISCOVERY_CLIENT
...
The annotations could leverage the enums to order the lifecycle listeners.
For now, it's up to the developer to explicitly define the shutdown order of services via SmartLifecycle instances, which can get a bit messy and seems like extra work for something that should really work out of the box.
wilkinsona commented on Jan 19, 2016
@tkvangorder You don't need to use SmartLifecycle to gracefully shut down Tomcat as shown in my comment above. It happens in response to a ContextClosedEvent, which is fired at the very beginning of the context's close processing, before any of the beans in the context are processed.
Beyond this, Spring Framework already has support for closing things down in the correct order. For example, you can implement DisposableBean. When the container disposes of a bean, it will dispose of anything that depends on it first.
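For illustration, a minimal sketch of that dependency-driven destruction order (ConnectionPool and Consumer are made-up classes): closing the context destroys the consumer bean before the connectionPool bean it depends on.

```java
import org.springframework.beans.factory.DisposableBean;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

class ConnectionPool implements DisposableBean {
    @Override
    public void destroy() {
        System.out.println("closing connection pool");
    }
}

class Consumer implements DisposableBean {
    Consumer(ConnectionPool pool) {
        // the constructor dependency is what drives the destruction order
    }
    @Override
    public void destroy() {
        System.out.println("stopping consumer");
    }
}

@Configuration
class ShutdownOrderConfig {
    @Bean ConnectionPool connectionPool() { return new ConnectionPool(); }
    @Bean Consumer consumer(ConnectionPool connectionPool) { return new Consumer(connectionPool); }
}

public class ShutdownOrderDemo {
    public static void main(String[] args) {
        // prints "stopping consumer" and then "closing connection pool"
        new AnnotationConfigApplicationContext(ShutdownOrderConfig.class).close();
    }
}
```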
140 remaining items
dheerajjoshim commented on Sep 6, 2021
@gnagy Even in our deployment, we have HTTP endpoints and Kafka consumers. So we want to stop the Kafka consumers from consuming new messages and process the already-consumed messages within the grace period.
Did you end up implementing a custom method? Or overriding the GracefulShutdown implementation?
gnagy commented on Sep 7, 2021
I no longer have access to that project, but if I remember correctly the Spring Kafka integration already supported stop()-ing the listener container when the application context is shutting down; we just had to make sure our custom RetryTemplates in the consumers got notified as well.
dheerajjoshim commented on Sep 7, 2021
@gnagy Thanks. I saw that spring-kafka has a stop() method that is called on application context shutdown.
I think it would be nice to test a rolling upgrade under load to see if any messages are lost during shutdown.
zdaccount commented on Mar 18, 2022
Now there is the graceful shutdown feature (https://docs.spring.io/spring-boot/docs/current/reference/html/web.html#web.graceful-shutdown), but this feature uses a timeout and only waits within that timeout. But sometimes I want to wait until all the requests are processed, without a timeout. Could you please support this?
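For reference, a minimal application.properties sketch of that feature: server.shutdown enables it and the timeout property (30 seconds by default) bounds the wait.

```properties
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=30s
```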
bclozel commented on Mar 18, 2022
@zdaccount what would happen if one of those outstanding requests is blocked and never finishes? The application instance would then stay up for ever?
zdaccount commented on Mar 18, 2022
Yes, sometimes I want that. If the application instance hangs, then I may force kill it, but sometimes I don't want the instance itself to do this.
I think this is like how the web server processes requests. When the web server processes a request, there is no timeout on the processing logic (I assume this, but I may be wrong, since I am new to Spring Boot). If this is the case, then it is also reasonable to have no timeout for the last requests.
wilkinsona commented on Mar 18, 2022
I don’t think it’s reasonable to have no timeout at all. For one, it is highly unlikely that any network connection will survive indefinitely, so the server will be unable to send a response.
Instead, I would recommend configuring a timeout that’s slightly greater than the maximum time that you would wait before manually killing the application instance.
zdaccount commented on Mar 18, 2022
But there is no timeout on the ordinary processing of requests, right? If this is the case, then it also seems reasonable to have no timeout on the processing of the last requests.
wilkinsona commented on Mar 18, 2022
Requests that have the potential to be long-running should be async and will then be subject to the async request timeout. This timeout is in addition to timeouts at the network level.
In a Spring MVC app, a request can be made async by returning DeferredResult, among others, from a controller.
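A minimal sketch of such a controller (the /report endpoint and the use of the common ForkJoinPool are illustrative only); the container thread is released immediately and the pending request becomes subject to the async request timeout (spring.mvc.async.request-timeout):

```java
import java.util.concurrent.ForkJoinPool;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
class ReportController {

    @GetMapping("/report")
    DeferredResult<String> report() {
        DeferredResult<String> result = new DeferredResult<>();
        // complete the request from another thread; the servlet thread returns immediately
        ForkJoinPool.commonPool().submit(() -> {
            // long-running work would happen here
            result.setResult("done");
        });
        return result;
    }

}
```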
Azmisov commented on Apr 18, 2024
When I am testing, I am seeing the container (Tomcat) getting shut down before the application context / beans have shut down. From the logs, the order is:
Which is problematic for me: I have some async threads running some computations, and the server is publishing metrics/health of that computation via its API. When shutdown is triggered, I want the computations to gracefully exit, but keep the server running and accepting connections so that metrics/health are available during that shutdown period. I can wait for the computation to finish in application context or @PreDestroy hooks, but the Tomcat server shuts down anyway. Is there any solution that would allow the server to stay up while the async thread is finishing its computation?
wilkinsona commented on Apr 18, 2024
You'll have to implement SmartLifecycle with an appropriate phase so that you can participate in the lifecycle. Your implementation can then wait for the async tasks to complete. You'll want it to be called to stop before WebServerStartStopLifecycle is called. This ordering is controlled by the phase. You may also want to consider the phase of WebServerGracefulShutdownLifecycle.
If you have any further questions, please follow up on Stack Overflow. As mentioned in the guidelines for contributing, we prefer to use GitHub issues only for bugs and enhancements.
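A sketch of such a SmartLifecycle, assuming the computations run on an ExecutorService exposed as a bean; the phase and wait time are illustrative, and the exact phases of WebServerStartStopLifecycle and WebServerGracefulShutdownLifecycle should be checked against the Spring Boot version in use:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

import org.springframework.context.SmartLifecycle;
import org.springframework.stereotype.Component;

// During shutdown, phases are stopped in descending order, so a sufficiently high phase
// means this bean is stopped before the web server lifecycles and the container keeps
// serving metrics/health while the computation winds down.
@Component
class ComputationLifecycle implements SmartLifecycle {

    private final ExecutorService computationExecutor; // whatever runs the async computations
    private volatile boolean running;

    ComputationLifecycle(ExecutorService computationExecutor) {
        this.computationExecutor = computationExecutor;
    }

    @Override
    public void start() {
        this.running = true;
    }

    @Override
    public void stop() {
        this.computationExecutor.shutdown(); // ask the computations to finish
        try {
            // illustrative wait; pick a bound that suits the workload
            this.computationExecutor.awaitTermination(10, TimeUnit.MINUTES);
        }
        catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        this.running = false;
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }

    @Override
    public int getPhase() {
        return SmartLifecycle.DEFAULT_PHASE; // highest phase: stopped first on shutdown
    }

}
```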
Azmisov commented on Apr 18, 2024
Follow up, for those reading this thread in the future