Graceful termination in Kubernetes with ASP.NET Core

Using a container-orchestration technology like Kubernetes, running applications in small containers, and scaling out horizontally rather than scaling a single machine up has numerous benefits: flexible allocation of raw resources among different services, being able to precisely adjust the number of instances we’re running to the volume of traffic we’re receiving, and being forced to run our applications in immutable containers, which makes our releases repeatable and thus easier to reason about.

On the other hand, we also face several challenges inherent to this architectural style, due to our components being inevitably distributed, and our containers constantly being shuffled around on the “physical” (technically, most probably still virtual) nodes.

One of these challenges is that because Kubernetes is constantly scaling our services up and down, and possibly evicting our pods from certain nodes and moving them to others, instances of our services will constantly be stopped and started up. This is something we have to worry about much less if we have a fixed number of server machines to which we’re directly deploying our application. In that case, our services will only be stopped when we’re deploying, or in exceptional cases, like if they crash or the server machines have to be restarted for some reason.

When implementing a service, we always have to think through how it will behave if it’s suddenly stopped. Will it leave something in an inconsistent state that we have to clean up? Can it be doing a long-running operation that we need to cancel? Do we have to send a signal to notify some other component that this instance is stopping?

In this post I’d like to focus only on the simplest scenario: we have a REST API implemented in ASP.NET Core, which doesn’t have any of the above issues.
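Even in this simple case it’s useful to see where the shutdown signal surfaces in our code. When Kubernetes stops a pod, it sends the container a SIGTERM, which the .NET host turns into the ApplicationStopping event of IHostApplicationLifetime. The snippet below is only a minimal sketch, assuming the ASP.NET Core 6+ minimal hosting model; the endpoint and the log message are made up for illustration.

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// A trivial endpoint standing in for the real REST API.
app.MapGet("/", () => "Hello from the pod!");

// ApplicationStopping fires when the host receives SIGTERM,
// which is what Kubernetes sends when it stops the pod.
app.Lifetime.ApplicationStopping.Register(() =>
{
    // Last chance to cancel long-running work, flush state,
    // or notify other components that this instance is going away.
    Console.WriteLine("SIGTERM received, shutting down.");
});

app.Run();
```

Keep in mind that by default Kubernetes waits 30 seconds (the pod’s terminationGracePeriodSeconds) between sending the SIGTERM and forcibly killing the container, so anything registered here has to complete within that window.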