Kubernetes is vulnerable to stale reads, violating critical pod safety guarantees #59848
Comments
Note that we added this to improve large cluster performance, so we are almost certainly going to regress to some degree (how much has been mitigated by other improvements in the last year is uncertain)
/sub
Disabling resourceVersion=0 for lists when more than one master is present is an option as well. As Jordan noted, the optimization here is impactful because we avoid having to fetch many full pod lists from etcd when we only return a subset to each node. It’s possible we could require all resourceVersion=0 calls to acquire a read lease on etcd, which bounds the delay but doesn’t guarantee happens-before if the cache is delayed. If the watch returned a freshness guarantee we could synchronize on that as well.
We can do a synthetic write to the range and wait until it is observed by the cache, then service the rv=0. The logical equivalent is a serializable read on etcd for the range, but we need to know the highest observed RV on the range. We can accomplish that by executing a range request with min create and mod revisions equal to the latest observed revision, and then performing the watch cache list at the highest RV on the range. So:
We can mitigate the performance impact by ensuring that an initial list from the watch cache preserves happens-before. We can safely serve historical lists (rv=N) at any time. The watch cache already handles rv=N mostly correctly. So the proposed change here is:
That resolves this issue.
Serializable read over the range, that is.
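A rough sketch of that idea, assuming a hypothetical waitCacheAtLeast hook on the watch cache (the package and helper names here are illustrative, not the real cacher API): read the current revision of the pod range from etcd, then block the rv=0 list until the cache has caught up to it.

```go
// Sketch only: a freshness barrier before serving a resourceVersion=0 list
// from the watch cache. waitCacheAtLeast stands in for whatever the cacher
// would expose to block until it has applied events up to an etcd revision.
package freshness

import (
	"context"
	"fmt"

	clientv3 "go.etcd.io/etcd/client/v3"
)

type waitCacheAtLeast func(ctx context.Context, rev int64) error

func serveFreshList(ctx context.Context, cli *clientv3.Client, wait waitCacheAtLeast) error {
	// A linearizable (quorum) read of the range header; Header.Revision is the
	// store revision at the time of the read, so nothing in the pod range can
	// be newer than it. The serializable-read-under-a-lease variant discussed
	// above would be cheaper but only bounds, rather than eliminates, staleness.
	resp, err := cli.Get(ctx, "/registry/pods/",
		clientv3.WithPrefix(), clientv3.WithKeysOnly(), clientv3.WithLimit(1))
	if err != nil {
		return err
	}
	// Block until the watch cache has observed everything up to that revision;
	// only then is the cached list safe to serve for this request.
	if err := wait(ctx, resp.Header.Revision); err != nil {
		return fmt.Errorf("watch cache behind revision %d: %w", resp.Header.Revision, err)
	}
	// ... serve the list from the watch cache at a revision >= resp.Header.Revision ...
	return nil
}
```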
Also:
If this MUST be resolved for 1.10, please add status/approved-for-milestone to it before Code Freeze. Thanks!
Hi Clayton, could you go ahead and add the "approved-for-milestone" label to this, as well as a status (in progress)? That will help it stay in the milestone if this is a 1.10 blocker. Thanks!
Caveat on the label - still trying to identify how we fix this - but no matter what, because this is a critical data integrity issue, we'll end up backporting this several releases. Will try to get a better estimate of the timeframe soon.
Oops, fixing labels.
This can happen on a singleton master using an HA etcd cluster:
Why do we need (3) and (4)?
In the second case, the reflector may retry without setting any RV, for a consistent read from etcd (that should be pretty rare, so it should be fine from a performance point of view). @smarterclayton What am I missing?
@kubernetes/sig-scalability-misc
Please stay in close contact with the release team on this.
Hi, maybe it would be a good idea to have a set of options to completely disable the watch cache? These options would be helpful for testing purposes, but would also open the way more broadly for etcd shims such as Kine or FoundationDB.
What do you mean by local cache? The
I am not very familiar with exactly how many caching layers there are between etcd and the final client, but the idea would be to deactivate all of them except the one in the final client (we do not convert Watch into Get semantics).
@julienlau you might want to read the details shared in this project, explaining the time travel issues in detail and how to detect/prevent them: https://github.com/sieve-project/sieve
Just setting a status: this is still fundamentally broken in Kube, and the use of the watch cache is unsafe in HA apiservers in a way that allows two kubelets to be running a pod with the same name at the same time, which violates assumptions that statefulsets and PVCs depend on. Any cache that is HA must establish that it has the freshest data (data from time T > time the request was made) to preserve the guarantees that Kube controllers implicitly rely on. I have not seen a solution proposed other than the watch cache being required to verify the cache is up to date before serving a LIST @ rv=0, but we do need to address this soon. I'm open to arguments that the pod exclusion property is not fundamental, but I haven't seen any credible ones in the last five years :)
Welcome to the CAP theorem, I guess?
Maybe defining specific test cases to work it out would help?
I've wanted to post an update on this for quite some time and am finally getting to it (motivated by a need to link it). With #83520 (so since 1.18), this problem has much lower exposure. That particular PR ensures that when a reflector has to relist (e.g. because of out-of-history in watch or for whatever other reason), the relist will NOT go back in time: we either list from the cache by providing the currently known RV, which ensures that our LIST will be at least that fresh, or we list from etcd using quorum reads. This means that within a single incarnation of a component, it will never go back in time.
The remaining (and unsolved) case is what happens on component start. It still lists from the cache then (without any preconditions), which means it can go back in time compared to where it was before it was restarted. But within a single incarnation of a component, we're now safe.
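A simplified illustration of that relist behavior (the helper and package names are made up, not the actual reflector code in client-go; the list options are standard client-go API): the relist either carries the last-observed resourceVersion, so a cache-served response can never be older than what the reflector has already seen, or leaves it empty to force a quorum read from etcd.

```go
// Rough illustration of the relist rule: never relist older than what we
// have already observed within one incarnation of a component.
package relist

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func relistPods(ctx context.Context, c kubernetes.Interface, lastSeenRV string) error {
	// Passing the last-observed resourceVersion lets the apiserver serve the
	// list from its watch cache only if the result is at least that fresh;
	// an empty string would instead force a quorum read from etcd.
	_, err := c.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		ResourceVersion: lastSeenRV,
	})
	return err
}
```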
Update appreciated. |
Naive solution, which is likely not applicable, but I'll ask nevertheless: is listing from etcd with a quorum read an option here?
This breaks scalability (especially badly for some resources, pods in particular), because you can't list "pods from my node" from etcd - you can only list all pods, deserialize them in kube-apiserver, and filter out the rest. For each node independently. If you have a 5k-node cluster with 150k pods that had an incident and all of the kubelets are now starting at the same time...
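For context, a sketch of the kind of per-node list involved (the helper and package names are made up; the selector and list options are standard client-go/apimachinery API): the spec.nodeName filter is evaluated inside kube-apiserver because etcd only supports key-prefix range reads, so the quorum-read version of this call re-reads every pod in the cluster once per node.

```go
// Sketch of the per-node pod list: the nodeName filter runs in
// kube-apiserver, not etcd, so a quorum read here means reading all pods.
package nodelisting

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

func listMyPods(ctx context.Context, c kubernetes.Interface, nodeName string) error {
	_, err := c.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		// etcd only supports key-prefix range reads, so kube-apiserver fetches
		// the full pod list and evaluates this selector in memory per request.
		FieldSelector: fields.OneTermEqualSelector("spec.nodeName", nodeName).String(),
		// Empty resourceVersion = quorum read from etcd; "0" = watch cache.
		ResourceVersion: "",
	})
	return err
}
```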
With https://github.com/sieve-project/sieve we found several bugs in real-world controllers, some of them caused by time-travel anomalies. While not a perfect mitigation (because it always depends on the problem domain), using optimistic concurrency control patterns not only during updates but also for (conditional) deletes can mitigate some of the time-travel anomalies by fencing off operations from stale views. Again, not perfect, but a good practice that does not sacrifice scalability. On smaller clusters, disabling the API server cache might be a safe alternative, too. This eliminates the aforementioned (re)start time-travel anomaly in controllers. Thoughts @wojtek-t ?
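One concrete form of that fencing, as a sketch rather than code from any particular controller (the helper and package names are made up; Preconditions are real apimachinery API): make deletes conditional on the UID and resourceVersion the controller last observed, so an operation driven by a stale view fails with a conflict instead of removing a newer incarnation of the object.

```go
// Sketch of a fenced delete: it only succeeds if the object still has the
// UID and resourceVersion the controller observed.
package fencing

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func deleteIfUnchanged(ctx context.Context, c kubernetes.Interface, ns, name string, uid types.UID, rv string) error {
	// A mismatch on either precondition yields a conflict error, turning a
	// time-travel delete into a retriable failure instead of data loss.
	return c.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
		Preconditions: &metav1.Preconditions{
			UID:             &uid,
			ResourceVersion: &rv,
		},
	})
}
```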
Sure - those are all good, but they are only partial mitigations, not a full solution. And what we're trying to do is provide a full solution.
When we added resourceVersion=0 to reflectors, we didn't properly reason about its impact on nodes. Its current behavior can cause two nodes to run a pod with the same name at the same time when using multiple api servers, which violates the pod safety guarantees on the cluster. Because a read serviced by the watch cache can be arbitrarily delayed, a client that connects to that api server can read an arbitrarily old history. We explicitly use quorum reads against etcd to prevent this.
Scenario:

1. T1: The StatefulSet controller creates pod-0 (uid 1), which is scheduled to node-1
2. T2: pod-0 is deleted as part of a rolling upgrade
3. node-1 sees that pod-0 is deleted and cleans it up, then deletes the pod in the api
4. The StatefulSet controller creates a second pod-0 (uid 2), which is assigned to node-2
5. node-2 sees that pod-0 has been scheduled to it and starts pod-0
6. node-1 crashes and restarts, then performs an initial list of pods scheduled to it against an API server in an HA setup (more than one API server) that is partitioned from the master (the watch cache is arbitrarily delayed). The watch cache returns a list of pods from before T2
7. node-1 fills its local cache with a list of pods from before T2
8. node-1 starts pod-0 (uid 1) while node-2 is already running pod-0 (uid 2)

This violates pod safety. Since we support HA api servers, we cannot use resourceVersion=0 from reflectors on the node, and probably should not use it on the masters. We can only safely use resourceVersion=0 after we have retrieved at least one list, and only if we verify that the resourceVersion is in the future.

@kubernetes/sig-apps-bugs @kubernetes/sig-api-machinery-bugs @kubernetes/sig-scalability-bugs This is a fairly serious issue that can lead to cluster identity guarantees being lost, which means clustered software cannot run safely if it assumes that the pod safety guarantee prevents two pods with the same name from running on the cluster at the same time. The user impact is likely loss of critical data.
This is also something that could happen for controllers - during a controller lease failover the next leader could be working from a very old cache and undo recently completed work.
No matter what, the first list of a component with a clean state that must preserve "happens-before" must perform a live quorum read against etcd to fill its cache. That can only be done by omitting resourceVersion=0.
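Restated as a minimal sketch (the helper and package names are made up; the list options are standard client-go API): the first list of a freshly started component leaves resourceVersion empty, which forces a linearizable read from etcd, whereas resourceVersion=0 would allow an arbitrarily stale watch-cache response.

```go
// Minimal sketch of the safe initial list: leave resourceVersion empty on the
// first list so it is a quorum read from etcd rather than a cache hit.
package startup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func initialPodList(ctx context.Context, c kubernetes.Interface) error {
	// ResourceVersion "0" would accept whatever the watch cache has, however
	// stale; "" forces a quorum read and preserves happens-before for the
	// component's first view of the cluster.
	_, err := c.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		ResourceVersion: "",
	})
	return err
}
```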
Fixes:
1 is a pretty significant performance regression, but is the most correct and safest option (just like when we enabled quorum reads everywhere). 2 is more complex, and there are a few people trying to remove the monotonicity guarantees from resource version, but it would retain most of the performance benefits of using this in the reflector. 3 is probably less complex than 2, but I'm not positive it actually works. 4 is hideous and won't fix other usage.
Probably needs to be backported to 1.6.