
Fix use-after-free threading bug in ClockCache #8261


Closed
wants to merge 5 commits

Conversation

pdillinger (Contributor) commented May 4, 2021

Summary: In testing for #8225 I found cache_bench would crash with
-use_clock_cache, as well as db_bench -use_clock_cache, but not
single-threaded. Smaller cache size hits failure much faster. ASAN
reported the failure as calling malloc_usable_size on the key pointer
of a ClockCache handle after it was reportedly freed. On detailed
inspection I found this bad sequence of operations for a cache entry:

state=InCache=1,refs=1
[thread 1] Start ClockCacheShard::Unref (from Release, no mutex)
[thread 1] Decrement ref count
state=InCache=1,refs=0
[thread 1] Suspend before CalcTotalCharge (no mutex)

[thread 2] Start UnsetInCache (from Insert, mutex held)
[thread 2] clear InCache bit
state=InCache=0,refs=0
[thread 2] Calls RecycleHandle (based on pre-updated state)
[thread 2] Returns to Insert which calls Cleanup which deletes key

[thread 1] Resume ClockCacheShard::Unref
[thread 1] Read key in CalcTotalCharge

To fix this, I've added a field to the handle to store the metadata
charge so that we can efficiently remember everything we need from
the handle in Unref. We must not read from the handle again if we
decrement the count to zero with InCache=1, which means we don't own
the entry and someone else could eject/overwrite it immediately.

Note before this change, on amd64 sizeof(Handle) == 56 even though there
are only 48 bytes of data. Grouping together the uint32_t fields would
cut it down to 48, but I've added another uint32_t, which takes it
back up to 56. Not a big deal.
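
A minimal sketch of the idea described above (not the actual patch; field and helper names such as meta_charge, kOneRef, and CountRefs are illustrative): cache the metadata charge in the handle so Unref can compute the total charge from values captured before the reference count can drop to zero.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

struct CacheHandle {
  const char* key_data = nullptr;   // heap-allocated key, freed on recycle
  size_t key_length = 0;
  size_t charge = 0;                // user-supplied charge
  uint32_t meta_charge = 0;         // NEW: precomputed metadata charge
  std::atomic<uint32_t> flags{0};   // bit 0: InCache, remaining bits: ref count
};

constexpr uint32_t kInCacheBit = 1u;
constexpr uint32_t kOneRef = 2u;

inline bool InCache(uint32_t flags) { return (flags & kInCacheBit) != 0; }
inline uint32_t CountRefs(uint32_t flags) { return flags >> 1; }

// Returns true if the caller is now responsible for recycling the handle.
// total_charge_out is always set, using only values captured up front.
bool Unref(CacheHandle* handle, size_t* total_charge_out) {
  // Read everything we might need BEFORE dropping the reference. Once the
  // count reaches zero while InCache=1, we no longer own the entry and
  // another thread (holding the mutex) may recycle it, freeing key_data.
  const size_t total_charge = handle->charge + handle->meta_charge;
  const uint32_t prev =
      handle->flags.fetch_sub(kOneRef, std::memory_order_acq_rel);
  *total_charge_out = total_charge;  // safe: computed from locals only
  // We own the handle only if we dropped the last reference of an entry
  // that had already been removed from the cache.
  return CountRefs(prev) == 1 && !InCache(prev);
}
```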

Also fixed DisownData to cooperate with ASAN as in LRUCache.
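
A sketch of the "cooperate with ASAN" pattern borrowed from LRUCache::DisownData (illustrative only; the macro handling and member names here are assumptions, not the exact diff): when built with AddressSanitizer, intentionally dropping the shard pointers would be reported as a leak, so the disown step is skipped and the destructor frees normally.

```cpp
#include <cstddef>

#if defined(__clang__)
#if __has_feature(address_sanitizer)
#define SKETCH_ASAN 1
#endif
#elif defined(__SANITIZE_ADDRESS__)
#define SKETCH_ASAN 1
#endif

struct ShardedCacheSketch {
  void* shards_ = nullptr;   // normally destroyed in the destructor
  size_t num_shards_ = 0;

  void DisownData() {
#ifndef SKETCH_ASAN
    // Intentionally "leak" the shards so process shutdown skips destroying
    // every cached entry. ASAN builds keep the pointer so it is freed
    // normally and leak detection stays quiet.
    shards_ = nullptr;
    num_shards_ = 0;
#endif
  }
};
```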

Test Plan: Manual + adding use_clock_cache to db_crashtest.py

Base performance
./cache_bench -use_clock_cache
Complete in 17.060 s; QPS = 2458513
New performance
./cache_bench -use_clock_cache
Complete in 17.052 s; QPS = 2459695

Any difference is easily buried in small noise.

Crash test shows still more bug(s) in ClockCache, so I'm expecting to
disable ClockCache for production use in a follow-up PR (if we
can't find and fix the bug(s)).

pdillinger (Author)

Unfortunately, there is at least one more bug in ClockCache that shows up in the crash test:

***db_stress: cache/clock_cache.cc:486: bool rocksdb::{anonymous}::ClockCacheShard::Unref(rocksdb::{anonymous}::CacheHandle*, bool, rocksdb::{anonymous}::CleanupContext*): Assertion `CountRefs(flags) > 0' failed.***

I tried a few possible fixes, but none worked. At least cache_bench seems to work well enough to unblock me on #8225.

I think I will update this PR to disable the ClockCache as a feature, keeping it for testing only when/if someone fixes it.

pdillinger (Author)

I suspect this race:

[thread 1] Lookup starts, up to Ref()
[thread 2] Erase/evict the entry just looked up
[thread 1] Ref() the handle, even though it's in the recycle bin
[thread 2] Insert recycles that handle; handle->flags.store obliterates the other thread's Ref

This would be fixable by never blindly overwriting flags, only making relative updates (see the sketch below). But I am not going to invest more in this to try to reproduce the race and validate the fix.
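
A hypothetical sketch of the "relative updates only" fix (not code from this PR; the constants and function names are assumptions): instead of Insert/recycle blindly storing a fresh flags value, which would erase a Ref() taken concurrently by a Lookup, touch only the bits being changed and leave the reference count alone.

```cpp
#include <atomic>
#include <cstdint>

constexpr uint32_t kInCacheBit = 1u;

// Racy pattern: a plain store wipes out any references another thread took
// between the state check and this write.
void ResetFlagsBlindly(std::atomic<uint32_t>& flags) {
  flags.store(kInCacheBit, std::memory_order_release);
}

// Relative update: set InCache while preserving the existing ref count.
void SetInCacheRelative(std::atomic<uint32_t>& flags) {
  flags.fetch_or(kInCacheBit, std::memory_order_acq_rel);
}
```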

pdillinger requested a review from mrambacher on May 4, 2021
pdillinger (Author)

Actually, I want to do the ClockCache disabling in another PR for cleaner git history. I'm not adding to HISTORY.md for this one because I'm expecting this bug fix to be irrelevant for production users (in the near term, at least).

facebook-github-bot (Contributor)

@pdillinger has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

mrambacher (Contributor) left a comment

LGTM.

Do you want to put a TODO comment about the race condition in the code or enter that as a new issue to track elsewhere?

mrambacher (Contributor) left a comment

LGTM.

facebook-github-bot (Contributor)

@pdillinger has updated the pull request. You must reimport the pull request before landing.

facebook-github-bot (Contributor)

@pdillinger has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

pdillinger (Author)

I added some notes about ClockCache being broken in case we forget to follow up with any sort of functionality disabling.

facebook-github-bot (Contributor)

@pdillinger merged this pull request in 3b981ea.

pdillinger added a commit to pdillinger/rocksdb that referenced this pull request May 6, 2021
Summary: From facebook#8261

Test Plan: briefly make blackbox_crash_test
facebook-github-bot pushed a commit that referenced this pull request May 6, 2021
Summary:
From #8261

Pull Request resolved: #8277

Test Plan: briefly make blackbox_crash_test

Reviewed By: zhichao-cao

Differential Revision: D28270648

Pulled By: pdillinger

fbshipit-source-id: 9bfd46c5a1a449165f6597bddb17af910331773f
ZacAttack

It seems this bug is still referenced in the documentation: https://github.com/facebook/rocksdb/wiki/Block-Cache#clock-cache

WARNING: The ClockCache implementation has at least one remaining bug that could lead to crash or data corruption. Please do not use ClockCache until this is fixed.

Does the above statement still stand in latest release, or should the documentation be updated? Hoping to use clock cache in production.
