
Add configuration options for variables in activeExpireCycle #5843


Closed
tejom wants to merge 1 commit

Conversation

@tejom commented Feb 15, 2019

  • Add an option for the number of keys looked at on each iteration, and an option to configure whether the loop should keep trying (a sketch of the loop in question follows below)

  • Update example configuration

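For illustration (not part of the patch, and not the actual expire.c code), here is a minimal self-contained simulation of the sampling loop whose hard-coded values these options would expose: pre-6.0 Redis samples 20 keys with a TTL per iteration and keeps trying while more than 1/4 of the sample turned out to be expired. All names and the keyspace model below are made up.

#include <stdio.h>
#include <stdlib.h>

/* Simulate one active expire cycle over a keyspace with `ttl_keys`
 * keys holding a TTL, `expired` of which are already expired.
 * Returns the number of iterations the cycle ran. */
static long simulate_cycle(long ttl_keys, long expired,
                           int keys_per_loop, int divisor) {
    long iterations = 0;
    long found;
    do {
        found = 0;
        for (int i = 0; i < keys_per_loop && ttl_keys > 0; i++) {
            /* Sample a random key with a TTL; reclaim it if expired. */
            double p = (double)expired / (double)ttl_keys;
            if ((double)rand() / RAND_MAX < p) {
                expired--;
                ttl_keys--;
                found++;
            }
        }
        iterations++;
        /* "Keep trying" while more than keys_per_loop/divisor of the
         * sample was expired (25% with the default divisor of 4). */
    } while (found > keys_per_loop / divisor);
    return iterations;
}

int main(void) {
    srand(42);
    /* 1M keys with TTLs, 30% of them already expired. */
    printf("defaults (20 keys/loop, divisor 4): %ld iterations\n",
           simulate_cycle(1000000, 300000, 20, 4));
    printf("tuned (100 keys/loop, divisor 10):  %ld iterations\n",
           simulate_cycle(1000000, 300000, 100, 10));
    return 0;
}
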
@ricardobeat

For future reference: https://blog.twitter.com/engineering/en_us/topics/infrastructure/2019/improving-key-expiration-in-redis.html

@antirez (Contributor) commented Apr 26, 2019

Hello @tejom, thanks for the PR! I think I'll end up merging this into Redis 5 & unstable, even though unstable is going to get a completely different implementation of expire soon, because Redis 5 will likely remain the main Redis version in many shops for another 1-2 years. Before merging, given that this will hit a production release, I want to check why the obscure slowdown reported in the blog showed up during development. I'll ping you shortly.

@tejom (Author) commented May 3, 2019

Hey @antirez, sounds good. I didn't miss anything from you, did I?

@antirez (Contributor) commented May 10, 2019

@tejom Hi again! In the blog post you say that commenting out the conditional made no difference because you use only one database. However, I want to understand whether you have Redis explicitly configured to have just one database, that is, what is the value of server.dbnum in your instances? Note that even if you use only one database but still have the default of 16 configured, the value of that global will be 16. Could you please clarify whether you just use 1 DB, or whether you really have Redis set to only allow 1 DB? Thanks!

@tejom (Author) commented May 10, 2019

@antirez Oh, I see. Yeah, it had the default of 16 configured. I wish I had written that section better, and that there hadn't been such a long gap between working on this, writing it up, and finally getting it published.

I guess this still ends up not doing anything in this test? The variable never changes: one db is used and 16 are configured by default.

if (dbs_per_call > server.dbnum || timelimit_exit)
        dbs_per_call = server.dbnum;

would effectively be this and get skipped over every time:

if (16 > 16 || timelimit_exit)
        dbs_per_call = server.dbnum;

@madolson (Contributor) commented Jun 24, 2019

@antirez Any update on when this might get accepted?

@antirez (Contributor) commented Jun 24, 2019

Hello @madolson, I'm waiting to have a bit of time to check this PR because, while I think it is a great addition, I'm not sure it should be exposed to the user in terms of iterations / divisor, which are implementation details.

@csunwold commented Sep 9, 2019

Hi @antirez. Checking in to see if you had considered this PR any further? My team is facing an issue almost exactly like what the Twitter team detailed in their blog. I'm very hopeful that either this or the radix tree based implementation you made reference to for Redis 6 could make a big improvement for us.

My own two cents: exposing this seems very similar to maxmemory-samples. I could see a concern in exposing the raw divisor as an implementation detail. Perhaps instead of making the divisor itself configurable, it could be a "target %" config? For example, instead of using a config value of 4 to represent that the cycle should continue when more than 25% of the sampled keys were expired, 25 itself could be the config value. The divisor would then be something like server.expired_threshold_divisor = 100 / atoi(argv[1]).
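For illustration, a minimal compilable sketch of that percentage-based threshold (the config name active_expire_threshold_pct and the helper function are made-up names for this sketch, not actual Redis code):

#include <stdio.h>

/* Stand-in for a server-wide config value, e.g. parsed from a
 * hypothetical "active-expire-threshold-pct 25" line in redis.conf. */
static int active_expire_threshold_pct = 25;

/* Return 1 if the expire cycle should run another iteration, i.e.
 * more than threshold% of the sampled keys were already expired. */
static int should_keep_expiring(unsigned long expired, unsigned long sampled) {
    return expired * 100 > sampled * (unsigned long)active_expire_threshold_pct;
}

int main(void) {
    /* With 20 keys sampled per loop (the pre-6.0 default):
     * 6 expired keys (30%) exceeds 25%, 5 expired keys (25%) does not,
     * matching the current hard-coded "expired > 20/4" check. */
    printf("%d\n", should_keep_expiring(6, 20)); /* prints 1 */
    printf("%d\n", should_keep_expiring(5, 20)); /* prints 0 */
    return 0;
}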

@yoav-steinberg (Contributor) commented Dec 14, 2021

This seems to be a slightly different approach from what was finally merged in 84b01f6 (I couldn't find a relevant issue/PR). See redis/redis.conf, lines 1118 to 1133 in c7dc17f:

# Redis reclaims expired keys in two ways: upon access when those keys are
# found to be expired, and also in background, in what is called the
# "active expire key". The key space is slowly and interactively scanned
# looking for expired keys to reclaim, so that it is possible to free memory
# of keys that are expired and will never be accessed again in a short time.
#
# The default effort of the expire cycle will try to avoid having more than
# ten percent of expired keys still in memory, and will try to avoid consuming
# more than 25% of total memory and to add latency to the system. However
# it is possible to increase the expire "effort" that is normally set to
# "1", to a greater value, up to the value "10". At its maximum value the
# system will use more CPU, longer cycles (and technically may introduce
# more latency), and will tolerate less already expired keys still present
# in the system. It's a tradeoff between memory, CPU and latency.
#
# active-expire-effort 1

I guess this should be closed.

@yoav-steinberg added the state:to-be-closed label (requesting the core team to close the issue) and removed the REVIEW-AND-MERGE label on Dec 14, 2021
@oranagra (Member)

Right, IIRC this and other discussions with Twitter are what led Salvatore to implement the changes in 84b01f6.
I do hope these changes solved your concerns (and that you were able to converge back to vanilla Redis code and drop this private patch).

For the record, the big difference between the two approaches is that this PR exposes tunables of the internal implementation to the config, which creates coupling with the internals and means it will be harder to change them later.
What Salvatore implemented only exposes a generic active-expire-effort (1..10), which can in theory be applied to any other mechanism we'll want to implement.
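As a rough illustration of that design (a sketch only, not the actual expire.c code: the struct, names, and formulas below are assumptions), a single effort knob can scale several internal tunables at once:

#include <stdio.h>

struct expire_tunables {
    int keys_per_loop;    /* keys sampled per iteration */
    int acceptable_stale; /* % of already expired keys tolerated in memory */
};

/* Map the user-facing effort (1..10) onto internal tunables. */
static struct expire_tunables tunables_for_effort(int effort) {
    struct expire_tunables t;
    int e = effort - 1;                 /* rescale 1..10 to 0..9 */
    t.keys_per_loop = 20 + 20 / 4 * e;  /* sample more keys at higher effort */
    t.acceptable_stale = 10 - e;        /* tolerate fewer stale keys */
    return t;
}

int main(void) {
    for (int effort = 1; effort <= 10; effort++) {
        struct expire_tunables t = tunables_for_effort(effort);
        printf("effort %2d -> keys/loop %3d, acceptable stale %2d%%\n",
               effort, t.keys_per_loop, t.acceptable_stale);
    }
    return 0;
}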

Closing this one, feel free to post more details or open another PR if the latest code in unstable isn't satisfactory.

@oranagra closed this on Dec 15, 2021