Fragmentation ratio < 1 but not swapping. #946
Comments
@quentin389 hi. used_memory_rss and mem_fragmentation_ratio have nothing to do with swap. mem_fragmentation_ratio = used_memory_rss / zmalloc_used_memory, and in top your system is using only 8348k of swap.
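As a minimal sketch (using the values from the INFO output in this thread), the ratio works out exactly as reported:

```python
# mem_fragmentation_ratio as Redis computes it: RSS divided by the
# allocator-tracked byte count. Values taken from the INFO dump below.
used_memory = 7759258544       # bytes tracked by zmalloc ("used_memory")
used_memory_rss = 6965792768   # resident set size as seen by the OS

mem_fragmentation_ratio = used_memory_rss / used_memory
print(round(mem_fragmentation_ratio, 2))  # → 0.9, matching the reported 0.90
```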
OK, to make myself clearer:
So if there is some obvious reason this happens and it's not an error, then I'd appreciate an explanation. I also think that in that situation the documentation should be updated to say that when used >> rss it may mean that we are swapping.
Hello, in your case used physical memory appears to be smaller than virtual memory, but you report no swapped pages. It is likely that in your data set there are many blank pages (all filled with zero), so multiple virtual pages are actually mapped to the same zero page. Anyway this is never going to be an issue; the worst case would be an error in the way Redis reports memory, but it will have no effect on the stability or functionality of the server. Thanks for reporting, closing because it is not a bug but just an effect of OS memory management.
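A small Linux-only sketch (not from the thread) illustrating the effect described above: anonymous pages that are mapped but never written add to virtual size (VIRT) without consuming resident memory (RES) until they are actually touched:

```python
# Sketch, assumes Linux: ru_maxrss is reported in kilobytes there.
# Mapping anonymous memory barely moves RSS; writing to it faults the
# pages in and RSS grows by roughly the mapped size.
import mmap
import resource

def peak_rss_kb():
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

SIZE = 200 * 1024 * 1024          # 200 MB of anonymous virtual memory

before = peak_rss_kb()
m = mmap.mmap(-1, SIZE)           # mapped but untouched: no physical pages yet
untouched = peak_rss_kb()

for off in range(0, SIZE, 4096):  # write one byte per page to fault them in
    m[off] = 1
touched = peak_rss_kb()
m.close()

print(untouched - before)         # small: mapping alone costs almost nothing
print(touched - before)           # large: roughly SIZE/1024 kilobytes
```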
If it's an error in Redis memory reporting, wouldn't it have the negative effect of capping the used memory too early? E.g. Redis could use this 0.7 GB for additional keys, but it doesn't because used_memory is already equal to maxmemory.
@quentin389 I don't think it is possible that in the rss/virtual equation the problem can be on the virtual side, the one used by Redis internals, since this is just a counter that is incremented and decremented, so we should be safe. Anyway I don't think it is a problem with memory reporting at all, just that RSS is influenced by many things, including shared and zeroed pages.
OK, thanks for the explanation :)
No prob at all! Cheers.
We have a fragmentation ratio of 0.9, but our Redis does not use swap (I think). If that's the case, then where is the additional memory it supposedly uses? Maybe it's a stats error?
Our Redis is configured not to save anything to disk and to evict using 'volatile-lru'.
From what I can see below, the numbers just don't add up. Redis reports that it uses 7.23 GB, which is 0.7 GB above the RSS usage. But even used swap + buffered memory doesn't cover that.
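The mismatch can be checked with simple arithmetic on the figures below (a sketch using the values from top and INFO):

```python
# Sanity check on the reported numbers: the gap between used_memory and
# used_memory_rss is far larger than the swap actually in use, so swap
# cannot be the explanation.
used_memory = 7759258544        # bytes, from INFO
used_memory_rss = 6965792768    # bytes, from INFO
swap_used = 8348 * 1024         # bytes, from top ("8348k used")

gap = used_memory - used_memory_rss
print(gap / 2**30)              # ~0.74 GB unaccounted for
print(gap / swap_used)          # swap covers barely 1% of the gap
```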
Our top says:
```
Mem:   8029276k total,  7285248k used,   744028k free,  135528k buffers
Swap:  4095992k total,     8348k used,  4087644k free,  144400k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2662 root  20   0 16.0g 6.5g 1000 R  3.5 84.7 486:06.36 redis-server
```
And 'info all' says:
```
# Server
redis_version:2.6.2
redis_git_sha1:00000000
redis_git_dirty:0
redis_mode:standalone
os:Linux 2.6.32-279.11.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.6
process_id:2662
run_id:8c1fc7d583d30f8c7e12b07e33358c2b8053b519
tcp_port:6379
uptime_in_seconds:574022
uptime_in_days:6
lru_clock:1871118

# Clients
connected_clients:342
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:7759258544
used_memory_human:7.23G
used_memory_rss:6965792768
used_memory_peak:7761372816
used_memory_peak_human:7.23G
used_memory_lua:31744
mem_fragmentation_ratio:0.90
mem_allocator:jemalloc-3.0.0

# Persistence
loading:0
rdb_changes_since_last_save:63327067
rdb_bgsave_in_progress:0
rdb_last_save_time:1360314445
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok

# Stats
total_connections_received:109245
total_commands_processed:369676297
instantaneous_ops_per_sec:599
rejected_connections:0
expired_keys:11906158
evicted_keys:42395410
keyspace_hits:307011748
keyspace_misses:61787169
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0

# Replication
role:master
connected_slaves:0

# CPU
used_cpu_sys:18075.06
used_cpu_user:11096.02
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Commandstats
cmdstat_get:calls=252954141,usec=1815720351,usec_per_call=7.18
cmdstat_setex:calls=59305361,usec=476603250,usec_per_call=8.04
cmdstat_del:calls=20246502,usec=89782407,usec_per_call=4.43
cmdstat_exists:calls=66480,usec=243021,usec_per_call=3.66
cmdstat_mget:calls=33655918,usec=598846435,usec_per_call=17.79
cmdstat_lpush:calls=1396422,usec=15847508,usec_per_call=11.35
cmdstat_llen:calls=9396,usec=52856,usec_per_call=5.63
cmdstat_lrange:calls=9396,usec=214068,usec_per_call=22.78
cmdstat_select:calls=6,usec=27,usec_per_call=4.50
cmdstat_keys:calls=1887,usec=435023249,usec_per_call=230536.97
cmdstat_dbsize:calls=7,usec=27,usec_per_call=3.86
cmdstat_multi:calls=1015377,usec=3725495,usec_per_call=3.67
cmdstat_exec:calls=1015377,usec=62748743,usec_per_call=61.80
cmdstat_flushall:calls=1,usec=8713549,usec_per_call=8713549.00
cmdstat_info:calls=12,usec=3988,usec_per_call=332.33
cmdstat_config:calls=7,usec=443,usec_per_call=63.29
cmdstat_time:calls=7,usec=42,usec_per_call=6.00

# Keyspace
db0:keys=620511,expires=620347
```
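For anyone reproducing these checks, INFO output in this key:value format is easy to parse mechanically. A minimal sketch (`parse_info` is a hypothetical helper, shown here on a trimmed sample of the dump above):

```python
# Parse redis INFO-style "key:value" lines into a dict, skipping blank
# lines and "# Section" headers, then recompute the fragmentation ratio.
sample = """\
# Memory
used_memory:7759258544
used_memory_rss:6965792768
mem_fragmentation_ratio:0.90
"""

def parse_info(text):
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # blank line or section header like "# Memory"
        key, _, value = line.partition(':')
        info[key] = value
    return info

info = parse_info(sample)
ratio = int(info['used_memory_rss']) / int(info['used_memory'])
print(info['mem_fragmentation_ratio'], round(ratio, 2))
```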