Closed
Description
In many scenarios, etcd runs on designated nodes that are very restrictive about outside traffic. If we want to monitor etcd, we don't need access to any critical APIs, just /metrics. However, it's generally not feasible to run Prometheus on the same nodes as etcd. Having etcd expose /metrics on a designated port that can be opened up relatively safely would solve those scenarios.
It could be an optional flag that, if set, exposes an additional /metrics endpoint on another address.
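For illustration only, here is a minimal Go sketch of the kind of behavior being proposed, using the Prometheus Go client; the flag name listen-metrics-addr is made up for this sketch and is not an actual etcd flag:

```go
package main

import (
	"flag"
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Hypothetical flag name for illustration only; not etcd's actual flag.
	metricsAddr := flag.String("listen-metrics-addr", "", "optional extra address that serves only /metrics")
	flag.Parse()

	if *metricsAddr != "" {
		// Metrics-only mux: nothing besides /metrics is registered on it, so
		// the port can be opened to the monitoring network relatively safely.
		mux := http.NewServeMux()
		mux.Handle("/metrics", promhttp.Handler())
		go func() { log.Fatal(http.ListenAndServe(*metricsAddr, mux)) }()
	}

	// Stand-in for etcd's existing, restricted client-facing server, which
	// keeps serving its own APIs (and /metrics) as before.
	select {}
}
```

Prometheus could then scrape only that extra address while the rest of the etcd API stays firewalled.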
heyitsanthony commented on Jun 14, 2017
@fabxc would etcd get an allocation on https://github.com/prometheus/prometheus/wiki/Default-port-allocations or is that something different?
brancz commented on Jun 14, 2017
That would generally make sense @heyitsanthony. Next one up is usually marked; it is 9267 right now. This should of course be configurable if a user desires to bind it differently.
fabxc commented on Jun 14, 2017
brancz commented on Jun 14, 2017
FWIW, certainly not all exporters and applications abide by it, so with etcd being a relatively prominent application I think it's valid to choose whatever we like best. The wiki page is also open and I don't think anyone monitors the changes much, so its reliability is questionable either way (I've brought this up before, but it was decided to keep it as guidance for now).
xiang90 commented on Jun 19, 2017
@heyitsanthony do you think this is a reasonable enhancement we could do in 3.3?
gyuho commented on Jul 11, 2017
@xiang90 @heyitsanthony Do we want to move the /metrics endpoint to a separate port, or duplicate the handler on the separate port?
heyitsanthony commented on Jul 11, 2017
Duplicate the handler. There could be clients that have access to the internal port that expect /metrics to be available.
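For comparison, a minimal sketch of the duplicated-handler approach, again using the Prometheus Go client; the addresses and port 9267 below are placeholders taken from the discussion above, not etcd's actual defaults:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Duplicate rather than move: register the same handler on the existing
	// client-facing mux *and* on a metrics-only mux, so clients that already
	// scrape /metrics on the internal port keep working.
	h := promhttp.Handler()

	internal := http.NewServeMux()
	internal.Handle("/metrics", h) // unchanged for existing internal clients
	// ... the other (restricted) client handlers would be registered here ...

	metricsOnly := http.NewServeMux()
	metricsOnly.Handle("/metrics", h) // same handler, dedicated listener

	go func() { log.Fatal(http.ListenAndServe("0.0.0.0:9267", metricsOnly)) }()
	log.Fatal(http.ListenAndServe("127.0.0.1:2379", internal))
}
```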
brancz commented on Jul 17, 2017
Awesome! Thanks for the collaboration everyone!