Closed
Description
Currently cAdvisor doesn't export pod labels for container-level metrics. This would be desirable in order to aggregate container-level metrics by application.
Since Kubernetes doesn't set pod labels on Docker containers (#25301), cAdvisor can't export those labels on its /metrics endpoint, which makes metric aggregation by pod labels impossible. Kubernetes doesn't set these labels because it's currently not possible to dynamically set Docker container labels (moby/moby#21721), and it's not clear when that will change.
I propose implementing a workaround so that cAdvisor can get the labels directly from the kubelet and export them accordingly.
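To illustrate the goal (metric and label names here are only a sketch, not a committed format), the exported series would change roughly like this:

```
# Today: container-level series without the pod's Kubernetes labels
container_cpu_usage_seconds_total{namespace="default",pod_name="web-1",name="web"} 1234.5

# Desired: pod labels (e.g. app=web) attached, so queries can aggregate by application directly
container_cpu_usage_seconds_total{namespace="default",pod_name="web-1",name="web",label_app="web"} 1234.5
```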
Activity
grobie commented on Sep 14, 2016
At SoundCloud we work around the problem by using an extra exporter which exports all pod labels in a separate metric, and we then join these together in our queries. This is similar to the approach described in @brian-brazil's blog post about machine role labels. Exporting a lot of labels per metric makes them more difficult to work with, so this might even be the more desirable approach.
@fabxc @kubernetes/sig-instrumentation We should discuss this need in our next meeting.
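For context, a rough sketch of that workaround (metric and label names are illustrative, not the actual SoundCloud exporter): the extra exporter exposes one info-style series per pod whose value is always 1 and which carries the pod's labels, for example:

```
pod_labels{namespace="default",pod_name="web-1",label_app="web"} 1
```

Queries then join this onto the cAdvisor series at query time, e.g. to aggregate container CPU usage by application:

```
sum by (label_app) (
    rate(container_cpu_usage_seconds_total[5m])
  * on (namespace, pod_name) group_left (label_app)
    pod_labels
)
```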
davidopp commented on Sep 14, 2016
IIUC this is the approach @vishh has advocated as well, particularly as part of the work for #18770.
brian-brazil commented on Sep 14, 2016
http://www.robustperception.io/exposing-the-software-version-to-prometheus/ is a slightly more relevant version of that blog post.
jimmidyson commented on Sep 14, 2016
@grobie @brian-brazil Although we're using Prometheus' exposition format, we do have to make sure that any decisions we make around labelling don't restrict or negatively affect other consumers. If this were a Prometheus-only decision, I would 100% agree with you though.
fabxc commented on Sep 14, 2016
I think it's a reasonable step of normalization to constrain labels to the minimum identity-giving set in the exposition. The consumer is free to denormalize again for its own purposes – in Prometheus this happens to be at query time. But other systems can easily do so on write.
Going the other direction is generally harder for the consumer while bloating the exposed metrics.
Of course it's saner if it's happening in the same metric set. But #18770 aims at just that.
hanikesn commented on Sep 14, 2016
@grobie Any chance of open sourcing that?
vishh commented on Sep 14, 2016
My idea was that the kubelet should expose an extension API that lets monitoring agents figure out the list of pods on the node, along with detailed runtime information like container & image identifiers, etc. cAdvisor, being a monitoring agent, can then use this API to come up with pod-level metrics and metadata for k8s pods.
tomwilkie commented on Dec 29, 2016
@hanikesn @grobie FYI, I just added support for exporting a pod-label metric to kube-api-exporter: tomwilkie/kube-api-exporter#9
I also wrote a blog post on how to do the join in Prometheus: https://www.weave.works/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/
natalia-k commented on Feb 21, 2017
Is there any chance that this will be implemented in the next release?
Thanks a lot!
fejta-bot commented on Dec 21, 2017
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale