
cAdvisor should export pod labels for container metrics #32326

Closed

@hanikesn

Description

Currently, cAdvisor doesn't export pod labels for container-level metrics. Exporting them would make it possible to aggregate container-level metrics by application.

As Kubernetes doesn't set pod labels on Docker containers (#25301), cAdvisor can't export those labels on its /metrics endpoint, which makes metric aggregation by pod labels impossible. Kubernetes doesn't set these labels because it's currently not possible to dynamically update Docker container labels (moby/moby#21721), and it's not clear when this situation will change.

I propose implementing a workaround so that cAdvisor can get the labels directly from the kubelet and export them accordingly.

Activity

grobie (Contributor) commented on Sep 14, 2016

At SoundCloud we work around the problem by using an extra exporter which exports all pod labels in a separate metric, and we then join the two together in our queries. This is similar to the approach described in @brian-brazil's blog post about machine role labels. Exporting a lot of labels per metric makes the metrics more difficult to work with, so this might even be the more desirable approach.
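The "extra exporter" pattern described here can be sketched without any dependencies: emit one always-1 "info" series per pod in the Prometheus text exposition format, carrying the pod's labels as metric labels. The metric name `kube_pod_labels` and the `label_` prefix are illustrative assumptions, not something specified in this thread.

```python
# Minimal sketch of a pod-labels info metric in the Prometheus text
# exposition format. Real exporters must also escape quotes/backslashes
# in label values; that is omitted here for brevity.

def render_pod_labels(pods):
    """pods: iterable of (pod_name, namespace, labels_dict)."""
    lines = [
        "# HELP kube_pod_labels Kubernetes pod labels (value is always 1).",
        "# TYPE kube_pod_labels gauge",
    ]
    for name, namespace, labels in pods:
        # Prefix user labels so they cannot collide with identity labels.
        pairs = [f'pod="{name}"', f'namespace="{namespace}"']
        pairs += [f'label_{k}="{v}"' for k, v in sorted(labels.items())]
        lines.append(f'kube_pod_labels{{{",".join(pairs)}}} 1')
    return "\n".join(lines) + "\n"

print(render_pod_labels([("web-1", "default", {"app": "frontend"})]))
```

A query can then join this series onto cAdvisor's container metrics with `* on(namespace, pod) group_left(label_app) kube_pod_labels` (exact pod/namespace label names vary by cAdvisor version).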

@fabxc @kubernetes/sig-instrumentation We should discuss this need in our next meeting.

davidopp (Member) commented on Sep 14, 2016

I propose implementing a workaround so that cAdvisor can get the labels directly from the kubelet and export them accordingly.

IIUC this is the approach @vishh has advocated as well, particularly as part of the work for #18770.

brian-brazil commented on Sep 14, 2016

http://www.robustperception.io/exposing-the-software-version-to-prometheus/ is a slightly more relevant version of that blog post.

jimmidyson (Member) commented on Sep 14, 2016

@grobie @brian-brazil Although we're using Prometheus' exposition format, we do have to make sure that any decisions we make around labelling don't restrict or negatively affect other consumers. If this were a Prometheus-only decision, I would 100% agree with you, though.

fabxc (Contributor) commented on Sep 14, 2016

Although we're using Prometheus' exposition format, we do have to make sure that any decisions we make around labelling don't restrict or negatively affect other consumers. If this were a Prometheus-only decision, I would 100% agree with you, though.

I think it's a reasonable step of normalization to constrain labels to the minimum identity-giving set in the exposition. The consumer is free to denormalize again for its own purposes – in Prometheus this happens to be at query time. But other systems can easily do so on write.
Going the other direction is generally harder for the consumer while bloating the exposed metrics.

Of course it's saner if it's happening in the same metric set. But #18770 aims at just that.
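Query-time denormalization in Prometheus, as described here, can be sketched as a `group_left` join against a separate pod-labels info metric. The metric and label names below are assumptions (and cAdvisor's pod/namespace label names have varied across versions):

```promql
# CPU usage per pod, decorated with the pod's "app" label at query time.
sum by (namespace, pod, label_app) (
    rate(container_cpu_usage_seconds_total[5m])
  * on (namespace, pod) group_left (label_app)
    kube_pod_labels
)
```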

hanikesn (Author) commented on Sep 14, 2016

@grobie Any chance of open sourcing that?

vishh (Contributor) commented on Sep 14, 2016

My idea was that the kubelet should expose an extension API that lets monitoring agents figure out the list of pods on the node, along with detailed runtime information like container and image identifiers. cAdvisor, being a monitoring agent, can then use this API to come up with pod-level metrics and metadata for k8s pods.
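A consumer of such an API might work roughly as follows: given a pod listing, build a map from container ID to the owning pod's labels, so container metrics can be decorated. The payload shape here is a loose sketch modeled on the Kubernetes Pod object, not the actual API vishh proposes.

```python
# Hypothetical sketch: index container IDs from a pod list so a monitoring
# agent can attach pod metadata (name, namespace, labels) to container metrics.

def container_labels(pod_list):
    index = {}
    for pod in pod_list:
        meta = pod["metadata"]
        for status in pod.get("status", {}).get("containerStatuses", []):
            index[status["containerID"]] = {
                "pod": meta["name"],
                "namespace": meta["namespace"],
                **meta.get("labels", {}),  # pod labels merged in
            }
    return index

pods = [{
    "metadata": {"name": "web-1", "namespace": "default",
                 "labels": {"app": "frontend"}},
    "status": {"containerStatuses": [
        {"containerID": "docker://abc123"},
    ]},
}]
print(container_labels(pods))
# {'docker://abc123': {'pod': 'web-1', 'namespace': 'default', 'app': 'frontend'}}
```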


tomwilkie commented on Dec 29, 2016

@hanikesn @grobie FYI just added support for exporting a pod-label-metric for kube-api-exporter: tomwilkie/kube-api-exporter#9

I also wrote a blog post on how to do the join in Prometheus: https://www.weave.works/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/

natalia-k commented on Feb 21, 2017

Is there any chance that this will be implemented in the next release?

Thanks a lot!

fejta-bot commented on Dec 21, 2017

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

48 remaining items


Metadata

Assignees: no one assigned

Labels: area/kubelet, area/monitoring, lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed), sig/node (categorizes an issue or PR as relevant to SIG Node)

Milestone: none

Participants: @grobie, @tomwilkie, @jimmidyson, @Nowaker, @hanikesn
