
Ability to get root container stats at separate housekeeping interval #1247

Open

Description

@derekwaynecarr

In order to support out-of-resource monitoring in Kubernetes, I want to be able to get information about the root container at a separate interval from the one used for containers associated with pods. For example, I would set housekeeping for containers associated with pods at 10s, but the root container at 100ms.

A potential option is to add a flag:

-root_housekeeping_interval duration 
   if specified, perform housekeeping on the root container at the specified interval 
   rather than default housekeeping interval

/cc @pmorie @ncdc @vishh - Thoughts?

Activity

ncdc commented on Apr 27, 2016
Collaborator

SGTM

derekwaynecarr commented on Apr 27, 2016
Collaborator, Author

From what I can gather, fsInfo is computed on demand, so no separate housekeeping interval is needed there, but @pmorie has informed me that thin_ls data per container is cached. Either way, for out-of-resource killing, we care more about rootfs available bytes and imagefs available bytes.
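(For concreteness on what "available bytes" means here: on Linux this can be read on demand via statfs(2). The sketch below is illustrative only — the path and function name are mine, and cAdvisor's actual fsInfo plumbing differs.)

```go
package main

import (
	"fmt"
	"syscall"
)

// availableBytes returns the bytes available to unprivileged callers on
// the filesystem backing path. Linux-only sketch using statfs(2).
func availableBytes(path string) (uint64, error) {
	var s syscall.Statfs_t
	if err := syscall.Statfs(path, &s); err != nil {
		return 0, err
	}
	// Bavail is blocks available to unprivileged users; Bsize is the
	// filesystem block size.
	return uint64(s.Bavail) * uint64(s.Bsize), nil
}

func main() {
	// "/" stands in for rootfs; imagefs would be e.g. the image store mount.
	n, err := availableBytes("/")
	fmt.Println(n, err)
}
```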

pmorie commented on Apr 27, 2016
Collaborator

Yep, thin_ls data is cached, but my WIP hasn't established at what interval it is refreshed.

vishh commented on Apr 27, 2016
Contributor

My understanding is that the kubelet mainly needs higher resolution for machine-level stats and not for container stats. thin_ls matters only to containers and not to the machine itself.
@timstclair suggested adding a maxAge query param to the /api/v2.1/machineStats endpoint that would help us get more recent stats on demand.

timstclair commented on Apr 27, 2016
Contributor

+1 for on demand stats. I'd also like to avoid adding more flags if we can.

derekwaynecarr commented on Apr 27, 2016
Collaborator, Author

@vishh @timstclair
My understanding is that machine stats are derived from the root container stats:

https://github.com/google/cadvisor/blob/master/api/versions.go#L483

I will throw another wrinkle in here, and broaden the request.

I suspect that when we get a little further into the future, we will want to get the stats for certain special containers at a higher frequency. For example, the kubelet container, the docker container, and the container that parents end-user pods are all special, and we should reasonably be able to ask for higher-fidelity housekeeping intervals for them.

The container that parents end-user pods is probably the container that we will want to drive eviction on when we move to pod-level cgroups world.

So I want to come back and re-phrase my request: I want to be able to tell cAdvisor about a set of special containers that have a shorter housekeeping interval. I am fine not exposing it as a flag on the binary, but I would like to be able to specify it in how Kubernetes starts its internal cAdvisor.

Thoughts?

timstclair commented on Apr 27, 2016
Contributor

Can't we address that with on-demand scraping as well?

WRT configuring internal-cAdvisor, I opened a proposal in #1224. It's an intermediate step, but would mean we could stop leaking cAdvisor flags into kube-binaries, and avoid the flag-editing needed for things like kubernetes/kubernetes#24771

derekwaynecarr commented on Apr 27, 2016
Collaborator, Author

@timstclair - I am happy to defer to what you and @vishh think is best; you have more expertise in this area than I do. I just wanted to state where my confusion came from, as everything looked derived from the cached root container stats. If the desire is to support on-demand scraping instead, that works for me, because I get the same net result as a caller. Any suggestions on how you would want to see this implemented? I am volunteering my time because I think this is needed to make evictions actually useful to end users in Kubernetes without having to sacrifice large amounts of reserved memory on the node.

timstclair commented on Apr 27, 2016
Contributor

I don't know that I have more experience in this area, but my main concern is that as more and more components want various stats at various intervals, the complexity will get out of hand and stats will be collected unnecessarily often. If we can make it happen, I think a good on-demand model could clean this up and lead to greater efficiency.

I think this is probably complex enough to warrant at least an informal design document. I'd be happy to help out with it, but here are a few issues I can think of off the top of my head:

  • Blocking callers on a potentially slow operation: we may need to provide an async interface, or at least a timeout
  • Concurrent stats requests: we should return the stats to both callers in this case, but that could be a problem if the requests are slightly different
  • Caching the data appropriately
  • Handling slow operations (stat()ing large directories): we should continue to rely on asynchronous scrapers for this


Participants

@ncdc @pmorie @dashpole @timstclair @derekwaynecarr

Ability to get root container stats at separate housekeeping interval · Issue #1247 · google/cadvisor