Support filtering monitored containers by container label #2380
Comments
:10255/metrics/cadvisor is where you can find them in recent releases.
Thanks David! I now know three different ways to scrape cAdvisor metrics:
Option 1) Scrape the API server for each node in the cluster
Option 2)
Option 3)
Option 1 and option 2 are the same endpoint. One is just proxied by the API Server. Prefer (2) when possible because it is more direct. If you want to customize the set of metrics exposed by cAdvisor, you can run it yourself as a daemonset. If you just want the metrics in the kubelet's metrics/cadvisor endpoint, I would just use that to save on resource consumption.
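For reference, both routes to the same endpoint can be exercised from the command line; a hedged sketch (the node name node01 and the token path are assumptions, adjust for your cluster and RBAC setup):

```shell
# Option 1: via the API server proxy (kubectl handles auth for you).
# "node01" is a placeholder node name.
kubectl get --raw /api/v1/nodes/node01/proxy/metrics/cadvisor | head

# Option 2: directly against the kubelet's secure port (10250).
# Assumes you are on a pod/node with a service-account token that
# has access to the nodes/metrics resource.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  https://node01:10250/metrics/cadvisor | head
```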
@dashpole I have one specific question about cAdvisor. I noticed that node_exporter supports "collectors" with the ability to enable/disable them at the source. Does cAdvisor support collectors so that users can select which groups of cAdvisor metrics to enable/disable? Do I need to install a cAdvisor daemonset for this? Or will Prometheus get all cAdvisor metrics when it scrapes? If so, is it possible to filter some of them in the Prometheus server?
You can use the
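Standalone cAdvisor does let you disable whole metric groups at the source via its --disable_metrics flag. A minimal sketch; the exact set of group names varies by cAdvisor version, so check --help on your binary:

```shell
# Disable some metric groups at the source. Group names such as
# "tcp", "udp" and "percpu" exist in recent cAdvisor releases,
# but verify the list with `./cadvisor --help` for your build.
./cadvisor --disable_metrics=tcp,udp,percpu
```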
I am at the root directory on the node and the cadvisor file cannot be found:
node01 $ /.cadvisor --help
Yeah, you will need to run that against the cAdvisor binary, which most likely isn't in the root directory of your node. Try:
The k8s cluster is already running and I see the kubelet is reachable on port 10250:
node01 $ netstat -plant | grep kubelet
Do you think I still need to install the cAdvisor binary on the node?
See this comment above. You only need to run cAdvisor separately if you need to customize the set of metrics. Also, I would not recommend manually installing it. I would use a DaemonSet instead, with the docker image.
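A minimal sketch of what running cAdvisor as a DaemonSet with the official image could look like. The namespace, image tag, and mount list here are illustrative assumptions; prefer the maintained deploy manifests in the cAdvisor repo:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: monitoring          # illustrative namespace
spec:
  selector:
    matchLabels:
      app: cadvisor
  template:
    metadata:
      labels:
        app: cadvisor
    spec:
      containers:
      - name: cadvisor
        image: gcr.io/cadvisor/cadvisor:v0.47.2   # pin a release that matches your cluster
        args:
        - --disable_metrics=tcp,udp               # optional: trim metric groups
        ports:
        - containerPort: 8080
        volumeMounts:            # host paths cAdvisor reads stats from
        - name: rootfs
          mountPath: /rootfs
          readOnly: true
        - name: var-run
          mountPath: /var/run
          readOnly: true
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: docker
          mountPath: /var/lib/docker
          readOnly: true
      volumes:
      - name: rootfs
        hostPath:
          path: /
      - name: var-run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys
      - name: docker
        hostPath:
          path: /var/lib/docker
```

As noted above, the pod needs host filesystem access, which is why running it unprivileged is not really workable.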
Got it. Sorry for taking so much of your time.
In my cluster, the command line flags for the kubelet are stored in
@dashpole
We don't really support that use-case today. If you run cAdvisor as a daemonset, you would need the pod to be privileged for host filesystem access anyways, so there isn't really a good way to use it without elevated privileges.
@dashpole
How would cAdvisor know the namespace of the container?
I don't think it is very difficult, but I am no expert. You tell me :)
Namespace is a Kubernetes construct. cAdvisor doesn't "understand" Kubernetes constructs. Say we want to only collect metrics for containers in namespace foo. This is how it currently works:
In the docker/container runtime I see the following data:
That is a container label. We probably don't want to rely on that, as it isn't an actual API. We could potentially have label filtering (e.g. only collect metrics for containers where label foo=bar).
That would be great, David!
Is there actual support for passing
I don't think it currently does, and this would be part of this proposed enhancement.
cAdvisor in the kubelet has its own labeling for cAdvisor metrics: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L959
I think you should be able to use the store container labels and label whitelist flags here:
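Assuming your cAdvisor version has these flags (check --help), a sketch of how the label flags combine; io.kubernetes.pod.namespace and io.kubernetes.pod.name are the labels the container runtime attaches to Kubernetes-managed containers:

```shell
# Export only a whitelist of container labels instead of all of them.
# --store_container_labels=false turns off blanket label export;
# --whitelisted_container_labels names the labels to keep.
./cadvisor \
  --store_container_labels=false \
  --whitelisted_container_labels=io.kubernetes.pod.namespace,io.kubernetes.pod.name
```

Note that these flags filter which labels are exported, not which containers are collected; filtering the monitored containers themselves by label is the enhancement this issue proposes.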
@dashpole
@stevebail Did you try to whitelist on the Prometheus scrape job?

- job_name: cadvisor
  scrape_interval: 5s
  static_configs:
    - targets:
        - cadvisor:8080
  metric_relabel_configs:
    - source_labels: [ container_label_prometheus_io_scrape ]
      regex: True
      action: keep

Knowing that my whitelisted containers have the following label:

labels:
  prometheus.io/scrape: true
@celian-garcia
Yeah, I made the suggestion mainly for people like me who want to filter containers by label and who have control over the Prometheus configuration. If that is not your case, the solution won't fit your need. I still think the feature is worth adding to cAdvisor.
The issue with this configuration is that the filtering is done in Prometheus at scrape time rather than at the source, so every series is still exposed and parsed before being dropped, which can lead to significantly higher memory usage.
I am working with the kubelet cAdvisor that comes along with a Kubernetes cluster.
I know that cAdvisor exposes container stats as Prometheus metrics, but I am not very familiar with how to retrieve them manually using curl.
What is the command to know 1) if cAdvisor is running on each node, 2) what version it is, and 3) what port it is exposing?
Are the cAdvisor metrics always served on the /metrics endpoint?
Your help is greatly appreciated.