Description
Application contact emails
hchen@redhat.com, chen.wang1@ibm.com, niki@weave.works
Project Summary
Kepler is a lightweight Pod-level power consumption metrics exporter.
Project Description
Kepler (Kubernetes-based Efficient Power Level Exporter) uses eBPF to probe CPU performance counters and Linux kernel tracepoints. These data, together with stats from cgroups and sysfs, are fed into ML models to estimate the power consumption of Pods. The power consumption stats are then exposed as Prometheus metrics and telemetry that can be used for Pod scheduling or scaling, energy consumption reporting and visualization, or extended with carbon intensity metrics to report on the carbon footprint of Cloud Native workloads.
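As a rough, hedged illustration of how this exported telemetry might be consumed, the sketch below scrapes a Kepler /metrics endpoint and prints the per-container energy counters. The endpoint address (localhost:9102) and the kepler_container_ metric prefix are assumptions for illustration only; consult the Kepler documentation for the exact port and series names used by your deployment.

```go
// Minimal sketch (assumptions noted above): scrape a Kepler exporter's
// Prometheus endpoint and print the per-container energy counters.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Assumed exporter address; in a real cluster Kepler runs as a DaemonSet
	// and is scraped by Prometheus, but hitting /metrics directly also works.
	resp, err := http.Get("http://localhost:9102/metrics")
	if err != nil {
		log.Fatalf("scrape failed: %v", err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		// Keep only the per-container energy counters (joules), skipping
		// HELP/TYPE comments and unrelated series.
		if strings.HasPrefix(line, "kepler_container_") && strings.Contains(line, "joules") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatalf("reading metrics: %v", err)
	}
}
```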
Org repo URL
https://github.com/sustainable-computing-io/
Project repo URL
https://github.com/sustainable-computing-io/kepler
Additional repos
Kepler develops its own online ML trainer and model server (kepler-model-server) and inference server (kepler-estimator). The models are developed using code in the workload repo (energy-measurement-data). The documentation repo (kepler-doc), the deployment repos (kepler-operator and kepler-helm-chart), and the GitHub CI artifacts repo (kepler-ci-artifacts) are all part of the Kepler sandbox application package.
In addition, the Kepler community has explored integrating Kepler metrics with the Kubernetes Scheduler (in the peaks repo) and the Vertical Pod Autoscaler (in the clever repo). These exploration projects are also donated to CNCF to help the CNCF community gain insight into how to integrate Kepler into their own use cases.
We also maintain customized GitHub Actions as well as a local Kubernetes cluster environment standup for integration and development tests; these are also part of the CNCF sandbox donation.
These repos are:
https://github.com/sustainable-computing-io/kepler-model-server
https://github.com/sustainable-computing-io/kepler-estimator
https://github.com/sustainable-computing-io/energy-measurement-data
https://github.com/sustainable-computing-io/kepler-doc
https://github.com/sustainable-computing-io/kepler-operator
https://github.com/sustainable-computing-io/kepler-helm-chart
https://github.com/sustainable-computing-io/kepler-ci-artifacts/
https://github.com/sustainable-computing-io/peaks/
https://github.com/sustainable-computing-io/clever/
https://github.com/sustainable-computing-io/KeplerK8SAction
https://github.com/sustainable-computing-io/local-dev-cluster
Website URL
https://sustainable-computing.io/
Roadmap
https://github.com/sustainable-computing-io/kepler/wiki/Roadmap
Roadmap context
No response
Contributing Guide
https://github.com/sustainable-computing-io/kepler/blob/main/CONTRIBUTING.md
Code of Conduct (CoC)
https://github.com/sustainable-computing-io/kepler/blob/main/code-of-conduct.md
Adopters
No response
Contributing or Sponsoring Org
Maintainers file
https://github.com/sustainable-computing-io/kepler/blob/main/Contributors.md
IP Policy
- If the project is accepted, I agree the project will follow the CNCF IP Policy
Trademark and accounts
- If the project is accepted, I agree to donate all project trademarks and accounts to the CNCF
Why CNCF?
Cloud computing accounts for 2.5% to 3.7% of all global greenhouse gas emissions [1]. Both Cloud operators and end users are increasingly eager to measure and manage the carbon footprint of their infrastructure and workloads.
Kepler aims to work with the CNCF community to measure how much power Cloud Native workloads consume. It uses eBPF to reduce runtime overhead and scientific methods to improve measurement accuracy. Kepler can measure workloads that run on private and public clouds, on physical or virtual machines, and on CPUs or GPUs. It strives to support all environments in which CNCF projects are deployed.
The Kepler project is contributed to by Red Hat, IBM, Intel, and WeaveWorks. Inclusion in the CNCF ecosystem will promote the Kepler project's community engagement.
Benefit to the Landscape
Kepler is being integrated with projects in the CNCF ecosystem such as the Kubernetes Scheduler and the Vertical Pod Autoscaler.
It will also help TAG Environmental Sustainability provide fact-based evidence for research, investigation, and improvement.
Cloud Native 'Fit'
Kepler is built with Cloud Native technologies: it is designed to measure Kubernetes workload power consumption and export it as Prometheus metrics.
Cloud Native 'Integration'
Kepler runs on Kubernetes and exports Prometheus metrics. The power consumption metrics complement those provided by existing node agents such as cAdvisor and node_exporter; a sketch of how these metrics can be queried follows below.
Kepler has been presented at KubeCon NA 2022 and KubeCon EU 2023. It has also been presented at CNCF TAG Environmental Sustainability and TAG Runtime meetings.
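The exported counters are cumulative energy in joules, so applying Prometheus's rate() yields joules per second, i.e. watts. The hedged sketch below queries a Prometheus server for per-pod power draw this way; the server address, the kepler_container_joules_total metric name, and the pod_name label are illustrative assumptions and should be adjusted to the series your deployment actually exports.

```go
// Hedged sketch: derive per-pod power draw (watts) from an assumed Kepler
// joules counter via the Prometheus HTTP API.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

func main() {
	// rate() over a cumulative joules counter gives joules/second, i.e. watts.
	promQL := `sum by (pod_name) (rate(kepler_container_joules_total[5m]))`

	q := url.Values{}
	q.Set("query", promQL)
	endpoint := "http://prometheus.example:9090/api/v1/query?" + q.Encode()

	resp, err := http.Get(endpoint)
	if err != nil {
		log.Fatalf("query failed: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading response: %v", err)
	}
	fmt.Println(string(body)) // raw JSON result: one sample per pod, in watts
}
```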
Cloud Native Overlap
No response
Similar projects
Product or Service to Project separation
N/A
Project presentations
No response
Project champions
Erin Boyd
Additional information
To clarify the confusion introduced in the previous application: the Kepler project has multiple GitHub repos, all of which are donated to CNCF.
Activity
dims commented on Jan 6, 2023
@rootfs (slightly tangential to the submission), at the Linux member summit there was an interesting talk about the applicability of open source licenses to "ML Models" https://lfms22.sched.com/?iframe=no .. given that context
rootfs commented on Jan 6, 2023
@dims that's a good point! The Kepler community has open sourced both the ML model training process and the pre-trained models; these models are under the Apache license.
The Kepler model server is designed in such a way that the ML training can be separated from the end users' runtime environment so as to preserve their data privacy. The goal is to allow end users to share their models with the open source community.
If this project is under the CNCF ecosystem, we'll have the opportunity to improve these open source models and provide coverage of more platforms for end users in the community. It is also possible that this project can serve as a case study on how to be transparent about model sharing while still maintaining end user data privacy.
cathyhongzhang commented on Jan 6, 2023
@rootfs Basically Kepler will open source the ML model training part as well as those ML models in the Kepler repository, right?
rootfs commented on Jan 7, 2023
@cathyhongzhang yes, the ML training process and trained models are and will continue to be open sourced.
amye commented on May 9, 2023
/vote-sandbox
git-vote commented on May 9, 2023
Vote created
@amye has called for a vote on [Sandbox] Kepler (#19). The members of the following teams have binding votes. Non-binding votes are also appreciated as a sign of support!
How to vote
You can cast your vote by reacting to this comment. The following reactions are supported. Please note that voting for multiple options is not allowed and those votes won't be counted.
The vote will be open for 7 days. It will pass if at least 66% of the users with binding votes vote In favor 👍. Once it's closed, results will be published here as a new comment.
berryq460 commented on May 9, 2023
In favor ! 🙌🏽👏🏼👍🏼
smazziotta commented on May 9, 2023
👍
erinaboyd commented on May 9, 2023
(43 remaining items collapsed)
dims commented on May 15, 2023
+1 non-binding!
lizrice commented on May 15, 2023
+1 NB
rootfs commented on May 15, 2023
/check-vote
git-vote commented on May 15, 2023
Vote status
So far 81.82% of the users with binding votes are in favor (passing threshold: 66%).
Summary
Binding votes (9)
Non-binding votes (76)
hwang37 commented on May 16, 2023
+1
jiangphcn commented on May 16, 2023
/check-vote
git-vote commented on May 16, 2023
Votes can only be checked once a day.
caldeirav commented on May 17, 2023
+1 NB
git-vote commented on May 17, 2023
Vote closed
The vote passed! 🎉
81.82% of the users with binding votes were in favor (passing threshold: 66%).
Summary
Binding votes (9)
Non-binding votes (77)