OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes, with first-class support for metrics. Any process can be packaged as a function, enabling you to consume a range of web events without repetitive boilerplate coding.
- Ease of use through UI portal and one-click install
- Write functions in any language for Linux or Windows and package in Docker/OCI image format
- Portable - runs on existing hardware or public/private cloud. Native Kubernetes support, Docker Swarm also available
- Operator / CRD option available
- faas-cli available with stack.yml for creating and managing functions
- Auto-scales according to metrics from Prometheus
- Scales to zero and back again and can be tuned at a per-function level
- Works with service-meshes
It is recommended that you use arkade to install OpenFaaS. arkade is a CLI tool which automates the helm CLI, along with downloading and installing the chart. The openfaas app also has a number of options available via arkade install openfaas --help.
Installation with arkade is as simple as the following, which installs OpenFaaS, sets up an Ingress record, and obtains a TLS certificate with cert-manager:
arkade install openfaas
arkade install openfaas-ingress \
--domain openfaas.example.com \
--email wm@example.com
See a complete example here: Get TLS for OpenFaaS the easy way with arkade
If you wish to continue without using arkade, read on for instructions.
To use the chart, you will need to use Helm 3:
Get it from arkade:
arkade get helm
Or use the Helm installation script:
curl -sSLf https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
We recommend creating two namespaces, one for the OpenFaaS core services and one for the functions.
You can skip this step if you're using arkade to install OpenFaaS.
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
You will now have two namespaces: openfaas and openfaas-fn. If you want to change the names, or to create multiple installations, edit namespaces.yml from the faas-netes repo.
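For illustration, a renamed pair of namespaces might look like the minimal sketch below. The names are hypothetical placeholders, and the real namespaces.yml also carries labels which the chart relies on, so start from that file rather than this sketch:

# sketch of an edited namespaces.yml with hypothetical names;
# copy the labels from the original file in the faas-netes repo
apiVersion: v1
kind: Namespace
metadata:
  name: openfaas-staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: openfaas-staging-fn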
Add the OpenFaaS helm chart:
helm repo add openfaas https://openfaas.github.io/faas-netes/
Now decide how you want to expose the services and edit the helm upgrade command as required.

- To use NodePorts/ClusterIP - the default, and best for development with port-forwarding
- To use an IngressController - add --set ingress.enabled=true (recommended for production, for use with TLS) - follow the full guide
- To use a LoadBalancer - add --set serviceType=LoadBalancer (not recommended, since it will expose plain HTTP)
OpenFaaS Community Edition is meant for exploration and development.
OpenFaaS Pro has been tuned for production use, including flexible auto-scaling, highly-available deployments, durability, add-on features, and more.
Deploy CE from the helm chart repo directly:
helm repo update \
&& helm upgrade openfaas \
--install openfaas/openfaas \
--namespace openfaas
The above command will also update your helm repo to pull in any new releases.
Retrieve the OpenFaaS credentials with:
PASSWORD=$(kubectl -n openfaas get secret basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode) && \
echo "OpenFaaS admin password: $PASSWORD"
You should have landed here from the following page: Install OpenFaaS Pro. If you did not, please go there now and read the instructions before continuing.
It's recommended to install OpenFaaS Pro with a ClusterRole so that:
- Prometheus can scrape node metrics for CPU-based autoscaling, and report CPU/RAM consumption usage of functions via the API.
- The Operator can obtain accurate namespace information for the installation
- The Operator can manage functions across multiple namespaces
Create the required secret with your OpenFaaS Pro license:
kubectl create secret generic \
-n openfaas \
openfaas-license \
--from-file license=$HOME/.openfaas/LICENSE
If you wish to use the OpenFaaS Pro dashboard, you must run the steps to "Create a signing key" before installing the Helm chart.
Review the recommended Pro values in values-pro.yaml. These are overlaid on top of the default values in values.yaml, which is used as a base for all installations.
Now deploy OpenFaaS from the helm chart repo:
helm repo update \
&& helm upgrade openfaas \
--install openfaas/openfaas \
--namespace openfaas \
-f values.yaml \
-f values-pro.yaml
For production, you should take a copy of the values-pro.yaml file, and keep any of your own custom settings and overrides there. We do not recommend copying the base values.yaml file, as it is rather large and contains settings that are updated by our team on a regular basis, such as image versions.
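Helm overlays each additional -f file over the previous ones, so your copy only needs the keys you intend to change. A minimal sketch of such a file, using documented chart parameters with illustrative values:

# custom.yaml - a hypothetical override file kept in your own repo
openfaasPro: true

gateway:
  replicas: 3

Pass it as the last -f flag so that it takes precedence:

helm upgrade openfaas --install openfaas/openfaas \
--namespace openfaas \
-f values.yaml \
-f values-pro.yaml \
-f custom.yaml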
OpenFaaS OEM is available subject to contract for local development and requires a secret to be created with your separate OpenFaaS OEM license key:
kubectl create secret generic \
-n openfaas \
openfaas-license \
--from-file license=$HOME/.openfaas/LICENSE-OEM
Then, pass the --set oem=true flag to helm, or set oem: true in your values.yaml file.
There are two potential issues when installing OpenFaaS without Cluster Admin access:

- You cannot create a ClusterRole. In this case, your admin team can install OpenFaaS and create the required resources, and you can then perform updates with limited permissions. Pass the --set rbac=false flag to the helm command to prevent your account from trying to update or change those resources. Alternatively, you can use a Role instead of a ClusterRole, although this may result in more limited functionality for OpenFaaS: set --set clusterRole=false. See the sketch after this list.
- You cannot install CRDs. If you do not have a Cluster Admin account, you won't be able to create CRDs; see the notes in crds/README.md for how to perform a "split installation" of CRDs with a privileged account.
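Putting the two flags together, an upgrade with limited permissions might look like this minimal sketch:

helm upgrade openfaas --install openfaas/openfaas \
--namespace openfaas \
--set rbac=false \
--set clusterRole=false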
If you are working on a patch for the helm chart, you can deploy it directly from a local folder. Run the following command from within the faas-netes folder, not the chart's folder.
helm upgrade openfaas \
--install chart/openfaas \
--namespace openfaas \
-f ./chart/openfaas/values.yaml \
-f ./chart/openfaas/values-pro.yaml
In the example above, I'm overlaying two additional YAML files with settings for the chart.
You can override specific images by adding --set gateway.image= followed by an image reference, for instance.
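For example, with a hypothetical image reference:

helm upgrade openfaas \
--install chart/openfaas \
--namespace openfaas \
-f ./chart/openfaas/values.yaml \
--set gateway.image=docker.io/example/gateway:dev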
If you're using a GitOps tool like ArgoCD or Flux to install OpenFaaS, then you will need to pre-create the basic-auth credentials, so that they remain stable.
Why? The chart has a pre-install hook which can generate basic-auth credentials. It is enabled by default and can be turned off with --set generateBasicAuth=false.
Example commands to generate a random password and pre-create the secret:

# generate a random password
PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d ' ' -f 1)

kubectl -n openfaas create secret generic basic-auth \
--from-literal=basic-auth-user=admin \
--from-literal=basic-auth-password="$PASSWORD"

echo "OpenFaaS admin password: $PASSWORD"
The concept of a cold-start in OpenFaaS only applies if you enable it for a specific function, using scale to zero. Otherwise there is no cold-start, because at least one replica of your function remains available.
See also: Fine-tuning the cold-start in OpenFaaS
There are several ways to reduce the Kubernetes cold-start for a pre-pulled image, which is around 1-2 seconds:

- Don't set the function to scale down to zero; just set a minimum availability, i.e. 1/1 replicas
- Use async invocations via the /async-function/<name> route on the gateway, so that the latency is hidden from the caller
- Tune the readinessProbes to aggressively low values. This will reduce the cold-start at the cost of more kubelet CPU usage
To achieve a cold-start of between 0.7 and 1.9s, set the following in values.yaml:
functions:
imagePullPolicy: "IfNotPresent"
...
readinessProbe:
initialDelaySeconds: 0
timeoutSeconds: 1
periodSeconds: 1
successThreshold: 1
failureThreshold: 3
livenessProbe:
initialDelaySeconds: 0
timeoutSeconds: 1
periodSeconds: 1
failureThreshold: 3
In addition:

- Pre-pull images on each node
- Use an in-cluster registry to reduce the pull latency for images
- Set functions.imagePullPolicy to IfNotPresent so that the kubelet only pulls images which are not already available
- Explore alternatives such as not scaling to absolute zero, and using async calls which do not show the cold start
For OpenFaaS CE, both liveness and readiness probes are set to the values below, and the image pull policy for functions is set to Always:
initialDelaySeconds: 2
periodSeconds: 2
timeoutSeconds: 1
Once all the services are up and running, log into your gateway using the OpenFaaS CLI. This will cache your credentials in your ~/.openfaas/config.yml file.

Fetch your public IP or NodePort via kubectl get svc -n openfaas gateway-external -o wide and set it as an environment variable, as below:
export OPENFAAS_URL=http://127.0.0.1:31112
If using a remote cluster, you can port-forward the gateway to your local machine:
export OPENFAAS_URL=http://127.0.0.1:8080
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
Now log in with the CLI and check connectivity:
echo -n $PASSWORD | faas-cli login -g $OPENFAAS_URL -u admin --password-stdin
faas-cli version
OpenFaaS CE uses the old "controller" mode of OpenFaaS, where Kubernetes objects are directly modified by the HTTP API call. If the HTTP call fails, the objects may not be updated or created successfully.
OpenFaaS Pro uses a more idiomatic Kubernetes integration through the use of CustomResourceDefinitions. The operator pattern used with the CRD means that you can interact with a "Function" Custom Resource along with the HTTP REST API.
The HTTP REST endpoints all interact with the Function CRD, then the operator watches for changes and updates the Kubernetes objects accordingly, retrying, backing off, and reporting the progress via the .Status field.
Some long-time OpenFaaS Pro users still use the controller mode for backwards compatibility. It should be considered deprecated and will be removed in the future; for this reason, no new users should adopt it.
See also: How and why you should upgrade to the Function Custom Resource Definition (CRD)
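For reference, a minimal Function Custom Resource looks like the sketch below, which assumes the openfaas.com/v1 API group and uses a sample image; the CRD supports further fields for labels, annotations, limits and so on:

apiVersion: openfaas.com/v1
kind: Function
metadata:
  name: nodeinfo
  namespace: openfaas-fn
spec:
  name: nodeinfo
  image: ghcr.io/openfaas/nodeinfo:latest

Applying this with kubectl apply -f, then watching kubectl get functions -n openfaas-fn, is broadly equivalent to deploying via the REST API or faas-cli.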
This option is good for those who have issues with, or concerns about, installing Tiller, the server/cluster component of Helm 2. Using the helm CLI, we can pre-render and then apply the templates using kubectl.
1. Clone the faas-netes repository:

git clone https://github.com/openfaas/faas-netes.git
cd faas-netes

2. Render the chart to a Kubernetes manifest called openfaas.yaml:

helm template \
openfaas chart/openfaas/ \
--namespace openfaas \
--set basic_auth=true \
--set functionNamespace=openfaas-fn > openfaas.yaml

You can set the values and overrides just as you would in the install/upgrade commands above.

3. Install the components using kubectl:

kubectl apply -f namespaces.yml,openfaas.yaml
Use the following guide to setup TLS for the Gateway and Dashboard.
If you are using Ingress locally, for testing, then you can access the gateway by adding:

ingress:
  enabled: true

Update the fields for the ingress section of values.yaml. By default, the name gateway.openfaas.local will be used with HTTP access only, without TLS.
By default a NodePort will be created for the OpenFaaS Gateway on port 31112. You can prevent this by setting exposeServices to false, or serviceType to ClusterIP.
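For example, to rely on port-forwarding or an IngressController only, a minimal sketch using the documented flags would be:

helm upgrade openfaas --install openfaas/openfaas \
--namespace openfaas \
--set exposeServices=false \
--set serviceType=ClusterIP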
There is no reason to use a LoadBalancer for OpenFaaS, because it would expose the gateway over plain-text HTTP with no encryption. This behaviour is controlled by --set serviceType=LoadBalancer, which creates a new gateway service of type LoadBalancer.
Some configurations, in combination with client-side KeepAlive settings, may cause load to be spread unevenly between replicas of a function. If you experience this, there are three ways to work around it:
- Install Linkerd2, which takes over load-balancing from the Kubernetes L4 Service (recommended)
- Disable KeepAlive in the client-side code (not recommended)
- Configure the gateway to pass invocations through to the faas-netes provider, as an alternative to using Linkerd2: --set gateway.directFunctions=false

In this mode, all invocations will pass through the gateway to faas-netes, which will look up endpoint IPs directly from Kubernetes. The additional hop may add some latency, but it will do fair load-balancing, even with KeepAlive.
It's better to view the metrics from OpenFaaS via the official Grafana dashboards than by running direct queries; however, it can be useful to view the metrics directly for exploration and debugging.
You can temporarily access the Prometheus metrics by using port-forward:
kubectl -n openfaas \
port-forward deployment/prometheus 9090:9090
Then open http://127.0.0.1:9090 to directly query the OpenFaaS metrics scraped by Prometheus.
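For example, the following PromQL query shows the per-function invocation rate; this assumes the gateway_function_invocation_total counter, which the gateway exposes for completed invocations:

sum by (function_name) (rate(gateway_function_invocation_total[1m]))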
If you use a service mesh like Linkerd or Istio in your cluster, then you should enable the directFunctions mode using:

--set gateway.directFunctions=true
Istio requires OpenFaaS Pro to function correctly.
To install OpenFaaS Pro with Istio mTLS pass --set istio.mtls=true
and disable the HTTP probes:
helm upgrade openfaas --install chart/openfaas \
--namespace openfaas \
--set openfaasPro=true \
--set exposeServices=false \
--set gateway.directFunctions=true \
--set gateway.probeFunctions=true \
--set istio.mtls=true \
-f values-pro.yaml
The above command will enable mTLS for the OpenFaaS control plane services and functions, excluding NATS.
Scaling up from zero replicas is enabled by default. To turn it off, set scaleFromZero to false in the helm chart options for the gateway component; there is very little reason to turn this setting off.

--set gateway.scaleFromZero=true/false
Scaling to zero is managed by the OpenFaaS Standard autoscaler, which:
- can save on infrastructure costs
- helps to reduce the attack surface of your application
- mitigates against functions which use high CPU or memory at idle, or which contain leaks
OpenFaaS Pro will only scale down functions which have marked themselves as eligible for this behaviour through the use of a label: com.openfaas.scale.zero=true. The time to wait until scaling a specific function to zero is controlled by com.openfaas.scale.zero-duration, which is a Go duration set in minutes, i.e. 15m.
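For example, a function could opt in at deploy time; the function name and image below are illustrative:

faas-cli deploy \
--name sleepy-fn \
--image ghcr.io/example/sleepy-fn:latest \
--label com.openfaas.scale.zero=true \
--label com.openfaas.scale.zero-duration=15m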
See also: Scale to Zero docs.
In order to build custom autoscaling rules, an additional recording rule is required for Prometheus for each type of scaling you want to add.
To add latency-based scaling with the metrics recorded at the gateway, you could add the following to values.yaml:
prometheus:
  recordingRules:
    - record: job:function_current_load:sum
      expr: |
        sum by (function_name) (rate(gateway_functions_seconds_sum{}[30s])) / sum by (function_name) (rate(gateway_functions_seconds_count{}[30s]))
        and on (function_name) avg by (function_name) (gateway_service_target_load{scaling_type="latency"}) > bool 1
      labels:
        scaling_type: latency
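A function would then opt into this rule via its scaling-type label. The sketch below assumes the com.openfaas.scale.type and com.openfaas.scale.target labels described in the post linked at the end of this section, and uses an illustrative name and image:

faas-cli deploy \
--name latency-fn \
--image ghcr.io/example/latency-fn:latest \
--label com.openfaas.scale.type=latency \
--label com.openfaas.scale.target=1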
To check the configuration of the current recording rules, use the Prometheus UI or run kubectl edit -n openfaas configmap/prometheus-config.
See also: How to scale OpenFaaS Functions with Custom Metrics.
All control plane components can be cleaned up with helm:
helm uninstall -n openfaas openfaas
Then run the following to remove all other associated objects:
kubectl delete namespace openfaas openfaas-fn
Then delete the CRDs:
kubectl delete crd -l app.kubernetes.io/name=openfaas
If you have created additional namespaces for functions, delete those too, with kubectl delete namespace <namespace>.
This Helm chart currently supports Kubernetes version 1.19+.
Note that OpenFaaS itself may support a wider range of Kubernetes versions, see here.
Comprehensive documentation is offered in the OpenFaaS docs.
For technical support and questions, join the Weekly Office Hours call
For suspected bugs, feel free to raise an issue. If you do not fill out the whole issue template, or delete the template, it's unlikely that you'll get the help that you're looking for.
Specify each parameter using the --set key=value[,key=value] argument to helm install. See values.yaml for detailed configuration.
Parameter | Description | Default |
---|---|---|
`affinity` | Global affinity rules assigned to deployments | `{}` |
`async` | Enables asynchronous function invocations. If `nats.external.enabled` is `false`, also deploys NATS | `true` |
`queueMode` | Set to `jetstream` to run the async system backed by NATS JetStream. By default the async system uses NATS Streaming | `""` |
`basic_auth` | Enable basic authentication on the gateway and Prometheus. Warning: do not disable. | `true` |
`generateBasicAuth` | Generate admin password for basic authentication | `true` |
`basicAuthPlugin.image` | Container image used for basic-auth-plugin | See values.yaml |
`basicAuthPlugin.replicas` | Replicas of the basic-auth-plugin | `1` |
`basicAuthPlugin.resources` | Resource limits and requests for basic-auth-plugin containers | See values.yaml |
`caBundleSecretName` | Name of the Kubernetes secret that contains the CA bundle for making HTTP requests for IAM (optional) | `""` |
`clusterRole` | Use a ClusterRole for the Operator or faas-netes. Set to `true` for multiple namespaces, the Pro scaler and CPU/RAM metrics in the OpenFaaS REST API | `false` |
`exposeServices` | Expose NodePorts/LoadBalancer | `true` |
`functionNamespace` | Functions namespace, preferred `openfaas-fn` | `openfaas-fn` |
`gatewayExternal.annotations` | Annotations for the gateway-external service | `{}` |
`httpProbe` | Setting to `true` will use HTTP for readiness and liveness probes on the OpenFaaS system Pods (compatible with Istio >= 1.1.5) | `true` |
`ingress.enabled` | Create ingress resources | `false` |
`istio.mtls` | Create Istio policies and destination rules to enforce mTLS for OpenFaaS components and functions | `false` |
`kubernetesDNSDomain` | Domain name of the Kubernetes cluster | `cluster.local` |
`k8sVersionOverride` | Override kubeVersion for the ingress creation; this should normally be left blank | `""` |
`nodeSelector` | Global NodeSelector | `{}` |
`openfaasImagePullPolicy` | Image pull policy for OpenFaaS components; can be changed to `IfNotPresent` in an offline environment | `Always` |
`openfaasPro` | Deploy OpenFaaS Pro | `false` |
`oem` | Deploy OpenFaaS OEM | `false` |
`psp` | Enable Pod Security Policy for OpenFaaS accounts | `false` |
`rbac` | Enable RBAC | `true` |
`securityContext` | Give a `securityContext` template to be applied to each of the various containers in this chart; set to `{}` to disable, if required for Istio side-car injection | See values.yaml |
`serviceType` | Type of external service to use, `NodePort`/`LoadBalancer` | `NodePort` |
`tolerations` | Global Tolerations | `[]` |
Parameter | Description | Default |
---|---|---|
`gateway.directFunctions` | Invoke functions directly using their `Service` without delegating to the provider | `false` |
`gateway.image` | Container image used for the gateway | See values.yaml |
`gateway.logsProviderURL` | Set a custom logs provider url | `""` |
`gateway.maxIdleConns` | Set max idle connections from gateway to functions | `1024` |
`gateway.maxIdleConnsPerHost` | Set max idle connections from gateway to functions per host | `1024` |
`gateway.nodePort` | Change the port when creating multiple releases in the same baremetal cluster | `31112` |
`gateway.probeFunctions` | Set to `true` for Istio users as a workaround for openfaas/faas#1721 | `false` |
`gateway.readTimeout` | Read timeout for the gateway API | `65s` |
`gateway.replicas` | Replicas of the gateway, pick more than `1` for HA | `1` |
`gateway.resources` | Resource limits and requests for the gateway containers | See values.yaml |
`gateway.scaleFromZero` | Enables an intercepting proxy which will scale any function from 0 replicas to the desired amount | `true` |
`gateway.upstreamTimeout` | Maximum duration of an upstream function call, should be lower than `readTimeout`/`writeTimeout` | `60s` |
`gateway.writeTimeout` | Write timeout for the gateway API | `65s` |
`gatewayPro.image` | Container image used for the gateway when `openfaasPro=true` | See values.yaml |
Parameter | Description | Default |
---|---|---|
`faasnetes.image` | Container image used for the provider API | See values.yaml |
`faasnetes.readTimeout` | Read timeout for the faas-netes API | `""` (defaults to gateway.readTimeout) |
`faasnetes.resources` | Resource limits and requests for the faas-netes container | See values.yaml |
`faasnetes.writeTimeout` | Write timeout for the faas-netes API | `""` (defaults to gateway.writeTimeout) |
`faasnetesPro.image` | Container image used for faas-netes when `openfaasPro=true` | See values.yaml |
`faasnetesOem.image` | Container image used for faas-netes when `oem=true` | See values.yaml |
`faasnetesPro.logs.format` | Set the log format, supports `console` or `json` | `console` |
`faasnetesPro.logs.debug` | Print debug logs | `false` |
`operator.create` | Use the OpenFaaS operator CRD controller; the default uses faas-netes as the Kubernetes controller | `false` |
`operator.image` | Container image used for the openfaas-operator | See values.yaml |
`operator.resources` | Resource limits and requests for openfaas-operator containers | See values.yaml |
`operator.kubeClientQPS` | QPS rate-limit for the Kubernetes client (OpenFaaS for Enterprises) | `""` (defaults to 100) |
`operator.kubeClientBurst` | Burst rate-limit for the Kubernetes client (OpenFaaS for Enterprises) | `""` (defaults to 250) |
`operator.reconcileWorkers` | Number of reconciliation workers to run to convert Function CRs into Deployments | `1` |
`operator.logs.format` | Set the log format, supports `console` or `json` | `console` |
`operator.logs.debug` | Print debug logs | `false` |
`operator.leaderElection.enabled` | When set to `true`, only one replica of the operator within the gateway pod will perform reconciliation | `false` |
Parameter | Description | Default |
---|---|---|
`functions.httpProbe` | Use an httpProbe instead of exec | `true` |
`functions.imagePullPolicy` | Image pull policy for deployed functions (OpenFaaS Pro) | `Always` |
`functions.livenessProbe.initialDelaySeconds` | Number of seconds after the container has started before the probe is initiated | `2` |
`functions.livenessProbe.periodSeconds` | How often (in seconds) to perform the probe | `2` |
`functions.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `1` |
`functions.livenessProbe.failureThreshold` | After a probe fails `failureThreshold` times in a row, Kubernetes considers that the overall check has failed | `3` |
`functions.readinessProbe.initialDelaySeconds` | Number of seconds after the container has started before the probe is initiated | `2` |
`functions.readinessProbe.periodSeconds` | How often (in seconds) to perform the probe | `2` |
`functions.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `1` |
`functions.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
`functions.readinessProbe.failureThreshold` | After a probe fails `failureThreshold` times in a row, Kubernetes considers that the overall check has failed | `3` |
`functions.setNonRootUser` | Force all function containers to run with user id `12000` | `false` |
Parameter | Description | Default |
---|---|---|
`autoscaler.disableHorizontalScaling` | Set to `true` to only scale to zero, without scaling replicas between the defined Min and Max count for the function | `false` |
`autoscaler.enabled` | Enable the autoscaler, if `openfaasPro` is set to `true` | `true` |
`autoscaler.image` | Container image used for the autoscaler | See values.yaml |
`autoscaler.replicas` | Replicas of the autoscaler | `1` |
`autoscaler.resources` | Resource limits and requests for the autoscaler pods | See values.yaml |
Parameter | Description | Default |
---|---|---|
`jetstreamQueueWorker.durableName` | Durable name used by JetStream consumers | `faas-workers` |
`jetstreamQueueWorker.image` | Container image used for the queue-worker when the `queueMode` is `jetstream` | See values.yaml |
`jetstreamQueueWorker.maxWaiting` | Configure the max waiting pulls for the queue-worker JetStream consumer. The value should be at least `max_inflight * queue_worker.replicas`. Note that this value cannot be updated once the consumer is created. | `512` |
`jetstreamQueueWorker.logs.debug` | Log debug messages | `false` |
`jetstreamQueueWorker.logs.format` | Set the log format, supports `console` or `json` | `console` |
`nats.channel` | The name of the NATS Streaming channel or NATS JetStream stream to use for asynchronous function invocations | `faas-request` |
`nats.external.clusterName` | The name of the externally-managed NATS Streaming server | `""` |
`nats.external.enabled` | Whether to use an externally-managed NATS Streaming server | `false` |
`nats.external.host` | The host at which the externally-managed NATS Streaming server can be reached | `""` |
`nats.external.port` | The port at which the externally-managed NATS Streaming server can be reached | `""` |
`nats.image` | Container image used for NATS | See values.yaml |
`nats.resources` | Resource limits and requests for the nats pods | See values.yaml |
`nats.streamReplication` | JetStream stream replication factor. For production a value of at least `3` is recommended. | `1` |
`queueWorker.ackWait` | Max duration of any async task/request | `60s` |
`queueWorker.image` | Container image used for the CE edition of the queue-worker | See values.yaml |
`queueWorker.maxInflight` | Control the concurrent invocations | `1` |
`queueWorker.replicas` | Replicas of the queue-worker, pick more than `1` for HA | `1` |
`queueWorker.resources` | Resource limits and requests for the queue-worker pods | See values.yaml |
`queueWorker.queueGroup` | The name of the queue group used to process asynchronous function invocations | `faas` |
`queueWorkerPro.backoff` | The backoff algorithm used for retries. Must be one of `exponential`, `full` or `equal` | `exponential` |
`queueWorkerPro.httpRetryCodes` | Comma-separated list of HTTP status codes the queue-worker should retry | `408,429,500,502,503,504` |
`queueWorkerPro.image` | Container image used for the Pro version of the queue-worker | See values.yaml |
`queueWorkerPro.initialRetryWait` | Time to wait for the first retry | `10s` |
`queueWorkerPro.insecureTLS` | Enable insecure TLS for callback invocations | `false` |
`queueWorkerPro.maxRetryAttempts` | Amount of times to try sending a message to a function before discarding it | `10` |
`queueWorkerPro.maxRetryWait` | Maximum amount of time to wait between retries | `120s` |
`queueWorkerPro.printResponseBody` | Print the function response body | `false` |
`queueWorkerPro.printRequestBody` | Print the request body | `false` |
`stan.image` | Container image used for the NATS Streaming server | See values.yaml |
Parameter | Description | Default |
---|---|---|
`iam.enabled` | Enable Access and Identity Management for OpenFaaS | `false` |
`iam.systemIssuer.url` | URL for the OpenFaaS OpenID Connect provider for system components, usually the public URL of the gateway | `https://gateway.example.com` |
`iam.dashboardIssuer.url` | URL of the OpenID Connect provider to be used by the dashboard | `https://example.eu.auth0.com` |
`iam.dashboardIssuer.clientId` | OAuth Client Id for the dashboard | `""` |
`iam.dashboardIssuer.clientSecret` | Name of the Kubernetes secret that contains the OAuth client secret for the dashboard | `""` |
`iam.dashboardIssuer.scopes` | OpenID Connect (OIDC) scopes for the dashboard | `[openid, email, profile]` |
`iam.kubernetesIssuer.create` | Create a JwtIssuer object for the Kubernetes service account issuer | `true` |
`iam.kubernetesIssuer.tokenExpiry` | Expiry time of OpenFaaS access tokens exchanged for tokens issued by the Kubernetes issuer | `2h` |
`iam.kubernetesIssuer.url` | URL for the Kubernetes service account issuer | `https://kubernetes.default.svc.cluster.local` |
Parameter | Description | Default |
---|---|---|
`dashboard.enabled` | Enable the dashboard | `false` |
`dashboard.image` | Container image used for the dashboard | See values.yaml |
`dashboard.publicURL` | URL used to expose the dashboard. Needs to be a fully qualified domain name (FQDN) | `https://dashboard.example.com` |
`dashboard.logs.debug` | Log debug messages | `false` |
`dashboard.logs.format` | Set the log format, supports `console` or `json` | `console` |
`dashboard.replicas` | Replicas of the dashboard | `1` |
`dashboard.resources` | Resource limits and requests for the dashboard pods | See values.yaml |
`dashboard.signingKeySecret` | Name of the signing key secret for sessions. Can be left blank for development; see https://docs.openfaas.com/openfaas-pro/dashboard/ for production and staging | `""` |
Parameter | Description | Default |
---|---|---|
`oidcAuthPlugin.image` | Container image used for the oidc-auth-plugin | See values.yaml |
`oidcAuthPlugin.insecureTLS` | Enable insecure TLS | `false` |
`oidcAuthPlugin.replicas` | Replicas of the oidc-auth-plugin | `1` |
`oidcAuthPlugin.logs.debug` | Log debug messages | `false` |
`oidcAuthPlugin.logs.format` | Set the log format, supports `console` or `json` | `console` |
`oidcAuthPlugin.resources` | Resource limits and requests for the oidc-auth-plugin containers | See values.yaml |
Parameter | Description | Default |
---|---|---|
`eventSubscription.endpoint` | The webhook endpoint to receive events | `""` |
`eventSubscription.endpointSecret` | Name of the Kubernetes secret that contains the secret key for signing webhook requests | `""` |
`eventSubscription.insecureTLS` | Enable insecure TLS for webhook invocations | `false` |
`eventSubscription.metering.enabled` | Enable metering events | `false` |
`eventSubscription.metering.defaultRAM` | Default memory value used in `function_usage` events for metering when no memory limit is set on the function | `512Mi` |
`eventSubscription.metering.excludedNamespaces` | Comma-separated list of namespaces to exclude from metering, for when functions are used to handle the metering webhook events | `""` |
`eventSubscription.auditing.enabled` | Enable auditing events | `false` |
`eventSubscription.auditing.httpVerbs` | Comma-separated list of HTTP methods to audit | `"PUT,POST,DELETE"` |
`eventWorker.image` | Container image used for the events-worker | See values.yaml |
`eventWorker.resources` | Resource limits and requests for the event-worker container | See values.yaml |
`eventWorker.logs.format` | Set the log format, supports `console` or `json` | `console` |
`eventWorker.logs.debug` | Print debug logs | `false` |
Parameter | Description | Default |
---|---|---|
`ingressOperator.create` | Create the ingress-operator component | `false` |
`ingressOperator.image` | Container image used in the ingress-operator | `openfaas/ingress-operator:0.6.2` |
`ingressOperator.replicas` | Replicas of the ingress-operator | `1` |
`ingressOperator.resources` | Limits and requests for memory and CPU usage | Memory Requests: 25Mi |
For legacy scaling in OpenFaaS Community Edition.
Parameter | Description | Default |
---|---|---|
`alertmanager.create` | Create the AlertManager component | `true` |
`alertmanager.image` | Container image used for alertmanager | See values.yaml |
`alertmanager.resources` | Resource limits and requests for alertmanager pods | See values.yaml |
Parameter | Description | Default |
---|---|---|
`prometheus.create` | Create the Prometheus component | `true` |
`prometheus.image` | Container image used for prometheus | See values.yaml |
`prometheus.retention.time` | When to remove old data from the Prometheus database | `15d` |
`prometheus.retention.size` | The maximum number of bytes of storage blocks to retain. Units supported: B, KB, MB, GB, TB, PB, EB. `0` means disabled. See: Prometheus storage | `0` |
`prometheus.resources` | Resource limits and requests for prometheus containers | See values.yaml |
`prometheus.recordingRules` | Custom recording rules for autoscaling | `[]` |
`prometheus.pvc` | Persistent volume claim for Prometheus, used so that metrics survive restarts of the Pod and upgrades of the chart (see the example below this table) | `{}` |
`prometheus.pvc.enabled` | Enable a persistent volume claim for Prometheus | `false` |
`prometheus.pvc.storageClassName` | Storage class for the Prometheus PVC, set to `""` for the default/standard class to be picked | `""` |
`prometheus.pvc.size` | Size of the Prometheus PVC; 60-100Gi may be a better fit for a busy production environment | `10Gi` |
`prometheus.pvc.name` | Name of the Prometheus PVC, required for multiple installations within the same cluster | `""` |
`prometheus.fsGroup` | Simplified way to set fsGroup for persistent volume access. Set to `65534` (nobody) for proper volume permissions | `""` |
`prometheus.securityContext` | Simplified container-level security context for Prometheus. Takes precedence over the global `securityContext` when set; falls back to the global `securityContext` when not set, for backward compatibility | `{}` |
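For example, to retain metrics across Pod restarts and chart upgrades, a values overlay might look like this sketch, which uses the parameters above with an illustrative size:

# overlay for persistent Prometheus metrics; adjust the size for your workload
prometheus:
  pvc:
    enabled: true
    size: 60Gi
  fsGroup: 65534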