Version: 2.33.1



The Aperture Agent is the decision executor of the Aperture system. In addition to gathering data, the Aperture Agent functions as a gatekeeper, acting on traffic based on periodic adjustments made by the Aperture Controller. Specifically, depending on feedback from the Controller, the agent allows or drops incoming requests. In support of the Controller, the agent also injects information into traffic, including the specific traffic-shaping decisions made and classification labels that can later be used for observability and closed-loop feedback.


All the configuration parameters for the Aperture Agent are listed here.

Installation Modes

The Aperture Agent can be installed in the following modes:


Switching from one of the installation modes below to another is discouraged and can result in unpredictable behavior.

  1. Kubernetes

    1. Namespace-scoped Installation

      The Aperture Agent can also be installed with only namespace-scoped resources.

    2. Install with Operator

      The Aperture Agent can be installed using the Kubernetes Operator available for it.


      This method requires access to create cluster-level resources such as ClusterRole, ClusterRoleBinding, CustomResourceDefinition, and so on.

      Use the Namespace-scoped Installation if you do not want to grant cluster-level permissions.

      • DaemonSet

        The Aperture Agent can be installed as a Kubernetes DaemonSet, where it will get deployed on all the nodes of the cluster.

      • Sidecar

        The Aperture Agent can also be installed as a Sidecar. In this mode, whenever a new pod is started with the required labels and annotations, the agent container is attached to the pod.

  2. Bare Metal or VM

    The Aperture Agent can be installed as a system service on any supported Linux distribution.

  3. Docker

    The Aperture Agent can also run as a Docker container.
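For the Docker mode, a minimal invocation might look like the following sketch. The image name, tag, and in-container config path are assumptions; verify them against the installation guide for your version:

```shell
# Run the Aperture Agent as a detached container, mounting a local agent
# configuration file into the location the image reads its config from.
# (Image name and mount path are assumptions -- check the install docs.)
docker run -d \
  --name aperture-agent \
  -v "$(pwd)/agent.yaml:/etc/aperture/aperture-agent/config/aperture-agent.yaml" \
  fluxninja/aperture-agent:latest
```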

Self-Hosted Aperture Controller

When using the self-hosted Aperture Controller instead of the Aperture Cloud Controller, you need to turn off the enable_cloud_controller flag and configure Controller, etcd and Prometheus endpoints directly, for example:

```yaml
agent:
  config:
    fluxninja:
      enable_cloud_controller: false
      endpoint: ""
    etcd:
      endpoints: ["http://controller-etcd.default.svc.cluster.local:2379"]
    prometheus:
      address: "http://controller-prometheus-server.default.svc.cluster.local:80"
    agent_functions:
      endpoints: ["aperture-controller.default.svc.cluster.local:8080"]
  secrets:
    fluxNinjaExtension:
      create: true
      secretKeyRef:
        name: aperture-apikey
        key: apiKey
      value: "API_KEY"
```

The values above assume that you have installed the Aperture Controller on the same cluster in the default namespace, with etcd and Prometheus, using controller as the release name. If your setup is different, adjust these endpoints accordingly.


If you're not using Aperture Cloud, simply remove the fluxninja and secrets sections.
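With a values file like the one above, the agent can then be installed or upgraded through Helm. The repository URL, chart name, and release name below are assumptions based on the public Aperture Helm charts; adjust the release name, namespace, and values file path to your setup:

```shell
# Add the Aperture Helm repository (URL assumed from the public charts)
helm repo add aperture https://fluxninja.github.io/aperture/
helm repo update

# Install or upgrade the agent using the customized values file
helm upgrade --install agent aperture/aperture-agent -f values.yaml
```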