Aperture Controller
Overview
The Aperture Controller functions as the brain of the Aperture system. Leveraging an advanced control loop, the Controller routinely analyzes polled metrics and indicators to determine how traffic should be shaped as defined by the set policies. These decisions are then exported to all Aperture Agents, which enforce them on the workloads.
The closed feedback loop works primarily by monitoring the variables that reflect stability conditions (that is, process variables) and comparing them against setpoints. The difference between a variable's value and its setpoint is referred to as the error signal. The feedback loop then works to minimize these error signals by computing and distributing control actions that adjust the process variables and keep their values within the optimal range.
Controller CRD
The Aperture Controller is a Kubernetes-based application and is installed using a Kubernetes Custom Resource, which is managed by the Aperture Operator.
The configuration for the Aperture Controller process is provided to the Controller CRD under the controller.config section. All the configuration parameters for the Aperture Controller are listed here.
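For illustration only, a minimal Helm values fragment that sets a parameter under the controller.config section could look like the following sketch (log.level is just an example; any of the documented parameters can be placed here):

controller:
  config:
    log:
      level: info  # example parameter; see the configuration reference for the full list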
Prerequisites
You can perform the installation using the aperturectl CLI tool or Helm. Install the tool of your choice using the links below.
Refer to aperturectl install controller to see all the available command-line arguments.
Once the Helm CLI is installed, add the Aperture Controller Helm chart repository to your environment for install or upgrade:
helm repo add aperture https://fluxninja.github.io/aperture/
helm repo update
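To confirm that the repository was added and to see which chart versions are available, you can optionally run:

# lists the Aperture charts published in the repository added above
helm search repo aperture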
Installation
By following these instructions, you will have deployed the Aperture Controller into your cluster.
The Kubernetes objects that will be created by the following steps are listed here.
Run the following install command:

Using aperturectl:
aperturectl install controller --version v1.1.0

Using Helm:
helm install controller aperture/aperture-controller
By default, Prometheus and Etcd instances are installed. If you don't want to install them and would rather use your existing instances of Prometheus or Etcd, configure the below values in the values.yaml file and pass it with the install command:

controller:
  config:
    etcd:
      endpoints: ["ETCD_ENDPOINT_HERE"]
    prometheus:
      address: "PROMETHEUS_ADDRESS_HERE"
etcd:
  enabled: false
prometheus:
  enabled: false

Replace ETCD_ENDPOINT_HERE and PROMETHEUS_ADDRESS_HERE with the actual values of the Etcd endpoint and Prometheus address that will be used by the Aperture Controller.

Using aperturectl:
aperturectl install controller --version v1.1.0 --values-file values.yaml

Using Helm:
helm install controller aperture/aperture-controller -f values.yaml
A list of all the configurable parameters for Etcd is available here, and for Prometheus here.
Note: Make sure that the web.enable-remote-write-receiver flag is enabled on your existing Prometheus instance, as it is required by the Aperture Controller.
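How you enable this flag depends on how your Prometheus is deployed. As a sketch, assuming your existing Prometheus is managed with the prometheus-community/prometheus Helm chart (an assumption; adjust for your own setup), the flag can be passed through the server's extra flags. For a Prometheus started directly, the equivalent is adding --web.enable-remote-write-receiver to its command-line arguments.

# values for the prometheus-community/prometheus chart (assumption: that chart is in use)
server:
  extraFlags:
    - web.enable-lifecycle               # keep the chart's default flag
    - web.enable-remote-write-receiver   # required by the Aperture Controller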
If you want to modify the default parameters or the Aperture Controller configuration, for example the log settings, you can create or update the values.yaml file and pass it with the install command:

controller:
  config:
    log:
      level: debug
      pretty_console: true
      non_blocking: false

Using aperturectl:
aperturectl install controller --version v1.1.0 --values-file values.yaml

Using Helm:
helm install controller aperture/aperture-controller -f values.yaml
All the configuration parameters for the Aperture Controller are available here.
A list of configurable parameters for the installation can be found in the README.
If you want to deploy the Aperture Controller into a namespace other than default, use the --namespace flag:

Using aperturectl:
aperturectl install controller --version v1.1.0 --values-file values.yaml --namespace aperture-controller

Using Helm:
helm install controller aperture/aperture-controller -f values.yaml --namespace aperture-controller --create-namespace
Alternatively, you can create the Controller Custom Resource directly on the Kubernetes cluster using the steps below:
Create a values.yaml file that starts the operator but disables the creation of the Controller Custom Resource, and pass it with the install command:

controller:
  create: false

Using aperturectl:
aperturectl install controller --version v1.1.0 --values-file values.yaml

Using Helm:
helm install controller aperture/aperture-controller -f values.yaml
Create a YAML file with the below specification:

apiVersion: fluxninja.com/v1alpha1
kind: Controller
metadata:
  name: controller
spec:
  image:
    registry: docker.io/fluxninja
    repository: aperture-controller
    tag: latest
  config:
    etcd:
      endpoints: ["http://controller-etcd.default.svc.cluster.local:2379"]
    prometheus:
      address: "http://controller-prometheus-server.default.svc.cluster.local:80"

All the configuration parameters for the Controller Custom Resource are listed in the README file of the Helm chart.
Apply the YAML file to the Kubernetes cluster using kubectl:

kubectl apply -f controller.yaml
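As a quick check, assuming the default namespace and the resource name controller used above, you can confirm that the Custom Resource was created and picked up by the operator:

# list Controller custom resources and inspect the one created above
kubectl get controller --namespace default
kubectl describe controller controller --namespace default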
Exposing Etcd and Prometheus services
If the Aperture Controller is installed with the packaged Etcd and Prometheus, follow the steps below to expose them outside of the Kubernetes cluster so that an Aperture Agent running on Linux can access them.

In the steps below, Contour is used as the Kubernetes Ingress Controller to expose the Etcd and Prometheus services outside the Kubernetes cluster through a Load Balancer. Any other tool can be used to expose these services, depending on your infrastructure.
Add the Helm chart repository for Contour to your environment:
helm repo add bitnami https://charts.bitnami.com/bitnami
Install the Contour chart by running the following command:
helm install aperture bitnami/contour --namespace projectcontour --create-namespace
It may take a few minutes for the Contour Load Balancer's IP to become available. You can watch the status by running:
kubectl get svc aperture-contour-envoy --namespace projectcontour -w
Once EXTERNAL-IP is no longer <pending>, run the below command to get the External IP of the Load Balancer:

kubectl describe svc aperture-contour-envoy --namespace projectcontour | grep Ingress | awk '{print $3}'
Add an entry for the above IP in your Cloud provider's DNS configuration. For example, follow the Cloud DNS steps for Google Kubernetes Engine.
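As an illustrative sketch, assuming Google Cloud DNS with a hypothetical managed zone named my-zone (adapt the commands to your provider and replace the placeholders), A records for the Etcd and Prometheus hostnames could be created as follows:

# EXTERNAL_IP_HERE is the Load Balancer IP obtained in the previous step
gcloud dns record-sets create etcd.YOUR_DOMAIN_HERE. --zone=my-zone --type=A --ttl=300 --rrdatas=EXTERNAL_IP_HERE
gcloud dns record-sets create prometheus.YOUR_DOMAIN_HERE. --zone=my-zone --type=A --ttl=300 --rrdatas=EXTERNAL_IP_HERE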
Configure the below parameters to install the Kubernetes Ingress with the Aperture Controller by updating the values.yaml file created during installation and passing it with the install command:

ingress:
  enabled: true
  domain_name: YOUR_DOMAIN_HERE

etcd:
  service:
    annotations:
      projectcontour.io/upstream-protocol.h2c: "2379"

Replace YOUR_DOMAIN_HERE with the actual domain name under which the External IP is exposed.

Using aperturectl:
aperturectl install controller --version v1.1.0 --values-file values.yaml

Using Helm:
helm upgrade --install controller aperture/aperture-controller -f values.yaml
It may take a few minutes for the Ingress resource to get the ADDRESS. You can watch the status by running:

kubectl get ingress controller-ingress -w
Once the ADDRESS matches the External IP, Etcd will be accessible at http://etcd.YOUR_DOMAIN_HERE:80 and Prometheus will be accessible at http://prometheus.YOUR_DOMAIN_HERE:80.
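To sanity-check the exposed services, you can query their health endpoints. This is a sketch that assumes etcdctl and curl are installed locally and that the domain configured above resolves to the Load Balancer:

# Etcd is exposed over h2c (gRPC), so etcdctl is the most direct check
etcdctl --endpoints=http://etcd.YOUR_DOMAIN_HERE:80 endpoint health
# Prometheus exposes a readiness endpoint over plain HTTP
curl http://prometheus.YOUR_DOMAIN_HERE:80/-/ready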
Upgrade Procedure
By following these instructions, you will have deployed the upgraded version of the Aperture Controller into your cluster.
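When upgrading with Helm, it is usually worth refreshing the chart repository added during the prerequisites first, so that the latest chart version is picked up:

helm repo update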
Use the same values.yaml file created as part of the installation steps and pass it with the below command:

Using aperturectl:
aperturectl install controller --version v1.1.0 --values-file values.yaml

Using Helm:
helm template --include-crds --no-hooks controller aperture/aperture-controller -f values.yaml | kubectl apply -f -
If you have deployed the Aperture Controller into a namespace other than default, use the --namespace flag:

Using aperturectl:
aperturectl install controller --version v1.1.0 --values-file values.yaml --namespace aperture-controller

Using Helm:
helm template --include-crds --no-hooks controller aperture/aperture-controller -f values.yaml --namespace aperture-controller | kubectl apply -f -
Verifying the Installation
Once you have successfully deployed the resources, confirm that the Aperture Controller is up and running:
kubectl get pod -A
kubectl get controller -A
You should see pods for the Aperture Controller and Controller Manager in RUNNING state and the Controller Custom Resource in created state.
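If the pods are not healthy, inspecting the Controller logs is a good next step. A sketch, assuming the default namespace and the app.kubernetes.io/name=aperture-controller label (the exact label can differ between chart versions, so verify it with kubectl get pods --show-labels):

kubectl logs --namespace default -l app.kubernetes.io/name=aperture-controller --tail=100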
Applying Policies
Policies for Aperture can be created either after the installation of the Controller or after the installation of the Agent, depending on your preference. The Generating and applying policies guide includes step-by-step instructions on how to create policies for Aperture in a Kubernetes cluster, which you can follow to create policies according to your needs.
Uninstall
You can uninstall the Aperture Controller and its components installed above by following the steps below:
Uninstall the Aperture Controller:

Using aperturectl:
aperturectl uninstall controller

Using Helm:
helm uninstall controller
Alternatively, if you have installed the Aperture Controller Custom Resource separately, follow the steps below:
Delete the Aperture Controller Custom Resource:
kubectl delete -f controller.yaml
Uninstall the Aperture Controller chart to remove the Aperture Operator:

Using aperturectl:
aperturectl uninstall controller

Using Helm:
helm uninstall controller
If you have installed the chart in a namespace other than default, execute the below commands:

Using aperturectl:
aperturectl uninstall controller --namespace aperture-controller

Using Helm:
helm uninstall controller --namespace aperture-controller
kubectl delete namespace aperture-controller

If you have installed the Contour chart for exposing the Etcd and Prometheus services, execute the below commands:
helm uninstall aperture -n projectcontour
kubectl delete namespace projectcontour

Optional: Delete the CRDs installed by the Helm chart:
Note: By design, deleting a chart via Helm doesn’t delete the Custom Resource Definitions (CRDs) installed via the Helm chart.
kubectl delete crd controllers.fluxninja.com
kubectl delete crd policies.fluxninja.com