The Journey to Traefik Enterprise Edition: Smooth Operations

March 18, 2019

Welcome to the fifth step of our journey towards Traefik Enterprise Edition.

This post focuses on the experience of operating TraefikEE using the traefikeectl command line. You should already feel at ease with TraefikEE’s one-line installation and concepts. If you are not, you can check the previous posts in the “Journey to TraefikEE” series.

Today, we’ll focus on traefikeectl, the command-line tool used to install, configure and operate all your TraefikEE installations, in a lean and repeatable way, independently of your platform.

One Command-Line to Rule Them All

(A voice in the background): I used Traefik because it’s really simple to use. I’m interested in TraefikEE for its high-availability features. But distributed systems are really hard. How do we deal with this intrinsic complexity?
Containous: Let us introduce traefikeectl (also known as “TraefikEE-cuddle”), the command-line tool which abstracts most of this complexity.

TraefikEE is designed from the ground up to be platform agnostic. From installation to any operation, the workflow is the same on each platform.

All operations are managed by the traefikeectl command line following this workflow, whichever platform you are using:

  • Install Phase: Install a TraefikEE cluster (with traefikeectl install or manually),
  • Connect Phase: Make your local traefikeectl aware of an existing TraefikEE cluster, allowing multi-cluster management,
  • Deploy Phase: Deploy a “routing configuration” to this TraefikEE instance.

Install Phase

Platform-Specific

Do you remember the “One Line Installation for Kubernetes” from previous posts (and the related documentation)?

traefikeectl install \
  --licensekey="$(cat /keybase/.../traefikee-license)" \
  --dashboard \
  --kubernetes
  # ...

What if you want to install on a Docker Swarm cluster instead?

Well, the answer is to use the flag --swarm instead of --kubernetes, as described in the “One Line Installation guide for Swarm”:

traefikeectl install \
  --licensekey="$(cat /keybase/.../traefikee-license)" \
  --dashboard \
  --swarm
  # ...

Each platform-specific setting is then provided by the traefikeectl command when required, such as --kubernetes.namespace or --swarm.networkname.
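For instance, here is a minimal sketch of how those flags could be combined with the install command; the license path, namespace, and network name below are hypothetical placeholders:

# Kubernetes: install into a dedicated namespace (placeholder name)
traefikeectl install \
  --licensekey="$(cat ./traefikee-license)" \
  --dashboard \
  --kubernetes \
  --kubernetes.namespace=traefikee-ingress

# Docker Swarm: use a specific overlay network name (placeholder name)
traefikeectl install \
  --licensekey="$(cat ./traefikee-license)" \
  --dashboard \
  --swarm \
  --swarm.networkname=traefikee-net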

Customize Installation

We’ll now dig into how to customize the installation for Kubernetes.

(A voice in the background): We are not able to use traefikeectl for installation in our Kubernetes cluster, because the default configuration does not fit our needs.
Containous: The install command of traefikeectl allows you to customize installation. With Kubernetes, you can provide a YAML file to specify custom values.

Following the reference values from the documentation, let’s consider that you want to specify the following elements during the installation:

  • A custom URL for accessing the dashboard, such as http(s)://private.mycompany.org/traefikee,
  • A resource limit of 2 GB of memory and 2 CPUs for each node (see “Managing Compute Resources for Containers” in the Kubernetes documentation for reference).

Before installation, create the following YAML file to specify these values:

## File "traefikee-values.yaml"

# Expose dashboard at http(s)://private.mycompany.org/traefikee
dashboard:
  host: "private.mycompany.org"
  path: "/traefikee"

# Restrict resources usage for each TraefikEE node
resources:
  cpus: "2"
  memory: "2G"

You can now proceed with the “One Line Installation”, adding the flag --kubernetes.helmvaluespath to use the YAML values file:

traefikeectl install --kubernetes \
  --licensekey=XXXXXXX \
  --dashboard \
  --kubernetes.helmvaluespath=./traefikee-values.yaml

As no one is perfect, traefikeectl might not provide some specific settings required for your platform to work correctly. In this case, you can still “patch”, “edit”, or even “annotate” the Kubernetes resources created by traefikeectl install.
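As a hedged sketch with plain kubectl, you could first list what the installer created and then annotate one of the resources; the namespace, Service name, and annotation below are placeholders, so check the actual resource names in your cluster first:

# List the resources created by the installer to find their exact names
kubectl get deployments,services --namespace traefikee-ingress

# Add a custom annotation to one of them (placeholder Service name and annotation)
kubectl annotate service traefikee-data-node \
  --namespace traefikee-ingress \
  example.com/managed-by="platform-team" --overwrite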

Connect Phase

The next step is to ensure that your traefikeectl command has knowledge of the cluster you want to operate.

Please note that this step is not required if you used a “One Line Installation” with traefikeectl install, as it is done automatically.

The scenario is the following:

  • A TraefikEE cluster, designated as traefikee-staging, is running on your remote Kubernetes platform, in the namespace traefikee-ingress,
  • You want to set up an administration machine (your computer, a freshly onboarded team member, etc.) to operate this cluster,
  • This machine is already configured to reach the Kubernetes cluster.

The connect phase is easy:

traefikeectl connect --clustername=traefikee-staging \
  --kubernetes \
  --kubernetes.namespace=traefikee-ingress

That’s all: traefikeectl is now aware of this cluster and can operate it:

traefikeectl list-nodes
traefikeectl logs
# ...

Under the hood, traefikeectl stores the cluster connection information in a TOML file named after the cluster. These files are located in ${TRAEFIKEE_HOME}, which defaults to the directory ${HOME}/.config/traefikee (the XDG config folder).

Please note that if you do not provide the option --clustername, the default value traefikee is used.

This means that you can manage multiple TraefikEE clusters from the same traefikeectl, so you are free to split per environment (staging/production, or even external/internal, etc.).
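For example, here is a sketch of connecting a second, hypothetical production cluster alongside traefikee-staging (the cluster name and namespace are placeholders), assuming the other traefikeectl commands accept the same --clustername option to select which cluster to operate:

traefikeectl connect --clustername=traefikee-production \
  --kubernetes \
  --kubernetes.namespace=traefikee-prod-ingress

# Each connected cluster gets its own TOML file under ${TRAEFIKEE_HOME}
ls "${TRAEFIKEE_HOME:-$HOME/.config/traefikee}"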

Deploy Phase

The last step of the workflow is to deploy a “routing configuration”.

(A voice in the background): We followed the previous blog post, and we were able to install TraefikEE on our platform. Now, we want to enable HTTPS and metrics collection with Prometheus. Do we have to install the cluster again to update the configuration, as we did for Traefik?
Containous: Fortunately, you don’t have to reinstall the cluster. Unlike Traefik, TraefikEE lets you update the configuration at runtime.

Let’s say that you want to set up TraefikEE with the following routing configuration:

  • Enable TLS termination on port 443 (defining a new “entrypoint” named https),
  • Auto-generate TLS certificates using Let’s Encrypt for this new https “entrypoint”. The host names for these certificates are determined from the backend applications (either Ingresses on Kubernetes or labels on Docker Swarm),
  • Enable the Prometheus metric exporter,
  • Enable redirection from http to https for any incoming request.

With Traefik, you would have passed the following options (and restarted Traefik to apply the configuration):

## Traefik Configuration References:
# Entrypoints: https://docs.traefik.io/configuration/entrypoints/
# Metrics: https://docs.traefik.io/configuration/metrics/
# ACME/Let's Encrypt: https://docs.traefik.io/configuration/acme/

traefik --kubernetes \
  --entryPoints='Name:http Address::80 Redirect.EntryPoint:https' \
  --entryPoints='Name:https Address::443 TLS' \
  --defaultentrypoints=https,http \
  --acme.entryPoint=https \
  --acme.email="your-email@mycompany.org" \
  --acme.tlsChallenge \
  --acme.onHostRule=true \
  --metrics.prometheus

With TraefikEE, the same configuration is passed to the traefikeectl deploy command. It is sent to the TraefikEE control plane through its API, stored there, and then applied to all the TraefikEE data nodes (which are hot-reloaded).

traefikeectl deploy --kubernetes \
  --entryPoints='Name:http Address::80 Redirect.EntryPoint:https' \
  --entryPoints='Name:https Address::443 TLS' \
  --defaultentrypoints=https,http \
  --acme.entryPoint=https \
  --acme.email="your-email@mycompany.org" \
  --acme.tlsChallenge \
  --acme.onHostRule=true \
  --metrics.prometheus

If you prefer using a TOML file instead of command-line flags:

# File "traefik.toml"

defaultEntryPoints = ["https","http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
  [entryPoints.https.tls]

[kubernetes]

[acme]
email = "your-email@mycompany.org"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
  [acme.tlsChallenge]

Then deploy this file with:

traefikeectl deploy --configfile=./traefik.toml

That’s all for today: we installed a customized cluster, connected a remote machine to operate it with traefikeectl, and deployed the same configuration as the one you could have used for Traefik, but without taking down the cluster, and without losing any requests!

You can learn more about the traefikeectl command line on the reference documentation.
