Advanced deployments

Multi-instance deployment

For operational reasons, for instance to run a development platform alongside a validation platform, or to run a ThingPark Enterprise instance alongside a ThingPark Wireless instance, you may want to deploy more than one ThingPark instance on the same Kubernetes cluster. This scenario requires the following settings for the embedded components.

Percona psmdb operator (data stack)

The Percona psmdb operator must be deployed per ThingPark instance to keep their upgrade cycles separate. The Custom Resource Definitions extending the Kubernetes API are updated by the first instance to be upgraded.

info

Percona configured its psmdb operator to watch for Custom Resources only in the namespace where it is deployed. No additional configuration is required.

Strimzi operator (data stack)

The Strimzi operator must also be deployed per ThingPark instance, for the same independence reason. It likewise watches for Custom Resources only in the ThingPark deployment namespace.

The second ThingPark instance must be customized to not deploy the global resources (cluster-scoped resources that the first instance already installs). Add the following parameter to disable them:

strimzi-kafka-operator:
  createGlobalResources: false

Cert-manager operator (application stack)

With multiple instances, prefer an external, standalone deployment of cert-manager: its CRDs and webhook are cluster-scoped, so a single deployment can serve every instance. Use the product installation guide to deploy it. Each ThingPark instance must also be customized to not deploy the embedded controller:

cert-manager:
  enabled: false

Ingress nginx (application stack)

Each ThingPark instance requires a dedicated ingress nginx controller for the following reasons:

  • the controller is specifically configured to fit the application's needs.
  • the controller and its LoadBalancer service are configured to gather all ThingPark inbound flows.

For each ThingPark instance, the following parameters must be customized (a consolidated example follows the list):

  • ingress-nginx.controller.admissionWebhooks.namespaceSelector:

    • Description: Set a namespaceSelector to restrict the webhook watch to the deployment namespace

    • Example:

      namespaceSelector:
        matchExpressions:
          - key: kubernetes.io/metadata.name
            operator: In
            values:
              - <namespace name>
  • ingress-nginx.controller.ingressClass:

    • Description: Set the ingressClass name watched by the controller
    • Example: ingressClass: <ingress class name>
  • ingress-nginx.controller.ingressClassResource:

    • Description: Set the ingressClass name created for the ThingPark instance

    • Example:

      ingressClassResource:
        name: <ingress class name>
        enabled: true
        default: false
        controllerValue: "k8s.io/<ingress class name>"
  • <chart name>.ingress.className:

    • Description: Set each ingress resource deployed by each sub-chart to use the ingressClass created for the ThingPark instance

    • Example:

      twa:
        ingress:
          className: "<ingress class name>"
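
Putting these parameters together, a values override for one ThingPark instance might look like the following sketch. The namespace and ingress class name thingpark-b are illustrative placeholders, and twa stands in for each sub-chart that deploys an ingress resource:

ingress-nginx:
  controller:
    admissionWebhooks:
      namespaceSelector:
        matchExpressions:
          - key: kubernetes.io/metadata.name
            operator: In
            values:
              - thingpark-b            # illustrative deployment namespace
    ingressClass: thingpark-b          # class name watched by this controller
    ingressClassResource:
      name: thingpark-b
      enabled: true
      default: false
      controllerValue: "k8s.io/thingpark-b"
twa:                                   # repeat for every sub-chart exposing an ingress
  ingress:
    className: "thingpark-b"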
tip

A complete example can be found in the configuration repository, in the multi-instance folder.

Bare metal deployment

When ThingPark is deployed on your own infrastructure, you may not have access to the same block storage or network storage services. ThingPark can instead be deployed using the local storage of the Kubernetes workers. In this case, the additional prerequisites are:

  • a local xfs partitioned storage
  • a local ext4 partitioned storage

You can benefit from dynamic provisioning using the Rancher local-path-provisioner, except for the ftp-lrc volume. The root directory of this volume must be provisioned with 755 permissions, whereas local-path-provisioner creates directories with 777 permissions, which would allow an unprivileged user to create folders or files.
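
Because dynamic provisioning cannot be used for the ftp-lrc volume, it has to be provisioned statically. The following PersistentVolume is a minimal sketch assuming a local volume backed by a directory created beforehand on the worker node with 755 permissions; the volume name, capacity, storage class name, path, and node name are illustrative and must be adapted to your cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ftp-lrc-pv                     # illustrative name
spec:
  capacity:
    storage: 10Gi                      # adjust to your sizing
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ftp-lrc-local      # illustrative class name, to be referenced by the claim
  local:
    # Directory created beforehand on the node, for example:
    #   mkdir -p /mnt/ftp-lrc && chmod 755 /mnt/ftp-lrc
    path: /mnt/ftp-lrc
  nodeAffinity:                        # node affinity is mandatory for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <worker node name>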

tip

A complete example can be found in the configuration repository, in the hosting folder.