
Administration tasks

Update a configuration parameter

Each ThingPark Enterprise reconfiguration requires running a helm upgrade of the tpe release. After updating your Helm customization files:

  1. Reload environment settings

    # Use your release tag name to be sure to use the appropriate configuration
    export RELEASE=<tag name>
    export CONFIG_REPO_BASEURL=https://raw.githubusercontent.com/actility/thingpark-enterprise-kubernetes/$RELEASE
    eval $(curl $CONFIG_REPO_BASEURL/VERSIONS)
    # Set the deployment namespace as an environment variable
    export NAMESPACE=thingpark-enterprise
    # Set the ThingPark segment chosen at the capacity planning step
    # Value in l,xl,xxl
    export SEGMENT=<segment>
    # Set the targeted environment
    # Value in azure,amazon
    export HOSTING=<hosting>
  2. Apply the updated configuration

    helm upgrade -i tpe --debug --timeout 20m -n $NAMESPACE \
    actility/thingpark-enterprise --version $THINGPARK_ENTERPRISE_VERSION \
    -f values-thingpark-stack-all.yaml

Components impacted by the updated parameters are rolling-restarted.
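To follow the rolling restart, a simple check is to watch the pods of the namespace until all of them are back to a Running and Ready state (this uses the same NAMESPACE variable set above):

# Watch pods while the rolling restart completes (Ctrl+C to stop)
kubectl get pods -n $NAMESPACE -w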

info

Additionally, if the reconfiguration involves a setting under ingress-nginx, the tpe-controllers release must also be updated:

helm upgrade -i tpe-controllers -n $NAMESPACE \
actility/thingpark-application-controllers --version $THINGPARK_APPLICATION_CONTROLLERS_VERSION \
-f values-thingpark-stack-all.yaml
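
As a quick check (not part of the official procedure), you can list the Helm releases in the namespace to confirm which chart versions are currently deployed:

helm list -n $NAMESPACE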

Listing deployments / statefulsets

Listing statefulsets of ThingPark Enterprise:

$ kubectl get sts -n $NAMESPACE -l 'app.kubernetes.io/instance in (tpe,tpe-data,mongo-replicaset,kafka-cluster)'
NAME                      READY   AGE
kafka-cluster-kafka       2/2     4d6h
kafka-cluster-zookeeper   3/3     4d6h
lrc                       2/2     4d5h
mariadb-galera            3/3     4d6h
mongo-replicaset-rs0      3/3     4d6h
zookeeper                 3/3     4d6h

Listing deployments of ThingPark Enterprise:

$ kubectl get deploy -n $NAMESPACE -l 'app.kubernetes.io/instance in (tpe)'
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
locsolver                         1/1     1            1           4d5h
lrc-proxy                         2/2     2            2           4d5h
nssa-network-survey               1/1     1            1           4d5h
nssa-spectrum-analysis            1/1     1            1           4d5h
shellinabox                       1/1     1            1           4d5h
smp-tpe                           2/2     2            2           44h
sql-proxy                         2/2     2            2           4d5h
support                           1/1     1            1           4d5h
task-notif-ws                     1/1     1            1           4d5h
thingpark-enterprise-controller   1/1     1            1           4d5h
tp-dx-admin                       2/2     2            2           4d5h
tp-dx-core                        2/2     2            2           4d5h
tp-gui                            2/2     2            2           4d5h
tpx-flow-api                      2/2     2            2           4d5h
tpx-flow-bridge                   1/1     1            1           4d5h
tpx-flow-engine                   2/2     2            2           4d5h
tpx-flow-hub                      2/2     2            2           4d5h
tpx-flow-supervisor               1/1     1            1           4d5h
twa-admin                         2/2     2            2           44h
twa-alarm-notif                   1/1     1            1           4d5h
twa-core                          2/2     2            2           44h
twa-dev                           2/2     2            2           4d5h
twa-ran                           2/2     2            2           4d5h
twa-task-res                      2/2     2            2           4d5h
wireless-pki                      2/2     2            2           44h
wlogger                           2/2     2            2           4d5h

All deployments and statefulsets should be READY. If not, go to the troubleshooting guide.
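A quick way to spot components that are not fully up is to list only the pods that are not in the Running phase (note that pods of completed jobs may also appear, since they are in the Succeeded phase):

kubectl get pods -n $NAMESPACE --field-selector=status.phase!=Running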

Some other services are started only when the corresponding features are activated in the Cockpit TPE Configuration (a quick way to check them is shown after this list):

  • If "DX API" feature is disabled, the following service is stopped: tpdx-core.
  • If "IoT Flow" feature is disabled, the following services are stopped: tpx-flow-hub, tpx-flow-bridge, tpx-flow-api and tpx-flow-supervisor.
  • If "DX API" and "IoT Flow" features are disabled, the following services are stopped: tpdx-core, tpdx-admin, tpx-flow-hub, tpx-flow-bridge, tpx-flow-api and tpx-flow-supervisor.
  • If "Node-RED" feature is disabled, the following service is stopped: node-red.

Connecting to a pod container

To connect to a container by using the pod's name, for example for the primary lrc:

kubectl exec -n $NAMESPACE -it lrc-0 -c lrc -- bash
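
If you do not know the exact pod name, kubectl can also resolve a pod from its deployment. For example, to open a shell in one of the twa-core pods (assuming the container image ships bash; otherwise use sh):

kubectl exec -n $NAMESPACE -it deploy/twa-core -- bash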

Enabling service debug mode

Not available yet

Displaying container logs

To display the logs of the lrc pod containers:

# Main container logs
kubectl logs -n $NAMESPACE lrc-0 lrc
# SFTP container serving RF Regions
kubectl logs -n $NAMESPACE lrc-0 sftp
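
The usual kubectl logs options also apply here; for example, to follow the logs in real time, or to display the logs of the previous container instance after a restart:

# Follow the main container logs
kubectl logs -f -n $NAMESPACE lrc-0 lrc
# Logs of the previous container instance (after a crash or restart)
kubectl logs --previous -n $NAMESPACE lrc-0 lrc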

Displaying deployments or statefulsets logs

To display the logs of a deployment, for example twa-core:

kubectl logs -n $NAMESPACE deploy/twa-core

To display the logs of a statefulset, for example lrc:

kubectl logs -n $NAMESPACE sts/lrc lrc
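
These commands accept the same options as pod-level logs; for example, to limit the output to the last lines and keep following the deployment logs:

kubectl logs -n $NAMESPACE deploy/twa-core --tail=100 -f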

Stopping and starting deployment or statefulset

To stop a statefulset, for example lrc:

$ kubectl scale sts lrc -n $NAMESPACE --replicas=0
statefulset.apps/lrc scaled

To start a statefulset with two replicas, for example lrc:

$ kubectl scale sts lrc -n $NAMESPACE --replicas=2
statefulset.apps/lrc scaled

To stop a deployment, for example twa-core:

$ kubectl scale deploy twa-core -n $NAMESPACE --replicas=0
deployment.apps/twa-core scaled

To start a deployment with two replicas, for example twa-core:

$ kubectl scale deploy twa-core -n $NAMESPACE --replicas=2
deployment.apps/twa-core scaled
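
If the intent is only to restart a component rather than stop it, a rolling restart avoids scaling it down to zero. This is a general kubectl pattern (it also works with statefulsets), not a ThingPark-specific procedure:

# Trigger a rolling restart and wait for it to complete
kubectl rollout restart deploy/twa-core -n $NAMESPACE
kubectl rollout status deploy/twa-core -n $NAMESPACE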

Checking access to the Repository

To check access from the Kubernetes cluster to the TPE registry, run a debug pod on one of the worker nodes:

$ kubectl debug node/<node-name> -it --image=curlimages/curl -- curl -X GET -u <InstallationID>:<InstallationID> https://repository.thingpark.com/v2/thingpark-kubernetes/tpe-controller/tags/list
Creating debugging pod node-debugger-<node-name> with container debugger on node <node-name>.
{"name":"thingpark-kubernetes/tpe-controller","tags":["0.3.0-1","0.3.1-1","0.3.2-1"]}
info

Update the image name depending on the registries authorized for your cluster.

If an error is raised, check networking configuration.

If the problem persists, contact your support.

TEX synchronization

The TEX synchronization status with the LRC can be monitored by invoking the following command:

kubectl exec -it -n $NAMESPACE lrc-0 -c lrc -- get-tex-sync-status.sh

TEX synchronization runs automatically every day, but it can be forced by invoking the following command:

kubectl exec -it -n $NAMESPACE lrc-0 -c lrc -- force-tex-resync.sh

You can also export the RF Regions. This allows you to download a tgz file containing all RF Regions matching the configured ISM band(s):

  1. From a first shell console, start a port-forward:

$ kubectl port-forward -n $NAMESPACE svc/twa-admin 8080
    Forwarding from 127.0.0.1:8080 -> 8080
  2. And in a second one, use the following curl command:

    curl -o rfRegions.tgz http://localhost:8080/thingpark/wirelessAdmin/rest/systems/operators/1/rfRegions/export
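
To quickly verify the downloaded archive, you can list its content:

tar -tzf rfRegions.tgz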