Upgrade from 7.2 to 7.3
1. Cluster wide resources
Priority Class
The Helm charts come with a default configuration using a well-known PriorityClass. You may define your own PriorityClasses as a prerequisite, or use the following manifest to create the resources:
kubectl -n $NAMESPACE apply -f $CONFIG_REPO_BASEURL/examples/priority-class/default.yaml
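If you prefer to manage your own PriorityClass, a minimal sketch could look as follows (the name, value and description are hypothetical; align the name with the priorityClassName referenced in your Helm values):
cat <<'EOF' | kubectl apply -f -
# Hypothetical PriorityClass for ThingPark workloads; adjust name and value
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: thingpark-workload-priority
value: 1000000
globalDefault: false
description: "Example priority class for ThingPark Enterprise pods"
EOF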
Storage Class
The Helm charts can be configured with a well-known StorageClass for the Amazon and Azure Kubernetes services. Use the following manifests to create the appropriate StorageClass:
kubectl -n $NAMESPACE apply -f $CONFIG_REPO_BASEURL/examples/storage/storage-class-$HOSTING.yaml
For other hosting options, you may define your own StorageClass as a prerequisite.
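For reference, a minimal custom StorageClass sketch could look like this (the class name and CSI provisioner are placeholders to adapt to your platform):
cat <<'EOF' | kubectl apply -f -
# Hypothetical StorageClass; replace the provisioner with your CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tpe-block-storage
provisioner: csi.example.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF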
Thingpark-data-controllers
Update the data controllers in the following way:
helm upgrade -i tpe-data-controllers -n $NAMESPACE \
actility/thingpark-data-controllers --version $THINGPARK_DATA_CONTROLLERS_VERSION \
-f values-data-stack-all.yaml
kubectl -n $NAMESPACE apply -f $CONFIG_REPO_BASEURL/manifests/upgrade/strimzi-crds-0.32.0.yaml
kubectl -n $NAMESPACE apply -f $CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-crds-1.13.0.yaml
kubectl -n $NAMESPACE apply -f $CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-rbac-1.13.0.yaml
- Console warnings can be ignored, since the first CRDs were installed by the Helm chart
- Errors in the logs and crashes of the pod from the strimzi-cluster-operator deployment can be ignored until the data chart upgrade has been applied
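As a quick sanity check (assuming the standard Strimzi and Percona operator CRD names), you can confirm that the upgraded CRDs are registered:
kubectl get crd kafkas.kafka.strimzi.io
kubectl get crd perconaservermongodbs.psmdb.percona.com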
Thingpark-application-controllers
Upgrade the thingpark-application-controllers chart:
helm upgrade -i tpe-controllers -n $NAMESPACE \
actility/thingpark-application-controllers --version $THINGPARK_APPLICATION_CONTROLLERS_VERSION \
-f values-thingpark-stack-all.yaml
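Optionally, verify that both controller releases are deployed before moving on to the data stack:
helm -n $NAMESPACE status tpe-data-controllers
helm -n $NAMESPACE status tpe-controllers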
2. Data stack
Thingpark-data: pre-upgrade tasks
Prepare the mariadb-galera upgrade by patching the statefulset (a verification sketch follows these steps):
- Prepare the patch of the mariadb-galera statefulset:
kubectl -n $NAMESPACE get sts mariadb-galera -o yaml > mariadb-galera-patch.yaml
yq -i '.spec.serviceName = "mariadb-galera-headless"' mariadb-galera-patch.yaml
- Update the proxysql router configuration:
kubectl -n $NAMESPACE get pods -l app.kubernetes.io/name=sql-proxy -o name | xargs -I{} kubectl -n $NAMESPACE exec {} -c sql-proxy -- mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "INSERT INTO mysql_servers(hostgroup_id,hostname,port,max_connections) VALUES (10,'mariadb-galera-0.mariadb-galera-headless',3306,100);"
kubectl -n $NAMESPACE get pods -l app.kubernetes.io/name=sql-proxy -o name | xargs -I{} kubectl -n $NAMESPACE exec {} -c sql-proxy -- mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "INSERT INTO mysql_servers(hostgroup_id,hostname,port,max_connections) VALUES (10,'mariadb-galera-1.mariadb-galera-headless',3306,100);"
kubectl -n $NAMESPACE get pods -l app.kubernetes.io/name=sql-proxy -o name | xargs -I{} kubectl -n $NAMESPACE exec {} -c sql-proxy -- mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "INSERT INTO mysql_servers(hostgroup_id,hostname,port,max_connections) VALUES (10,'mariadb-galera-2.mariadb-galera-headless',3306,100);"
Check that the new hostnames are in the mysql_servers table:
kubectl -n $NAMESPACE get pods -l app.kubernetes.io/name=sql-proxy -o name | xargs -I{} kubectl -n $NAMESPACE exec -i {} -c sql-proxy -- mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "SELECT hostgroup_id,hostname,port from mysql_servers;"
Each mariadb backend server must be duplicated in each proxysql pod configuration, for example with mariadb-galera-0:
hostgroup_id hostname                                 port
10           mariadb-galera-0.mariadb-galera          3306
10           mariadb-galera-0.mariadb-galera-headless 3306
Persist the new proxysql configuration:
kubectl -n $NAMESPACE get pods -l app.kubernetes.io/name=sql-proxy -o name | xargs -I{} kubectl -n $NAMESPACE exec {} -c sql-proxy -- mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "LOAD MYSQL SERVERS TO RUNTIME;"
kubectl -n $NAMESPACE get pods -l app.kubernetes.io/name=sql-proxy -o name | xargs -I{} kubectl -n $NAMESPACE exec {} -c sql-proxy -- mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "SAVE MYSQL SERVERS TO DISK;"
- Apply the mariadb-galera statefulset patch:
kubectl -n $NAMESPACE delete statefulsets.apps mariadb-galera --cascade=orphan
kubectl -n $NAMESPACE apply -f mariadb-galera-patch.yaml
- Rollout restart the mariadb-galera cluster statefulset:
$ kubectl -n $NAMESPACE rollout restart statefulset mariadb-galera
$ kubectl -n $NAMESPACE rollout status statefulset mariadb-galera -w
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 1 pods at revision mariadb-galera-98bd5bc59...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 2 pods at revision mariadb-galera-98bd5bc59...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 3 pods at revision mariadb-galera-98bd5bc59...
- Apply a mysql_upgrade:
kubectl -n $NAMESPACE exec -it mariadb-galera-0 -- mysql_upgrade -u root -p
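Once these steps are done, a quick verification sketch (reusing the resource names above, and assuming a three-node galera cluster) is:
kubectl -n $NAMESPACE get sts mariadb-galera -o jsonpath='{.spec.serviceName}'
The command should print mariadb-galera-headless. You can also confirm the cluster is still complete:
kubectl -n $NAMESPACE exec -it mariadb-galera-0 -- mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
The expected wsrep_cluster_size value is 3.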
Thingpark-data
Upgrade the thingpark-data chart:
helm upgrade -i tpe-data -n $NAMESPACE \
actility/thingpark-data --version $THINGPARK_DATA_VERSION \
-f values-data-stack-all.yaml
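Optionally, check the release and wait for the data stack pods to settle:
helm -n $NAMESPACE status tpe-data
kubectl -n $NAMESPACE get pods -w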
Thingpark-data: post-upgrade tasks
- Identify the mongo cluster primary:
MONGO_PASSWORD=$(kubectl -n $NAMESPACE get secrets maintenance-mongo-account -o jsonpath='{.data.userPassword}' | base64 -d)
MONGO_CLIENT_IMAGE=$(kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 -o jsonpath='{.spec.template.spec.containers[0].image}')
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE -- bash -c \
"mongo -u maintenance -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval \"rs.isMaster().primary\"| tail -n 1"For example:
mongo-replicaset-rs0-0.mongo-replicaset-rs0.thingpark-enterprise.svc.cluster.local:27017
pod "mongo-client" deleted -
- Delete each mongo pod in sequence and wait for it to come back to the Running state. Delete the pod identified as PRIMARY last. Below is the sequence for mongo-replicaset-rs0-0 as primary:
$ kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-arbiter-0
$ kubectl -n $NAMESPACE get pod mongo-replicaset-rs0-arbiter-0 -w
NAME READY STATUS RESTARTS AGE
mongo-replicaset-rs0-arbiter-0 0/1 Init:0/1 0 1s
mongo-replicaset-rs0-arbiter-0 0/1 Init:0/1 0 6s
mongo-replicaset-rs0-arbiter-0 0/1 PodInitializing 0 8s
mongo-replicaset-rs0-arbiter-0 0/1 Running 0 9s
mongo-replicaset-rs0-arbiter-0 1/1 Running 0 18s
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-1
kubectl -n $NAMESPACE get pod mongo-replicaset-rs0-1 -w
...
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-0
kubectl -n $NAMESPACE get pod mongo-replicaset-rs0-0 -w
...
- Finally, verify that the perconaservermongodb mongo-replicaset state is back to ready:
kubectl -n $NAMESPACE get perconaservermongodb mongo-replicaset -o jsonpath='{.status.state}'
ready
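As an extra sanity check (the arbiter statefulset name below is inferred from the pod name used earlier), verify that all mongo statefulsets report their replicas ready:
kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 mongo-replicaset-rs0-arbiter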
3. ThingPark Enterprise upgrade
Start by patching the lrc configuration and wait for the rollout to complete:
kubectl -n $NAMESPACE set env sts/lrc ZK_HOSTS=zookeeper-headless:2181
kubectl -n $NAMESPACE rollout status sts lrc
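If you want to double-check the patch before upgrading, the following sketch prints the injected variable from the statefulset definition:
kubectl -n $NAMESPACE get sts lrc -o yaml | grep -A1 ZK_HOSTS
The output should show the value zookeeper-headless:2181.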
Finally, upgrade the thingpark-enterprise chart using your customization:
helm upgrade -i tpe --debug --timeout 20m -n $NAMESPACE \
actility/thingpark-enterprise --version $THINGPARK_ENTERPRISE_VERSION \
-f values-thingpark-stack-all.yaml
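In a separate shell you can watch the pods while the upgrade runs, and check the release once the helm command returns:
kubectl -n $NAMESPACE get pods -w
helm -n $NAMESPACE status tpe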
4. Post upgrade
PKI post-upgrade
Run the next command to finalize the PKI upgrade:
kubectl -n $NAMESPACE exec -it deploy/wireless-pki -- ejbca.sh upgrade
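You can then confirm that the wireless-pki deployment is still fully available:
kubectl -n $NAMESPACE get deploy wireless-pki
kubectl -n $NAMESPACE rollout status deploy/wireless-pki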