Upgrade from 7.3 to 8.0
The ThingPark Enterprise 8.0 upgrade requires updating several cluster-wide resources (MongoDB, Kafka, controllers and CRDs) multiple times. These operations are marked as
CAP required
(Cluster Admin Permissions) and need cluster-admin permissions.
1. MongoDb replicaset upgrade
1.1. MongoDb Primary/Secondary/Secondary (PSS) topology upgrade
Patch the psmdb resource to increase the number of Mongo data nodes to 3
kubectl -n $NAMESPACE patch psmdb mongo-replicaset \
--type='json' -p='[{"op": "replace", "path": "/spec/replsets/0/size", "value": 3}]'
Monitor the replica set status and check that the new node with id 3 appears. Its stateStr
value must reach the "SECONDARY"
value before going one step further
MONGO_PASSWORD=$(kubectl -n $NAMESPACE get secrets mongo-replicaset -o jsonpath='{.data.MONGODB_CLUSTER_ADMIN_PASSWORD}'| base64 -d)
MONGO_CLIENT_IMAGE=$(kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 -o jsonpath='{.spec.template.spec.containers[0].image}')
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
Note: The mongo node with id 2 is the arbiter. If the Kubernetes cluster has only 3 nodes, it will be preempted by the Kubernetes scheduler. In this case, its stateStr will be "(not reachable/healthy)", implying a temporary 2-node topology during the initial sync of the new data node. Existing data nodes must not be stopped during this period.
Finally, update the psmdb resource to stop the arbiter StatefulSet
kubectl -n $NAMESPACE patch psmdb mongo-replicaset --type='json' \
-p='[{"op": "replace", "path": "/spec/replsets/0/arbiter/enabled", "value": false}]'
1.2. MongoDb rolling upgrades
1.2.1. MongoDb 4.4 step
Update the Percona Server MongoDB Operator to 1.14.0 and its related resources (CRDs, RBAC: CAP required)
kubectl patch -n $NAMESPACE deploy psmdb-operator --type=strategic --patch '{
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "psmdb-operator",
"image": "actility-release-images.repo.int.actility.com/percona-server-mongodb-operator:1.14.0",
"env": [
{
"name": "DISABLE_TELEMETRY",
"value": "false"
}
]
}
]
}
}
}}'
kubectl apply --server-side --force-conflicts -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-crds-1.14.0.yaml
kubectl apply -n $NAMESPACE -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-rbac-1.14.0.yaml
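Before updating the psmdb custom resource, you can check that the operator Deployment has finished rolling out:
kubectl -n $NAMESPACE rollout status deploy/psmdb-operator --timeout=5m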
Update the psmdb custom resource
kubectl patch -n $NAMESPACE psmdb mongo-replicaset --type=merge --patch '{
"spec": {
"crVersion":"1.14.0",
"initImage": "actility-release-images.repo.int.actility.com/percona-server-mongodb-operator:1.14.0",
"image": "actility-release-images.repo.int.actility.com/percona-server-mongodb:4.4.18-18"
}}'
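You can confirm that the new MongoDB image is set on the custom resource before restarting the members:
kubectl -n $NAMESPACE get psmdb mongo-replicaset -o jsonpath='{.spec.image}{"\n"}'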
Restart replicaset members
Identify the mongo cluster primary
MONGO_CLIENT_IMAGE=$(kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 -o jsonpath='{.spec.template.spec.containers[0].image}')
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.isMaster().primary'| tail -n 1"
Upgrade the secondary members of the replica set. Delete each pod one by one and
check that the mongodb member stateStr
is back to SECONDARY
before deleting the next pod
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id 1>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id 2>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
Upgrade the primary
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<primary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
Check that the psmdb custom resource is back to the ready status
kubectl -n $NAMESPACE get perconaservermongodb mongo-replicaset -o jsonpath='{.status.state}'
ready
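With kubectl v1.23 or later, you can wait on the status field instead of polling it manually:
kubectl -n $NAMESPACE wait psmdb/mongo-replicaset --for=jsonpath='{.status.state}'=ready --timeout=10m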
Update the FeatureCompatibilityVersion
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { setFeatureCompatibilityVersion: \"4.4\" } )'"
# Check FeatureCompatibilityVersion command
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )'"
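The check should report the new compatibility version, typically along the lines of:
{ "featureCompatibilityVersion" : { "version" : "4.4" }, "ok" : 1, ... }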
1.2.2. MongoDb 5.0 step
Set enableMajorityReadConcern to true by editing the psmdb mongo-replicaset resource and updating the configuration block
kubectl -n $NAMESPACE edit perconaservermongodb mongo-replicaset
spec:
...
replsets:
...
configuration: |
replication:
enableMajorityReadConcern: true
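If you prefer a non-interactive command, the same configuration block can be set with a JSON patch (a sketch that assumes no other configuration block is already present on the replset, since it would be overwritten):
kubectl -n $NAMESPACE patch psmdb mongo-replicaset --type='json' \
-p='[{"op": "add", "path": "/spec/replsets/0/configuration", "value": "replication:\n  enableMajorityReadConcern: true\n"}]'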
Update the psmdb mongo-replicaset
kubectl patch -n $NAMESPACE psmdb mongo-replicaset --type=merge --patch '{
"spec": {
"initImage": "actility-release-images.repo.int.actility.com/percona-server-mongodb-operator:1.14.0",
"image": "actility-release-images.repo.int.actility.com/percona-server-mongodb:5.0.14-12"
}}'
Reuse the pod restart procedure from the MongoDb 4.4 step to update the cluster
Check that the psmdb custom resource is back to the ready status
kubectl -n $NAMESPACE get perconaservermongodb mongo-replicaset -o jsonpath='{.status.state}'
ready
Update the FeatureCompatibilityVersion
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { setFeatureCompatibilityVersion: \"5.0\" } )'"
1.2.3. MongoDb 6.0 step
Update the Percona Server MongoDB Operator to 1.15.0 and its related resources (CRDs, RBAC: CAP required)
kubectl patch -n $NAMESPACE deploy psmdb-operator --type=strategic --patch '{
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "psmdb-operator",
"image": "actility-release-images.repo.int.actility.com/percona-server-mongodb-operator:1.15.0",
"env": [
{
"name": "DISABLE_TELEMETRY",
"value": "false"
}
]
}
]
}
}
}}'
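As with the previous operator update, you can wait for the new operator rollout to complete:
kubectl -n $NAMESPACE rollout status deploy/psmdb-operator --timeout=5m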
Update the psmdb mongo-replicaset
kubectl patch -n $NAMESPACE psmdb mongo-replicaset --type=merge --patch '{
"spec": {
"crVersion":"1.15.0",
"initImage": "actility-release-images.repo.int.actility.com/percona-server-mongodb-operator:1.15.0",
"image": "actility-release-images.repo.int.actility.com/percona-server-mongodb:6.0.9-7"
}}'
kubectl apply --server-side -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-crds-1.15.0.yaml
kubectl apply -n $NAMESPACE -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-rbac-1.15.0.yaml
Restart replicaset members
Identify the mongo cluster primary
MONGO_CLIENT_IMAGE=$(kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 -o jsonpath='{.spec.template.spec.containers[0].image}')
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.isMaster().primary'| tail -n 1"
Upgrade the secondary members of the replica set. Restart each pod and check that
the mongodb node stateStr
is back to SECONDARY
before deleting the next pod
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id 1>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id 2>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
Upgrade the primary
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<primary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
Check that the psmdb custom resource is back to the ready status
kubectl -n $NAMESPACE get perconaservermongodb mongo-replicaset -o jsonpath='{.status.state}'
ready
Update the FeatureCompatibilityVersion
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'db.adminCommand( { setFeatureCompatibilityVersion: \"6.0\" } )'"
2. Thingpark-data-controllers (CAP required)
Update the data controllers as follows:
helm upgrade -i tpe-data-controllers -n $NAMESPACE \
actility/thingpark-data-controllers --version $THINGPARK_DATA_CONTROLLERS_VERSION \
-f values-data-stack-all.yaml
kubectl apply --server-side -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/strimzi-crds-0.42.0.yaml
kubectl apply --server-side -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-crds-1.16.1.yaml
kubectl -n $NAMESPACE apply -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-rbac-1.16.1.yaml
Errors in the logs and crashes of the pod from the strimzi-cluster-operator deployment can be ignored until the thingpark-data chart upgrade is applied.
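Before moving on, you can check that both operator Deployments are present and progressing (strimzi-cluster-operator is the usual deployment name; adjust if your installation differs):
kubectl -n $NAMESPACE get deploy psmdb-operator strimzi-cluster-operator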
3. Thingpark-data
helm upgrade -i tpe-data -n $NAMESPACE \
actility/thingpark-data --version $THINGPARK_DATA_VERSION \
-f values-data-stack-all.yaml
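You can watch the data stack pods converge before starting the post-upgrade tasks:
kubectl -n $NAMESPACE get pods -w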
4. Thingpark-data: post-upgrade tasks
4.1 MongoDb
Restart all replicaset members. Identify the mongo cluster primary
MONGO_CLIENT_IMAGE=$(kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 -o jsonpath='{.spec.template.spec.containers[0].image}')
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.isMaster().primary'| tail -n 1"
Upgrade the secondary members of the replica set. Restart each pod and check that
the mongodb node stateStr
is back to SECONDARY
before deleting the next pod
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id 1>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id 2>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
Upgrade the primary
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<primary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"
Verify that the perconaservermongodb
mongo-replicaset
state is back to ready
kubectl -n $NAMESPACE get psmdb mongo-replicaset -o jsonpath='{.status.state}'
ready
Finally update the FeatureCompatibilityVersion
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { setFeatureCompatibilityVersion: \"7.0\", confirm: true } )'"
# Check FeatureCompatibilityVersion command
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )'"
4.2 MariaDb
Run the mysql_upgrade command on each node
kubectl -n $NAMESPACE exec -it mariadb-galera-0 -- mysql_upgrade -u root -p
kubectl -n $NAMESPACE exec -it mariadb-galera-1 -- mysql_upgrade -u root -p
kubectl -n $NAMESPACE exec -it mariadb-galera-2 -- mysql_upgrade -u root -p
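Each command prompts for the MariaDB root password. If you prefer to script this step, a sketch that reads the password from the Kubernetes secret (the secret name and key are assumptions based on a typical Bitnami mariadb-galera deployment; adjust to your installation):
# Assumed secret name and key; adjust to your deployment
MARIADB_ROOT_PASSWORD=$(kubectl -n $NAMESPACE get secret mariadb-galera -o jsonpath='{.data.mariadb-root-password}' | base64 -d)
for i in 0 1 2; do
  kubectl -n $NAMESPACE exec -it mariadb-galera-$i -- mysql_upgrade -u root -p"$MARIADB_ROOT_PASSWORD"
done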
5. ThingPark Enterprise upgrade
5.1. Thingpark-application-controllers
Upgrade the thingpark-application-controllers chart:
helm upgrade -i tpe-controllers -n $NAMESPACE \
actility/thingpark-application-controllers --version $THINGPARK_APPLICATION_CONTROLLERS_VERSION \
-f values-thingpark-stack-all.yaml
5.2. Thingpark-enterprise
Finally, upgrade the thingpark-enterprise chart using your customizations
helm upgrade -i tpw --debug --timeout 20m -n $NAMESPACE \
actility/thingpark-enterprise --version $THINGPARK_ENTERPRISE_VERSION \
-f values-thingpark-stack-all.yaml
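Once the upgrade completes, a quick sanity check that all releases are deployed and pods have converged:
helm -n $NAMESPACE list
kubectl -n $NAMESPACE get pods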