
Upgrade from 7.3 to 8.0

Before starting to upgrade

The ThingPark Enterprise 8.0 upgrade requires updating cluster-wide resources (MongoDB, Kafka controllers, and CRDs) several times. These operations, marked as CAP required (Cluster Admin Permissions), need clusterAdmin permissions.

ThingPark Enterprise 8.0 updates the sizing segments, so you have to re-evaluate your deployment against the hardware sizing page. For the same base station and device capacity, you will be able to downsize to a smaller segment. This is transparent for compute resources, but MongoDB storage requires a specific operation to recreate smaller volumes.

The upgrade requires a workload to run as root to update file ownership. If your cluster enforces the restricted Pod Security Standard, you must run this part of the upgrade manually, following this procedure.
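You can check whether your namespace enforces the restricted profile by listing its labels; a minimal check, assuming the standard pod-security.kubernetes.io labels are used:

# Look for a pod-security.kubernetes.io/enforce=restricted label on the namespace
kubectl get ns $NAMESPACE --show-labels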

In any case, it is always advised to run a manual backup just before starting the upgrade. Please follow the manual backup procedure for the 7.3 ThingPark Enterprise version.

1. MongoDb replicaset upgrade

1.1. MongoDb 4.4 step

Update the Percona Server MongoDB Operator to 1.14.0 and its related resources (CRDs, RBAC: CAP required)

kubectl patch -n $NAMESPACE deploy psmdb-operator --type=strategic --patch '{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "psmdb-operator",
            "image": "repository.thingpark.com/percona-server-mongodb-operator:1.14.0",
            "env": [
              {
                "name": "DISABLE_TELEMETRY",
                "value": "false"
              }
            ]
          }
        ]
      }
    }
  }
}'
kubectl apply --server-side --force-conflicts -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-crds-1.14.0.yaml
kubectl apply -n $NAMESPACE -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-rbac-1.14.0.yaml

Update the psmdb custom resource

kubectl patch -n $NAMESPACE psmdb mongo-replicaset --type=merge --patch '{
  "spec": {
    "crVersion": "1.14.0",
    "initImage": "repository.thingpark.com/percona-server-mongodb-operator:1.14.0",
    "image": "repository.thingpark.com/percona-server-mongodb:4.4.18-18"
  }
}'

Restart the replica set members



Identify the mongo cluster primary

MONGO_PASSWORD=$(kubectl -n $NAMESPACE get secrets mongo-replicaset -o jsonpath='{.data.MONGODB_CLUSTER_ADMIN_PASSWORD}'| base64 -d)
MONGO_CLIENT_IMAGE=$(kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 -o jsonpath='{.spec.template.spec.containers[0].image}')
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.isMaster().primary'| tail -n 1"
If you don't see a command prompt, try pressing enter.
mongo-replicaset-rs0-0.mongo-replicaset-rs0.thingpark-enterprise.svc.cluster.local:27017
pod "mongo-client" deleted

Upgrade the arbiter and secondary members of the replica set. Delete each pod one by one and check that the MongoDB member stateStr is back to ARBITER and SECONDARY respectively before deleting the next pod

kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-arbiter-0
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"

kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"

Upgrade the primary after stepping it down

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0-<primary pod id>.mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.stepDown()'"

kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<primary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"


Update the FeatureCompatibilityVersion

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { setFeatureCompatibilityVersion: \"4.4\" } )'"

# Check FeatureCompatibilityVersion command
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )'"
If you don't see a command prompt, try pressing enter.
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1739372666, 1),
"signature" : {
"hash" : BinData(0,"G42h8LnrvCE0cQ1Yq+FF2kQdmd0="),
"keyId" : NumberLong("7470465578344382469")
}
},
"operationTime" : Timestamp(1739372666, 1)
}

Check that the psmdb custom resource is back to the ready status

kubectl -n $NAMESPACE get perconaservermongodb mongo-replicaset -o jsonpath='{.status.state}'
ready
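To wait for this state non-interactively, a minimal polling sketch (the 10-second interval is an arbitrary choice):

until [ "$(kubectl -n $NAMESPACE get psmdb mongo-replicaset -o jsonpath='{.status.state}')" = "ready" ]; do
  echo "waiting for mongo-replicaset to become ready..."
  sleep 10
done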

1.2. MongoDb 5.0 step

Set enableMajorityReadConcern to true by editing the psmdb mongo-replicaset resource and updating the configuration block

kubectl -n $NAMESPACE edit perconaservermongodb mongo-replicaset
spec:
  ...
  replsets:
    - ...
      configuration: |
        replication:
          enableMajorityReadConcern: true
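As a non-interactive alternative to kubectl edit, the same block can be set with a JSON patch. This is a sketch assuming your resource has a single replica set at index 0 of spec.replsets; verify against your resource before using it:

kubectl patch -n $NAMESPACE psmdb mongo-replicaset --type=json --patch '[
  {
    "op": "add",
    "path": "/spec/replsets/0/configuration",
    "value": "replication:\n  enableMajorityReadConcern: true\n"
  }
]'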

Update the psmdb mongo-replicaset

kubectl patch -n $NAMESPACE psmdb mongo-replicaset --type=merge --patch '{
  "spec": {
    "initImage": "repository.thingpark.com/percona-server-mongodb-operator:1.14.0",
    "image": "repository.thingpark.com/percona-server-mongodb:5.0.14-12"
  }
}'

Reuse the pod restart procedure from the MongoDB 4.4 step to update the cluster

Check that the psmdb custom resource is back to the ready status

kubectl -n $NAMESPACE get perconaservermongodb mongo-replicaset -o jsonpath='{.status.state}'
ready

Update the FeatureCompatibilityVersion

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongo -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { setFeatureCompatibilityVersion: \"5.0\" } )'"

1.3. MongoDb 6.0 step

Update the Percona Server MongoDB Operator to 1.15.0 and its related resources (CRDs, RBAC: CAP required)

kubectl patch -n $NAMESPACE deploy psmdb-operator --type=strategic --patch '{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "psmdb-operator",
            "image": "repository.thingpark.com/percona-server-mongodb-operator:1.15.0",
            "env": [
              {
                "name": "DISABLE_TELEMETRY",
                "value": "false"
              }
            ]
          }
        ]
      }
    }
  }
}'

Update the psmdb mongo-replicaset

kubectl patch -n $NAMESPACE psmdb mongo-replicaset --type=merge --patch '{
  "spec": {
    "crVersion": "1.15.0",
    "initImage": "repository.thingpark.com/percona-server-mongodb-operator:1.15.0",
    "image": "repository.thingpark.com/percona-server-mongodb:6.0.9-7"
  }
}'

kubectl apply --server-side -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-crds-1.15.0.yaml
kubectl apply -n $NAMESPACE -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-rbac-1.15.0.yaml

Restart the replica set members



Identify the mongo cluster primary

MONGO_CLIENT_IMAGE=$(kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 -o jsonpath='{.spec.template.spec.containers[0].image}')

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.isMaster().primary'| tail -n 1"

Upgrade the arbiter and secondary members of the replica set. Delete each pod one by one and check that the MongoDB member stateStr is back to ARBITER and SECONDARY respectively before deleting the next pod

kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-arbiter-0
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"

kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"

Upgrade the primary after stepping it down

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0-<primary pod id>.mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.stepDown()'"


kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<primary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status()'"


Check that the psmdb custom resource is back to the ready status

kubectl -n $NAMESPACE get perconaservermongodb mongo-replicaset -o jsonpath='{.status.state}'
ready

Update the FeatureCompatibilityVersion

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'db.adminCommand( { setFeatureCompatibilityVersion: \"6.0\" } )'"
If you don't see a command prompt, try pressing enter.
{
ok: 1,
'$clusterTime': {
clusterTime: Timestamp({ t: 1741179562, i: 1 }),
signature: {
hash: Binary.createFromBase64("xOPKf4ti1opTPsxojzcXdPFP6VA=", 0),
keyId: Long("7478249012567474181")
}
},
operationTime: Timestamp({ t: 1741179562, i: 1 })
}
pod "mongo-client" deleted

2. Thingpark-data-controllers (CAP required)

Update the data controllers in the following way:

helm upgrade -i tpe-data-controllers -n $NAMESPACE \
actility/thingpark-data-controllers --version $THINGPARK_DATA_CONTROLLERS_VERSION \
-f values-data-stack-all.yaml

kubectl apply --force-conflicts --server-side -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/strimzi-crds-0.44.0.yaml
kubectl apply --force-conflicts --server-side -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-crds-1.17.0.yaml
kubectl -n $NAMESPACE apply -f \
$CONFIG_REPO_BASEURL/manifests/upgrade/percona-server-mongodb-operator-rbac-1.17.0.yaml
Note

Errors in the logs and crashes of the strimzi-cluster-operator deployment pod can be ignored until the thingpark-data chart upgrade is applied
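To keep an eye on the operator during this transient phase, you can watch its pod and recent logs; the name=strimzi-cluster-operator label below is the one Strimzi deployments usually set, so adjust it if your labels differ:

kubectl -n $NAMESPACE get pods -l name=strimzi-cluster-operator
kubectl -n $NAMESPACE logs deploy/strimzi-cluster-operator --tail=20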

3. Thingpark-data

Sizing downgrade preparations

In the values-data-stack-all.yaml you have prepared for 8.0, you have to override the mongo persistent volume claim size for the duration of the upgrade: the current value must be preserved.

For example, if you have a 7.3 segment L and you move to the 8.0 segment M to preserve capacity:

...

mongo-replicaset:
  persistence:
    size: 25Gi # in place of the initial 15Gi

...
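To confirm the value currently provisioned before editing the file, you can read the storage request of one of the existing claims, for example the first member's:

kubectl -n $NAMESPACE get pvc mongod-data-mongo-replicaset-rs0-0 \
-o jsonpath='{.spec.resources.requests.storage}'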

Restart the mariadb-galera statefulset and wait for the operation to complete

kubectl -n $NAMESPACE rollout restart sts mariadb-galera
kubectl -n $NAMESPACE rollout status -w sts mariadb-galera

Upgrade the data stack

helm upgrade -i tpe-data -n $NAMESPACE \
actility/thingpark-data --version $THINGPARK_DATA_VERSION \
-f values-data-stack-all.yaml

Verify that the mariadb-galera cluster has been correctly rolled out

kubectl -n $NAMESPACE rollout status sts mariadb-galera

The command must return

statefulset rolling update complete 3 pods at revision mariadb-galera-5f4cdf8f7c...

Note

If the mariadb-galera rollout fails, a cold restart is required. Please follow the next procedure to trigger the restart

Mariadb cold restart procedure (optional)

Stop the ProxySQL router before scaling down the mariadb-galera statefulset

kubectl -n $NAMESPACE scale deployment sql-proxy --replicas=0
kubectl -n $NAMESPACE scale sts mariadb-galera --replicas=0
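You can confirm that the scale-down has completed before re-applying the release:

kubectl -n $NAMESPACE get sts mariadb-galera   # READY must show 0/0
kubectl -n $NAMESPACE get pods | grep mariadb-galera || echo "all mariadb-galera pods stopped"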

Once all mariadb-galera pods are stopped, re-apply the tpe-data Helm release configuration

helm -n $NAMESPACE upgrade -i tpe-data actility/thingpark-data \
--version $THINGPARK_DATA_VERSION --reuse-values
kubectl -n $NAMESPACE rollout status -w sts mariadb-galera
kubectl -n $NAMESPACE scale deployment sql-proxy --replicas=2

4. Thingpark-data: post-upgrade tasks

4.1 MongoDb

Restart all replica set members. First, identify the mongo cluster primary

MONGO_CLIENT_IMAGE=$(kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 -o jsonpath='{.spec.template.spec.containers[0].image}')

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.isMaster().primary'| tail -n 1"

Upgrade the arbiter and secondary members of the replica set. Delete each pod one by one and check that the MongoDB member stateStr is back to ARBITER and SECONDARY respectively before deleting the next pod

kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-arbiter-0
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"

kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"

Upgrade the primary after stepping it down

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0-<primary pod id>.mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.stepDown()'"

kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<primary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"

Verify that the perconaservermongodb mongo-replicaset state is back to ready

kubectl -n $NAMESPACE get psmdb mongo-replicaset -o jsonpath='{.status.state}'
ready

Finally, update the FeatureCompatibilityVersion

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { setFeatureCompatibilityVersion: \"7.0\", confirm: true } )'"

# Check FeatureCompatibilityVersion command
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 \
--eval 'db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )'"

4.2 MariaDb

Run the mysql_upgrade command on each node

kubectl -n $NAMESPACE exec -it mariadb-galera-0 -- mysql_upgrade -u root -p
kubectl -n $NAMESPACE exec -it mariadb-galera-1 -- mysql_upgrade -u root -p
kubectl -n $NAMESPACE exec -it mariadb-galera-2 -- mysql_upgrade -u root -p
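Each command above prompts for the MariaDB root password. As an optional convenience, here is a sketch that fetches it from the cluster, assuming it is stored in a secret named mariadb-galera under the mariadb-root-password key (verify the actual secret and key names in your deployment):

# Assumed secret name and key; check with: kubectl -n $NAMESPACE get secret mariadb-galera -o yaml
MARIADB_ROOT_PASSWORD=$(kubectl -n $NAMESPACE get secret mariadb-galera \
-o jsonpath='{.data.mariadb-root-password}' | base64 -d)
kubectl -n $NAMESPACE exec -it mariadb-galera-0 -- mysql_upgrade -u root -p"$MARIADB_ROOT_PASSWORD"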

5. ThingPark Enterprise upgrade

5.1. Thingpark-application-controllers

Upgrade the thingpark-application-controllers chart:

helm upgrade -i tpe-controllers -n $NAMESPACE \
actility/thingpark-application-controllers --version $THINGPARK_APPLICATION_CONTROLLERS_VERSION \
-f values-thingpark-stack-all.yaml

5.2. Thingpark-enterprise

Finally, upgrade the thingpark-enterprise chart using your customizations

helm upgrade -i tpe --debug --timeout 20m -n $NAMESPACE \
actility/thingpark-enterprise --version $THINGPARK_ENTERPRISE_VERSION \
-f values-thingpark-stack-all.yaml

5.3. Data volumes downsize

At this stage, the mongo persistent volume claim size in values-data-stack-all.yaml can be reverted to the 8.0 target.

For example, for the 8.0 segment M:

...

mongo-replicaset:
  persistence:
    size: 15Gi

...

Apply this new configuration

helm upgrade -i tpe-data -n $NAMESPACE \
actility/thingpark-data --version $THINGPARK_DATA_VERSION \
-f values-data-stack-all.yaml

The mongo-replicaset-rs0 statefulset update must be forced by recreating it

kubectl -n $NAMESPACE delete sts mongo-replicaset-rs0 --cascade=orphan
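The operator then recreates the statefulset with the updated volume claim template; you can watch it come back before moving on:

kubectl -n $NAMESPACE get sts mongo-replicaset-rs0 -w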

The next steps consist of deleting the current PVCs and using mongo replication to recreate them. First, identify the current primary node

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.isMaster().primary'| tail -n 1"

Delete the secondary member by removing its persistent volume claim. You will have to use Ctrl+C to interrupt the next command, as the PVC can't be deleted until the pod is removed

kubectl -n $NAMESPACE delete pvc mongod-data-mongo-replicaset-rs0-<secondary pod id>
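Alternatively, kubectl's --wait=false flag marks the claim for deletion and returns immediately, avoiding the need for the Ctrl+C interruption:

kubectl -n $NAMESPACE delete pvc mongod-data-mongo-replicaset-rs0-<secondary pod id> --wait=false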

Delete the pod. The resync duration will depend on the amount of data stored in MongoDB. Monitor the MongoDB secondary member stateStr: it must be back to SECONDARY before going on to remove the primary volume

kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<secondary pod id>

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"

Validate that the new persistent volume claim has been recreated with the 8.0 sizing

kubectl -n $NAMESPACE get pvc mongod-data-mongo-replicaset-rs0-<secondary pod id>

Once the secondary resync is done, apply the same procedure to the primary mongo node after stepping it down

kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0-<primary pod id>.mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.stepDown()'"

kubectl -n $NAMESPACE delete pvc mongod-data-mongo-replicaset-rs0-<primary pod id>
kubectl -n $NAMESPACE delete pod mongo-replicaset-rs0-<primary pod id>
kubectl run -n $NAMESPACE mongo-client -it --rm --restart='Never' \
--overrides='{ "spec": { "imagePullSecrets": [{"name": "thingpark-image-pull-secret"}] } }' \
--env="MONGO_PASSWORD=$MONGO_PASSWORD" --image $MONGO_CLIENT_IMAGE --command -- bash -c \
"mongosh -u clusterAdmin -p $MONGO_PASSWORD mongodb://mongo-replicaset-rs0/admin?replicaSet=rs0 --eval 'rs.status().members'"

Validate that the new persistent volume claim has been recreated with the 8.0 sizing

kubectl -n $NAMESPACE get pvc mongod-data-mongo-replicaset-rs0-<primary pod id>

Verify that the perconaservermongodb mongo-replicaset state is back to ready

kubectl -n $NAMESPACE get psmdb mongo-replicaset -o jsonpath='{.status.state}'
ready

Verifications