
Sizing hardware

This topic describes the platform sizing that applies to the Appliance/VM deployment method. You can use this deployment in either standalone or High Availability (HA) mode.

This topic helps you choose the right hardware resource sizing for your IoT deployment, so you can estimate the cost of a ThingPark Enterprise platform before purchasing your equipment.

Case 1 - You want to purchase some hardware resources for a target IoT deployment for the first time. To learn more, see Case 1 - Determining the hardware sizing for a target IoT deployment.

Case 2 - You already have some hardware and want to know if the sizing fulfills the requirements of your new target IoT deployment. To learn more, see Case 2 - Determining the maximum deployment capacity of the current (or target) hardware sizing.

Standalone deployment

The following table describes the hardware sizing requirements for each sizing segment (XS to XXL) according to your IoT deployment. It gives the number of base stations and devices, and the LoRaWAN® uplink/downlink traffic rate.

| Hardware requirement | Extra-Small (XS) | Small (S) | Medium (M) | Large (L) | Extra-Large (XL) | Double-Extra-Large (XXL) |
| --- | --- | --- | --- | --- | --- | --- |
| Base stations | Up to 5 | Up to 10 | Up to 50 | Up to 100 | Up to 200 | Up to 1,000 |
| Devices | Up to 1,000 | Up to 2,000 | Up to 10,000 | Up to 20,000 | Up to 50,000 | Up to 300,000 |
| Average traffic rate (uplink + downlink, msg/sec) | 0.3 | 0.6 | 3 | 6 | 15 | 90 |
| Peak traffic rate (msg/sec) (1) | 1.5 | 3 | 15 | 30 | 60 | 180 |
| Minimum CPU score (2) |  |  |  |  |  |  |
| Minimum CPU mark (indicative) (3) |  |  |  |  |  |  |
| Disk write operations/sec (average/peak) |  |  |  |  |  |  |
| Disk read operations/sec (average/peak) |  |  |  |  |  |  |
| Storage size (GB) (4) |  | 90 |  |  |  |  |
| TBW (5) over 5 years |  |  |  |  |  |  |
| Example of AWS EC2 sizing category | m5.large + gp2 volume | m5.large + gp2 volume | m5.xlarge + gp2 volume | m5.xlarge + gp2 volume | m5.2xlarge + gp3 volume | m5.8xlarge + gp3 volume |
| Example of Azure VM sizing category | D2sv4 + premium SSD | D2sv4 + premium SSD | D4sv4 / D4sv5 + premium SSD | D4sv4 / D5sv5 + premium SSD | D8sv4 / D8sv5 + premium SSD | D32sv4 / D32sv5 + premium SSD |

(1) The peak load (uplink and downlink packets per second) cannot be sustained over more than one minute.

(2) The CPU score can be assessed through the ThingPark HW benchmark script included in the self-hosted TPE image distribution.

(3) The indicative CPU mark refers to the PassMark "Average CPU Mark" shown in the PassMark CPU list. This value gives an indication of the range of CPU models required on a standalone appliance for each platform sizing segment. The definitive CPU sizing must be validated against ThingPark's minimum CPU score, assessed through the HW benchmark script.

(4) Refers to the available storage space. For instance, if RAID1 is used for a Small Segment, the platform must have two disks of 90GB each.

(5) TeraBytes Written (TBW) is a reliability metric used to evaluate the SSD lifetime.
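The TBW figures above can be related to a sustained write rate with simple arithmetic. The sketch below is purely illustrative: the write rate and the 16 KiB average operation size are assumptions for the example, not ThingPark figures.

```shell
# Estimate total bytes written over 5 years from an average write rate.
# Assumed values for illustration only: 100 writes/sec at 16 KiB each.
writes_per_sec=100
bytes_per_write=16384                    # 16 KiB, hypothetical average
seconds_in_5y=$((5 * 365 * 24 * 3600))   # 157,680,000 seconds

tbw=$(awk -v w="$writes_per_sec" -v b="$bytes_per_write" -v s="$seconds_in_5y" \
  'BEGIN { printf "%.1f", w * b * s / 1e12 }')
echo "Estimated TBW over 5 years: ${tbw} TB"
```

A drive whose rated TBW is below such an estimate would likely wear out before the 5-year horizon.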

High Availability deployment

Deploying self-hosted ThingPark Enterprise in High-Availability mode requires a 3-node cluster:

  • Two identical nodes (called PRIMARY nodes) sized according to the targeted capacity. Each PRIMARY node must respect the sizing segments presented in the preceding table.

  • A third, smaller node (called the arbiter node) acting as a database arbiter to prevent split-brain issues. The sizing of the arbiter node is fixed, regardless of the target capacity and the underlying sizing segment.

Inter-node requirements

It is strongly recommended to deploy the three nodes in three different geographical locations to mitigate the risk of accidents (such as fires) or natural disasters. In all cases, the following requirements must be met between the nodes of the high availability cluster:

  • Latency must be less than 10 milliseconds.
  • Inter-server bandwidth must be at least 1 Gbps.
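One way to sanity-check these two requirements between cluster nodes is sketched below. The `primary-2` host name is a placeholder, and `ping`/`iperf3` are common general-purpose tools rather than a ThingPark-mandated method; the network commands are shown as comments since they depend on your environment.

```shell
# Latency check: average round-trip time between nodes must stay below 10 ms.
# Run against a peer node (placeholder host name), e.g.:
#   ping -c 5 -q primary-2
# Bandwidth check: at least 1 Gbps between nodes. With iperf3, start
# "iperf3 -s" on the peer, then measure from this node:
#   iperf3 -c primary-2 -t 10

# Helper: does a measured average RTT (in ms) meet the 10 ms budget?
latency_ok() {
  awk -v v="$1" 'BEGIN { exit !(v < 10) }'
}

latency_ok 4.2 && echo "4.2 ms: within budget"
latency_ok 12.5 || echo "12.5 ms: too slow for the HA cluster"
```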

The following table provides the sizing requirement of the arbiter node:

| Hardware requirement | All sizing segments |
| --- | --- |
| Minimum CPU score (1) | 9,700 |
| Minimum CPU mark (indicative) (2) | 1,776 |
| Disk write operations/sec (average/peak) | 20/100 |
| Disk read operations/sec (average/peak) | 20/100 |
| Storage size (GB) (3) | 20 |
| TBW (4) over 5 years | 0.1 |
| Example of AWS EC2 sizing category | c5.large + gp2 volume |
| Example of Azure VM sizing category | F2sv2 + premium SSD |

(1) CPU score can be assessed through ThingPark HW benchmark script, included in the self-hosted TPE image distribution.

(2) The indicative CPU mark refers to the PassMark "Average CPU Mark" referenced by PassMark Software - CPU Benchmarks.

(3) Refers to available storage space. If RAID1 is used, the node must have two disks of 20GB each.

(4) TeraBytes Written (TBW) is a reliability metric used to evaluate the SSD lifetime.

Determining the hardware sizing for a target IoT deployment

  1. Determine the target number of base stations and devices for your IoT deployment.

  2. Based on the expected traffic profile of your devices, derive the total average number of messages per second expected for your deployment. The expected traffic profile corresponds to the average number of uplink/downlink messages exchanged per day between a device and the ThingPark core network.

  3. From the target design parameters defined in steps 1 and 2, choose the minimum sizing segment that fulfills all design targets:

    • For a deployment having four base stations, 1200 devices and 0.6 messages per second, the minimum sizing segment must be Small (S).

    • For a deployment having 50 base stations, 8000 devices and two messages per second, the minimum sizing segment must be Medium (M).

  4. Read the minimum hardware resources required for your target sizing segment from the hardware sizing requirements in the preceding table:

    • If you have access to the target appliance / deployment environment, we recommend that you run a hardware benchmark to assess the effective CPU performance of your deployment server. To learn more, see Running a benchmark of the hardware.

    • If you cannot run the hardware benchmark on your target appliance / deployment environment, use the Minimum CPU score (2) to determine which CPU model fits each segment. To do this, compare the Minimum CPU mark (indicative) (3) shown in the table with the "Average CPU Mark" referenced in the PassMark CPU list.
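Steps 1 to 3 can be sketched as a small script. The device count and per-device message rate below are example figures (the 43 messages/day profile is an assumption), and the per-segment limits are taken from the standalone sizing table:

```shell
# Sketch of steps 1-3: derive the average message rate, then pick the
# smallest sizing segment that covers all three design targets.
devices=1200
gateways=4
msgs_per_day=43   # average uplink+downlink per device, assumed profile

# Average rate in msg/sec = devices * msgs_per_day / 86,400 seconds/day
rate=$(awk -v d="$devices" -v m="$msgs_per_day" \
  'BEGIN { printf "%.2f", d * m / 86400 }')
echo "average rate: ${rate} msg/sec"

# Per-segment limits: name, max gateways, max devices, max avg msg/sec
for row in "XS 5 1000 0.3" "S 10 2000 0.6" "M 50 10000 3" \
           "L 100 20000 6" "XL 200 50000 15" "XXL 1000 300000 90"; do
  set -- $row
  if [ "$gateways" -le "$2" ] && [ "$devices" -le "$3" ] && \
     awk -v r="$rate" -v c="$4" 'BEGIN { exit !(r <= c) }'; then
    echo "minimum sizing segment: $1"
    break
  fi
done
```

With these example figures the script selects Small (S), matching the first bullet in step 3 above.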

Determining the maximum deployment capacity of the current (or target) hardware sizing

  1. Determine your sizing segment.

  2. Read the first four rows of the preceding table to determine the maximum number of base stations, devices, and traffic load supported by your sizing segment.

  3. If the current sizing does not fulfill your target deployment requirements, upgrade your hardware to the right sizing segment. For instance, if your current sizing segment is "XS" while you have an average of 0.5 messages per second, the right sizing segment is "S".

Running a benchmark of the hardware

You can benchmark your hardware resources at any time:

  • After you have purchased your hardware equipment, to verify that it fulfills the requirements of the IoT deployment that you plan to put in place.

  • If you want to verify that your existing IoT deployment has the proper hardware sizing.

To do this, you can use one of the following procedures:

Using the ThingPark Enterprise benchmark script

  1. Install the ThingPark Enterprise ISO on the targeted server.

  2. Open an SSH session as the `support` user.

  3. Ensure that Docker is stopped: `systemctl stop docker`.

  4. Run `tpe-bench`.


Note The benchmark must not be executed from the Cockpit Terminal.


Using sysbench without installing ThingPark Enterprise

You can benchmark your server without installing the ThingPark Enterprise ISO.

Note The server must be running a Linux distribution and sysbench must be installed.

For example, on Debian/Ubuntu: `apt install sysbench`.

  1. Ensure no other workload is running on the server.

  2. CPU bench:

    1. Run the following command: `sysbench --threads=50 --time=10 --events=0 cpu run`

    2. Compare the Minimum CPU score shown in the preceding table with the "total number of events" value (in the "General statistics" section) from the sysbench output.

  3. IO bench:

    1. Change to a directory on the target device with enough disk space.

    2. Prepare the bench by running `sysbench fileio --file-total-size=10G prepare`.

    3. Run `sysbench --threads=50 --time=10 --events=0 --file-total-size=1G --file-test-mode=rndwr --file-extra-flags=direct --file-fsync-all=on fileio run`

    4. Compare the Disk write operations/sec value from the preceding table with the "writes/s" value (in the "File operations" section) of the sysbench output.

    5. Clean up by running `sysbench fileio --file-total-size=10G cleanup`
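The two figures to compare against the sizing table can be extracted from the sysbench reports with `awk`. The here-documents below are sample excerpts shaped like sysbench 1.0 output (the numbers are made up for the example); in practice, pipe the real sysbench output instead:

```shell
# CPU score: the "total number of events" line in "General statistics".
cpu_events=$(awk -F: '/total number of events/ { gsub(/[[:space:]]/, "", $2); print $2 }' <<'EOF'
General statistics:
    total time:                          10.0002s
    total number of events:              112358
EOF
)
echo "CPU score (total number of events): $cpu_events"

# Disk write rate: the "writes/s" line in "File operations".
write_ops=$(awk '/writes\/s:/ { print $2 }' <<'EOF'
File operations:
    reads/s:                      0.00
    writes/s:                     1520.32
EOF
)
echo "Disk write ops/sec: $write_ops"
```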