
TT, InfluxDB, and VictoriaMetrics comparison under low loads, Raspberry Pi Zero

Yi Lin edited this page Mar 16, 2025 · 8 revisions

Table of Contents

1. Introduction

2. Experiment Settings

3. Resource consumption comparison

3.1 100 time series

3.1.1 CPU

3.1.2 Memory

3.1.3 IO

3.2 1000 time series

3.3 10,000 time series

4. Response time comparison

5. Data compression comparison

6. Conclusion

1. Introduction

We have evaluated the maximum cardinalities (i.e., number of time series) and throughputs of TT in various settings (see wiki). In this experiment, we want to see how TT performs under low loads. After all, daily scenarios rarely involve collecting huge numbers of metrics, so we want to know how much hardware a TSDB needs to handle them.

We will compare the CPU, memory, and IO usage of InfluxDB, VictoriaMetrics, and TT under 100, 1,000, and 10,000 time series. We will also show read and write response times, as well as the disk space used by each TSDB.

2. Experiment settings

2.1 Hardware

To highlight the differences in hardware resource consumption, we use a Raspberry Pi Zero W with a 32-bit OS and an SD card, a very small and cheap ($15 officially) Single Board Computer (SBC). Higher-profile SBCs (e.g., RPI-5 for $120) or servers support higher cardinalities and throughputs, but we are more interested in providing users a very lightweight TSDB that is practically usable even on very cheap hardware, with no powerful CPU, large memory, or SSD required.

PI-zero-W single board computer

2.2 Compared with InfluxDB and VictoriaMetrics

We choose InfluxDB v1.8 and VictoriaMetrics v1.108.1 for comparison because

  1. they are top-ranked, popular TSDBs;
  2. they are among the very few TSDBs (if not the only two) that can run on the RPI-0-w with a 32-bit ARM OS.

2.3 Benchmark software

We use IoTDB-benchmark, the same as in a previous performance evaluation, except:

  • Use TickTockDB 0.20.7 (instead of 0.11.0).
  • Each test lasted for 12 hours (instead of 6 hours), simulating a scenario that clients collect metrics from sensors in groups of devices and send them to TickTockDB/InfluxDB/VictoriaMetrics every 10 seconds continuously for 12 hours.
  • 90% writes and 10% reads (only time-range reads, since they are typical) (sample benchmark configs here: TT, InfluxDB, VictoriaMetrics).

Note that the cardinality equals the number of time series, which is (number of devices * number of sensors per device). We consistently use 10 sensors per device in the tests. In DevOps scenarios you can think of a group as a metric, a device as a server, and a sensor as a tag. E.g., we want to collect a metric cpu.usr (group) from a list of 8-core CPU servers (devices), each of which uses a tag cpu_id (sensor) to identify its 8 different cores.
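The cardinality arithmetic for the three workloads can be sketched as follows. This is a minimal illustration; the loop and function names are ours, not part of the benchmark configs:

```python
def cardinality(devices: int, sensors_per_device: int) -> int:
    """Number of distinct time series produced by a workload:
    one series per (device, sensor) pair."""
    return devices * sensors_per_device

# The three workloads used in this experiment, all with 10 sensors/device:
for devices in (10, 100, 1000):
    print(devices, "devices ->", cardinality(devices, 10), "time series")
```

Running it prints the 100, 1,000, and 10,000 series counts used in the sections below.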

3. Resource consumption comparison

We applied three workloads (100, 1000, and 10,000 time series) to all three TSDBs. A write operation inserts 10 data points; a read operation reads up to the last 2,500 seconds of data.

3.1 100 time series (10 devices * 10 sensor/device)

First, let's look at a case with a very small cardinality, 100.

3.1.1 CPU

Note that the figure below shows cpu.idle, which measures what percentage of CPU is left on the PI-0-w while running the TSDBs. It is more accurate than showing cpu.usr alone, since the TSDBs also consume cpu.sys. Also note that we ran a tcollector instance on the PI-0-w to collect metrics; it consumes about 2-3% of CPU. In the rest of this wiki we ignore tcollector, since its overhead is the same for all TSDBs.
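For reference, cpu.idle can be derived from two snapshots of the first line of /proc/stat. This is a hedged sketch of the idea; tcollector's actual collector may compute it differently:

```python
def idle_percent(stat_before: str, stat_after: str) -> float:
    """cpu.idle between two snapshots of the aggregate 'cpu' line
    of /proc/stat. Fields after 'cpu' are jiffy counters:
    user nice system idle iowait irq softirq ..."""
    def split(line: str):
        vals = [int(x) for x in line.split()[1:]]
        return sum(vals), vals[3]  # total jiffies, idle jiffies
    t0, i0 = split(stat_before)
    t1, i1 = split(stat_after)
    return 100.0 * (i1 - i0) / (t1 - t0)

# Synthetic example: 90 of the 100 elapsed jiffies were idle -> 90%.
before = "cpu 100 0 50 800 10 0 0 0 0 0"
after  = "cpu 105 0 53 890 12 0 0 0 0 0"
print(idle_percent(before, after))  # -> 90.0
```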

The figure shows that, at this very small cardinality, InfluxDB and VictoriaMetrics consume similar amounts of CPU, about 15% (i.e., cpu.idle=85%). InfluxDB initially used just 10% of CPU, but rose to 15% after a few hours. TT used 10% of CPU consistently, better than the other two.

CPU idle, 10 devices

3.1.2 Memory

Note that we collect Resident Set Size (RSS), the amount of physical memory a process uses. It does not include OS caches.
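On Linux, a process's RSS can be read from the VmRSS line of /proc/&lt;pid&gt;/status. A small parsing sketch (tcollector's own collector may read it differently; the sample text is made up):

```python
def rss_kb(status_text: str) -> int:
    """Extract VmRSS (resident set size, in kB) from the contents
    of /proc/<pid>/status."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    raise ValueError("VmRSS not found")

# Hypothetical /proc/<pid>/status excerpt for a ~5MB process:
sample = "Name:\ttt\nVmRSS:\t    4820 kB\nThreads:\t8\n"
print(rss_kb(sample))  # -> 4820
```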

InfluxDB and VictoriaMetrics both used up to 55MB of RSS memory, while TT used less than 5MB, just 1/10 of the other two.

RSS memory, 10 devices

3.1.3 IO

Disk utilization measures the percentage of time a disk is busy; 100% means the disk is completely saturated. The lower, the better.
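Disk utilization can be computed from the "time spent doing I/Os" counter (in milliseconds) in a device's line of /proc/diskstats. A hedged sketch of the arithmetic, with synthetic numbers:

```python
def disk_util_percent(busy_ms_before: int, busy_ms_after: int,
                      interval_ms: int) -> float:
    """Fraction of wall-clock time the device was busy, in percent.
    The busy_ms values are the 'milliseconds spent doing I/Os'
    counter from the device's line in /proc/diskstats."""
    return 100.0 * (busy_ms_after - busy_ms_before) / interval_ms

# Synthetic example: 200ms busy over a 10-second window -> 2% utilization.
print(disk_util_percent(5000, 5200, 10_000))  # -> 2.0
```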

InfluxDB's disk utilization was about 1.8%, and VictoriaMetrics's slightly above 2%. TT's disk utilization was only 0.2%.

IO util, 10 devices

The write and read byte rates were consistent with disk utilization. InfluxDB's write rate was about 30kB/s and VictoriaMetrics's about 50kB/s, while TT's was only 1 to 3kB/s, roughly 1/10 of the other two or less. This indicates that TT's writes are more efficient than InfluxDB's and VictoriaMetrics's. We attribute the efficiency to TT's data compression and to how it flushes data to disk.

Read rates were so small that they didn't matter much.

IO write & read byte rate, 10 devices

3.2 1000 time series (100 devices * 10 sensor/device)

With 100 time series, all three TSDBs used a very small percentage of the PI-0-w's hardware resources, though TT still outperformed the other two. Now let's increase the cardinality tenfold by increasing the device number from 10 to 100.

3.2.1 CPU

InfluxDB showed the same pattern as at cardinality 100. Its CPU usage was 20% (i.e., cpu.idle=80%) for the first few hours, then increased to 30% (i.e., cpu.idle=70%).

VictoriaMetrics behaved better than InfluxDB at cardinality 1000. Its CPU usage stayed around 22% (i.e., cpu.idle=78%) consistently.

TT still used the least CPU, only 10% (i.e., cpu.idle=90%).

CPU idle, 100 devices

3.2.2 Memory

At the higher cardinality (1000 time series), InfluxDB used up to 90MB of RSS memory and VictoriaMetrics 52 to 60MB, so VictoriaMetrics was better. TT remained the best of the three, using less than 6MB of RSS memory.

RSS memory, 100 devices

3.2.3 IO

InfluxDB's disk utilization jumped to 10-12% with 1000 time series. There was still headroom: usually 50% disk utilization indicates a disk is close to saturation and will soon become unusable.

VictoriaMetrics's disk utilization was about 4 to 6%. It was still relatively low.

TT's disk utilization was still very low, at less than 1%.

IO util, 100 devices

Write rates showed the same pattern as disk utilization. InfluxDB's write rate was about 180 to 200kB/s and VictoriaMetrics's 90 to 150kB/s. TT's write rate was usually around 3 to 5kB/s, occasionally bumping up to 16kB/s, still much lower than the peaks of InfluxDB and VictoriaMetrics.

IO write & read byte rate, 100 devices

3.3 10,000 time series (1000 devices * 10 sensor/device)

We kept increasing the cardinality, reaching 10,000 by increasing the number of devices another 10 times. This cardinality can be considered a medium load. Let's see how the three TSDBs behave.

3.3.1 CPU

The first thing we observed was that InfluxDB couldn't handle this load. Its CPU usage spiked to 100% from the beginning. We had to stop the test early because the benchmark couldn't keep up with the planned schedule; finishing the planned 12-hour test would have taken far too long.

VictoriaMetrics was still able to handle 10,000 time series. Its cpu usage was around 50% consistently.

TT's CPU usage was consistently less than 20%, leaving plenty of CPU headroom.

CPU idle, 1000 devices

3.3.2 Memory

InfluxDB's RSS memory kept growing, reaching 180MB before we terminated the test.

VictoriaMetrics's RSS memory grew to 110MB.

TT's RSS memory was below 16MB. Note that the PI-0-w has 512MB of memory, so there was still plenty of memory headroom for VictoriaMetrics and TT.

RSS memory, 1000 devices

3.3.3 IO

InfluxDB's disk utilization was 50%. Even though it wasn't 100%, we consider the IO almost saturated.

VictoriaMetrics's disk utilization was below 8%.

TT's disk utilization was still very low. Most of the time it was unnoticeable, with a few bumps below 2%.

IO util, 1000 devices

InfluxDB's write rate was less than 1MB/s. Note that we used a V30 SanDisk SD card; measuring with dd, its sequential write and read rates are 21.2MB/s and 44.1MB/s, respectively. InfluxDB's write rate was very small compared with that limit, yet its disk utilization was 50%. This means that InfluxDB's disk writes are not efficient at all.

VictoriaMetrics's write rate was 100 to 150 kB/s.

TT's write rate was below 20kB/s most of the time, with an initial spike up to 75kB/s, likely due to the initialization of metadata files.
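The dd measurement mentioned above can be roughly approximated in Python. This is only a sketch, not dd itself: the block size and fsync behavior are our assumptions, and the result depends entirely on the card and filesystem:

```python
import os
import tempfile
import time

def write_throughput_mb_s(path: str, total_mb: int = 4,
                          block_kb: int = 1024) -> float:
    """dd-style sequential write test: write fixed-size blocks,
    then fsync so the data actually reaches the device."""
    block = b"\0" * (block_kb * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # include device flush time, like conv=fsync
    return total_mb / (time.monotonic() - start)

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp_path = tmp.name
rate = write_throughput_mb_s(tmp_path)
os.remove(tmp_path)
print(f"{rate:.1f} MB/s")
```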

IO write & read byte rate, 1000 devices

4. Response time comparison

Note that we applied three workloads (100, 1000, and 10,000 time series) to all three TSDBs. The following two figures show the average read and write response times per operation, respectively. A write operation inserts 10 data points; a read operation reads up to the last 2,500 seconds of data. Each data point in the figures has mean, low, and high values; we ran each test at least 3 times (3*12=36 hours) to gather this information. Also note that the figures use a logarithmic Y axis, since the differences between TT and the others are otherwise too big to show.
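The mean/low/high values plotted for each data point can be derived from the per-run averages as follows. The three run values below are hypothetical, for illustration only:

```python
from statistics import mean

def summarize(run_averages_ms):
    """mean, low, and high of the per-run average response times,
    one value per 12-hour run (at least 3 runs per plotted point)."""
    return mean(run_averages_ms), min(run_averages_ms), max(run_averages_ms)

# Hypothetical per-run read averages for one TSDB at one cardinality:
m, low, high = summarize([33.1, 34.0, 34.7])
print(m, low, high)
```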

Both figures show that TT is significantly faster than VictoriaMetrics, which is in turn faster than InfluxDB.

At the lowest cardinality (i.e., number of time series), 100, the average read response times of TT, VictoriaMetrics, and InfluxDB were 33.94, 60.46, and 128.36 milliseconds, respectively. At cardinality 1000, InfluxDB and TT remained roughly flat at 126.41 and 37.44ms, respectively, while VictoriaMetrics's response time increased to 98.84ms. We then increased the cardinality to 10,000. InfluxDB saturated the PI-0-w, so we didn't record a number. VictoriaMetrics took 473.75ms on average. TT ran even faster than at cardinality 1000, only 12.5ms. We think this may be because data stays in the OS cache longer when it is accessed more frequently under higher read loads.

read average response time

Similarly, TT is the fastest in terms of average write response time, and InfluxDB the slowest. Note that even at the lowest cardinality, InfluxDB took 136.12ms per operation, while VictoriaMetrics took 52.36ms and TT only 13.73ms. TT is almost 10 times faster than InfluxDB and 4.5 times faster than VictoriaMetrics.

InfluxDB couldn't handle cardinality 10,000 at all. VictoriaMetrics's write response time bumped up significantly at cardinality 10,000; as the resource consumption figures in Section 3 showed, VictoriaMetrics's hardware usage (CPU, etc.) at that workload was already very high.

write average response time

5. Data compression comparison

We also compare how well data are compressed in the different TSDBs. We used the average size per data point at the load of 10,000 time series for TT and VictoriaMetrics, and at the load of 1000 time series for InfluxDB (since it could not handle 10,000 time series). We averaged all data across all tests (note that we ran each test load at least 3 times).

InfluxDB has the most bytes per data point (6.4), while VictoriaMetrics has the fewest (0.4 bytes per data point). TT is in the middle (1.6 bytes per data point). Note that TT generates rollup data from its original data for faster queries. If we exclude rollup data, TT's size drops to 0.5, close to VictoriaMetrics. Currently TT's rollup data is not compressed at all, so its size is large. We are working on compressing TT's rollup data.

byte per data point
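The bytes-per-data-point metric is simply the total on-disk size divided by the number of data points ingested. A small sketch, where the disk size is hypothetical but the point count follows the setup above (one point per series every 10 seconds for 12 hours):

```python
def bytes_per_point(disk_bytes: int, data_points: int) -> float:
    """Average on-disk size per data point."""
    return disk_bytes / data_points

# 10,000 series, one data point per series every 10 seconds, for 12 hours:
points = 10_000 * (12 * 3600 // 10)   # 43,200,000 data points
# A hypothetical 69.12MB of data files would then give TT's 1.6 bytes/point:
print(bytes_per_point(69_120_000, points))  # -> 1.6
```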

6. Conclusion

  • We compared TickTockDB with InfluxDB and VictoriaMetrics on a PI-zero-wireless (ARMv6, 32-bit OS) under low loads (100, 1000, and 10,000 time series).
  • At the load of 100 time series, all three TSDBs behaved well. There was plenty of room left in CPU, memory, and IO.
  • At the load of 1000 time series, the differences between the three TSDBs became obvious. TT used 10% of CPU, while InfluxDB used 30% and VictoriaMetrics 20%. TT used only about 1/10 of the memory and IO of the other two.
  • At the load of 10,000 time series, InfluxDB was saturated in terms of CPU and disk utilization. VictoriaMetrics ran short of CPU at 50% usage, though its memory and IO were still low. TT handled 10,000 time series without any pressure on hardware resources.
  • TT's writes and reads are significantly faster than those of InfluxDB and VictoriaMetrics. InfluxDB is the slowest, with VictoriaMetrics in the middle. The gap grew even bigger when the number of time series increased to 10,000.
  • VictoriaMetrics compresses data better than InfluxDB and TT. TT is close to VictoriaMetrics if rollup data is excluded. InfluxDB has the worst compression ratio.
