Chapter II. What’s new in Cassandra PV Archiver 3.x

Table of Contents
1. Cassandra PV Archiver 3.0
1.1. Cassandra PV Archiver 3.0.1
1.2. Cassandra PV Archiver 3.0.2
1.3. Cassandra PV Archiver 3.0.3
2. Cassandra PV Archiver 3.1
2.1. Cassandra PV Archiver 3.1.1
2.2. Cassandra PV Archiver 3.1.2
2.3. Cassandra PV Archiver 3.1.3
3. Cassandra PV Archiver 3.2

1. Cassandra PV Archiver 3.0

The Cassandra PV Archiver 3.0 is intended as a replacement for the Cassandra Archiver for CSS 2.x. While it shares some concepts with the Cassandra Archiver for CSS 2.x, the code for the Cassandra PV Archiver 3.0 has been rewritten from scratch. The Cassandra PV Archiver 3.0 uses a new, CQL-based storage architecture that significantly improves performance and also simplifies the structure of the stored data, enabling direct data access for special applications. Unfortunately, this means that data archived with the Cassandra Archiver for CSS 2.x is not compatible with the Cassandra PV Archiver 3.0 and has to be converted manually.

In addition to the change of the data format, the Cassandra PV Archiver 3.0 brings many new features that make it more scalable and that simplify deployment and operation:

  • Completely new web interface for monitoring and configuring the archive cluster.
  • Changing the configuration of channels (including renaming channels and moving channels between servers) without having to shut down archiving servers.
  • Asynchronous sample writer, making the best use of multi-core CPUs.
  • Web-service interface for accessing the archive, simplifying the deployment of clients.

Because the changes are so extensive, even users already familiar with the Cassandra Archiver for CSS 2.x are strongly encouraged to read the complete manual of the Cassandra PV Archiver 3.0.

1.1. Cassandra PV Archiver 3.0.1

Version 3.0.1 is a bugfix release that fixes three bugs in the archive-access JSON interface. The first bug caused an exception when trying to retrieve enum samples, making it impossible to retrieve such samples via the JSON interface. The second bug caused incorrect values to be sent when an enum sample had more than a single element. The third bug concerned the serialization of the special “disabled” and “disconnected” samples: those samples were always presented with a quality of “original”, even if they actually were decimated samples and should thus have had a quality of “interpolated”.
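
For illustration, a decimated “disconnected” sample might be represented roughly as follows in a JSON response; the field names and layout shown here are illustrative assumptions, not the authoritative protocol definition:

    {
      "time": 1472738400000000000,
      "type": "disconnected",
      "quality": "interpolated"
    }

Before the fix, the quality field of such a sample would always read “original”, even when the sample came from a decimation level.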

All the bugs fixed in this release only concern the archive-access interface. This means that data written by previous releases has not been affected by the aforementioned bugs and is correctly serialized after installing this update.

1.2. Cassandra PV Archiver 3.0.2

Version 3.0.2 is a bugfix release that fixes an issue that could result in extreme memory consumption when generating decimated samples. When the source samples used for generating decimated samples were very scarce (their density was much lower than the density of the generated samples), memory consumption could grow so large that it resulted in a denial of service. As a side effect, the server process would stop responding because the thread generating the decimated samples would hold a mutex for an extended period of time. Typically, this issue would primarily occur when starting the server after it had been stopped for some time or when adding new decimation levels.

The bugfix limits the number of samples that are generated from a single source sample, interrupting the process when the limit is reached and waiting for the generated samples to be written to the database before continuing. This bounds the memory consumption and also releases the mutex periodically, so that threads waiting for the mutex do not block for an extended period of time.
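
A minimal Java sketch of this approach follows; the class, method, and constant names (and the batch limit) are illustrative assumptions, not the actual implementation:

    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical sketch of the batching logic described above; all
    // names and the batch limit are illustrative assumptions.
    public class DecimationSketch {

        // Assumed upper bound on the number of decimated samples that
        // are generated before the process is interrupted.
        private static final int MAX_SAMPLES_PER_BATCH = 1000;

        private final ReentrantLock channelMutex = new ReentrantLock();

        public void decimate(long nextTime, long lastTime, long period)
                throws InterruptedException {
            while (nextTime <= lastTime) {
                channelMutex.lock();
                try {
                    int generated = 0;
                    // Generate at most MAX_SAMPLES_PER_BATCH samples
                    // while holding the mutex.
                    while (nextTime <= lastTime
                            && generated < MAX_SAMPLES_PER_BATCH) {
                        queueDecimatedSample(nextTime);
                        nextTime += period;
                        ++generated;
                    }
                } finally {
                    // Release the mutex periodically so that waiting
                    // threads do not block for an extended period.
                    channelMutex.unlock();
                }
                // Wait until the queued samples have been written to
                // the database before generating the next batch. This
                // bounds the memory consumption.
                awaitWriteCompletion();
            }
        }

        private void queueDecimatedSample(long time) {
            // Placeholder: add a decimated sample, derived from the
            // single source sample, to the write queue.
        }

        private void awaitWriteCompletion() throws InterruptedException {
            // Placeholder: block until the write queue has been
            // persisted to the database.
        }
    }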

The bug fixed in this release only concerns internal implementation details. This means that data written by previous releases is correct and does not have to be regenerated or updated.

1.3. Cassandra PV Archiver 3.0.3

Version 3.0.3 is a bugfix release that fixes four issues. Three of these issues affected the generation of decimated samples. The fourth issue was in a shared component and could cause an exception in certain situations under very high system load.

The three bugs in the sample generation process could result in no more decimated samples being generated for a certain channel. This was caused by a problem that would result in already existing decimated samples being generated again when the decimation process had previously been interrupted unexpectedly (e.g. due to a server restart). On its own, this bug would only have had performance implications and would not have affected correctness. However, in combination with a second bug that was introduced in version 3.0.2, it would bring the whole decimation process for the channel to a halt. The third bug could have a negative impact on performance because the decimation process would not always be interrupted as intended, potentially blocking the channel mutex for a long time. However, it is believed that this bug did not result in incorrect behavior.

The fourth bug concerned a component that provides a time-bounded queue: elements that were added to the queue some time ago, but have not been removed yet, are automatically removed when new elements are added. Due to a bug in the algorithm that takes care of automatically removing such elements, an exception would be thrown if all elements in the queue were considered old and thus marked for removal. This led to an exception when samples had been added to the write queue, but the write queue was not processed for a long time and no new samples were added during this period. In this case, the exception would occur when new samples were finally added to the queue. Typically, such a situation would only occur when the system was under extremely high load, resulting in samples neither being written nor new samples being added to the queue for more than 30 seconds.
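
A minimal Java sketch of such a time-bounded queue, handling the edge case correctly, might look as follows; this illustrates the concept only and is not the actual component:

    import java.util.ArrayDeque;

    // Hypothetical sketch of a time-bounded queue; not the actual
    // implementation used by the Cassandra PV Archiver.
    public class TimeBoundedQueue<E> {

        private static final class Entry<T> {
            final T element;
            final long addedAtMillis;

            Entry(T element, long addedAtMillis) {
                this.element = element;
                this.addedAtMillis = addedAtMillis;
            }
        }

        private final ArrayDeque<Entry<E>> queue = new ArrayDeque<>();
        private final long maxAgeMillis;

        public TimeBoundedQueue(long maxAgeMillis) {
            this.maxAgeMillis = maxAgeMillis;
        }

        public synchronized void add(E element) {
            long now = System.currentTimeMillis();
            // Remove elements that have exceeded the maximum age. The
            // loop must also terminate cleanly when it removes every
            // element - the situation that triggered the exception
            // fixed in this release.
            while (!queue.isEmpty()
                    && now - queue.peekFirst().addedAtMillis > maxAgeMillis) {
                queue.removeFirst();
            }
            queue.addLast(new Entry<>(element, now));
        }

        public synchronized E poll() {
            Entry<E> entry = queue.pollFirst();
            return (entry == null) ? null : entry.element;
        }
    }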

The bugs fixed in this release only concern internal implementation details. This means that data written by previous releases is correct and does not have to be regenerated or updated.