The Cassandra PV Archiver 3.2 adds a few new configuration options and updates its dependencies to their respective newest versions. It is compatible with the Cassandra PV Archiver 3.1.x, meaning that it can operate on data stored by version 3.1.x and fully supports the APIs provided by version 3.1.x.
Due to newly introduced configuration options, configuration files for version 3.2.x are not compatible with version 3.1.x. However, configuration files for version 3.1.x remain compatible with version 3.2.x.
The following improvements have been made in this release:
The sample decimation process could consume an excessive amount of memory, possibly resulting in an OutOfMemoryError. Two new configuration options have been introduced for controlling the memory consumption of the sample decimation process: throttling.sampleDecimation.maxFetchedSamplesInMemory and throttling.sampleDecimation.maxRunningFetchOperations.
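For illustration, the two options might be set in the server’s YAML configuration file roughly as shown below. The nested-key layout and the numeric values are assumptions made for this sketch (not recommended defaults), and the comments merely paraphrase what the option names suggest.

    # Hypothetical excerpt from the server's YAML configuration file; the
    # option names are those introduced in this release, but the values are
    # placeholders, not recommended defaults.
    throttling:
      sampleDecimation:
        # Limits how many fetched source samples may be kept in memory at the
        # same time while decimated samples are being generated.
        maxFetchedSamplesInMemory: 1000000
        # Limits how many fetch operations may run concurrently while reading
        # source samples from the database.
        maxRunningFetchOperations: 20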
The AbstractObjectResultSet has been improved in order to avoid unnecessary copy operations. This change should improve the performance when reading samples from the database. In order to profit from this change, control-system supports that use the AbstractObjectResultSet for implementing their sample result sets should change the result set’s fetchNextPage() method to return a SizedIterator instead of a regular Iterator. This change has already been implemented for the ResultSetBasedObjectResultSet, so control-system supports that use this class (like the Channel Access control-system support) will automatically profit from this improvement.
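As a rough illustration of the idea, the sketch below shows an iterator that also reports how many elements it will still return. Note that the SizedIterator interface shown here, including its remainingCount() method, is a stand-in defined only for this example; the actual interface and the exact signature of fetchNextPage() should be taken from the 3.2 API documentation.

    import java.util.Iterator;
    import java.util.List;

    // Stand-in for the real SizedIterator interface from the Cassandra PV
    // Archiver API: an Iterator that additionally knows how many elements it
    // will still return.
    interface SizedIterator<E> extends Iterator<E> {
        int remainingCount();
    }

    // Wraps a fully materialized page of samples so that its size is known up
    // front. A fetchNextPage() implementation can return such an iterator
    // instead of a plain Iterator, which allows the result set to avoid
    // unnecessary copy operations.
    final class ListBackedSizedIterator<E> implements SizedIterator<E> {

        private final Iterator<E> delegate;
        private int remaining;

        ListBackedSizedIterator(List<E> page) {
            this.delegate = page.iterator();
            this.remaining = page.size();
        }

        @Override
        public boolean hasNext() {
            return delegate.hasNext();
        }

        @Override
        public E next() {
            E element = delegate.next();
            --remaining;
            return element;
        }

        @Override
        public int remainingCount() {
            return remaining;
        }

    }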
The cassandra.fetchSize option has been introduced in order to control the default fetch size used for queries. Usually, the default fetch size of the Cassandra driver should be fine, but users wanting to fine-tune the fetch size can now do so.
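As an example, such an override might look as follows in the YAML configuration file; the nesting and the value are illustrative assumptions, not a recommendation.

    cassandra:
      # Overrides the default fetch size (rows per page) used for queries.
      # Leaving the option unset keeps the Cassandra driver's own default.
      fetchSize: 5000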
The server.interNodeCommunicationRequestTimeout option has been introduced in order to control the timeout for requests sent from one archiving server to another one. This timeout had already been significantly increased in version 3.1.2, but now it is possible to increase it even further if necessary or to choose a shorter timeout if that is sufficient.
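An illustrative sketch only; the value below is a placeholder, and the unit and default for this option should be taken from the configuration reference.

    server:
      # Timeout for requests sent from one archiving server to another one.
      # Placeholder value; consult the configuration reference for the unit
      # and the default.
      interNodeCommunicationRequestTimeout: 900000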
There was also one bug that has been fixed in this release: the way updates to the generic_data_store table were handled was unsafe because light-weight transactions (LWTs) were mixed with regular updates. This could theoretically lead to invalid data if writes happened very rapidly or if server clocks had an extremely large skew. As data is only rarely written to this table (once when the archiving cluster is initialized and every time the administrator’s password is changed), this bug was very unlikely to cause any actual problems.
The Cassandra driver has been updated to version 3.2.0 in this release. That version includes a change to how user-defined types (UDTs) are handled when using the schema builder to create a table. Control-system supports that use the schema builder to create a table with UDT columns might have to be changed to use the schema builder’s addUDTColumn(…) method with a parameter constructed using SchemaBuilder.frozen(…) instead of using addColumn(…) with an instance of UserType.
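The sketch below shows what such a change might look like when using the schema builder of the DataStax Java driver 3.2.0; the keyspace, table, column, and UDT names are made up for this example.

    import com.datastax.driver.core.DataType;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.schemabuilder.Create;
    import com.datastax.driver.core.schemabuilder.SchemaBuilder;

    // Creates a table that has a frozen UDT column, using the schema builder
    // from the DataStax Java driver 3.2.0. All names are invented for this
    // example.
    public final class CreateTableWithUdtColumn {

        public static void createTable(Session session) {
            Create createTable = SchemaBuilder
                    .createTable("example_keyspace", "example_samples")
                    .ifNotExists()
                    .addPartitionKey("channel_name", DataType.text())
                    .addClusteringColumn("sample_time", DataType.bigint())
                    // Before this driver version, the UDT column might have
                    // been declared with addColumn(...) and a UserType
                    // instance. Now it is declared with addUDTColumn(...) and
                    // SchemaBuilder.frozen(...).
                    .addUDTColumn("sample_value",
                            SchemaBuilder.frozen("example_sample_value"));
            session.execute(createTable);
        }

    }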
Version 3.2.1 is a bugfix release that fixes an issue introduced in the 3.2.0 release.
This issue caused the archiving server not to start if the configuration file contained a line for the sample-decimation throttling options, but not any actual options. Unfortunately, the default configuration file provided with the distribution contained such a line so that the archiving server would not start with its default configuration file.
Starting with version 3.2.1, the archiving server accepts such a line in the configuration file and thus will work with the unchanged configuration file.
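To illustrate what such a line looks like, the shipped configuration presumably contained the section key for these options with all of the actual options commented out, roughly along the following lines (a reconstruction, not the literal content of the file).

    throttling:
      sampleDecimation:
        # maxFetchedSamplesInMemory: ...
        # maxRunningFetchOperations: ...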
Version 3.2.2 is a bugfix release that fixes an issue with the throttling mechanism introduced in the 3.2.0 release.
This issue caused the throttling mechanism to not work correctly in certain cases, thus fetching (significantly) more samples than allowed by the specified limit.
Version 3.2.3 is a bugfix release that fixes an issue with the generation of decimated samples.
Due to this issue, the generation of decimated samples would permanently stop if there was an error while reading source samples from the database. The code will now catch such an error and retry periodically.
Version 3.2.4 is a bugfix release that fixes a regression introduced with the 3.2.0 release and improves the initialization logic of the archiving service.
The regression would cause the generation process for decimated samples to be brought to a halt when more than 100 source samples were used to generate a single decimated sample. This regression would only affect the generation of decimated samples from source samples in the database (typically when a new decimation level was added). When generating decimated samples as new samples were written, the regression would not be triggered.
The initialization logic of the archiving service has been improved so that it will be run again if there is an error while trying to initialize the channels. Before, the initialization would only be attempted again when the server went offline (because the database was unavailable) and then back online. This had the consequence that a very brief database problem (possibly caused by the database being overloaded) might cause the initialization logic to fail without the server going offline, resulting in a server that was online, but did not have any active channels.