3.3.1. Compression Ratio
Let the total number of compressed bits corresponding to the ECG samples be $b_c$. This includes the overhead bits required for the stream header and frame headers, to be explained later. Let $b_u$ denote the total number of bits in the original uncompressed representation of the samples. Then the compression ratio ($CR$) is defined as
$$
CR = \frac{b_u}{b_c}.
$$
Percentage space saving ($PSS$) is defined as
$$
PSS = \left(1 - \frac{b_c}{b_u}\right) \times 100\%.
$$
Note that, in the literature, $PSS$ is often defined as the compression ratio (e.g., [22]).
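To make the bit-level bookkeeping concrete, here is a minimal Python sketch (not part of the original text) that computes $CR$ and $PSS$ from the uncompressed and compressed bit counts; the function name and the example numbers are illustrative only.

```python
def compression_metrics(n_samples, bits_per_sample, payload_bits, header_bits=0):
    """Compute (CR, PSS) from total uncompressed and compressed bit counts.

    The compressed size b_c includes header overhead, as in the definitions above.
    """
    b_u = n_samples * bits_per_sample   # total uncompressed bits
    b_c = payload_bits + header_bits    # total compressed bits (payload + headers)
    cr = b_u / b_c                      # compression ratio
    pss = (1.0 - b_c / b_u) * 100.0     # percentage space saving
    return cr, pss

# Hypothetical example: 1024 samples at 11 bits, compressed to 4000 payload bits
# plus 64 bits of stream/frame header overhead.
print(compression_metrics(1024, 11, 4000, header_bits=64))  # ~ (2.77, 63.9)
```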
Several papers ignore the bitstream formation aspect and report $\frac{n - m}{n} \times 100$ (e.g., [37]) or $\frac{n}{m}$ (e.g., [32]) as the compression ratio, where $n$ is the number of samples and $m$ is the number of measurements in each window. These quantities measure the reduction in the number of measurements compared to the number of samples in each window. We shall call this metric percentage measurement saving ($PMS$):
$$
PMS = \frac{n - m}{n} \times 100\%.
$$
The ratio $\frac{m}{n}$ will be called the measurement ratio ($MR$):
$$
MR = \frac{m}{n}.
$$
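Continuing the sketch above, the measurement-domain metrics depend only on $n$ and $m$; the values below are made up purely for illustration.

```python
def pms(n, m):
    """Percentage measurement saving for n samples compressed to m measurements."""
    return (n - m) / n * 100.0

def measurement_ratio(n, m):
    """Measurement ratio MR = m / n."""
    return m / n

print(pms(512, 256), measurement_ratio(512, 256))  # 50.0, 0.5
```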
The measurement ratio is not a
good indicator of compression ratio.
If the sensing matrix is Gaussian,
then the measurement values are real-valued.
In the literature using Gaussian sensing matrices (e.g., [37]), it is unclear how many bits are used to represent each floating-point measurement value for transmission. Under the standard IEEE 32-bit floating-point format, each value would require 32 bits. Then, for MIT-BIH data (11-bit samples), the compression ratio in bits would be
$$
CR = \frac{11\, n}{32\, m}.
$$
The only way the ratio $\frac{n}{m}$ would make sense as a compression ratio is if the measurements are also quantized at 11-bit resolution. However, the impact of such quantization is not considered in the simulations.
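A quick numerical check of this point, as a sketch under the stated assumptions (11-bit samples, 32-bit float measurements); the window sizes are hypothetical.

```python
def cr_with_float_measurements(n, m, sample_bits=11, measurement_bits=32):
    """Bit-level compression ratio when each measurement is sent as a 32-bit float."""
    return (n * sample_bits) / (m * measurement_bits)

print(cr_with_float_measurements(512, 256))  # 0.6875 -> expansion, not compression
print(cr_with_float_measurements(512, 128))  # 1.375
```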
Now consider the case of a sparse binary sensing matrix. Since it consists of only zeros and ones, it generates integer outputs for integer inputs. Thus, we can say that the output of a sparse binary sensor is quantized by design. However, the range of values changes. Assume that the sensing matrix has $d$ ones per column. Then it has a total of $dn$ ones, so each row will have $\frac{dn}{m}$ ones on average. Since the ones are placed randomly, the rows will not all have exactly the same number of ones. If we assume the input data to be in the range $[0, 2^{11})$ (under 11-bit resolution), then in the worst case the range of output values may go up to $\frac{dn}{m}\, 2^{11}$. For a simple case where the average number of ones per row $\frac{dn}{m}$ equals 8 (e.g., $d = 4$ and $m = n/2$), we will require 14 bits to represent each measurement value. To achieve $\frac{n}{m}$ as the compression ratio, we will have to quantize the measurements to 11 bits. If we do so, we shall need to provide some way to communicate the quantization parameters to the decoder, as well as study the impact of quantization noise.
This issue seems to be ignored in [35].
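The dynamic-range argument above can be checked with a short sketch; the parameters $n$, $m$, $d$ below are illustrative, and the bound uses the average number of ones per row, as in the text.

```python
import math

def raw_measurement_bits(n, m, d, sample_bits=11):
    """Bits needed for a measurement from an m x n sparse binary matrix
    with d ones per column, assuming unsigned samples below 2**sample_bits."""
    ones_per_row = d * n / m                        # average ones per row
    max_value = ones_per_row * (2 ** sample_bits)   # approximate worst-case sum
    return math.ceil(math.log2(max_value))

print(raw_measurement_bits(n=512, m=256, d=4))  # 14
```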
Another way of looking at the compressibility is how many bits per sample ($bps$) are needed on average in the compressed bitstream. We define $bps$ as
$$
bps = \frac{b_c}{\text{total number of samples}}.
$$
Since the entropy coder codes the measurements rather than the samples directly, it is also useful to see how many bits are needed to code each measurement. We denote this as bits per measurement ($bpm$):
$$
bpm = \frac{b_c}{\text{total number of measurements}}.
$$
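For completeness, the per-sample and per-measurement bit rates can be computed the same way (again a sketch with made-up totals).

```python
def bits_per_sample(total_compressed_bits, total_samples):
    """bps: average compressed bits per original sample."""
    return total_compressed_bits / total_samples

def bits_per_measurement(total_compressed_bits, total_measurements):
    """bpm: average compressed bits per measurement."""
    return total_compressed_bits / total_measurements

print(bits_per_sample(4064, 1024))       # ~3.97 bits per sample
print(bits_per_measurement(4064, 512))   # ~7.94 bits per measurement
```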
3.3.2. Reconstruction Quality
The normalized root mean square error ($NRMSE$) is defined as
$$
NRMSE = \frac{\| x - \tilde{x} \|_2}{\| x \|_2},
$$
where $x$ is the original ECG signal and $\tilde{x}$ is the reconstructed signal.
A popular metric to measure the quality of reconstruction of ECG signals is the percentage root mean square difference ($PRD$):
$$
PRD = NRMSE \times 100 = \frac{\| x - \tilde{x} \|_2}{\| x \|_2} \times 100.
$$
The signal-to-noise ratio ($SNR$) is related to $PRD$ as
$$
SNR = -20 \log_{10} (0.01\, PRD).
$$
As one desires higher compression ratios and lower $PRD$, one can define a combined quality score ($QS$) as
$$
QS = \frac{CR}{PRD}.
$$
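The quality metrics translate directly into code; the following NumPy sketch (illustrative, with a toy signal standing in for real ECG data) mirrors the definitions above.

```python
import numpy as np

def prd(x, x_hat):
    """Percentage root mean square difference between x and its reconstruction."""
    return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

def snr_from_prd(prd_value):
    """SNR in dB corresponding to a given PRD."""
    return -20.0 * np.log10(0.01 * prd_value)

def quality_score(cr, prd_value):
    """Combined quality score QS = CR / PRD."""
    return cr / prd_value

# Toy example: a sine wave with small additive noise in place of an ECG record.
x = np.sin(np.linspace(0.0, 8.0 * np.pi, 1000))
x_hat = x + 0.01 * np.random.default_rng(0).standard_normal(x.size)
p = prd(x, x_hat)
print(p, snr_from_prd(p), quality_score(2.77, p))
```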
Zigel et al. [38] established
a link between the diagnostic distortion and
the easy-to-measure $PRD$ metric.
Table 3.1 shows the classified quality
and corresponding SNR (signal-to-noise ratio) and PRD ranges.
Table 3.1 Quality of Reconstruction

| Quality      | PRD      | SNR       |
|--------------|----------|-----------|
| Very good    | < 2%     | > 33 dB   |
| Good         | 2-9%     | 20-33 dB  |
| Undetermined | > 9%     | < 20 dB   |
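The thresholds in Table 3.1 can be encoded as a small helper; this is just an illustrative mapping of the PRD ranges attributed to Zigel et al. [38], not code from the original work.

```python
def quality_class(prd_value):
    """Map a PRD value (in percent) to the quality classes of Table 3.1."""
    if prd_value < 2.0:
        return "very good"
    elif prd_value <= 9.0:
        return "good"
    else:
        return "undetermined"

print(quality_class(1.4), quality_class(5.0), quality_class(12.0))
```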