12.5. Sparse and Redundant Representations
12.5.1. Dictionaries
Definition 12.3 (Dictionary)
A dictionary for $\mathbb{C}^N$ is a finite collection $\mathcal{D}$ of unit-norm vectors which span the whole of $\mathbb{C}^N$.
The elements of a dictionary are called atoms and they are denoted by $d_\omega$, where $\omega$ is drawn from an index set $\Omega$.
The dictionary is written as
$$\mathcal{D} = \{ d_\omega : \omega \in \Omega \},$$
where
$$\| d_\omega \|_2 = 1 \quad \text{for every } \omega \in \Omega,$$
and every signal $x \in \mathbb{C}^N$ can be expressed as a linear combination of atoms:
$$x = \sum_{\omega \in \Omega} a_\omega d_\omega.$$
We use the letter $D$ to denote the number of atoms in the dictionary, i.e. $D = | \Omega |$.
This definition is adapted from [35].
The indices may have an interpretation, such as the time-frequency or time-scale localization of an atom, or they may simply be labels without any underlying meaning.
Note that the dictionary need not provide a unique representation for a vector $x$; it only guarantees that at least one representation exists.
When $D = N$, the dictionary is nothing but a basis for $\mathbb{C}^N$; when $D > N$, the atoms are linearly dependent and the dictionary is called overcomplete or redundant.
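The definition above can be sketched numerically. The following NumPy snippet builds a small random overcomplete dictionary (the sizes $N = 4$, $D = 8$ and the seed are illustrative choices, not from the text), normalizes each atom to unit norm, and checks that the atoms span the signal space:

```python
import numpy as np

# Build a random overcomplete dictionary with N = 4, D = 8 (illustrative sizes).
rng = np.random.default_rng(0)
N, D = 4, 8
Phi = rng.standard_normal((N, D))
Phi /= np.linalg.norm(Phi, axis=0)   # every atom d_omega gets unit l2 norm

# The atoms span R^N exactly when the matrix has full row rank, so every
# signal x admits at least one representation x = sum_omega a_omega d_omega.
print(np.allclose(np.linalg.norm(Phi, axis=0), 1.0))  # True
print(np.linalg.matrix_rank(Phi) == N)                # True
```

A random Gaussian matrix of this shape has full row rank with probability one, so the spanning check passes for essentially any seed.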
12.5.2. Redundant Dictionaries and Sparse Signals
With $D > N$, the dictionary is redundant: the atoms are linearly dependent, and a signal may admit many different representations.
In contrast, a basis with $D = N$ provides a unique representation for every signal.
Definition 12.4 ($K$-sparse signal)
A signal $x \in \mathbb{C}^N$ is called $K$-sparse in $\mathcal{D}$ if it can be expressed as a linear combination of at most $K$ atoms from $\mathcal{D}$.
Normally, for sparse signals, we have $K \ll D$.
Let $\Lambda \subset \Omega$ be an index set with $| \Lambda | = K$.
Let $x$ be a $K$-sparse signal given by
$$x = \sum_{\lambda \in \Lambda} a_\lambda d_\lambda.$$
Note that this is not the only possible representation of $x$ in $\mathcal{D}$; it is just one representation which uses only $K$ atoms.
Now there are $\binom{D}{K}$ ways of choosing a subset of $K$ atoms from the dictionary.
Thus the set of all $K$-sparse signals in $\mathcal{D}$ is the collection of signals expressible over one of these $\binom{D}{K}$ subsets of atoms.
This set will be denoted by $\Sigma_K$.
Example 12.15 ($K$-sparse signals in the standard basis)
For the special case where $\mathcal{D}$ is the standard basis of $\mathbb{C}^N$, we have
$$\Sigma_K = \{ x \in \mathbb{C}^N : \| x \|_0 \leq K \},$$
i.e., the set of signals which have at most $K$ nonzero entries.
Example 12.16 ($K$-sparse signals in an orthonormal basis)
In contrast, if we choose an orthonormal basis $\Psi = \{ \psi_1, \dots, \psi_N \}$,
then with the dictionary $\mathcal{D} = \Psi$ the $K$-sparse signals are those of the form $x = \sum_{i \in \Lambda} a_i \psi_i$ with $| \Lambda | \leq K$.
We also note that for a specific choice of index set $\Lambda$ with $| \Lambda | = K$, the signals of the form $x = \sum_{\lambda \in \Lambda} a_\lambda d_\lambda$
form a subspace of $\mathbb{C}^N$ of dimension at most $K$.
So we have $\binom{D}{K}$ such subspaces, and $\Sigma_K$ is their union.
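The union-of-subspaces structure is easy to see numerically. In the sketch below (sizes, support, and values are illustrative), we count the subspaces for the standard-basis case and construct a signal lying in one of them:

```python
import numpy as np
from math import comb

# For the standard basis of C^N, Sigma_K = { x : ||x||_0 <= K } is a union
# of comb(N, K) subspaces, one per choice of support (illustrative N, K).
N, K = 10, 3
print(comb(N, K))   # 120 subspaces, each of dimension K

# A K-sparse signal lives in the subspace of its support (here {1, 4, 7}).
x = np.zeros(N)
x[[1, 4, 7]] = [2.0, -1.0, 0.5]
print(np.count_nonzero(x) <= K)   # True
```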
12.5.3. Sparse Approximation Problem
In the sparse approximation problem, we attempt to approximate a given signal $x \in \mathbb{C}^N$ as a linear combination of at most $K$ atoms from the dictionary $\mathcal{D}$, where typically $K \ll N$.
Naturally, we wish to obtain a best possible sparse representation of $x$, one which minimizes the approximation error.
Let $\Lambda \subset \Omega$ denote the index set of the $K$ atoms participating in the approximation.
Let $\hat{x}$ denote the approximation of $x$ over this set of atoms.
Then we can write
$$\hat{x} = \sum_{\lambda \in \Lambda} a_\lambda d_\lambda.$$
We put all complex valued coefficients $a_\lambda$ into a coefficient vector $a$.
Clearly we would like to minimize the approximation error $\| x - \hat{x} \|_2$ over all possible choices of $\Lambda$ and $a$.
Thus the sparse approximation problem can be cast as a minimization problem given by
$$\min_{| \Lambda | = K} \; \min_{a} \; \Big\| x - \sum_{\lambda \in \Lambda} a_\lambda d_\lambda \Big\|_2. \qquad (12.17)$$
If we choose a particular index set $\Lambda$, the inner minimization over the coefficients $a$ is a straightforward least squares problem.
We reemphasize here that in this formulation we are using a fixed dictionary $\mathcal{D}$; the dictionary itself is not subject to optimization.
This problem is known as the $(\mathcal{D}, K)$-SPARSE approximation problem.
A related problem is known as the $(\mathcal{D}, K)$-EXACT-SPARSE problem, in which the signal $x$ is known a priori to have an exact representation using at most $K$ atoms.
This formulation simplifies the minimization problem (12.17) since
it is known a priori that for the optimal choice of atoms the approximation error is exactly zero.
Clearly the EXACT-SPARSE problem is simpler than the SPARSE approximation problem. Thus if the EXACT-SPARSE problem is NP-hard then so is the harder SPARSE approximation problem. It is expected that solving the EXACT-SPARSE problem will provide insights into solving the SPARSE problem.
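The combinatorial nature of the search over supports can be seen in a brute-force EXACT-SPARSE solver: enumerate every size-$K$ support, solve the inner least squares problem, and keep the support with the smallest residual. The dictionary, sizes, and support below are illustrative; this is a sketch of the search, not a practical algorithm.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N, D, K = 6, 10, 2
Phi = rng.standard_normal((N, D))
Phi /= np.linalg.norm(Phi, axis=0)

# Synthesize a signal with an exact 2-sparse representation on atoms {3, 7}.
a = np.zeros(D)
a[[3, 7]] = [1.5, -2.0]
x = Phi @ a

# Brute force: comb(10, 2) = 45 least squares subproblems.  This count
# grows combinatorially in D and K, which is why the problem is hard.
def residual(support):
    coef, *_ = np.linalg.lstsq(Phi[:, support], x, rcond=None)
    return np.linalg.norm(x - Phi[:, support] @ coef)

best = min(combinations(range(D), K), key=residual)
print(best)   # (3, 7) -- the true support attains (numerically) zero residual
```

For generic random atoms no other 2-dimensional span contains $x$, so the true support is the unique minimizer.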
In Theorem 12.8 we identified conditions
under which a sparse representation for a given vector $x$ is necessarily unique.
12.5.4. Synthesis and Analysis
The atoms of a dictionary $\mathcal{D}$ can be organized as the columns of an $N \times D$ matrix
$$\mathcal{D} = \begin{bmatrix} d_{\omega_1} & d_{\omega_2} & \dots & d_{\omega_D} \end{bmatrix},$$
where $\Omega = \{ \omega_1, \omega_2, \dots, \omega_D \}$ is the index set; we abuse the notation $\mathcal{D}$ to denote both the dictionary and its matrix.
Thus in matrix terminology a representation of $x$ in the dictionary can be written as
$$x = \mathcal{D} a,$$
where $a \in \mathbb{C}^D$ is a vector of coefficients.
Definition 12.5 (Synthesis matrix)
The matrix $\mathcal{D}$ whose columns are the atoms of the dictionary is called the synthesis matrix, since $x$ is synthesized from the coefficients $a$ via $x = \mathcal{D} a$.
We can also view the synthesis matrix $\mathcal{D}$ as a linear operator from the coefficient space $\mathbb{C}^D$ to the signal space $\mathbb{C}^N$.
There is another way to look at $x$ through $\mathcal{D}$.
Definition 12.6 (Analysis matrix)
The conjugate transpose $\mathcal{D}^H$ of the synthesis matrix is called the analysis matrix. It maps a signal $x$ to a coefficient vector
$$b = \mathcal{D}^H x,$$
where each entry of $b$ is the inner product of $x$ with the corresponding atom.
Note that in general $x \neq \mathcal{D} ( \mathcal{D}^H x )$ unless $\mathcal{D}$ happens to be an orthonormal basis.
Definition 12.7 ($(\mathcal{D}, K)$-EXACT-SPARSE problem)
With the help of the synthesis matrix $\mathcal{D}$, the exact-sparse recovery problem can be written as
$$\min_{a} \| a \|_0 \quad \text{subject to} \quad x = \mathcal{D} a, \; \| a \|_0 \leq K.$$
If $x$ does not have a representation using at most $K$ atoms, the problem is infeasible; otherwise a sparsest representation is sought.
Definition 12.8 ($(\mathcal{D}, K)$-SPARSE approximation problem)
With the help of the synthesis matrix $\mathcal{D}$, the sparse approximation problem can be written as
$$\min_{a} \| x - \mathcal{D} a \|_2 \quad \text{subject to} \quad \| a \|_0 \leq K.$$
This problem can be visualized as a projection of $x$ onto the set $\Sigma_K$ of $K$-sparse signals.
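The synthesis and analysis views can be sketched in a few lines of NumPy (the complex dictionary, sizes, and seed are illustrative):

```python
import numpy as np

# Synthesis maps coefficients to a signal: x = D a.
# Analysis maps a signal to atom correlations: b = D^H x.
rng = np.random.default_rng(2)
N, D = 4, 6
Phi = rng.standard_normal((N, D)) + 1j * rng.standard_normal((N, D))
Phi /= np.linalg.norm(Phi, axis=0)

a = np.zeros(D, dtype=complex)
a[[1, 4]] = [2.0, -1.0]
x = Phi @ a                 # synthesis
b = Phi.conj().T @ x        # analysis: b[k] = <x, d_k>

print(np.allclose(b[1], np.vdot(Phi[:, 1], x)))   # True
# For a redundant dictionary, D D^H is not the identity in general:
print(np.allclose(Phi @ b, x))                    # False
```

The last line illustrates the note above: applying synthesis after analysis does not reproduce the signal unless the dictionary is an orthonormal basis.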
12.5.5. P-Norms
There are some simple and useful results on relationships between
different $\ell_p$ norms which we collect in this section.
Definition 12.9 (Complex sign vector)
Let $x \in \mathbb{C}^N$. Write each entry in polar form as
$$x_i = r_i e^{j \theta_i},$$
where $r_i = | x_i |$ and $\theta_i$ is the phase of $x_i$.
The sign vector for $x$, denoted $\operatorname{sgn}(x)$, is defined entry-wise as
$$\operatorname{sgn}(x)_i = \begin{cases} e^{j \theta_i} & \text{if } r_i \neq 0, \\ 0 & \text{if } r_i = 0, \end{cases}$$
where $i = 1, \dots, N$.
Theorem 12.9 ($\ell_1$ norm via the sign vector)
For any $x \in \mathbb{C}^N$:
$$\| x \|_1 = \langle x, \operatorname{sgn}(x) \rangle.$$
Proof. This follows from:
$$\langle x, \operatorname{sgn}(x) \rangle = \sum_{i=1}^N x_i \overline{\operatorname{sgn}(x)_i} = \sum_{i : r_i \neq 0} r_i e^{j \theta_i} e^{-j \theta_i} = \sum_{i=1}^N | x_i | = \| x \|_1.$$
Note that whenever $x_i = 0$, the corresponding entry of the sign vector is also $0$ and contributes nothing to the sum.
Theorem 12.10 (Equivalence of $\ell_1$ and $\ell_2$ norms)
Suppose $x \in \mathbb{C}^N$. Then
$$\| x \|_2 \leq \| x \|_1 \leq \sqrt{N} \| x \|_2.$$
Proof. For the lower bound, we go as follows:
$$\| x \|_1^2 = \Big( \sum_{i=1}^N | x_i | \Big)^2 \geq \sum_{i=1}^N | x_i |^2 = \| x \|_2^2.$$
This gives us $\| x \|_2 \leq \| x \|_1$.
We can write the $\ell_1$ norm as $\| x \|_1 = \langle x, \operatorname{sgn}(x) \rangle$ due to Theorem 12.9.
By the Cauchy-Schwarz inequality we have
$$\| x \|_1 \leq \| x \|_2 \, \| \operatorname{sgn}(x) \|_2.$$
Since $\operatorname{sgn}(x)$ has at most $N$ nonzero entries, each of unit magnitude, we have $\| \operatorname{sgn}(x) \|_2 \leq \sqrt{N}$.
Thus, we get $\| x \|_1 \leq \sqrt{N} \| x \|_2$.
Theorem 12.11 (Equivalence of $\ell_2$ and $\ell_\infty$ norms)
Let $x \in \mathbb{C}^N$. Then
$$\| x \|_\infty \leq \| x \|_2 \leq \sqrt{N} \| x \|_\infty.$$
Proof. This follows from:
$$\| x \|_\infty^2 = \max_i | x_i |^2 \leq \sum_{i=1}^N | x_i |^2 = \| x \|_2^2 \leq N \max_i | x_i |^2 = N \| x \|_\infty^2.$$
Thus $\| x \|_\infty \leq \| x \|_2 \leq \sqrt{N} \| x \|_\infty$.
Theorem 12.12 (Relationship between $\ell_1$ and $\ell_\infty$ norms)
Let $x \in \mathbb{C}^N$. Then
$$\| x \|_\infty \leq \| x \|_1 \leq N \| x \|_\infty.$$
Proof. The lower bound holds since $\| x \|_\infty = \max_i | x_i | \leq \sum_{i=1}^N | x_i | = \| x \|_1$; the upper bound holds since each $| x_i | \leq \| x \|_\infty$ and there are $N$ terms in the sum.
Theorem 12.13
Let $x \in \mathbb{C}^N$ and let $0 < p \leq q < \infty$. Then
$$\| x \|_q \leq \| x \|_p.$$
Proof. This follows from: assume without loss of generality that $\| x \|_p = 1$ (the case $x = 0$ is trivial). Then $| x_i | \leq 1$ for every $i$, hence $| x_i |^q \leq | x_i |^p$, giving
$$\| x \|_q^q = \sum_{i=1}^N | x_i |^q \leq \sum_{i=1}^N | x_i |^p = 1.$$
Finally since both norms are positively homogeneous, the general case follows by scaling $x$ by $1 / \| x \|_p$.
Theorem 12.14
Let $x \in \mathbb{C}^N$ and let $0 < p \leq q < \infty$. Then
$$\| x \|_p \leq N^{\frac{1}{p} - \frac{1}{q}} \| x \|_q.$$
Proof. We know that, by Hölder's inequality with conjugate exponents $\frac{q}{p}$ and $\frac{q}{q - p}$,
$$\sum_{i=1}^N | x_i |^p \cdot 1 \leq \Big( \sum_{i=1}^N | x_i |^q \Big)^{\frac{p}{q}} \Big( \sum_{i=1}^N 1 \Big)^{1 - \frac{p}{q}} = \| x \|_q^p \, N^{1 - \frac{p}{q}}.$$
Thus, taking the $p$-th root on both sides,
$$\| x \|_p \leq N^{\frac{1}{p} - \frac{1}{q}} \| x \|_q.$$
We used the fact that $\sum_i a_i b_i \leq ( \sum_i a_i^u )^{1/u} ( \sum_i b_i^v )^{1/v}$ whenever $\frac{1}{u} + \frac{1}{v} = 1$ with $u, v > 1$.
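The inequalities of Theorems 12.10 to 12.14 are easy to sanity-check numerically on a random vector (the size, seed, and choice of $p, q$ below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16
x = rng.standard_normal(N)

l1 = np.linalg.norm(x, 1)
l2 = np.linalg.norm(x, 2)
linf = np.linalg.norm(x, np.inf)

print(l2 <= l1 <= np.sqrt(N) * l2)        # True (Theorem 12.10)
print(linf <= l2 <= np.sqrt(N) * linf)    # True (Theorem 12.11)
print(linf <= l1 <= N * linf)             # True (Theorem 12.12)

# For p <= q: ||x||_q <= ||x||_p <= N^(1/p - 1/q) ||x||_q (Theorems 12.13, 12.14).
p, q = 1.0, 3.0
lp, lq = np.linalg.norm(x, p), np.linalg.norm(x, q)
print(lq <= lp <= N ** (1 / p - 1 / q) * lq)   # True
```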
Theorem 12.15 (An upper bound on the $\ell_p$ norm)
Let $x \in \mathbb{C}^N$ and let $0 < p < \infty$. Then
$$\| x \|_p \leq N^{\frac{1}{p}} \| x \|_\infty.$$
Proof. Let $M = \| x \|_\infty = \max_i | x_i |$.
Thus, the $p$-th power of the $\ell_p$ norm can be bounded as
$$\| x \|_p^p = \sum_{i=1}^N | x_i |^p \leq N M^p.$$
Obviously, taking the $p$-th root gives $\| x \|_p \leq N^{1/p} M$.
Similarly, if $x$ has only $K$ nonzero entries, the same argument gives $\| x \|_p \leq K^{1/p} \| x \|_\infty$.
Thus the bound tightens for sparse vectors, a theme we develop in the next subsection.
12.5.6. Sparse Signals
In this subsection we explore some useful properties of $\Sigma_K$, the set of $K$-sparse signals.
We recall that
$$\Sigma_K = \{ x \in \mathbb{C}^N : \| x \|_0 \leq K \}.$$
We established before that this set is a union of $\binom{N}{K}$ subspaces, each of dimension at most $K$.
We first present some results which connect the $\ell_1$, $\ell_2$ and $\ell_\infty$ norms of sparse signals.
Theorem 12.16 (Relation between norms of sparse vectors)
Suppose $x \in \Sigma_K$. Then
$$\frac{\| x \|_1}{\sqrt{K}} \leq \| x \|_2 \leq \sqrt{K} \| x \|_\infty.$$
Proof. Due to Theorem 12.9,
we can write $\| x \|_1 = \langle x, \operatorname{sgn}(x) \rangle$.
By the Cauchy-Schwarz inequality we have
$$\| x \|_1 \leq \| x \|_2 \, \| \operatorname{sgn}(x) \|_2.$$
Since $x$ has at most $K$ nonzero entries, $\operatorname{sgn}(x)$ has at most $K$ nonzero entries, each of unit magnitude, so $\| \operatorname{sgn}(x) \|_2 \leq \sqrt{K}$.
Thus we get the lower bound
$$\frac{\| x \|_1}{\sqrt{K}} \leq \| x \|_2.$$
Now
$$\| x \|_2^2 = \sum_{i : x_i \neq 0} | x_i |^2 \leq K \| x \|_\infty^2,$$
since there are only $K$ nonzero terms in the sum; this gives the upper bound $\| x \|_2 \leq \sqrt{K} \| x \|_\infty$.
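Theorem 12.16 can be checked on a random $K$-sparse vector; note that $K$, not the ambient dimension $N$, controls the constants (sizes and seed below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 100, 5
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

l1 = np.linalg.norm(x, 1)
l2 = np.linalg.norm(x, 2)
linf = np.linalg.norm(x, np.inf)
print(l1 / np.sqrt(K) <= l2 <= np.sqrt(K) * linf)   # True
```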
12.5.7. Compressible Signals
In this subsection, we first look at some general results and definitions
related to $K$-term approximations of arbitrary signals, and then introduce the notion of compressible signals.
12.5.7.1. K-term Approximation of General Signals
Definition 12.10 (Restriction of a signal)
Let $x \in \mathbb{C}^N$ and let $\Lambda \subseteq \{ 1, \dots, N \}$ be an index set
such that $| \Lambda | = K$.
Let $x_\Lambda \in \mathbb{C}^K$ be the vector consisting of the entries of $x$ indexed by $\Lambda$.
Then $x_\Lambda$ is called the restriction of the signal $x$ onto $\Lambda$.
Alternatively let $x_\Lambda \in \mathbb{C}^N$ be the vector which agrees with $x$ on the entries indexed by $\Lambda$ and is zero elsewhere.
In other words, the first version drops the entries outside $\Lambda$, while the second (mask) version zeroes them out.
As an abuse of notation, we will use any of the two definitions
whenever we are referring to $x_\Lambda$; the intended meaning should be clear from context.
Example 12.17 (Restrictions on index sets)
Let $x = (1, 2, 3, 4) \in \mathbb{C}^4$ and $\Lambda = \{ 1, 3 \}$.
Then the restriction version gives $x_\Lambda = (1, 3) \in \mathbb{C}^2$, while the mask version gives $x_\Lambda = (1, 0, 3, 0) \in \mathbb{C}^4$.
Since both versions identify the same entries of $x$, they carry the same information, and we switch between them freely.
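The two versions of the restriction are one line each in NumPy (the vector and the index set are illustrative; NumPy indices are 0-based):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
Lam = [0, 2]                        # index set Lambda (0-based)

restriction = x[Lam]                # lives in C^{|Lambda|}
mask = np.zeros_like(x)
mask[Lam] = x[Lam]                  # lives in C^N, zero outside Lambda

print(restriction)   # [1. 3.]
print(mask)          # [1. 0. 3. 0.]
```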
Definition 12.11 ($K$-term approximation)
Let $x \in \mathbb{C}^N$ and let $\Lambda \subseteq \{ 1, \dots, N \}$ with $| \Lambda | = K$. The mask version of $x_\Lambda$ is called a $K$-term approximation of $x$.
Clearly for any given $x$, there are $\binom{N}{K}$ possible $K$-term approximations, one for each choice of $\Lambda$.
Example 12.18 ($K$-term approximation)
Let $x = (1, -2, 3, 0) \in \mathbb{C}^4$.
Let $\Lambda = \{ 1, 2 \}$. Then $x_\Lambda = (1, -2, 0, 0)$
is a $2$-term approximation of $x$.
If we choose $\Lambda = \{ 2, 3 \}$ instead, we obtain $x_\Lambda = (0, -2, 3, 0)$, which keeps the two largest-magnitude entries of $x$ and is therefore a better $2$-term approximation.
Definition 12.12 ($K$ largest entries approximation)
Let $x \in \mathbb{C}^N$. Sort the entries of $x$ in descending order of magnitude.
In case of ties, the order is resolved lexicographically;
i.e., if $| x_i | = | x_j |$ with $i < j$, then $x_i$ appears before $x_j$ in the sorted order.
Consider the index set $\Lambda$ of the first $K$ entries in this order, i.e. the $K$ largest-magnitude entries of $x$.
The mask version of $x_\Lambda$ is then the approximation of $x$ by its $K$ largest entries. This signal is denoted henceforth as
$$x|_K \triangleq x_\Lambda,$$
where $\Lambda$ is the index set of the $K$ largest-magnitude entries of $x$ as described above.
Example 12.19 (Largest entries approximation)
Let $x = (-1, 5, -3, 0, 2)$.
Then $x|_1 = (0, 5, 0, 0, 0)$, $x|_2 = (0, 5, -3, 0, 0)$, $x|_3 = (0, 5, -3, 0, 2)$ and $x|_4 = (-1, 5, -3, 0, 2) = x$.
All further approximations $x|_K$ with $K \geq \| x \|_0 = 4$ coincide with $x$ itself.
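Computing $x|_K$ amounts to a stable sort by magnitude; the helper below (the function name is our own) reproduces the tie-breaking rule of Definition 12.12:

```python
import numpy as np

def largest_entries(x, K):
    """Return x|_K: keep the K largest-magnitude entries of x, zero the rest."""
    # A stable sort on -|x| resolves magnitude ties by smaller index first,
    # matching the lexicographic tie-breaking of Definition 12.12.
    order = np.argsort(-np.abs(x), kind="stable")
    out = np.zeros_like(x)
    out[order[:K]] = x[order[:K]]
    return out

x = np.array([-1.0, 5.0, -3.0, 0.0, 2.0])
print(largest_entries(x, 2))   # [ 0.  5. -3.  0.  0.]
```

With $K \geq 4$ the function returns $x$ itself, matching the example above.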
A pertinent question at this point is:
which $K$-term approximation of $x$ is the best one in the sense of minimizing the approximation error?
Theorem 12.17 (Best $K$-term approximation)
Let $x \in \mathbb{C}^N$ and let $1 \leq p < \infty$. Then $x|_K$ is a best $K$-term approximation of $x$ in the $\ell_p$ norm; i.e., for every index set $\Lambda$ with $| \Lambda | = K$,
$$\| x - x|_K \|_p \leq \| x - x_\Lambda \|_p.$$
Proof. For any index set $\Lambda$ with $| \Lambda | = K$ (using the mask version of $x_\Lambda$),
$$\| x - x_\Lambda \|_p^p = \sum_{i \notin \Lambda} | x_i |^p.$$
We note that maximizing $\sum_{i \in \Lambda} | x_i |^p$ over the choices of $\Lambda$ is equivalent to minimizing the approximation error.
Let $\Lambda^\star$ denote the index set of the $K$ largest-magnitude entries of $x$, with ties resolved as in Definition 12.12.
Further let $\Lambda$ be any other index set of size $K$.
Thus if $i \in \Lambda^\star$ and $j \notin \Lambda^\star$, then $| x_i | \geq | x_j |$, which implies $\sum_{i \in \Lambda^\star} | x_i |^p \geq \sum_{i \in \Lambda} | x_i |^p$.
Thus $x|_K = x_{\Lambda^\star}$ minimizes the approximation error among all $K$-term approximations.
This result helps us establish that whenever we are looking for a best $K$-term approximation of a signal, it suffices to pick its $K$ largest-magnitude entries.
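Theorem 12.17 can be verified exhaustively on a small vector: the error of the largest-entries approximation matches the minimum over all supports (the vector and $K$ are illustrative):

```python
import numpy as np
from itertools import combinations

x = np.array([0.5, -4.0, 1.0, 3.0, -0.2])
K = 2

order = np.argsort(-np.abs(x), kind="stable")
best = np.zeros_like(x)
best[order[:K]] = x[order[:K]]     # x|_K keeps -4.0 and 3.0

def err(support):
    approx = np.zeros_like(x)
    approx[list(support)] = x[list(support)]
    return np.linalg.norm(x - approx, 1)

all_errs = [err(S) for S in combinations(range(len(x)), K)]
print(np.isclose(err(order[:K]), min(all_errs)))   # True
```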
Definition 12.13 (Restriction of a matrix)
Let $A$ be an $N \times D$ matrix and let $\Lambda \subseteq \{ 1, \dots, D \}$ be an index set
such that $| \Lambda | = K$.
Let $A_\Lambda$ be the $N \times K$ matrix consisting of the columns of $A$ indexed by $\Lambda$.
Then $A_\Lambda$ is called the restriction of the matrix $A$ onto $\Lambda$.
Alternatively let $A_\Lambda$ be the $N \times D$ matrix which agrees with $A$ on the columns indexed by $\Lambda$ and has zero columns elsewhere.
In other words, the first version drops the columns outside $\Lambda$, while the second (mask) version zeroes them out.
As an abuse of notation, we will use any of the two definitions
whenever we are referring to $A_\Lambda$; the intended meaning should be clear from context.
Theorem 12.18
Let $A$ be an $N \times D$ matrix, let $x \in \mathbb{C}^D$ and let $\Lambda \subseteq \{ 1, \dots, D \}$. Then
$$A x_\Lambda = A_\Lambda x_\Lambda.$$
Proof. This follows from:
$$A x_\Lambda = \sum_{j \in \Lambda} x_j a_j = A_\Lambda x_\Lambda,$$
where $a_j$ denotes the $j$-th column of $A$ and $x_\Lambda$ on the left-hand side is the mask version.
The result remains valid whether we use
the restriction or the mask version of $A_\Lambda$ and $x_\Lambda$ on the right-hand side.
Corollary 12.1
Let $A$ be an $N \times D$ matrix and let $x \in \mathbb{C}^D$ be supported on $\Lambda$, i.e. $x = x_\Lambda$
using the mask version of $x_\Lambda$. Then
$$A x = A_\Lambda x_\Lambda.$$
Proof. Straightforward application of Theorem 12.18: since $x = x_\Lambda$, we have $A x = A x_\Lambda = A_\Lambda x_\Lambda$.
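Corollary 12.1 is the identity that makes sparse matrix-vector products cheap: only the columns on the support matter. A quick numerical check (matrix, support, and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 7))
x = np.zeros(7)
Lam = [2, 5]                 # support of x (0-based)
x[Lam] = [1.0, -2.0]

# A x equals the column-restricted matrix times the restricted coefficients.
print(np.allclose(A @ x, A[:, Lam] @ x[Lam]))   # True
```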
Theorem 12.19
Let $A$ be an $N \times D$ matrix, let $y \in \mathbb{C}^N$ and let $\Lambda \subseteq \{ 1, \dots, D \}$. Then
$$A_\Lambda^H y = \left( A^H y \right)_\Lambda.$$
Proof. Note that the $j$-th entry of $A^H y$ is $a_j^H y$, the inner product of the $j$-th column of $A$ with $y$.
Now let $j \in \Lambda$.
Then the corresponding entry of $A_\Lambda^H y$ is also $a_j^H y$, so restricting $A^H y$ to $\Lambda$ yields the same vector as applying the analysis operator of the restricted matrix.
The result remains valid whether we use
the restriction or the mask version of $A_\Lambda$ and $( A^H y )_\Lambda$.
12.5.7.2. Compressible Signals
We will now define the notion of a compressible signal in terms of the decay rate of magnitude of its entries when sorted in descending order.
Definition 12.14 (Compressible signal)
A signal $x \in \mathbb{C}^N$ is called $p$-compressible with magnitude $R$ if its entries, once sorted in descending order of magnitude, obey the power-law decay
$$| x_{(i)} | \leq R \cdot i^{-\frac{1}{p}}, \quad i = 1, \dots, N,$$
for some $0 < p \leq 1$, where $x_{(i)}$ denotes the $i$-th largest-magnitude entry of $x$.
Theorem 12.20 ($\ell_1$ norm of a $1$-compressible signal)
Let $x \in \mathbb{C}^N$ be $1$-compressible with magnitude $R$. Then
$$\| x \|_1 \leq R \left( 1 + \ln N \right).$$
Proof. Recalling that $| x_{(i)} | \leq R \cdot i^{-1}$, and
since the $\ell_1$ norm does not depend on the ordering of the entries, we have
$$\| x \|_1 = \sum_{i=1}^N | x_{(i)} | \leq R \sum_{i=1}^N \frac{1}{i}.$$
Now since $\frac{1}{i} \leq \int_{i-1}^{i} \frac{dt}{t}$ for $i \geq 2$, we have $\sum_{i=2}^N \frac{1}{i} \leq \int_1^N \frac{dt}{t} = \ln N$.
This gives us
$$\| x \|_1 \leq R \left( 1 + \ln N \right).$$
The sum on the R.H.S. is the $N$-th harmonic number $H_N$, and the argument above is the standard bound $H_N \leq 1 + \ln N$.
This completes the proof.
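The bound is easy to check on the worst-case $1$-compressible signal, whose sorted entries attain the power law with equality ($N$ and $R$ below are illustrative):

```python
import numpy as np

# Worst-case 1-compressible signal: |x_(i)| = R / i.  Its l1 norm stays
# within the logarithmic bound R (1 + ln N) of Theorem 12.20.
N, R = 1000, 2.0
x = R / np.arange(1, N + 1)

print(np.linalg.norm(x, 1) <= R * (1 + np.log(N)))   # True
```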
We now demonstrate how a compressible signal is well approximated by a sparse signal.
Theorem 12.21 (Sparse approximation of compressible signals)
Let $x \in \mathbb{C}^N$ be $p$-compressible with magnitude $R$ and $0 < p < 1$. Then
$$\| x - x|_K \|_1 \leq R \, \frac{p}{1 - p} \, K^{1 - \frac{1}{p}}
\quad \text{and} \quad
\| x - x|_K \|_2 \leq R \left( \frac{p}{2 - p} \right)^{\frac{1}{2}} K^{\frac{1}{2} - \frac{1}{p}}.$$
Proof. Expanding the $\ell_1$ approximation error, we have
$$\| x - x|_K \|_1 = \sum_{i = K+1}^N | x_{(i)} | \leq R \sum_{i = K+1}^N i^{-\frac{1}{p}}.$$
We now approximate the R.H.S. sum with an integral:
$$\sum_{i = K+1}^N i^{-\frac{1}{p}} \leq \int_K^\infty t^{-\frac{1}{p}} \, dt = \frac{p}{1 - p} K^{1 - \frac{1}{p}}.$$
Now the $\ell_1$ bound follows by combining the two inequalities.
We can similarly show the result for the $\ell_2$ norm:
$$\| x - x|_K \|_2^2 = \sum_{i = K+1}^N | x_{(i)} |^2 \leq R^2 \int_K^\infty t^{-\frac{2}{p}} \, dt = R^2 \, \frac{p}{2 - p} \, K^{1 - \frac{2}{p}},$$
and taking square roots completes the proof.
Example 12.20 (Sparse approximation for $p = \frac{1}{2}$ compressible signals)
Let $x$ be $\frac{1}{2}$-compressible with magnitude $R$.
Hence, by Theorem 12.21,
$$\| x - x|_K \|_1 \leq R \, K^{-1}$$
and
$$\| x - x|_K \|_2 \leq \frac{R}{\sqrt{3}} \, K^{-\frac{3}{2}}.$$
Both approximation errors decay polynomially fast as $K$ increases.
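The decay rates of Example 12.20 can be confirmed on a synthetic signal whose sorted entries attain the $p = \frac{1}{2}$ power law with equality (sizes below are illustrative):

```python
import numpy as np

# A 1/2-compressible signal with |x_(i)| = R * i^{-2}.  Example 12.20
# predicts K-term errors of at most R/K (l1) and R K^{-3/2}/sqrt(3) (l2).
N, R, K = 10000, 1.0, 50
x = R * np.arange(1.0, N + 1) ** (-2.0)   # already sorted by magnitude
tail = x[K:]                              # entries dropped by x|_K

print(np.linalg.norm(tail, 1) <= R / K)                          # True
print(np.linalg.norm(tail, 2) <= R / np.sqrt(3) * K ** (-1.5))   # True
```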