A Matrix-Valued Inner Product for Matrix-Valued Signals and Matrix-Valued Lattices

Xiang-Gen Xia, Fellow, IEEE
Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA

Abstract: A matrix-valued inner product was proposed previously to construct orthonormal matrix-valued wavelets for matrix-valued signals. It induces a weaker orthogonality for matrix-valued signals than the orthogonality of all components in a matrix that is commonly used in orthogonal multiwavelet constructions. With the weaker orthogonality, it is easier to construct orthonormal matrix-valued wavelets. In this paper, we re-study the matrix-valued inner product from the inner product viewpoint, which is more fundamental, and propose a new but equivalent norm for matrix-valued signals. We show that although the inner product is not scalar-valued, it maintains most of the properties of a scalar-valued inner product. We introduce a new linear independence concept for matrix-valued signals and present some related properties. We then present the Gram-Schmidt orthonormalization procedure for a set of linearly independent matrix-valued signals. Finally, we define matrix-valued lattices, where the newly introduced Gram-Schmidt orthogonalization might be applied.

Keywords: Matrix-valued inner product, matrix-valued signal space, nondegenerate matrix-valued signals, linearly independent matrix-valued signals, matrix-valued lattices, matrix-valued wavelets

Received: March 25, 2021. Revised: April 19, 2022. Accepted: July 16, 2022. Published: September 9, 2022.

1. Introduction

Matrix-valued (vector-valued) signals are ubiquitous nowadays; examples include videos, multispectral images, signals from multisensor arrays, and other high-dimensional data. For these signals, there are correlations not only over time but also across the matrix components. How to efficiently represent and process such signals plays an important and fundamental role in data science.
In [1], [2], a matrix-valued inner product was introduced for matrix-valued (or vector-valued) signals. It leads to a weaker orthogonality (called Orthogonality B) than the componentwise orthogonality (called Orthogonality A) used in orthogonal multiwavelet constructions [11], [12]. The weaker Orthogonality B provides an easier sufficient condition for constructing orthonormal multiwavelets than the necessary and sufficient condition [12] for constructing orthonormal multiwavelets with Orthogonality A. A connection between multiwavelets and matrix-valued/vector-valued wavelets can be found in [2]. After the works in [1], [2], there have been many studies on matrix-valued/vector-valued wavelets for matrix-valued/vector-valued signals in the literature; see, for example, [3]-[10].
On the other hand, the Orthogonality B induced by the matrix-valued inner product is stronger than the orthogonality, called Orthogonality C here, induced by the commonly used scalar-valued inner product for matrices. This is because, as we shall see later, Orthogonality B is basically the orthogonality between all row vectors of two matrix-valued signals, while Orthogonality C is the orthogonality of the two long vectors obtained by concatenating the row vectors of the two matrix-valued signals. It was proved in [2] that Orthogonality B, i.e., the matrix-valued inner product, is able to completely decorrelate matrix-valued signals not only in the time domain but also across the components inside the matrix; that is, it provides a complete Karhunen-Loève expansion for matrix-valued signals, while Orthogonality A or Orthogonality C may not do so. In other words, Orthogonality B induced by the matrix-valued inner product is the proper orthogonality for matrix-valued signals. This also means that the matrix-valued inner product is needed to study the decorrelation of matrix-valued signals, and a conventional scalar-valued inner product may not be enough.
Since the main goal in [1], [2] was to construct orthonormal matrix-valued (vector-valued) wavelets, not much was studied about the inner product or the orthogonality itself, which is more fundamental. In this paper, we study more properties of the matrix-valued inner product and its induced Orthogonality B for the matrix-valued signal space proposed in [1], [2]. We first define a norm for matrix-valued signals different from that defined in [2] and prove that these two norms are equivalent. The norm defined in this paper is consistent with the matrix-valued inner product, similarly to the case of a scalar-valued inner product. We introduce a new linear independence concept for matrix-valued signals and present some related properties. We then present the Gram-Schmidt orthonormalization procedure for a set of linearly independent matrix-valued signals. We finally define matrix-valued lattices, where the newly introduced Gram-Schmidt orthogonalization might be applied. Due to the noncommutativity of matrix multiplications, these concepts and properties for matrix-valued signals and/or the inner product are not straightforward extensions of the conventional ones for scalar-valued signals and/or inner products.
The remainder of this paper is organized as follows. In Section 2, we introduce the matrix-valued signal space and the matrix-valued inner product, and define a new norm for matrix-valued signals. We present some simple properties of the matrix-valued inner product and prove that the new norm proposed in this paper is equivalent to that used in [2]. In Section 3, we
first introduce the concepts of degenerate and nondegenerate matrix-valued signals and then introduce the concept of linear independence for matrix-valued signals. The newly introduced linear independence is different from, but consistent with, the conventional one for vectors. We also present some interesting properties of the linear independence and the orthogonality. We finally present the Gram-Schmidt orthonormalization procedure for a set of linearly independent matrix-valued signals, which has a form similar to the conventional one for vectors but is not a straightforward generalization, due to the noncommutativity of matrix multiplications and the matrix-valued inner product used in the procedure. In Section 4, we define matrix-valued lattices. In Section 5, we conclude this paper.
2. Matrix-valued Signal Space and Matrix-valued Inner Product

We first introduce the matrix-valued signal space studied in [1], [2]. Let $\mathbb{C}^{N\times N}$ denote the set of all $N\times N$ matrices with complex-valued entries, and for $-\infty \le a < b \le \infty$, let $L^2(a,b)$ denote the set of all finite-energy signals on the interval $(a,b)$. Let $L^2(a,b;\mathbb{C}^{N\times N})$ be defined as
$$L^2(a,b;\mathbb{C}^{N\times N}) = \left\{ f(t) = \begin{pmatrix} f_{11}(t) & f_{12}(t) & \cdots & f_{1N}(t) \\ f_{21}(t) & f_{22}(t) & \cdots & f_{2N}(t) \\ \vdots & \vdots & \ddots & \vdots \\ f_{N1}(t) & f_{N2}(t) & \cdots & f_{NN}(t) \end{pmatrix} : f_{kl} \in L^2(a,b),\ 1 \le k,l \le N \right\}. \quad (1)$$
We call $L^2(a,b;\mathbb{C}^{N\times N})$ a matrix-valued signal space and $f(t) \in L^2(a,b;\mathbb{C}^{N\times N})$, or simply $f \in L^2(a,b;\mathbb{C}^{N\times N})$, a matrix-valued signal.

For any $A \in \mathbb{C}^{N\times N}$ and $f \in L^2(a,b;\mathbb{C}^{N\times N})$, the products $Af, fA \in L^2(a,b;\mathbb{C}^{N\times N})$. This implies that the matrix-valued signal space $L^2(a,b;\mathbb{C}^{N\times N})$ is defined over $\mathbb{C}^{N\times N}$ and not simply over $\mathbb{C}$. For $f \in L^2(a,b;\mathbb{C}^{N\times N})$, its integration $\int_a^b f(t)\,dt$ is defined by the integrations of its components, i.e., $\int_a^b f(t)\,dt = \left( \int_a^b f_{kl}(t)\,dt \right)$.
Let $\|\cdot\|_M$ denote a matrix norm on $\mathbb{C}^{N\times N}$, for example, the Frobenius norm
$$\|A\|_M = \|A\|_F = \left( \sum_{k,l=1}^{N} |A_{kl}|^2 \right)^{1/2},$$
where $A = (A_{kl})$. For each $f \in L^2(a,b;\mathbb{C}^{N\times N})$, let $\|f\|_M$ denote the norm of $f$ associated with the matrix norm $\|\cdot\|_M$:
$$\|f\|_M = \left\| \int_a^b f(t) f^*(t)\,dt \right\|_M^{1/2}, \quad (2)$$
where $^*$ denotes the complex conjugate transpose. Note that the norm $\|f\|$ of $f$ defined in [2] has the following form:
$$\|f\| = \left( \int_a^b \|f(t)\|_M^2\,dt \right)^{1/2}, \quad (3)$$
where $\|f(t)\|_M$ is the matrix norm of the matrix $f(t)$ for a fixed $t$. We will show later that the above two norms $\|f\|$ and $\|f\|_M$ are equivalent in the sense that there exist two positive constants $C_1 > 0$ and $C_2 > 0$ such that
$$C_1 \|f\| \le \|f\|_M \le C_2 \|f\|, \quad \text{for any } f \in L^2(a,b;\mathbb{C}^{N\times N}). \quad (4)$$
We next define the matrix-valued inner product for matrix-valued signals in $L^2(a,b;\mathbb{C}^{N\times N})$. For two matrix-valued signals $f, g \in L^2(a,b;\mathbb{C}^{N\times N})$, their matrix-valued inner product (or simply inner product) $\langle f,g\rangle$ is defined as the integration of the matrix product $f(t) g^*(t)$, i.e.,
$$\langle f,g\rangle = \int_a^b f(t)\, g^*(t)\,dt. \quad (5)$$
With the definition (5), most properties of the conventional scalar-valued inner product hold for the above matrix-valued inner product. For instance, the following properties of the matrix-valued inner product are clear:
(i) $\langle f,g\rangle = \langle g,f\rangle^*$.
(ii) $\langle f,f\rangle = 0$ if and only if $f = 0$.
(iii) $\|f\|_M = \|\langle f,f\rangle\|_M^{1/2}$.
(iv) For any $A, B \in \mathbb{C}^{N\times N}$, $\langle Af, Bg\rangle = A \langle f,g\rangle B^*$.
Note that Property (iii) may not hold for the norm (3) used in [2].
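To make the above definitions concrete, here is a minimal numerical sketch (ours, not from [1], [2]) of the inner product (5) for discretized matrix-valued signals, assuming a uniform time grid with spacing dt and a signal stored as a NumPy array of shape (T, N, N), one $N\times N$ matrix per time sample; the helper name mv_inner is our own:

    import numpy as np

    def mv_inner(f, g, dt):
        """Matrix-valued inner product <f, g> = int_a^b f(t) g^*(t) dt,
        approximated by a Riemann sum over the T samples."""
        # For each t, f[t] @ g[t].conj().T; then sum over t and scale by dt.
        return np.einsum('tij,tkj->ik', f, g.conj()) * dt

    rng = np.random.default_rng(0)
    N, T, dt = 3, 1000, 1e-3
    f = rng.standard_normal((T, N, N)) + 1j * rng.standard_normal((T, N, N))
    g = rng.standard_normal((T, N, N)) + 1j * rng.standard_normal((T, N, N))

    # Property (i): <f, g> = <g, f>^*
    assert np.allclose(mv_inner(f, g, dt), mv_inner(g, f, dt).conj().T)

    # Property (iv): <A f, B g> = A <f, g> B^*
    A = rng.standard_normal((N, N))
    B = rng.standard_normal((N, N))
    Af = np.einsum('ij,tjk->tik', A, f)   # (A f)(t) = A f(t)
    Bg = np.einsum('ij,tjk->tik', B, g)
    assert np.allclose(mv_inner(Af, Bg, dt), A @ mv_inner(f, g, dt) @ B.conj().T)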
Two matrix-valued signals $f$ and $g$ in $L^2(a,b;\mathbb{C}^{N\times N})$ are called orthogonal if $\langle f,g\rangle = 0$. A set of matrix-valued signals is called an orthogonal set if any two distinct matrix-valued signals in the set are orthogonal. A sequence $\Phi_k(t) \in L^2(a,b;\mathbb{C}^{N\times N})$, $k \in \mathbb{Z}$, is called an orthonormal set in $L^2(a,b;\mathbb{C}^{N\times N})$ if
$$\langle \Phi_k, \Phi_l \rangle = \delta(k-l)\, I_N, \quad k, l \in \mathbb{Z}, \quad (6)$$
where $\delta(k) = 1$ when $k = 0$, $\delta(k) = 0$ when $k \ne 0$, and $I_N$ is the $N\times N$ identity matrix. Due to (i) above, the orthogonality/orthonormality between $f$ and $g$ is commutative, i.e., if $f$ and $g$ are orthogonal/orthonormal, then $g$ and $f$ are orthogonal/orthonormal too.

A sequence $\Phi_k(t) \in L^2(a,b;\mathbb{C}^{N\times N})$, $k \in \mathbb{Z}$, is called an orthonormal basis for $L^2(a,b;\mathbb{C}^{N\times N})$ if it satisfies (6) and, moreover, for any $f \in L^2(a,b;\mathbb{C}^{N\times N})$ there exists a sequence of $N\times N$ constant matrices $F_k$, $k \in \mathbb{Z}$, such that
$$f(t) = \sum_{k \in \mathbb{Z}} F_k \Phi_k(t), \quad \text{for } t \in [a,b], \quad (7)$$
or simply $f = \sum_{k \in \mathbb{Z}} F_k \Phi_k$, where $F_k = \langle f, \Phi_k \rangle$, the multiplication $F_k \Phi_k(t)$ for each fixed $t$ is an $N\times N$ matrix multiplication, and the convergence of the infinite summation is in the sense of the norm $\|\cdot\|_M$ defined by (2) for the matrix-valued signal space. The corresponding Parseval equality is
$$\langle f, f \rangle = \sum_{k \in \mathbb{Z}} F_k F_k^*. \quad (8)$$

With the norm $\|\cdot\|_M$ in (2), it is clear that for any element $\Phi_k$ in an orthonormal set in $L^2(a,b;\mathbb{C}^{N\times N})$, we have $\|\Phi_k\|_M = 1$, which is consistent with the conventional relationship between vector norm and vector inner product. However, this property may not hold for the norm $\|\cdot\|$ in (3) used in [2]. We refer to [2] for the Karhunen-Loève expansion with an orthonormal basis for random processes of matrix-valued signals.
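As an illustration of (6)-(8), the following toy example (our construction) uses Haar-like indicator signals $\Phi_k(t) = \sqrt{K}\, I_N$ on the $k$-th of $K$ equal bins of $[0,1)$, which form an orthonormal set; a signal in their span is expanded with $F_k = \langle f, \Phi_k \rangle$ and the Parseval equality (8) is checked:

    import numpy as np

    def mv_inner(f, g, dt):
        return np.einsum('tij,tkj->ik', f, g.conj()) * dt

    N, K, S = 2, 8, 50                 # S samples per bin, T = K*S in total
    T, dt = K * S, 1.0 / (K * S)
    rng = np.random.default_rng(1)

    # Phi_k = sqrt(K) * I_N on the k-th bin, 0 elsewhere: <Phi_k, Phi_l> = delta(k-l) I_N.
    Phi = np.zeros((K, T, N, N))
    for k in range(K):
        Phi[k, k * S:(k + 1) * S] = np.sqrt(K) * np.eye(N)

    # f = sum_k C_k Phi_k lies in the span of {Phi_k}.
    C = rng.standard_normal((K, N, N))
    f = np.einsum('kij,ktjl->til', C, Phi)

    F = np.stack([mv_inner(f, Phi[k], dt) for k in range(K)])
    assert np.allclose(F, C)                                   # F_k = <f, Phi_k>
    assert np.allclose(np.einsum('kij,ktjl->til', F, Phi), f)  # expansion (7)
    assert np.allclose(mv_inner(f, f, dt),
                       sum(F[k] @ F[k].conj().T for k in range(K)))  # Parseval (8)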
We next show the equivalence (4) of the two norms $\|\cdot\|_M$ in (2) and $\|\cdot\|$ in (3).

Proposition 1: The norms $\|\cdot\|_M$ in (2) and $\|\cdot\|$ in (3) are equivalent.

Proof: It is known that all matrix norms for constant matrices are equivalent. Hence, we show (4) only for the Frobenius norm, i.e., $\|\cdot\|_M = \|\cdot\|_F$. In this case,
$$\|f\|_M^4 = \sum_{k,l=1}^{N} \left| \int_a^b \sum_{m=1}^{N} f_{km}(t) f_{lm}^*(t)\,dt \right|^2$$
$$= \sum_{k=1}^{N} \left( \sum_{m=1}^{N} \int_a^b |f_{km}(t)|^2\,dt \right)^2 + \sum_{k \ne l} \left| \int_a^b \sum_{m=1}^{N} f_{km}(t) f_{lm}^*(t)\,dt \right|^2$$
$$\ge \sum_{k=1}^{N} \left( \sum_{m=1}^{N} \int_a^b |f_{km}(t)|^2\,dt \right)^2 \ge \frac{1}{N} \left( \int_a^b \sum_{k,l=1}^{N} |f_{kl}(t)|^2\,dt \right)^2 = \frac{1}{N} \|f\|^4.$$
Thus, we have
$$\frac{1}{N^{1/4}} \|f\| \le \|f\|_M. \quad (9)$$
On the other hand,
$$\|f\|_M^4 = \sum_{k,l=1}^{N} \left| \int_a^b \sum_{m=1}^{N} f_{km}(t) f_{lm}^*(t)\,dt \right|^2 \le N \sum_{k,l=1}^{N} \sum_{m=1}^{N} \left| \int_a^b f_{km}(t) f_{lm}^*(t)\,dt \right|^2$$
$$\le N \sum_{k,l=1}^{N} \sum_{m=1}^{N} \int_a^b |f_{km}(t)|^2\,dt \int_a^b |f_{lm}(t)|^2\,dt$$
$$\le \frac{N}{2} \sum_{k,l=1}^{N} \sum_{m=1}^{N} \left[ \left( \int_a^b |f_{km}(t)|^2\,dt \right)^2 + \left( \int_a^b |f_{lm}(t)|^2\,dt \right)^2 \right]$$
$$= N^2 \sum_{k=1}^{N} \sum_{m=1}^{N} \left( \int_a^b |f_{km}(t)|^2\,dt \right)^2 \le N^2 \left( \int_a^b \sum_{k,m=1}^{N} |f_{km}(t)|^2\,dt \right)^2 = N^2 \|f\|^4.$$
This shows that
$$\|f\|_M \le N^{1/2} \|f\|. \quad (10)$$
Combining (9) and (10), the equivalence (4) between the norms $\|\cdot\|_M$ in (2) and $\|\cdot\|$ in (3) is proved, with $C_1 = N^{-1/4}$ and $C_2 = N^{1/2}$. q.e.d.
Due to the equivalence of the norm $\|\cdot\|_M$ proposed in this paper and the norm $\|\cdot\|$ used in [2], all the results on orthonormal matrix-valued wavelets obtained in [2] hold when the norm $\|\cdot\|_M$ for matrix-valued signals in this paper is used.
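The constants $C_1 = N^{-1/4}$ and $C_2 = N^{1/2}$ from the proof of Proposition 1 can also be checked numerically; a quick sketch (ours), under the same (T, N, N) discretization as before:

    import numpy as np

    def norm_M(f, dt):
        """The norm (2) with the Frobenius choice: || <f, f> ||_F^{1/2}."""
        G = np.einsum('tij,tkj->ik', f, f.conj()) * dt
        return np.linalg.norm(G, 'fro') ** 0.5

    def norm_2(f, dt):
        """The norm (3): ( int ||f(t)||_F^2 dt )^{1/2}."""
        return np.sqrt(np.sum(np.abs(f) ** 2) * dt)

    rng = np.random.default_rng(2)
    N, T, dt = 4, 500, 2e-3
    f = rng.standard_normal((T, N, N)) + 1j * rng.standard_normal((T, N, N))
    nM, n2 = norm_M(f, dt), norm_2(f, dt)
    assert N ** (-0.25) * n2 <= nM <= N ** 0.5 * n2   # the bounds (9) and (10)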
As a remark, the conventional inner product for two matrices $A$ and $B$ is the scalar-valued inner product $\mathrm{tr}(AB^*)$, where $\mathrm{tr}$ stands for the matrix trace. It is not hard to see that with this scalar-valued inner product, the orthogonality between two matrix-valued signals, which is called Orthogonality C, is the orthogonality of the two long vectors obtained by concatenating the row vectors of the two matrix-valued signals. As mentioned in the Introduction, and as is also not hard to see from the above definition, the orthogonality (6) induced by the matrix-valued inner product in this paper is the orthogonality between all pairs of row vectors, including the row vectors inside one matrix, of the two matrix-valued signals, which is named Orthogonality B in [2]. Clearly, Orthogonality B is stronger than Orthogonality C, while it is weaker than the componentwise orthogonality called Orthogonality A in [2], commonly used in multiwavelets [11], [12].
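To make the distinction concrete, the following small example (our construction, not from [2]) exhibits a pair of signals that are orthogonal in the sense of Orthogonality C but not in the sense of Orthogonality B; the scalar-valued inner product is the trace of the matrix-valued one:

    import numpy as np

    def mv_inner(f, g, dt):
        return np.einsum('tij,tkj->ik', f, g.conj()) * dt

    def scalar_inner(f, g, dt):
        """Orthogonality C uses tr(<f, g>) = int tr(f(t) g^*(t)) dt."""
        return np.trace(mv_inner(f, g, dt))

    # Two samples on a grid with dt = 1.
    f = np.stack([np.eye(2), np.eye(2)])
    g = np.stack([np.diag([1.0, 0.0]), np.diag([0.0, -1.0])])
    print(mv_inner(f, g, 1.0))      # diag(1, -1) != 0: not orthogonal in sense B
    print(scalar_inner(f, g, 1.0))  # 0: orthogonal in sense C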
With Orthogonality A, a necessary and sufficient condition for constructing orthonormal multiwavelets was given in [12], but it is not easy to check. However, with Orthogonality B, an easy sufficient condition for constructing orthonormal multiwavelets was obtained in [2]. Furthermore, it was shown in [2] that the matrix-valued inner product (5) and its induced Orthogonality B provide a complete decorrelation of matrix-valued signals along time and across matrix components, i.e., a complete Karhunen-Loève expansion for matrix-valued signals can be obtained. This may not be possible for Orthogonality A or for Orthogonality C induced by a scalar-valued inner product [13], [14]. In other words, the matrix-valued inner product (5) is fundamental to the study of matrix-valued signals.
3. Linear Independence and Gram-Schmidt Orthonormalization

3.1 Degenerate and Linearly Independent Matrix-valued Signals

Let us first introduce degenerate and linearly independent matrix-valued signals, and study their properties.

A matrix-valued signal $f$ in $L^2(a,b;\mathbb{C}^{N\times N})$ is called a degenerate signal if $\langle f,f\rangle$ does not have full rank; otherwise it is called a nondegenerate signal. A sequence of matrix-valued signals $f_k$ in $L^2(a,b;\mathbb{C}^{N\times N})$, $k = 1, 2, \ldots, K$, is called linearly independent if the following condition holds: if
$$\sum_{k=1}^{K} F_k f_k = f \quad (11)$$
for constant matrices $F_k \in \mathbb{C}^{N\times N}$, $k = 1, 2, \ldots, K$, is degenerate, then the null space of the matrix $F_k^*$ includes the null space of the matrix $\langle f,f\rangle$ for every $k$, $k = 1, 2, \ldots, K$. Clearly, the above linear independence reduces to the conventional one when all the above matrices are diagonal. Furthermore, if $f = 0$ in (11), the above condition implies that all $F_k = 0$, $k = 1, 2, \ldots, K$, since in this case the null space of $\langle f,f\rangle$ is the whole space $\mathbb{C}^N$. This coincides with the condition of the conventional linear independence.
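Numerically, nondegeneracy is simply a rank condition on the $N\times N$ Gram matrix $\langle f,f\rangle$; a small sketch (ours, same discretization as before):

    import numpy as np

    def is_nondegenerate(f, dt, tol=1e-10):
        """f is nondegenerate iff <f, f> has full rank N."""
        G = np.einsum('tij,tkj->ik', f, f.conj()) * dt   # <f, f>
        return np.linalg.matrix_rank(G, tol=tol) == f.shape[1]

    rng = np.random.default_rng(3)
    N, T, dt = 3, 400, 1.0 / 400
    f = rng.standard_normal((T, N, N))
    print(is_nondegenerate(f, dt))      # True: random rows are independent

    # Degenerate example: the last row of f(t) copies the first row for all t,
    # so the rows are linearly dependent in the conventional sense.
    f_deg = f.copy()
    f_deg[:, -1, :] = f_deg[:, 0, :]
    print(is_nondegenerate(f_deg, dt))  # False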
Proposition 2: If matrix-valued signals $f_k$, $k = 1, 2, \ldots, K$, are linearly independent, then all signals $f_k$, $k = 1, 2, \ldots, K$, are nondegenerate.

Proof: Without loss of generality, assume $f_1$ is degenerate. Let $F_1 = I_N$ and $F_k = 0$ for $k = 2, 3, \ldots, K$. Then we have that
$$\sum_{k=1}^{K} F_k f_k = f_1$$
is degenerate, while the null space of $F_1^*$ is $\{0\}$ only and does not include the null space of $\langle f_1, f_1 \rangle$. In other words, $f_k$, $k = 1, 2, \ldots, K$, are not linearly independent. This contradicts the assumption in the proposition, and therefore the proposition is proved. q.e.d.

As one can see, the above concept of a degenerate signal is similar to that of $0$ in the conventional linear dependence or independence.
Proposition 3: Let $G_k \in \mathbb{C}^{N\times N}$, $k = 1, 2, \ldots, K$, be $K$ constant matrices, at least one of which has full rank. If matrix-valued signals $f_k$, $k = 1, 2, \ldots, K$, are linearly independent, then $\sum_{k=1}^{K} G_k f_k$ is nondegenerate.

Proof: Without loss of generality, let us assume $G_1$ has full rank. If $\sum_{k=1}^{K} G_k f_k = g$ is degenerate, then by the linear independence of $f_k$, $k = 1, 2, \ldots, K$, the null space of $G_1^*$ cannot contain only $0$, which contradicts the assumption that $G_1$ has full rank. q.e.d.

It is clear that Proposition 2 is a special case of Proposition 3.
Proposition 4: If matrix-valued signals $f_k$, $k = 1, 2, \ldots, K$, are linearly independent, then for any full-rank constant matrices $G_k \in \mathbb{C}^{N\times N}$, $k = 1, 2, \ldots, K$, the matrix-valued signals $g_k = G_k f_k$, $k = 1, 2, \ldots, K$, are also linearly independent.

Proof: For any constant matrices $F_k \in \mathbb{C}^{N\times N}$, $k = 1, 2, \ldots, K$, if
$$\sum_{k=1}^{K} F_k g_k = \sum_{k=1}^{K} F_k G_k f_k = f$$
is degenerate, then for each $k$, $1 \le k \le K$, the null space of the matrix $(F_k G_k)^* = G_k^* F_k^*$ includes the null space of the matrix $\langle f,f\rangle$, since $f_k$, $k = 1, 2, \ldots, K$, are linearly independent. Because all matrices $G_k$, $k = 1, 2, \ldots, K$, have full rank, for each $k$, $1 \le k \le K$, the null spaces of $F_k^*$ and $G_k^* F_k^*$ are the same; thus, the null space of $F_k^*$ includes the null space of $\langle f,f\rangle$ as well. This proves the proposition. q.e.d.
Similar to the conventional linear dependence of vectors, we have the following result for matrix-valued signals.

Proposition 5: For a matrix-valued signal $f \in L^2(a,b;\mathbb{C}^{N\times N})$ and two constant matrices $A, B \in \mathbb{C}^{N\times N}$, the matrix-valued signals $Af$ and $Bf$ are linearly dependent.

Proof: If $Af$ and $Bf$ were linearly independent, then from Proposition 2 it is easy to see that the matrices $A$ and $B$ both have full rank and $f$ is nondegenerate. Then we have
$$BA^{-1}(Af) - Bf = 0,$$
which contradicts the assumed linear independence of $Af$ and $Bf$. This proves the proposition. q.e.d.

Although this result is obvious for conventional vectors, for matrix-valued signals it may not be so, due to the matrix-valued coefficient multiplications, as can be seen from the above proof. We next consider more general linear combinations of linearly independent matrix-valued signals.
For $1 \le p \le K$, let $S_1, \ldots, S_p$ be a partition of the index set $\{1, 2, \ldots, K\}$, where each $S_i$ has $K_i$ elements, $S_{i_1} \cap S_{i_2} = \emptyset$ for $1 \le i_1 \ne i_2 \le p$, $\cup_{i=1}^{p} S_i = \{1, 2, \ldots, K\}$, and $1 \le K_1, \ldots, K_p \le K$ with $K_1 + K_2 + \cdots + K_p = K$.
Proposition 6: For each $i$, $1 \le i \le p$, let $G_{k_i} \in \mathbb{C}^{N\times N}$, $k_i \in S_i$, be $K_i$ constant matrices, at least one of which has full rank. If matrix-valued signals $f_k$, $k = 1, 2, \ldots, K$, are linearly independent, then the following $p$ matrix-valued signals
$$\sum_{k_i \in S_i} G_{k_i} f_{k_i}, \quad i = 1, 2, \ldots, p,$$
are linearly independent.

Proof: Let $F_i \in \mathbb{C}^{N\times N}$, $i = 1, 2, \ldots, p$, be constant matrices. Assume that
$$\sum_{i=1}^{p} F_i \sum_{k_i \in S_i} G_{k_i} f_{k_i} = g$$
is degenerate. Then
$$\sum_{i=1}^{p} \sum_{k_i \in S_i} F_i G_{k_i} f_{k_i} = g,$$
and by the linear independence of $f_k$, $k = 1, 2, \ldots, K$, we know that the null space of $(F_i G_{k_i})^*$ for every $k_i \in S_i$ and every $i = 1, \ldots, p$ contains the null space of the matrix $\langle g,g\rangle$. From the condition in the proposition, without loss of generality, we may assume that $G_{k_{i,1}}$, for some $k_{i,1} \in S_i$, has full rank for $1 \le i \le p$. Thus, the null space of $(F_i G_{k_{i,1}})^*$, or $G_{k_{i,1}}^* F_i^*$, contains the null space of $\langle g,g\rangle$ for $1 \le i \le p$. Since $G_{k_{i,1}}$ has full rank, the null space of $F_i^*$ must contain the null space of $\langle g,g\rangle$ for $1 \le i \le p$. This proves the proposition. q.e.d.

Note that when $p = 1$, Proposition 6 reduces to Proposition 3, and when $p = K$, it reduces to Proposition 4.
Proposition 7: If $f_k$, $k = 1, 2, \ldots, K$, form an orthonormal set in $L^2(a,b;\mathbb{C}^{N\times N})$, then they must be linearly independent.

Proof: For constant matrices $F_k \in \mathbb{C}^{N\times N}$, $k = 1, 2, \ldots, K$, let
$$\sum_{k=1}^{K} F_k f_k = f.$$
Then, from the Parseval equality (8), we have
$$\sum_{k=1}^{K} F_k F_k^* = \langle f, f \rangle.$$
Assume that for some vector $u \ne 0$ we have $\langle f,f\rangle u = 0$ but $F_{k_0}^* u \ne 0$ for some $k_0$, $1 \le k_0 \le K$. Then
$$0 < u^* F_{k_0} F_{k_0}^* u \le \sum_{k=1}^{K} u^* F_k F_k^* u = u^* \langle f,f\rangle u = 0,$$
which leads to a contradiction. Thus, for each $k$, $1 \le k \le K$, the null space of $F_k^*$ includes the null space of $\langle f,f\rangle$. This proves the linear independence of $f_k$, $k = 1, 2, \ldots, K$. q.e.d.
Corollary 1: Assume $f_k$, $k = 1, 2, \ldots, K$, are nondegenerate matrix-valued signals and form an orthogonal set in $L^2(a,b;\mathbb{C}^{N\times N})$. Then $g_k = \langle f_k, f_k \rangle^{-1/2} f_k$, $k = 1, 2, \ldots, K$, form an orthonormal set in $L^2(a,b;\mathbb{C}^{N\times N})$, and $f_k$, $k = 1, 2, \ldots, K$, are linearly independent.

Proof: Since all matrix-valued signals $f_k$ are nondegenerate, the matrices $\langle f_k, f_k \rangle$ all have full rank. From Property (iv) of the matrix-valued inner product and the fact that $\{f_k\}$ is an orthogonal set, for every $k, l$, $1 \le k, l \le K$,
$$\langle g_k, g_l \rangle = \langle f_k, f_k \rangle^{-1/2} \langle f_k, f_l \rangle \langle f_l, f_l \rangle^{-1/2} = \delta(k-l)\, I_N.$$
Thus, $g_k$, $k = 1, 2, \ldots, K$, form an orthonormal set in $L^2(a,b;\mathbb{C}^{N\times N})$. Then the linear independence of $f_k = \langle f_k, f_k \rangle^{1/2} g_k$, $k = 1, 2, \ldots, K$, immediately follows from Propositions 4 and 7. q.e.d.

The result in Corollary 1 is consistent with the conventional one for vectors, i.e., any orthogonal set of nonzero vectors must be linearly independent. However, there is a difference: in the above relationship between orthogonality and linear independence, the matrix-valued signals need to be nondegenerate. Note that it is possible for a matrix-valued signal $f$ in an orthogonal set in $L^2(a,b;\mathbb{C}^{N\times N})$ to be degenerate, i.e., $\langle f,f\rangle$ may not have full rank even though $f \ne 0$. Thus, a general orthogonal set of matrix-valued signals does not have to be linearly independent. This does not occur for an orthogonal set of nonzero signals when a scalar-valued inner product is used.
3.2 Gram-Schmidt Orthonormalization

We are now ready to present the Gram-Schmidt orthonormalization for a finite sequence of linearly independent matrix-valued signals. Let $f_k \in L^2(a,b;\mathbb{C}^{N\times N})$, $k = 1, 2, \ldots, K$, be linearly independent. The Gram-Schmidt orthonormalization for this sequence is as follows; it is similar to, but not a straightforward extension of, the conventional one, due to the noncommutativity of matrix multiplications.

Since $f_k \in L^2(a,b;\mathbb{C}^{N\times N})$, $k = 1, 2, \ldots, K$, are linearly independent, by Proposition 2, $f_1$ is nondegenerate, i.e., the matrix $\langle f_1, f_1 \rangle$ is invertible and positive definite. Let
$$g_1 = \langle f_1, f_1 \rangle^{-1/2} f_1. \quad (12)$$
Then we have
$$\langle g_1, g_1 \rangle = \langle f_1, f_1 \rangle^{-1/2} \int_a^b f_1(t) f_1^*(t)\,dt\, \langle f_1, f_1 \rangle^{-1/2} = \langle f_1, f_1 \rangle^{-1/2} \langle f_1, f_1 \rangle \langle f_1, f_1 \rangle^{-1/2} = I_N. \quad (13)$$
Let
$$\hat{g}_2 = f_2 - \langle f_2, g_1 \rangle g_1, \quad (14)$$
$$g_2 = \langle \hat{g}_2, \hat{g}_2 \rangle^{-1/2} \hat{g}_2. \quad (15)$$
For (15) to be valid, we need to show that $\hat{g}_2$ in (14) is nondegenerate. In fact, if $\hat{g}_2$ is degenerate, then from (14) and (12) we have
$$\hat{g}_2 = f_2 - \langle f_2, g_1 \rangle \langle f_1, f_1 \rangle^{-1/2} f_1 = F_1 f_1 + F_2 f_2,$$
where $F_2 = I_N$ and $F_1 = -\langle f_2, g_1 \rangle \langle f_1, f_1 \rangle^{-1/2}$. Similar to the proof of Proposition 2, this contradicts the assumption that $f_1$ and $f_2$ are linearly independent. Therefore, $\hat{g}_2$ in (14) is nondegenerate and (15) is well-defined.

Let us then check the orthogonality between $g_1$ and $\hat{g}_2$. From (14) and (13), we have
$$\langle \hat{g}_2, g_1 \rangle = \langle f_2, g_1 \rangle - \langle f_2, g_1 \rangle \langle g_1, g_1 \rangle = \langle f_2, g_1 \rangle - \langle f_2, g_1 \rangle = 0.$$
From (15) and (13), we have that $g_1$ and $g_2$ form an orthonormal set.

Repeating the above process, for a general $k$, $2 \le k \le K$, we let
$$\hat{g}_k = f_k - \sum_{l=1}^{k-1} \langle f_k, g_l \rangle g_l, \quad (16)$$
$$g_k = \langle \hat{g}_k, \hat{g}_k \rangle^{-1/2} \hat{g}_k. \quad (17)$$
With the same proof as for $g_1$ and $g_2$ above, we have the following proposition.

Proposition 8: For a linearly independent set of matrix-valued signals $f_k$, $k = 1, 2, \ldots, K$, let $g_1, g_2, \ldots, g_K$ be constructed as in (12) and (14)-(17). Then $g_1, g_2, \ldots, g_K$ form an orthonormal set.
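A minimal NumPy sketch of the procedure (12)-(17) follows (our illustration, assuming the discretization used earlier and inputs that are linearly independent in the sense of (11)); the inverse square root $\langle \hat{g}_k, \hat{g}_k \rangle^{-1/2}$ is computed from an eigendecomposition of the Hermitian positive definite Gram matrix:

    import numpy as np

    def mv_inner(f, g, dt):
        return np.einsum('tij,tkj->ik', f, g.conj()) * dt

    def inv_sqrt(H):
        """H^{-1/2} for a Hermitian positive definite matrix H."""
        w, V = np.linalg.eigh(H)
        return (V / np.sqrt(w)) @ V.conj().T

    def mv_gram_schmidt(fs, dt):
        """Orthonormalize signals fs[k], each of shape (T, N, N); a constant
        matrix acts per sample, i.e. (A g)(t) = A g(t)."""
        gs = []
        for f in fs:
            g_hat = f.copy()
            for g in gs:                       # (16): subtract <f_k, g_l> g_l
                g_hat -= np.einsum('ij,tjk->tik', mv_inner(f, g, dt), g)
            H = mv_inner(g_hat, g_hat, dt)     # nonsingular by linear independence
            gs.append(np.einsum('ij,tjk->tik', inv_sqrt(H), g_hat))   # (17)
        return gs

    rng = np.random.default_rng(4)
    N, T, K, dt = 2, 300, 4, 1.0 / 300
    fs = [rng.standard_normal((T, N, N)) for _ in range(K)]
    gs = mv_gram_schmidt(fs, dt)
    for k in range(K):
        for l in range(K):
            target = np.eye(N) if k == l else np.zeros((N, N))
            assert np.allclose(mv_inner(gs[k], gs[l], dt), target, atol=1e-8)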
As we can see, although the above Gram-Schmidt orthonormalization procedure for matrix-valued signals is similar to the conventional one for vectors, it is not a straightforward generalization, due to 1) the noncommutativity of matrix multiplications and 2) the matrix-valued inner product used in the above procedure.

We also want to make a comment on nondegeneracy and linear independence for matrix-valued signals. The nondegeneracy condition for matrix-valued signals is a weak condition: unless its row vectors of functions are linearly dependent in the conventional sense, a matrix-valued signal is usually nondegenerate. A finite set of nondegenerate matrix-valued signals usually satisfies the condition for linear independence of matrix-valued signals defined above, i.e., such signals are usually linearly independent, and therefore they can be made into an orthonormal set by using the above Gram-Schmidt procedure.

Another comment on the linear independence for matrix-valued signals is that the definition in (11) is only for left multiplication of constant matrices $F_k$ with matrix-valued signals $f_k$. Similar definitions of linear independence for matrix-valued signals with right constant matrix multiplications and/or mixed left and right constant matrix multiplications may be possible.

Although what is studied in this paper is for continuous-time matrix-valued signals, it can be easily generalized to discrete-time matrix-valued signals (sequences of finite or infinite length).
4. Matrix-valued Lattices

In this section, based on the matrix-valued signal space with the matrix-valued inner product, we introduce matrix-valued lattices.

For convenience, in what follows we only consider the Frobenius norm for matrices, i.e., $\|\cdot\|_M = \|\cdot\|_F$, and real matrix-valued signal spaces. Let $\mathcal{R}$ denote the real matrix-valued signal space $\mathcal{R} = L^2(a,b;\mathbb{R}^{N\times N})$, and let $\mathbb{Z}^{N\times N}$ denote the set of all $N\times N$ matrices with integer entries.

For finitely many linearly independent real matrix-valued signals $f_k \in \mathcal{R}$, $k = 1, 2, \ldots, K$, let $\mathcal{R}_K$ denote the matrix-valued signal space linearly spanned by them, i.e.,
$$\mathcal{R}_K = \left\{ \sum_{k=1}^{K} F_k f_k : F_k \in \mathbb{R}^{N\times N},\ k = 1, 2, \ldots, K \right\}. \quad (18)$$
From what was studied in the previous section, clearly, $f_k$, $k = 1, 2, \ldots, K$, form a basis of $\mathcal{R}_K$. The matrix-valued lattice formed by this basis in $\mathcal{R}_K$ is defined as
$$\mathcal{L} = \left\{ \sum_{k=1}^{K} F_k f_k : F_k \in \mathbb{Z}^{N\times N},\ k = 1, 2, \ldots, K \right\}, \quad (19)$$
which is a subset/subgroup of $\mathcal{R}_K$. The basis $f_k$, $k = 1, 2, \ldots, K$, is called a basis for the $K$-dimensional matrix-valued lattice $\mathcal{L}$.

A fundamental region of this lattice $\mathcal{L}$ can be defined similarly to that of a conventional lattice as follows. A set $\mathcal{F} \subset \mathcal{R}_K$ is called a fundamental region if its translations $x + \mathcal{F} = \{x + f : f \in \mathcal{F}\}$ for $x \in \mathcal{L}$ form a partition of $\mathcal{R}_K$. Since the basis elements $f_k$, $k = 1, 2, \ldots, K$, are not constant real vectors as in conventional lattices, it would not be convenient to define the determinant of the lattice. However, with the Gram-Schmidt orthonormalization developed in the previous section, we may define the determinant of the lattice directly as
$$\det(\mathcal{L}) = \prod_{k=1}^{K} \|\hat{f}_k\|_F, \quad (20)$$
where $\hat{f}_k$, $k = 1, 2, \ldots, K$, are obtained from the following Gram-Schmidt orthogonalization of $f_k$, $k = 1, 2, \ldots, K$, which comes from the Gram-Schmidt orthonormalization in the previous section: $\hat{f}_1 = f_1$ and
$$\hat{f}_k = \hat{g}_k = f_k - \sum_{l=1}^{k-1} \mu_{l,k} \hat{f}_l, \quad (21)$$
where
$$\mu_{l,k} = \langle f_k, \hat{f}_l \rangle \langle \hat{f}_l, \hat{f}_l \rangle^{-1}, \quad l = 1, 2, \ldots, k-1 \ \text{and} \ k = 2, 3, \ldots, K. \quad (22)$$
It is clear that the spaces linearly spanned by $\{f_1, f_2, \ldots, f_K\}$ and $\{\hat{f}_1, \hat{f}_2, \ldots, \hat{f}_K\}$ are the same, i.e., $\mathcal{R}_K$ in (18), since they can be linearly (over $\mathbb{R}^{N\times N}$) represented by each other, similarly to conventional vectors.
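The unnormalized orthogonalization (21)-(22) and the determinant (20) can be computed directly; a sketch (ours, for real signals under the same discretization as before):

    import numpy as np

    def mv_inner(f, g, dt):
        return np.einsum('tij,tkj->ik', f, g.conj()) * dt

    def lattice_det(fs, dt):
        """det(L) = prod_k ||f_hat_k||_F, with f_hat_k from (21)-(22) and
        ||.||_F the signal norm (2) with the Frobenius matrix norm."""
        f_hats, det = [], 1.0
        for f in fs:
            f_hat = f.copy()
            for fl in f_hats:
                mu = mv_inner(f, fl, dt) @ np.linalg.inv(mv_inner(fl, fl, dt))  # (22)
                f_hat -= np.einsum('ij,tjk->tik', mu, fl)                       # (21)
            f_hats.append(f_hat)
            det *= np.linalg.norm(mv_inner(f_hat, f_hat, dt), 'fro') ** 0.5
        return det

    rng = np.random.default_rng(5)
    N, T, K, dt = 2, 200, 3, 1.0 / 200
    fs = [rng.standard_normal((T, N, N)) for _ in range(K)]
    print(lattice_det(fs, dt))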
From the Gram-Schmidt orthogonalization (21), we have
$$\langle f_k, f_k \rangle = \langle \hat{f}_k, \hat{f}_k \rangle + \sum_{l=1}^{k-1} \mu_{l,k} \langle \hat{f}_l, \hat{f}_l \rangle \mu_{l,k}^*, \quad (23)$$
for $k = 1, 2, \ldots, K$. Using Property (iii) in Section 2, the identity (23) implies
$$\|f_k\|_F^2 \le \|\hat{f}_k\|_F^2 + \sum_{l=1}^{k-1} \|\mu_{l,k}\|_F^2 \cdot \|\hat{f}_l\|_F^2, \quad (24)$$
for $k = 1, 2, \ldots, K$. From (23), it is also clear that $\|f_k\|_F \ge \|\hat{f}_k\|_F$, $k = 2, \ldots, K$, since $\langle f_k, f_k \rangle - \langle \hat{f}_k, \hat{f}_k \rangle$ is positive semidefinite.
As a remark, we know that the conventional Gram-Schmidt orthogonalization plays an important role in the LLL algorithm for conventional lattice basis reduction [20]. It is, however, not clear how the Gram-Schmidt orthogonalization for matrix-valued signals introduced above can be applied to matrix-valued lattice basis reduction.
5. Conclusion

In this paper, we re-studied the matrix-valued inner product previously proposed to construct orthonormal matrix-valued wavelets for matrix-valued signal analysis [1], [2], where not much was studied about the matrix-valued inner product or its induced Orthogonality B, which is more fundamental. To study the matrix-valued inner product and its induced Orthogonality B further, we first proposed a new norm for matrix-valued signals, which is more consistent with the matrix-valued inner product than the norm used in [2], similarly to the case of the conventional scalar-valued inner product. We showed that these two norms are equivalent, which means that with the newly proposed norm, all the results for constructing orthonormal matrix-valued wavelets obtained in [2] still hold. We then proposed the concepts of degenerate and nondegenerate matrix-valued signals and defined linear independence for matrix-valued signals, which is different from, but consistent with, the conventional linear independence for vectors. We also presented some properties of the linear independence and the orthogonality. We then presented the Gram-Schmidt orthonormalization procedure for a set of linearly independent matrix-valued signals. Although this procedure is similar to the conventional one for vectors, due to the noncommutativity of matrix multiplications and the matrix-valued inner product used in the procedure, it is not a straightforward generalization. We finally defined matrix-valued lattices, where the newly introduced Gram-Schmidt orthogonalization might be applied.
Since it was shown in [2] that the matrix-valued inner product and Orthogonality B provide a complete Karhunen-Loève expansion for matrix-valued signals, which a scalar-valued inner product may not do, it is believed that what was studied in this paper on the matrix-valued inner product for the matrix-valued signal space will have fundamental applications to high-dimensional signal analysis in data science.
As a final note, after this paper was written, it was found that the matrix-valued signal space with the matrix-valued inner product in this paper is related to Hilbert modules; see, for example, [15]-[19]. Interestingly, it was mentioned in [18] that there does not exist any general notion of C*-linear independence due to the existence of zero-divisors. We believe that the linear independence for matrix-valued signals introduced in this paper is novel.
References

[1] X.-G. Xia and B. W. Suter, "Vector-valued wavelets and vector filter banks," IEEE Trans. Signal Processing, vol. 44, pp. 508-518, Mar. 1996.
[2] X.-G. Xia, "Orthonormal matrix valued wavelets and matrix Karhunen-Loève expansion," in Wavelets, Multiwavelets, and Their Applications, Contemporary Math., vol. 216, pp. 159-175, 1998. Also available at https://www.eecis.udel.edu/~xxia/matrix_wavelets.pdf
[3] A. T. Walden and A. Serroukh, "Wavelet analysis of matrix-valued time-series," Proc. R. Soc. Lond. A, vol. 458, pp. 157-179, 2002.
[4] G. Bhatt, B. D. Johnson, and E. Weber, "Orthogonal wavelet frames and vector-valued wavelet transforms," Applied and Computational Harmonic Analysis, vol. 23, no. 2, pp. 215-234, 2007.
[5] Q. Chen and Z. Shi, "Construction and properties of orthogonal matrix-valued wavelets and wavelet packets," Chaos, Solitons and Fractals, vol. 37, pp. 75-86, 2008.
[6] L. Cui, B. Zhai, and T. Zhang, "Existence and design of biorthogonal matrix-valued wavelets," Nonlinear Analysis: Real World Applications, vol. 10, pp. 2679-2687, 2009.
[7] A. S. Antolín and R. A. Zalik, "Matrix-valued wavelets and multiresolution analysis," J. Applied Functional Analysis, vol. 7, no. 1-2, pp. 13-25, 2012.
[8] P. Ginzberg and A. T. Walden, "Matrix-valued and quaternion wavelets," IEEE Trans. Signal Processing, vol. 61, pp. 1357-1367, Mar. 2013.
[9] M. Mittal, P. Manchanda, and A. H. Siddiqi, "Wavelets associated with vector-valued nonuniform multiresolution analysis," Applicable Analysis, vol. 93, no. 1, pp. 84-104, 2014.
[10] L. Cui, N. Zhu, Y. Wang, J. Sun, and Y. Cen, "Existence of matrix-valued multiresolution analysis-based matrix-valued tight wavelet frames," Numerical Functional Analysis and Optimization, vol. 37, no. 9, pp. 1089-1106, 2016.
[11] G. Strang and V. Strela, "Orthogonal multiwavelets with vanishing moments," J. Optical Eng., vol. 33, pp. 2104-2107, Jul. 1994.
[12] G. Plonka, "Necessary and sufficient conditions for orthonormality of scaling vectors," in G. Nürnberger, J. W. Schmidt, and G. Walz (eds.), Multivariate Approximation and Splines, ISNM International Series of Numerical Mathematics, vol. 125, Birkhäuser, Basel, 1997. https://doi.org/10.1007/978-3-0348-8871-4_17
[13] E. J. Kelly and W. L. Root, "A representation of vector-valued random processes," Group Rept. 55-21, revised, MIT Lincoln Laboratory, Apr. 1960.
[14] H. Van Trees, Detection, Estimation, and Modulation Theory, Part I, Wiley, 1968.
[15] I. Kaplansky, "Modules over operator algebras," Amer. J. Math., vol. 75, pp. 839-853, 1953.
[16] A. Bultheel, "Inequalities in Hilbert modules of matrix-valued functions," Proc. Amer. Math. Soc., vol. 85, no. 3, pp. 369-372, July 1982.
[17] E. C. Lance, Hilbert C*-Modules: A Toolkit for Operator Algebraists, London Math. Soc. Lecture Note Ser., vol. 210, Cambridge Univ. Press, 1995.
[18] M. Frank and D. R. Larson, "Modular frames for Hilbert C*-modules and symmetric approximation of frames," arXiv:math/0010115v1 [math.OA], Oct. 2000.
[19] M. Frank and D. R. Larson, "Frames in Hilbert C*-modules and C*-algebras," J. Operator Theory, vol. 48, no. 2, pp. 273-314, Fall 2002.
[20] A. K. Lenstra, H. W. Lenstra, and L. Lovász, "Factoring polynomials with rational coefficients," Mathematische Annalen, vol. 261, no. 4, pp. 515-534, 1982.