In probability theory and statistics, complex random vectors are a generalization of real random vectors to complex numbers, i.e. the possible values a complex random vector may take are complex vectors. A complex random vector can always be considered as a pair of real random vectors: its real and imaginary parts.
Some concepts of real random vectors have a straightforward generalization to complex random vectors, e.g. the definition of the mean of a complex random vector. Other concepts are unique to complex random vectors.
Applications of complex random vectors are found in digital signal processing.
Definition
A complex random vector {\displaystyle \mathbf {Z} =(Z_{1},...,Z_{n})^{T}} on the probability space {\displaystyle (\Omega ,{\mathcal {F}},P)} is a function {\displaystyle \mathbf {Z} \colon \Omega \rightarrow \mathbb {C} ^{n}} such that the vector {\displaystyle (\Re {(Z_{1})},\Im {(Z_{1})},...,\Re {(Z_{n})},\Im {(Z_{n})})^{T}} is a real random vector on {\displaystyle (\Omega ,{\mathcal {F}},P)}.[1]: p.292
Expectation
As in the real case, the expectation (also called expectation value) of a complex random vector is taken component-wise.[1]: p.293
{\displaystyle \operatorname {E} [\mathbf {Z} ]=(\operatorname {E} [Z_{1}],\ldots ,\operatorname {E} [Z_{n}])^{T}}
Covariance matrix and pseudo-covariance matrix
The covariance matrix {\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }} contains the covariances between all pairs of components. The covariance matrix of an {\displaystyle n\times 1} random vector is an {\displaystyle n\times n} matrix whose {\displaystyle (i,j)}-th element is the covariance between the i-th and the j-th random variables. It is a Hermitian matrix.[1]: p.293
{\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }=\operatorname {Cov} [\mathbf {Z} ,\mathbf {Z} ]=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ]){(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ])}^{H}]=\operatorname {E} [\mathbf {Z} \mathbf {Z} ^{H}]-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {Z} ^{H}]={\begin{bmatrix}\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(Z_{1}-\operatorname {E} [Z_{1}])}}]&\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(Z_{2}-\operatorname {E} [Z_{2}])}}]&\cdots &\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(Z_{n}-\operatorname {E} [Z_{n}])}}]\\\\\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(Z_{1}-\operatorname {E} [Z_{1}])}}]&\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(Z_{2}-\operatorname {E} [Z_{2}])}}]&\cdots &\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(Z_{n}-\operatorname {E} [Z_{n}])}}]\\\\\vdots &\vdots &\ddots &\vdots \\\\\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(Z_{1}-\operatorname {E} [Z_{1}])}}]&\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(Z_{2}-\operatorname {E} [Z_{2}])}}]&\cdots &\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(Z_{n}-\operatorname {E} [Z_{n}])}}]\end{bmatrix}}}
The pseudo-covariance matrix (also called relation matrix) is defined as follows. In contrast to the covariance matrix defined above, Hermitian transposition is replaced by (plain) transposition in the definition.
{\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {Z} }=\operatorname {Cov} [\mathbf {Z} ,{\overline {\mathbf {Z} }}]=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ]){(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ])}^{T}]=\operatorname {E} [\mathbf {Z} \mathbf {Z} ^{T}]-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {Z} ^{T}]={\begin{bmatrix}\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(Z_{1}-\operatorname {E} [Z_{1}])]&\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(Z_{2}-\operatorname {E} [Z_{2}])]&\cdots &\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(Z_{n}-\operatorname {E} [Z_{n}])]\\\\\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(Z_{1}-\operatorname {E} [Z_{1}])]&\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(Z_{2}-\operatorname {E} [Z_{2}])]&\cdots &\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(Z_{n}-\operatorname {E} [Z_{n}])]\\\\\vdots &\vdots &\ddots &\vdots \\\\\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(Z_{1}-\operatorname {E} [Z_{1}])]&\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(Z_{2}-\operatorname {E} [Z_{2}])]&\cdots &\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(Z_{n}-\operatorname {E} [Z_{n}])]\end{bmatrix}}}
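Both matrices can be estimated from samples. The following NumPy sketch (not from the cited reference; the dimension, sample count, and the correlation between real and imaginary parts are arbitrary choices) estimates them and checks the structural properties stated above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 100_000  # dimension and number of samples (arbitrary choices)

# Samples of a complex random vector Z (one column per sample); the real and
# imaginary parts are made correlated so that the pseudo-covariance is non-zero.
X = rng.standard_normal((n, N))
Y = 0.5 * X + rng.standard_normal((n, N))
Z = X + 1j * Y

Zc = Z - Z.mean(axis=1, keepdims=True)  # centered: Z - E[Z]

K = (Zc @ Zc.conj().T) / N  # covariance matrix  E[(Z - EZ)(Z - EZ)^H]
J = (Zc @ Zc.T) / N         # pseudo-covariance  E[(Z - EZ)(Z - EZ)^T]

assert np.allclose(K, K.conj().T)  # K is Hermitian
assert np.allclose(J, J.T)         # J is (complex) symmetric
```

The two assertions mirror the definitions: conjugating the second factor makes the estimate Hermitian, while the plain transpose makes it symmetric.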
Cross-covariance matrix and pseudo-cross-covariance matrix
The cross-covariance matrix between two complex random vectors {\displaystyle \mathbf {Z} ,\mathbf {W} } is defined as:
{\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {W} }=\operatorname {Cov} [\mathbf {Z} ,\mathbf {W} ]=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ]){(\mathbf {W} -\operatorname {E} [\mathbf {W} ])}^{H}]=\operatorname {E} [\mathbf {Z} \mathbf {W} ^{H}]-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {W} ^{H}]={\begin{bmatrix}\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(W_{1}-\operatorname {E} [W_{1}])}}]&\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(W_{2}-\operatorname {E} [W_{2}])}}]&\cdots &\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(W_{n}-\operatorname {E} [W_{n}])}}]\\\\\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(W_{1}-\operatorname {E} [W_{1}])}}]&\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(W_{2}-\operatorname {E} [W_{2}])}}]&\cdots &\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(W_{n}-\operatorname {E} [W_{n}])}}]\\\\\vdots &\vdots &\ddots &\vdots \\\\\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(W_{1}-\operatorname {E} [W_{1}])}}]&\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(W_{2}-\operatorname {E} [W_{2}])}}]&\cdots &\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(W_{n}-\operatorname {E} [W_{n}])}}]\end{bmatrix}}}
And the pseudo-cross-covariance matrix is defined as:
{\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {W} }=\operatorname {Cov} [\mathbf {Z} ,{\overline {\mathbf {W} }}]=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ]){(\mathbf {W} -\operatorname {E} [\mathbf {W} ])}^{T}]=\operatorname {E} [\mathbf {Z} \mathbf {W} ^{T}]-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {W} ^{T}]={\begin{bmatrix}\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(W_{1}-\operatorname {E} [W_{1}])]&\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(W_{2}-\operatorname {E} [W_{2}])]&\cdots &\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(W_{n}-\operatorname {E} [W_{n}])]\\\\\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(W_{1}-\operatorname {E} [W_{1}])]&\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(W_{2}-\operatorname {E} [W_{2}])]&\cdots &\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(W_{n}-\operatorname {E} [W_{n}])]\\\\\vdots &\vdots &\ddots &\vdots \\\\\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(W_{1}-\operatorname {E} [W_{1}])]&\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(W_{2}-\operatorname {E} [W_{2}])]&\cdots &\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(W_{n}-\operatorname {E} [W_{n}])]\end{bmatrix}}}
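As an illustrative sketch (the coupling between the two vectors below is an arbitrary choice, not from the source), both cross-matrices can likewise be estimated from samples:

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 2, 100_000  # arbitrary dimension and sample count

# Z has i.i.d. standard-normal real and imaginary parts, so E[|Z_i|^2] = 2.
Z = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))
V = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))
W = 2 * Z + V  # W correlated with Z (arbitrary coupling for illustration)

Zc = Z - Z.mean(axis=1, keepdims=True)
Wc = W - W.mean(axis=1, keepdims=True)

K_zw = (Zc @ Wc.conj().T) / N  # cross-covariance        E[(Z - EZ)(W - EW)^H]
J_zw = (Zc @ Wc.T) / N         # pseudo-cross-covariance E[(Z - EZ)(W - EW)^T]

# Here E[Z_i conj(W_i)] = 2 E[|Z_i|^2] = 4, while the pseudo version vanishes
# because the real and imaginary parts of Z are i.i.d.
assert np.allclose(np.diag(K_zw), 4, atol=0.1)
assert np.all(np.abs(J_zw) < 0.1)
```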
Circular symmetry
A complex random vector {\displaystyle \mathbf {Z} } is called circularly symmetric if for any deterministic {\displaystyle \phi \in [-\pi ,\pi [} the distribution of {\displaystyle e^{\mathrm {i} \phi }\mathbf {Z} } equals the distribution of {\displaystyle \mathbf {Z} }.[2]: pp.500–501
The expectation of a circularly symmetric complex random vector is either zero or not defined.[2]: p.500
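The definition can be checked empirically for the standard example of a circularly symmetric variable, a complex Gaussian with i.i.d. real and imaginary parts. This sketch (the rotation angle is an arbitrary choice) compares a few moments of the original and the rotated variable:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Scalar circularly symmetric complex Gaussian: i.i.d. real and imaginary parts.
Z = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

phi = 0.7  # arbitrary deterministic rotation angle
W = np.exp(1j * phi) * Z

# If Z is circularly symmetric, e^{i phi} Z has the same distribution as Z.
# Compare a few empirical moments (equal up to sampling/rounding error).
assert abs(Z.mean()) < 0.02 and abs(W.mean()) < 0.02            # mean ~ 0
assert abs(np.mean(np.abs(Z)**2) - np.mean(np.abs(W)**2)) < 1e-6  # |.|^2 preserved
assert abs(np.mean(Z**2)) < 0.02 and abs(np.mean(W**2)) < 0.02  # pseudo-variance ~ 0
```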
Proper complex random vectors
Definition
A complex random vector {\displaystyle \mathbf {Z} } is called proper if the following three conditions are all satisfied:[1]: p.293
{\displaystyle \operatorname {E} [\mathbf {Z} ]=0} (zero mean)
{\displaystyle \operatorname {Var} [Z_{1}]<\infty ,\ldots ,\operatorname {Var} [Z_{n}]<\infty } (finite variance)
{\displaystyle \operatorname {E} [\mathbf {Z} \mathbf {Z} ^{T}]=0} (vanishing pseudo-covariance)
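A numerical sketch (illustrative only; dimensions and sample count are arbitrary) that checks the three conditions for a zero-mean circularly symmetric complex Gaussian vector:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 2, 200_000  # arbitrary dimension and sample count

# Zero-mean circularly symmetric Gaussian vector with E[|Z_i|^2] = 1 per component.
Z = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)

mean = Z.mean(axis=1)                                   # empirical E[Z]
var = np.mean(np.abs(Z - mean[:, None]) ** 2, axis=1)   # empirical variances
pseudo = (Z @ Z.T) / N                                  # empirical E[Z Z^T]

assert np.all(np.abs(mean) < 0.02)    # condition 1: E[Z] = 0
assert np.all(np.isfinite(var))       # condition 2: finite variances
assert np.all(np.abs(pseudo) < 0.02)  # condition 3: E[Z Z^T] = 0
```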
Properties
A complex random vector {\displaystyle \mathbf {Z} } is proper if, and only if, for all (deterministic) vectors {\displaystyle \mathbf {c} \in \mathbb {C} ^{n}} the complex random variable {\displaystyle \mathbf {c} ^{T}\mathbf {Z} } is proper.[1]: p.293
Linear transformations of proper complex random vectors are proper, i.e. if {\displaystyle \mathbf {Z} } is a proper random vector with {\displaystyle n} components and {\displaystyle A} is a deterministic {\displaystyle m\times n} matrix, then the complex random vector {\displaystyle A\mathbf {Z} } is also proper.[1]: p.295
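This closure property can be illustrated numerically (a sketch with arbitrary dimensions and an arbitrary matrix; the tolerances only allow for sampling error):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, N = 3, 2, 200_000  # arbitrary dimensions and sample count

# Proper input: zero-mean circularly symmetric Gaussian components.
Z = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))  # deterministic m x n matrix

W = A @ Z  # linear transformation of a proper random vector

# W inherits properness: E[W] = A E[Z] = 0 and E[W W^T] = A E[Z Z^T] A^T = 0.
assert np.all(np.abs(W.mean(axis=1)) < 0.1)
assert np.all(np.abs((W @ W.T) / N) < 0.1)
```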
Every circularly symmetric complex random vector with finite variance of all its components is proper.[1]: p.295
Characteristic function
The characteristic function of a complex random vector {\displaystyle \mathbf {Z} } with {\displaystyle n} components is a function {\displaystyle \mathbb {C} ^{n}\to \mathbb {C} } defined by:[1]: p.295
{\displaystyle \varphi _{\mathbf {Z} }(\mathbf {\omega } )=\operatorname {E} \left[e^{i\Re {(\mathbf {\omega } ^{H}\mathbf {Z} )}}\right]=\operatorname {E} \left[e^{i(\Re {(\omega _{1})}\Re {(Z_{1})}+\Im {(\omega _{1})}\Im {(Z_{1})}+\ldots +\Re {(\omega _{n})}\Re {(Z_{n})}+\Im {(\omega _{n})}\Im {(Z_{n})})}\right]}
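For a scalar circularly symmetric Gaussian {\displaystyle Z} with {\displaystyle \operatorname {E} [|Z|^{2}]=1}, the real part of {\displaystyle {\overline {\omega }}Z} is real Gaussian with variance {\displaystyle |\omega |^{2}/2}, so the characteristic function evaluates to {\displaystyle e^{-|\omega |^{2}/4}}. A Monte Carlo sketch (the test point is an arbitrary choice) confirming this:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 500_000

# Scalar circularly symmetric Gaussian with E[|Z|^2] = 1.
Z = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def char_fn(Z, w):
    """Empirical characteristic function E[exp(i * Re(conj(w) * Z))]."""
    return np.mean(np.exp(1j * np.real(np.conj(w) * Z)))

w = 1.0 + 2.0j  # arbitrary test point
# Closed form for this particular Z: exp(-|w|^2 / 4).
assert abs(char_fn(Z, w) - np.exp(-abs(w) ** 2 / 4)) < 0.01
```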
References
[1] Lapidoth, Amos, A Foundation in Digital Communication, Cambridge University Press, 2009.
[2] Tse, David; Viswanath, Pramod, Fundamentals of Wireless Communication, Cambridge University Press, 2005.