We study the rank of sub-matrices arising out of kernel functions,
$F(\pmb{x},\pmb{y}): \mathbb{R}^d \times \mathbb{R}^d \mapsto \mathbb{R}$,
where $\pmb{x},\pmb{y} \in \mathbb{R}^d$ and $F(\pmb{x},\pmb{y})$ is smooth
everywhere except along the diagonal $\pmb{x}=\pmb{y}$. Such kernel functions are
frequently encountered in a wide range of applications such as $N$-body
problems, Green’s functions, integral equations, geostatistics, kriging,
Gaussian processes, etc. A key challenge in working with these kernel
functions is that the associated matrix is large and dense, which makes
standard matrix operations computationally expensive. In this article, we
prove new theorems bounding the numerical rank of
sub-matrices arising out of these kernel functions. Under reasonably mild
assumptions, we prove that certain sub-matrices are rank-deficient in finite
precision. This rank depends on the dimension of the ambient space
and also on the type of interaction between the hyper-cubes containing the
corresponding set of particles. This rank structure can be leveraged to reduce
the computational cost of certain matrix operations such as matrix-vector
products, solving linear systems, etc. We also present numerical results on the
growth of the rank of certain sub-matrices in $1$D, $2$D, $3$D, and $4$D,
which, unsurprisingly, agree with the theoretical results.
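The low-rank phenomenon the abstract describes is easy to observe numerically. The sketch below (an illustrative assumption on my part, not code from the paper) builds the kernel block between two well-separated clusters of points in $1$D and counts how many singular values exceed a relative tolerance; the kernel $1/|x-y|$, the cluster geometry, and the tolerance are all hypothetical choices.

```python
import numpy as np

# Two well-separated 1D clusters of "particles" (illustrative geometry).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)   # sources in [0, 1]
y = rng.uniform(2.0, 3.0, 500)   # targets in [2, 3], separated from the sources

# A kernel that is smooth away from the diagonal x = y, e.g. F(x, y) = 1/|x - y|.
# Since the clusters are disjoint, the block never touches the singularity.
A = 1.0 / np.abs(x[:, None] - y[None, :])

# Numerical rank: number of singular values above tol * largest singular value.
s = np.linalg.svd(A, compute_uv=False)
tol = 1e-12
rank = int(np.sum(s > tol * s[0]))
print(rank, min(A.shape))
```

The printed numerical rank is far smaller than the block size, consistent with the claim that such off-diagonal sub-matrices are rank-deficient in finite precision.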
