Learning mappings between infinite-dimensional function spaces has achieved empirical success in many areas of machine learning, such as generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning. In this paper, we investigate the statistical limits of learning a Hilbert-Schmidt operator between two infinite-dimensional Sobolev reproducing kernel Hilbert spaces. We establish an information-theoretic lower bound in terms of the Sobolev Hilbert-Schmidt norm and show that a regularized estimator that learns the spectral components below the bias contour and discards those above the variance contour achieves an optimal learning rate. Meanwhile, the spectral components between the bias and variance contours allow flexibility in the design of computationally feasible machine learning algorithms. Building on this observation, we develop a multilevel kernel operator learning algorithm that is optimal when learning linear operators between infinite-dimensional function spaces.
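The bias-variance trade-off described above can be illustrated with a minimal finite-dimensional sketch: estimate a linear operator with decaying singular values from noisy input-output pairs, then keep only the leading spectral components. All names, dimensions, and the truncation level below are illustrative assumptions, not the paper's actual algorithm or rates.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50    # truncated coefficient dimension standing in for a function space
n = 200   # number of (input, output) function pairs

# Ground-truth operator with polynomially decaying singular values,
# a finite-dimensional stand-in for a Hilbert-Schmidt operator.
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
s = np.arange(1, d + 1, dtype=float) ** -1.5
T_true = U @ np.diag(s) @ V.T

X = rng.standard_normal((n, d))                       # input coefficients
Y = X @ T_true.T + 0.1 * rng.standard_normal((n, d))  # noisy outputs

# Unregularized least-squares estimate of the operator.
T_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Spectral truncation: retain the top-k components (low bias, controlled
# variance) and discard the noise-dominated tail.
k = 10  # assumed truncation level between the bias and variance contours
Uh, sh, Vh = np.linalg.svd(T_hat)
T_trunc = Uh[:, :k] @ np.diag(sh[:k]) @ Vh[:k, :]

err_full = np.linalg.norm(T_hat - T_true)    # variance-dominated error
err_trunc = np.linalg.norm(T_trunc - T_true) # reduced by truncation
```

In this toy setting the truncated estimator typically has smaller error than the full least-squares fit, mirroring the role of the regularization in the abstract; the multilevel algorithm itself is more elaborate than this sketch.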