We provide rigorous theoretical bounds for Anderson acceleration (AA) that
    allow for efficient approximate calculations of the residual, which reduce
    computational time and memory storage while maintaining convergence.
    Specifically, we propose a reduced variant of AA that projects the
    least-squares problem used to compute the Anderson mixing onto a subspace of
    reduced dimension. The dimensionality of this subspace adapts dynamically at each
    iteration as prescribed by computable heuristic quantities guided by the
    theoretical error bounds. Using these heuristics to monitor the error
    introduced by approximate calculations, combined with a check on the monotonicity
    of the convergence, ensures that the numerical scheme converges within a
    prescribed tolerance threshold on the residual. We numerically assess the
    performance of AA with approximate calculations on: (i) linear deterministic
    fixed-point iterations arising from Richardson’s scheme for solving linear
    systems with open-source benchmark matrices with various preconditioners and
    (ii) non-linear deterministic fixed-point iterations arising from non-linear
    time-dependent Boltzmann equations.
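
    To make the setting concrete, the following is a minimal sketch of standard
    (full, unprojected) Anderson acceleration applied to a linear fixed-point map,
    the kind of iteration Richardson's scheme produces. This is an illustrative
    implementation of the classical algorithm, not the reduced variant proposed in
    the paper; the window size `m`, the test map `g`, and all variable names are
    assumptions for illustration.

    ```python
    import numpy as np

    def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
        """Anderson acceleration for the fixed-point iteration x = g(x).

        At each step, solves the unconstrained (difference-form) least-squares
        problem min_gamma ||f_k - F_k gamma||, where f_k = g(x_k) - x_k is the
        current residual and F_k stacks the last m residual differences, then
        forms the mixed update x_{k+1} = g(x_k) - (X_k + F_k) gamma.
        """
        x = np.asarray(x0, dtype=float)
        dX, dF = [], []                     # histories of iterate/residual differences
        x_prev = f_prev = None
        for _ in range(max_iter):
            gx = g(x)
            f = gx - x                      # residual of the fixed-point map
            if np.linalg.norm(f) < tol:
                return x
            if f_prev is not None:
                dX.append(x - x_prev)
                dF.append(f - f_prev)
                if len(dF) > m:             # keep a sliding window of size m
                    dX.pop(0)
                    dF.pop(0)
            x_prev, f_prev = x, f
            if dF:
                Fk = np.column_stack(dF)
                Xk = np.column_stack(dX)
                # least-squares mixing coefficients for the Anderson update
                gamma, *_ = np.linalg.lstsq(Fk, f, rcond=None)
                x = gx - (Xk + Fk) @ gamma
            else:
                x = gx                      # plain fixed-point step to start
        return x

    # Hypothetical test problem: a contractive affine map g(x) = A x + b,
    # whose fixed point is x* = (I - A)^{-1} b.
    A = np.array([[0.5, 0.1], [0.2, 0.4]])
    b = np.array([1.0, 2.0])
    x_star = np.linalg.solve(np.eye(2) - A, b)
    x_aa = anderson(lambda x: A @ x + b, np.zeros(2))
    ```

    The paper's reduced variant replaces the least-squares solve above with one
    projected onto a lower-dimensional subspace whose size is chosen adaptively at
    each iteration from the computable error heuristics.
    
    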
