Many combinatorial problems lie in the well-known complexity class NP, and their optimization counterparts are important in many practical settings. These problems are usually solved with full knowledge of the input, optimizing for that particular input. In practice, however, uncertainty in the input data is a common phenomenon and is usually not covered by the optimization versions of NP problems. One concept that models uncertainty in the input data is \textit{recoverable robustness}. In this setting, a solution is computed for the base scenario such that it can be recovered to a good solution once the uncertainty is revealed. That is, a solution $\texttt{s}_0$ for the base scenario $\textsf{S}_0$ is computed such that, for every possible scenario in the scenario set $\textsf{S}$, a corresponding solution $\texttt{s}$ exists with certainty. In this paper, we introduce a specific recoverable robustness concept, the Hamming distance recoverable robust problem. Here, we have to compute solutions $\texttt{s}_0$ and $\texttt{s}$ such that $\texttt{s}_0$ and $\texttt{s}$ differ in at most $\kappa$ elements. This means that one can recover from adverse scenarios by switching to another solution that is not far from the first one. We investigate the complexity of Hamming distance recoverable robust versions of optimization problems typically found in NP, for different types of scenario encodings. Their complexity lies mainly at the lower levels of the polynomial hierarchy. The main contribution of this paper is that recoverable robust problems with compression-encoded scenarios and $m \in \mathbb{N}$ recoveries are $\Sigma^P_{2m+1}$-complete.
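To make the recovery constraint concrete, here is a minimal sketch, assuming solutions are represented as sets of elements (e.g., the edge set of a spanning tree) and the Hamming distance between two solutions is the size of their symmetric difference; the function names and the edge-set example are illustrative, not taken from the paper:

```python
def hamming_distance(sol_a, sol_b):
    """Hamming distance between two solutions viewed as element sets:
    the number of elements contained in exactly one of the two."""
    return len(set(sol_a) ^ set(sol_b))

def is_recoverable(s0, s, kappa):
    """The Hamming distance recoverability constraint: the scenario
    solution s is a valid recovery of the base solution s0 if the two
    differ in at most kappa elements."""
    return hamming_distance(s0, s) <= kappa

# Two spanning-tree edge sets that swap a single edge,
# hence differ in two elements of the edge universe.
s0 = {("a", "b"), ("b", "c"), ("c", "d")}
s  = {("a", "b"), ("b", "c"), ("b", "d")}
print(is_recoverable(s0, s, kappa=2))  # True: distance is 2
```

Note that exchanging one edge for another changes two elements of the solution set, which is why the distance here is 2 rather than 1.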
