Random linear codes are a mainstay of coding theory and are used to show the existence of codes with the best known, or even near-optimal, trade-offs in many noise models. However, they have little structure beyond linearity and are not amenable to tractable error-correction algorithms.

    In this work, we prove a general derandomization result applicable to random linear codes. Namely, in settings where the coding-theoretic property of interest is “local” (in the sense that it is violated by some bad configuration involving a small number of vectors; code distance and list-decodability are prominent examples), random linear codes (RLCs) can be replaced, with essentially no loss in parameters, by a significantly derandomized variant. Specifically, instead of randomly sampling coordinates of the (long) Hadamard code (an equivalent way of describing an RLC), one can randomly sample coordinates of any code with low bias. Over larger alphabets, the low-bias requirement can be weakened to large distance. Moreover, large distance suffices even over small alphabets to match the currently best known bounds for RLC list-decodability.
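
    The sketch below (not from the paper; the function names, the choice of parameters, and sampling with replacement are illustrative assumptions) shows the equivalence the abstract alludes to: randomly sampling coordinates of the binary Hadamard code produces a uniformly random generator matrix, i.e., an RLC. The derandomization described above amounts to replacing the Hadamard code in `puncture` with a shorter low-bias (or, over large alphabets, large-distance) "mother" code.

```python
# Minimal illustrative sketch: an RLC as a random puncturing of the Hadamard code.
import itertools
import numpy as np

def hadamard_generator(k: int) -> np.ndarray:
    """Generator matrix of the binary Hadamard code of dimension k.
    Its 2^k columns enumerate every vector in F_2^k, so each codeword is
    the full evaluation table of a linear map x -> <x, a>."""
    cols = list(itertools.product([0, 1], repeat=k))
    return np.array(cols, dtype=np.uint8).T          # shape (k, 2^k)

def puncture(G: np.ndarray, n: int, rng: np.random.Generator) -> np.ndarray:
    """Randomly sample n coordinates (columns) of the mother code G,
    with replacement -- a random puncturing down to block length n."""
    idx = rng.integers(0, G.shape[1], size=n)
    return G[:, idx]

def sample_rlc_via_hadamard(k: int, n: int, seed: int = 0) -> np.ndarray:
    """Puncturing the Hadamard code at n random coordinates yields a
    uniformly random k x n generator matrix over F_2, i.e., an RLC."""
    rng = np.random.default_rng(seed)
    return puncture(hadamard_generator(k), n, rng)

if __name__ == "__main__":
    G = sample_rlc_via_hadamard(k=8, n=32)
    print(G.shape)  # (8, 32): generator of a random [32, 8]_2 linear code
```
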

    In particular, by virtue of our results, all current (and future) achievability bounds for list-decodability of random linear codes automatically extend to puncturings of any low-bias (or large-alphabet) “mother” code. We also show that the punctured code emulates the behavior of RLCs on stochastic channels, thus derandomizing RLCs in the context of achieving Shannon capacity as well. We therefore obtain a randomness-efficient way of sampling codes that achieve capacity in both worst-case and stochastic settings, and that can moreover inherit algebraic or other algorithmically useful structural properties of the mother code.


