On Optimal Codes for Additive Noise Channels
Communications, Coding Theory
One of the fundamental goals of coding theory is to find optimal codes, in the sense of achieving the largest probability of correct decoding. Most of the foundational works on optimal codes rest on three assumptions. First, optimality is defined in terms of a code’s rate and minimum Hamming distance rather than its probability of correct decoding; the problem is that the former are not always good indicators of the latter. Second, for practicality, only optimal codes with some kind of algebraic structure are considered, the most common being linearity. While linear codes have practical advantages, it is known that the best codes are often not linear. Third, codes are usually proven optimal only for memoryless channels, yet many real-life channels exhibit memory. Attempts to extend these results often rely on interleaving, a technique that renders some channels with memory equivalent to a memoryless one from the decoder’s perspective. However, this comes with two disadvantages: we fail to exploit the channel’s memory, and we add latency to the communication system. In this thesis, we seek optimal codes under milder conditions than those above. Specifically, we describe a class of optimal q-ary block codes over additive noise channels, generalizing a class of optimal codes due to Hamada. We prove these codes are optimal using our novel method of analyzing the probability of correct decoding of block codes, based on so-called error-indexed decoder regions. Our method also allows us to prove that any such code must be either linear or a coset of a linear code.
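As a toy illustration of the quantity being optimized (this is a standard textbook computation, not the thesis's method): over an additive noise channel, y = x + e, a code with a fixed decoder decodes correctly exactly when the noise pattern e falls in the set of correctable errors, so the probability of correct decoding is the total noise probability of that set and is the same for every transmitted codeword. The sketch below computes it for the binary [3,1] repetition code under minimum-distance decoding, with i.i.d. bit flips of probability p as the (memoryless) additive noise; all names here are illustrative.

```python
from itertools import product

def prob_correct(correctable, noise_prob):
    """P(correct decoding) = total probability of the correctable noise patterns."""
    return sum(noise_prob(e) for e in correctable)

# Binary [3,1] repetition code: minimum-distance decoding corrects exactly
# the error patterns of Hamming weight <= 1 (the coset leaders of the code).
n = 3
correctable = [e for e in product((0, 1), repeat=n) if sum(e) <= 1]

# Memoryless additive noise over GF(2): each bit flips independently with
# probability p, i.e., the binary symmetric channel.
p = 0.1
noise_prob = lambda e: (p ** sum(e)) * ((1 - p) ** (n - sum(e)))

pc = prob_correct(correctable, noise_prob)
# Closed form for this code: (1-p)^3 + 3*p*(1-p)^2
```

For channels with memory, only the `noise_prob` function changes: the decoder-region view separates the combinatorics of which errors are correctable from the statistics of the noise process.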