Articles for CIT
https://nowpublishers.com/feed/CIT

http://nowpublishers.com/article/Details/CIT-119
Rank-Metric Codes and Their Applications
<h3>Abstract</h3>The rank metric measures the distance between two matrices
by the rank of their difference. Codes designed for the
rank metric have attracted considerable attention in recent
years, reinforced by network coding and further motivated
by a variety of applications. In code-based cryptography,
the hardness of the corresponding generic decoding problem
can lead to systems with reduced public-key size. In
distributed data storage, codes in the rank metric have been
used repeatedly to construct codes with locality, and in
coded caching, they have been employed for the placement
of coded symbols. This survey gives a general introduction
to rank-metric codes, explains their most important applications,
and highlights their relevance to these areas of
research.<h3>Suggested Citation</h3>Hannes Bartz, Lukas Holzbaur, Hedongliang Liu, Sven Puchinger, Julian Renner and Antonia Wachter-Zeh (2022), "Rank-Metric Codes and Their Applications", Foundations and Trends® in Communications and Information Theory: Vol. 19: No. 3, pp 390-546. http://dx.doi.org/10.1561/0100000119
Mon, 02 May 2022 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-122
Common Information, Noise Stability, and Their Extensions
<h3>Abstract</h3>Common information is ubiquitous in information theory
and related areas such as theoretical computer science and
discrete probability. However, because there are multiple
notions of common information, a unified understanding
of the deep interconnections between them is lacking. This
monograph seeks to fill this gap by leveraging a small set
of mathematical techniques that are applicable across seemingly
disparate problems.<p>In Part I, we review the operational tasks and properties
associated with Wyner’s and Gács–Körner–Witsenhausen’s
(GKW’s) common information. In Part II, we discuss extensions
of the former from the perspective of distributed source
simulation. This includes the Rényi common information
which forms a bridge between Wyner’s common information
and the exact common information. Via a surprising
equivalence between the Rényi common information of order
∞ and the exact common information, we demonstrate
the existence of a joint source in which the exact common
information strictly exceeds Wyner’s common information.
Other closely related topics discussed in Part II include the
channel synthesis problem and the connection of Wyner’s
and exact common information to the nonnegative rank of
matrices.</p><p>In Part III, recognizing that GKW’s common information is
zero for most non-degenerate sources, we examine it with a
more refined lens via the Non-Interactive Correlation Distillation
(NICD) problem in which we quantify the agreement
probability of extracted bits from a bivariate source. We
extend this to the noise stability problem which includes
as special cases the k-user NICD and q-stability problems.
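The two-party agreement probability at the heart of NICD can be computed exactly for toy examples. In the sketch below (the functions and the correlation value are illustrative choices, not taken from the monograph), each coordinate pair of the bivariate binary source agrees with probability (1 + ρ)/2, and a dictator function is compared against a 3-bit majority:

```python
from itertools import product

def agreement(f, g, n, rho):
    """Exact P(f(X) = g(Y)) for n i.i.d. pairs of rho-correlated
    uniform +/-1 bits: each coordinate pair agrees with probability
    (1 + rho)/2, independently across coordinates."""
    p_eq = (1 + rho) / 2
    total = 0.0
    for x in product((-1, 1), repeat=n):
        for y in product((-1, 1), repeat=n):
            p = 1.0
            for xi, yi in zip(x, y):
                p *= (p_eq if xi == yi else 1 - p_eq) / 2  # /2: uniform X marginal
            if f(x) == g(y):
                total += p
    return total

def dictator(x):
    return x[0]

def majority(x):
    return 1 if sum(x) > 0 else -1

rho = 0.6
a_dict = agreement(dictator, dictator, 3, rho)
a_maj = agreement(majority, majority, 3, rho)
assert abs(a_dict - (1 + rho) / 2) < 1e-12  # dictator: exactly (1 + rho)/2
assert a_maj < a_dict                       # dictator beats 3-bit majority here
```

Consistent with the classical fact that dictator functions maximize two-party agreement for positively correlated sources, the dictator's agreement probability (1 + ρ)/2 exceeds that of the majority function in this example.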
This allows us to seamlessly transition to discussing their
connections to various conjectures in information theory
and discrete probability, such as the Courtade–Kumar, Li–Médard and Mossel–O'Donnell conjectures. Finally, we consider
functional inequalities (e.g., the hypercontractivity
and Brascamp–Lieb inequalities), which constitute a further
generalization of the noise stability problem in which
the Boolean functions therein are replaced by nonnegative
functions. We demonstrate that the key ideas behind the
proofs in Part III can be presented in a pedagogically coherent
manner and unified via information-theoretic and
Fourier-analytic methods.</p><h3>Suggested Citation</h3>Lei Yu and Vincent Y. F. Tan (2022), "Common Information, Noise Stability, and Their Extensions", Foundations and Trends® in Communications and Information Theory: Vol. 19: No. 2, pp 107-389. http://dx.doi.org/10.1561/0100000122
Thu, 28 Apr 2022 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-117
Information-Theoretic Foundations of DNA Data Storage
<h3>Abstract</h3>Due to its longevity and enormous information density, DNA
is an attractive medium for archival data storage. Natural
DNA more than 700,000 years old has been recovered, and
about 5 grams of DNA can in principle hold a zettabyte of
digital information, orders of magnitude more than what is
achieved on conventional storage media. Thanks to rapid
technological advances, DNA storage is becoming practically
feasible, as demonstrated by a number of experimental storage
systems, making it a promising solution for our society’s
increasing need for data storage.<p>While in living things, DNA molecules can consist of millions
of nucleotides, due to technological constraints, in practice,
data is stored on many short DNA molecules, which are preserved
in a DNA pool and cannot be spatially ordered. Moreover,
imperfections in sequencing, synthesis, and handling,
as well as DNA decay during storage, introduce random
noise into the system, making the task of reliably storing
and retrieving information in DNA challenging.</p><p>This unique setup raises a natural information-theoretic
question: how much information can be reliably stored on
and reconstructed from millions of short noisy sequences?
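As a toy illustration of this setup (the strand contents, read count, and error rate below are arbitrary choices, not from the monograph), the three effects just described, namely short unordered molecules, random sampling of reads, and noise, can be mimicked in a few lines:

```python
import random

def dna_channel(strands, n_reads, sub_prob, rng):
    """Toy DNA storage channel: the stored strands lose their order
    (shuffle), reads are random draws from the pool, and each position
    of a read is replaced by a uniformly random base with prob. sub_prob."""
    pool = strands[:]
    rng.shuffle(pool)                      # no spatial order survives storage
    reads = []
    for _ in range(n_reads):
        strand = rng.choice(pool)          # random sampling from the pool
        noisy = "".join(
            rng.choice("ACGT") if rng.random() < sub_prob else base
            for base in strand
        )
        reads.append(noisy)
    return reads

rng = random.Random(0)
strands = ["ACGTACGT", "TTGGCCAA", "GATTACAG"]
reads = dna_channel(strands, n_reads=10, sub_prob=0.02, rng=rng)
assert len(reads) == 10 and all(len(r) == 8 for r in reads)
```

The decoder sees only the multiset of noisy reads, with no indication of which stored strand each read came from, which is exactly what makes the reconstruction question nontrivial.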
The goal of this monograph is to address this question by
discussing the fundamental limits of storing information on
DNA. Motivated by current technological constraints on
DNA synthesis and sequencing, we propose a probabilistic
channel model that captures three key distinctive aspects
of the DNA storage systems: (1) the data is written onto
many short DNA molecules that are stored in an unordered
fashion; (2) the molecules are corrupted by noise; and (3) the
data is read by randomly sampling from the DNA pool. Our
goal is to investigate the impact of each of these key aspects
on the capacity of the DNA storage system. Rather than
focusing on coding-theoretic considerations and computationally
efficient encoding and decoding, we aim to build an
information-theoretic foundation for the analysis of these
channels, developing tools for achievability and converse
arguments.</p><h3>Suggested Citation</h3>Ilan Shomorony and Reinhard Heckel (2022), "Information-Theoretic Foundations of DNA Data Storage", Foundations and Trends® in Communications and Information Theory: Vol. 19: No. 1, pp 1-106. http://dx.doi.org/10.1561/0100000117
Thu, 24 Feb 2022 00:00:00 +0100

http://nowpublishers.com/article/Details/CIT-125
Asymptotic Frame Theory for Analog Coding
<h3>Abstract</h3>Over-complete systems of vectors, or, in short, frames, play
the role of analog codes in many areas of communication
and signal processing. To name a few: spreading sequences
for code-division multiple access (CDMA), over-complete
representations for multiple-description (MD) source coding,
space-time codes, sensing matrices for compressed sensing
(CS), and more recently, codes for unreliable distributed
computation. In this survey paper, we observe an information-theoretic
random-like behavior of frame subsets. Such sub-frames
arise in setups involving erasures (communication),
random user activity (multiple access), or sparsity (signal
processing), in addition to channel or quantization noise.
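A concrete toy example makes the notion of a sub-frame tangible. The "Mercedes-Benz" frame below is a standard small ETF in R^2 (an illustrative choice, not one analyzed in the survey); for any two unit-norm frame vectors with inner product c, the sub-frame Gram matrix is [[1, c], [c, 1]] with eigenvalues 1 ± |c|, so equiangularity makes every size-2 sub-frame spectrally identical:

```python
import math
from itertools import combinations

# A toy equiangular tight frame (ETF) in R^2: three unit vectors at
# 120-degree spacing ("Mercedes-Benz" frame); every pair of vectors
# has inner product of magnitude 1/2.
frame = [(math.cos(t), math.sin(t))
         for t in (math.pi / 2,
                   math.pi / 2 + 2 * math.pi / 3,
                   math.pi / 2 + 4 * math.pi / 3)]

for f, g in combinations(frame, 2):
    c = f[0] * g[0] + f[1] * g[1]        # inner product of the pair
    # Gram matrix of the sub-frame {f, g} is [[1, c], [c, 1]],
    # whose eigenvalues are 1 - |c| and 1 + |c|.
    eigs = sorted((1 - abs(c), 1 + abs(c)))
    assert abs(eigs[0] - 0.5) < 1e-9 and abs(eigs[1] - 1.5) < 1e-9
```

Every two-vector sub-frame of this ETF has the same spectrum {0.5, 1.5}, a small-scale hint at why eigenvalue statistics of random sub-frames are the natural object of study.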
The goodness of a frame as an analog code is a function
of the eigenvalues of a sub-frame, averaged over all sub-frames
(e.g., the harmonic mean of the eigenvalues relates to the
least-squares estimation error, the geometric mean to the
Shannon transform, and the condition number to the restricted
isometry property).<p>Within the highly symmetric class of Equiangular Tight
Frames (ETF), as well as other “near ETF” families, we
show a universal behavior of the empirical eigenvalue distribution
(ESD) of a randomly-selected sub-frame: (i) the
ESD is asymptotically indistinguishable from Wachter’s
MANOVA distribution; and (ii) it exhibits a convergence
rate to this limit that is indistinguishable from that of a
matrix sequence drawn from MANOVA (Jacobi) ensembles
of corresponding dimensions. Some of these results follow
from careful statistical analysis of empirical evidence, and
some are proved analytically using random matrix theory
arguments of independent interest. The goodness measures
of the MANOVA limit distribution are better, in a concrete
formal sense, than those of the Marchenko–Pastur distribution
at the same aspect ratio, implying that deterministic
analog codes are better than random (i.i.d.) analog codes.
We further give evidence that the ETF (and near ETF)
family is in fact superior to any other frame family in terms
of its typical sub-frame goodness.</p><h3>Suggested Citation</h3>Marina Haikin, Matan Gavish, Dustin G. Mixon and Ram Zamir (2021), "Asymptotic Frame Theory for Analog Coding", Foundations and Trends® in Communications and Information Theory: Vol. 18: No. 4, pp 526-645. http://dx.doi.org/10.1561/0100000125
Thu, 18 Nov 2021 00:00:00 +0100

http://nowpublishers.com/article/Details/CIT-108
Modeling and Optimization of Latency in Erasure-coded Storage Systems
<h3>Abstract</h3>As consumers are increasingly engaged in social networking
and e-commerce activities, businesses increasingly rely on
big data analytics for intelligence, and traditional IT infrastructures
continue to migrate to the cloud and edge,
the demand for distributed data storage is rising
at an unprecedented speed. Erasure coding has quickly
emerged as a promising technique to reduce storage
cost while providing reliability similar to that of replicated systems,
and has been widely adopted by companies such as Facebook, Microsoft and
Google. However, it also brings new challenges in characterizing
and optimizing the access latency when data objects
are erasure coded in distributed storage. The aim of this
monograph is to provide a review of recent progress (both
theoretical and practical) on systems that employ erasure
codes for distributed storage.<p>In this monograph, we will first identify the key challenges
and present a taxonomy of the research problems, and then give an
overview of different models and approaches that have been
developed to quantify latency of erasure-coded storage. This
includes recent work leveraging MDS-Reservation, Fork-Join,
Probabilistic, and Delayed-Relaunch scheduling policies, as
well as their applications to characterizing access latency
(e.g., mean, tail, and asymptotic latency) of erasure-coded
distributed storage systems. We will also extend the discussions
to video streaming from erasure-coded distributed
storage systems. Next, we will bridge the gap between theory
and practice, and discuss lessons learned from prototype
implementations. In particular, we will discuss exemplary
implementations of erasure-coded storage, illuminate key
design degrees of freedom and tradeoffs, and summarize
remaining challenges in real-world storage systems such as
in content delivery and caching. Open problems for future
research are discussed at the end of each chapter.</p><h3>Suggested Citation</h3>Vaneet Aggarwal and Tian Lan (2021), "Modeling and Optimization of Latency in Erasure-coded Storage Systems", Foundations and Trends® in Communications and Information Theory: Vol. 18: No. 3, pp 380-525. http://dx.doi.org/10.1561/0100000108
Wed, 07 Jul 2021 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-083
An Algebraic and Probabilistic Framework for Network Information Theory
<h3>Abstract</h3>In this monograph, we develop a mathematical framework for network information theory based on asymptotically good random structured codes, i.e., codes possessing algebraic properties. We use these codes to propose new strategies for communication in multi-terminal settings. The proposed coding strategies are applicable to arbitrary instances of the multi-terminal communication problems under consideration. In particular, we consider four fundamental problems that can be regarded as building blocks of networks: distributed source coding, interference channels, multiple-access channels with distributed states, and multiple description source coding. We then develop a systematic framework for characterizing the performance limits of these strategies from an information-theoretic viewpoint. Lastly, we identify several examples of the multi-terminal communication problems studied herein for which structured codes attain optimality and provide strictly better performance than classical techniques based on unstructured codes. In summary, we develop an algebraic and probabilistic framework that demonstrates the fundamental role played by structured codes in multi-terminal communication problems. This monograph deals exclusively with discrete source and channel coding problems.<h3>Suggested Citation</h3>S. Sandeep Pradhan, Arun Padakandla and Farhad Shirani (2020), "An Algebraic and Probabilistic Framework for Network Information Theory", Foundations and Trends® in Communications and Information Theory: Vol. 18: No. 2, pp 173-379. http://dx.doi.org/10.1561/0100000083
Mon, 21 Dec 2020 00:00:00 +0100

http://nowpublishers.com/article/Details/CIT-102
Theoretical Foundations of Adversarial Binary Detection
<h3>Abstract</h3>The present monograph focuses on the detection problem in an adversarial setting. When framed in an adversarial setting, classical detection theory cannot be applied anymore, since, in order to make a correct decision, the presence of an adversary must be taken into account when designing the detector. In particular, the interplay between the Defender, who wishes to carry out the detection task, and the Attacker, who aims at impeding it, must be investigated. The purpose of this monograph is to lay out the foundations of a general theory of adversarial detection, taking into account the impact that the presence of the adversary has on the design of the optimal detector. We do so by casting the adversarial detection problem into a game-theoretic framework, which is then studied by relying on typical methods of information theory. As a final result, the theory allows us to state the conditions under which both the false positive and false negative error probabilities tend to zero exponentially fast, and to relate the error exponents of the two kinds of errors to the distortion the attacker can introduce into the test sequence.<h3>Suggested Citation</h3>Mauro Barni and Benedetta Tondi (2020), "Theoretical Foundations of Adversarial Binary Detection", Foundations and Trends® in Communications and Information Theory: Vol. 18: No. 1, pp 1-172.
http://dx.doi.org/10.1561/0100000102
Sun, 20 Dec 2020 00:00:00 +0100

http://nowpublishers.com/article/Details/CIT-095
Polynomial Methods in Statistical Inference: Theory and Practice
<h3>Abstract</h3>This survey provides an exposition of a suite of techniques based on the theory of polynomials, collectively referred to as polynomial methods, which have recently been applied successfully to several challenging problems in statistical inference. Topics include polynomial approximation, polynomial interpolation and majorization, moment space and positive polynomials, and orthogonal polynomials and Gaussian quadrature, together with their major probabilistic and statistical applications in property estimation on large domains and in learning mixture models. These techniques provide useful tools not only for the design of highly practical algorithms with provable optimality, but also for establishing the fundamental limits of inference problems through the method of moment matching. The effectiveness of the polynomial method is demonstrated on concrete problems such as entropy and support size estimation, the distinct elements problem, and learning Gaussian mixture models.<h3>Suggested Citation</h3>Yihong Wu and Pengkun Yang (2020), "Polynomial Methods in Statistical Inference: Theory and Practice", Foundations and Trends® in Communications and Information Theory: Vol. 17: No. 4, pp 402-586. http://dx.doi.org/10.1561/0100000095
Mon, 12 Oct 2020 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-101
Information-Theoretic Foundations of Mismatched Decoding
<h3>Abstract</h3>Shannon's channel coding theorem characterizes the maximal rate of information that can be reliably transmitted over a communication channel when optimal encoding and decoding strategies are used. In many scenarios, however, practical considerations such as channel uncertainty and implementation constraints rule out the use of an optimal decoder.
The mismatched decoding problem addresses such scenarios by considering the case in which the decoder cannot be optimized, but is instead fixed as part of the problem statement. This problem is not only of direct interest in its own right, but also has close connections with other long-standing theoretical problems in information theory.<p>In this monograph, we survey both classical literature and recent developments on the mismatched decoding problem, with an emphasis on achievable random-coding rates for memoryless channels. We present two widely considered achievable rates known as the generalized mutual information (GMI) and the LM rate, and overview their derivations and properties. In addition, we survey several improved rates via multi-user coding techniques, as well as recent developments and challenges in establishing upper bounds on the mismatch capacity, and an analogous mismatched encoding problem in rate-distortion theory. Throughout the monograph, we highlight a variety of applications and connections with other prominent information theory problems.</p><h3>Suggested Citation</h3>Jonathan Scarlett, Albert Guillén i Fàbregas, Anelia Somekh-Baruch and Alfonso Martinez (2020), "Information-Theoretic Foundations of Mismatched Decoding", Foundations and Trends® in Communications and Information Theory: Vol. 17: No. 2–3, pp 149-401. http://dx.doi.org/10.1561/0100000101
Mon, 31 Aug 2020 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-103
Coded Computing: Mitigating Fundamental Bottlenecks in Large-Scale Distributed Computing and Machine Learning
<h3>Abstract</h3>We introduce the concept of "coded computing", a novel computing paradigm that utilizes coding theory to effectively inject and leverage data/computation redundancy to mitigate several fundamental bottlenecks in large-scale distributed computing, namely the communication bandwidth, straggler (i.e., slow or failing node) delay, and privacy and security bottlenecks.
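The flavor of such coded redundancy can be sketched with a toy polynomial-coded matrix-vector multiplication (the block sizes and three-worker setup are hypothetical, chosen only for illustration): each worker evaluates the matrix polynomial A1 + z*A2 at its own point, and the two desired products are recovered from any two responses by interpolation, so one straggler can be ignored.

```python
from fractions import Fraction

def mat_vec(M, x):
    # Plain matrix-vector product over the rationals/integers.
    return [sum(a * b for a, b in zip(row, x)) for row in M]

# Split A row-wise into two blocks; worker i is handed the coded block
# A1 + i*A2 and returns (A1 + i*A2) x.  Any 2 of the 3 workers suffice.
A1 = [[1, 2], [3, 4]]
A2 = [[5, 6], [7, 8]]
x = [1, 1]
coded = {
    i: mat_vec([[a + i * b for a, b in zip(r1, r2)]
                for r1, r2 in zip(A1, A2)], x)
    for i in (1, 2, 3)
}
# Suppose worker 3 straggles: interpolate the degree-1 polynomial
# p(z) = A1 x + z * (A2 x) from the results at z = 1 and z = 2.
i, j = 1, 2
A2x = [Fraction(coded[j][r] - coded[i][r], j - i) for r in range(2)]
A1x = [coded[i][r] - i * A2x[r] for r in range(2)]
assert A1x == mat_vec(A1, x) and A2x == mat_vec(A2, x)
```

The decoding step is ordinary polynomial interpolation, which is why the number of responses needed equals the number of data blocks, independent of which workers respond.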
More specifically, for MapReduce-based distributed computing structures, we propose the "Coded Distributed Computing" (CDC) scheme, which injects redundant computations across the network in a structured manner, such that in-network coding opportunities are enabled to substantially slash the communication load needed to shuffle the intermediate computation results. We prove that CDC achieves the optimal tradeoff between computation and communication, and demonstrate its impact on a wide range of distributed computing systems, from cloud-based datacenters to mobile edge/fog computing platforms.<p>Second, to alleviate the straggler effect that prolongs the execution of distributed machine learning algorithms, we utilize ideas from error-correcting codes to develop "Polynomial Codes" for computing general matrix algebra, and "Lagrange Coded Computing" (LCC) for computing arbitrary multivariate polynomials. The core idea of these proposed schemes is to apply coding to create redundant data/computation scattered across the network, such that completing the overall computation task requires only a subset of the network nodes to return their local computation results. We demonstrate the optimality of Polynomial Codes and LCC in minimizing the computation latency, by proving that they require the minimum number of nodes to return their results.</p><p>Finally, we illustrate the role of coded computing in providing security and privacy in distributed computing and machine learning.
In particular, we consider the problems of secure multiparty computation (MPC) and privacy-preserving machine learning, and demonstrate how coded computing can be leveraged to provide efficient solutions to these critical problems and enable substantial improvements over the state of the art.</p><p>To illustrate the impact of coded computing on real-world applications and systems, we implement the proposed coding schemes on cloud-based distributed computing systems, and significantly improve the run-time performance of important benchmarks including distributed sorting, distributed training of regression models, and privacy-preserving training for image classification. Throughout this monograph, we also highlight numerous open problems and exciting research directions for future work on coded computing.</p><h3>Suggested Citation</h3>Songze Li and Salman Avestimehr (2020), "Coded Computing: Mitigating Fundamental Bottlenecks in Large-Scale Distributed Computing and Machine Learning", Foundations and Trends® in Communications and Information Theory: Vol. 17: No. 1, pp 1-148. http://dx.doi.org/10.1561/0100000103
Thu, 20 Aug 2020 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-104
Cache Optimization Models and Algorithms
<h3>Abstract</h3>Caching refers to the act of replicating information at a faster (or closer) medium with the purpose of improving performance. This deceptively simple idea has given rise to some of the hardest optimization problems in the fields of computer systems, networking, and the Internet, many of which remain unsolved several years after their conception. While a wealth of research contributions exists across the topics of memory systems, data centers, Internet traffic, CDNs, and recently wireless networks, the literature is dispersed and overlapping at times.
In this monograph, we take a unifying modeling view: by focusing on the fundamental underlying mathematical models, we re-organize the available material into a powerful framework for performing optimization of caching systems. In this way, we aspire both to present a solid background for the anticipated explosion in caching research, and to provide a didactic view into how engineers have managed to infuse mathematical models into the study of caching over the last 40 years.<h3>Suggested Citation</h3>Georgios Paschos, George Iosifidis and Giuseppe Caire (2020), "Cache Optimization Models and Algorithms", Foundations and Trends® in Communications and Information Theory: Vol. 16: No. 3–4, pp 156-345. http://dx.doi.org/10.1561/0100000104
Thu, 20 Aug 2020 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-100
Lattice-Reduction-Aided and Integer-Forcing Equalization: Structures, Criteria, Factorization, and Coding
<h3>Abstract</h3>In this monograph, a tutorial review of lattice-reduction-aided
(LRA) and integer-forcing (IF) equalization approaches
in MIMO communications is given. Both methods have in
common that integer linear combinations are decoded; the
remaining integer interference is resolved subsequently. The
aim is to highlight the similarities and differences of the two approaches.
The various criteria for selecting the integer linear
combinations available in the literature are summarized in a
unified way. Thereby, we clearly distinguish between the criteria
according to which the non-integer equalization part
is optimized and those which are inherently considered in
the applied lattice algorithms, i.e., constraints on the integer
equalization part. The demands on the signal constellations
and coding schemes are discussed in detail. We treat
LRA/IF approaches for receiver-side linear equalization and
decision-feedback equalization, as well as transmitter-side
linear preequalization and precoding.<h3>Suggested Citation</h3>Robert F. H. Fischer, Sebastian Stern and Johannes B. Huber (2019), "Lattice-Reduction-Aided and Integer-Forcing Equalization: Structures, Criteria, Factorization, and Coding", Foundations and Trends® in Communications and Information Theory: Vol. 16: No. 1-2, pp 1-155. http://dx.doi.org/10.1561/0100000100
Mon, 09 Dec 2019 00:00:00 +0100

http://nowpublishers.com/article/Details/CIT-099
Group Testing: An Information Theory Perspective
<h3>Abstract</h3>The group testing problem concerns discovering a small
number of defective items within a large population by
performing tests on pools of items. A test is positive if
the pool contains at least one defective, and negative if it
contains no defectives. This is a sparse inference problem
with a combinatorial flavour, with applications in medical
testing, biology, telecommunications, information technology,
data science, and more.
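This model is easy to simulate directly. The sketch below uses a Bernoulli test design and the simple COMP decoding rule (declare non-defective any item that appears in a negative test); all parameters are chosen purely for illustration:

```python
import random

def run_tests(pools, defectives):
    # A pool tests positive iff it contains at least one defective item.
    return [bool(pool & defectives) for pool in pools]

def comp_decode(pools, outcomes, n):
    # COMP: any item appearing in a negative test is definitely
    # non-defective; declare everything else defective.
    cleared = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            cleared |= pool
    return set(range(n)) - cleared

random.seed(0)
n, k, n_tests = 100, 3, 60
defectives = set(random.sample(range(n), k))
# Bernoulli design: each item joins each pool independently.
pools = [{i for i in range(n) if random.random() < 0.2} for _ in range(n_tests)]
outcomes = run_tests(pools, defectives)
estimate = comp_decode(pools, outcomes, n)
assert defectives <= estimate  # COMP never misses a defective
```

Since a pool containing a defective always tests positive, defectives never appear in negative tests, so COMP has no false negatives; its errors are false positives, which become rare as the number of tests grows.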
In this monograph, we survey recent developments in the
group testing problem from an information-theoretic perspective.
We cover several related developments: efficient
algorithms with practical storage and computation requirements,
achievability bounds for optimal decoding methods,
and algorithm-independent converse bounds. We assess the
theoretical guarantees not only in terms of scaling laws,
but also in terms of the constant factors, leading to the
notion of the rate of group testing, indicating the amount
of information learned per test. Considering both noiseless
and noisy settings, we identify several regimes where existing
algorithms are provably optimal or near-optimal, as
well as regimes where there remains greater potential for
improvement.
In addition, we survey results concerning a number of variations
on the standard group testing problem, including
partial recovery criteria, adaptive algorithms with a limited
number of stages, constrained test designs, and sublinear-time
algorithms.<h3>Suggested Citation</h3>Matthew Aldridge, Oliver Johnson and Jonathan Scarlett (2019), "Group Testing: An Information Theory Perspective", Foundations and Trends® in Communications and Information Theory: Vol. 15: No. 3-4, pp 196-392. http://dx.doi.org/10.1561/0100000099
Thu, 05 Dec 2019 00:00:00 +0100

http://nowpublishers.com/article/Details/CIT-092
Sparse Regression Codes
<h3>Abstract</h3>Developing computationally efficient codes that approach the
Shannon-theoretic limits for communication and compression has
long been one of the major goals of information and coding theory.
There have been significant advances towards this goal in the
last couple of decades, with the emergence of turbo codes, sparse-graph
codes, and polar codes. These codes are designed primarily
for discrete-alphabet channels and sources. For Gaussian channels
and sources, where the alphabet is inherently continuous, Sparse
Superposition Codes or Sparse Regression Codes (SPARCs) are a
promising class of codes for achieving the Shannon limits.
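In a SPARC, the codeword is A·β for an i.i.d. Gaussian design matrix A and a sparse β with exactly one nonzero entry per section. The minimal sketch below uses unit-valued nonzeros (omitting the usual power allocation) and arbitrary toy dimensions:

```python
import math
import random

def sparc_encode(A, message, M, n):
    """Sum the one chosen column per section: this is A @ beta, where
    beta has a single unit entry in each of the L sections of size M."""
    x = [0.0] * n
    for section, idx in enumerate(message):
        col = section * M + idx          # column chosen in this section
        for row in range(n):
            x[row] += A[row][col]
    return x

rng = random.Random(0)
n, L, M = 32, 4, 8                       # toy dimensions: n x (L*M) design matrix
A = [[rng.gauss(0.0, 1.0) for _ in range(L * M)] for _ in range(n)]
message = [3, 0, 7, 2]                   # one index in {0, ..., M-1} per section
codeword = sparc_encode(A, message, M, n)

rate = L * math.log2(M) / n              # each section carries log2(M) bits
assert len(codeword) == n and abs(rate - 0.375) < 1e-12
```

Since each of the L sections conveys log2(M) bits over n channel uses, the rate L·log2(M)/n can be tuned through the choice of L, M and n.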
This monograph provides a unified and comprehensive overview of
sparse regression codes, covering theory, algorithms, and practical
implementation aspects. The first part of the monograph focuses
on SPARCs for AWGN channel coding, and the second part on
SPARCs for lossy compression (with squared error distortion criterion).
In the third part, SPARCs are used to construct codes for
Gaussian multi-terminal channel and source coding models such
as broadcast channels, multiple-access channels, and source and
channel coding with side information. The monograph concludes
with a discussion of open problems and directions for future work.<h3>Suggested Citation</h3>Ramji Venkataramanan, Sekhar Tatikonda and Andrew Barron (2019), "Sparse Regression Codes", Foundations and Trends® in Communications and Information Theory: Vol. 15: No. 1-2, pp 1-195. http://dx.doi.org/10.1561/0100000092
Thu, 20 Jun 2019 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-094
Fundamentals of Index Coding
<h3>Abstract</h3>Index coding is a canonical problem in network information
theory that studies the fundamental limit and optimal
coding schemes for broadcasting multiple messages to receivers
with different side information. The index coding
problem provides a simple yet rich model for several important
engineering tasks such as satellite communication,
content broadcasting, distributed caching, device-to-device
relaying, and interference management. This monograph
aims to provide a broad overview of this fascinating subject,
focusing on the simplest form of multiple-unicast index
coding. A unified treatment of coding schemes based
on graph-theoretic, algebraic, and information-theoretic approaches
is presented. Although the problem of characterizing
the optimal communication rate is open in general, several
bounds and structural properties are established. The
relationship to other problems such as network coding and
distributed storage is also discussed.<h3>Suggested Citation</h3>Fatemeh Arbabjolfaei and Young-Han Kim (2018), "Fundamentals of Index Coding", Foundations and Trends® in Communications and Information Theory: Vol. 14: No. 3-4, pp 163-346. http://dx.doi.org/10.1561/0100000094
Wed, 17 Oct 2018 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-067
Community Detection and Stochastic Block Models
<h3>Abstract</h3>The stochastic block model (SBM) is a random graph model in which different groups of vertices connect differently. It is
widely employed as a canonical model to study clustering
and community detection, and provides a fertile ground to
study the information-theoretic and computational tradeoffs
that arise in combinatorial statistics and more generally
data science.
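A two-community instance of the model (the planted-partition special case, with arbitrary toy parameters) can be sampled directly:

```python
import random

def sample_sbm(sizes, p_in, p_out, rng):
    """Sample a planted-partition SBM: vertices in the same community
    connect with probability p_in, across communities with p_out."""
    labels = [c for c, size in enumerate(sizes) for _ in range(size)]
    n = len(labels)
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if labels[u] == labels[v] else p_out
            if rng.random() < p:
                edges.add((u, v))
    return labels, edges

rng = random.Random(1)
labels, edges = sample_sbm([50, 50], p_in=0.3, p_out=0.05, rng=rng)
within = sum(labels[u] == labels[v] for u, v in edges)
# Assortative regime (p_in > p_out): within-community edges dominate,
# which is the signal that community detection algorithms exploit.
assert within > len(edges) - within
```

Community detection asks for the reverse direction: given only `edges`, recover `labels` up to relabeling, and the recovery requirements (exact, partial, weak) differ in how accurately this must be done.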
This monograph surveys the recent developments that establish
the fundamental limits for community detection in
the SBM, both with respect to information-theoretic and
computational tradeoffs, and for various recovery requirements
such as exact, partial and weak recovery. The main
results discussed are the phase transitions for exact recovery
at the Chernoff-Hellinger threshold, the phase transition for
weak recovery at the Kesten-Stigum threshold, the optimal
SNR-mutual information tradeoff for partial recovery, and
the gap between information-theoretic and computational
thresholds.<h3>Suggested Citation</h3>Emmanuel Abbe (2018), "Community Detection and Stochastic Block Models", Foundations and Trends® in Communications and Information Theory: Vol. 14: No. 1-2, pp 1-162. http://dx.doi.org/10.1561/0100000067
Thu, 07 Jun 2018 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-090
Redundancy of Lossless Data Compression for Known Sources by Analytic Methods
<h3>Abstract</h3>Lossless data compression is a facet of source coding and a well-studied
problem of information theory. Its goal is to find a shortest possible
code that can be unambiguously recovered. Here, we focus on
rigorous analysis of code redundancy for known sources. The redundancy
rate problem determines by how much the actual code length
exceeds the optimal code length. We present precise analyses of three
types of lossless data compression schemes, namely fixed-to-variable
(FV) length codes, variable-to-fixed (VF) length codes, and variable-to-variable (VV) length codes. In particular, we investigate the average
redundancy of Shannon, Huffman, Tunstall, Khodak and Boncelet
codes. These codes have succinct representations as trees, either as
coding or parsing trees, and we analyze here some of their parameters
(e.g., the average path from the root to a leaf). Such trees are
precisely analyzed by analytic methods, known also as analytic combinatorics,
in which complex analysis plays a decisive role. These tools
include generating functions, Mellin transform, Fourier series, saddle
point method, analytic poissonization and depoissonization, Tauberian
theorems, and singularity analysis. The term analytic information theory has been coined to describe problems of information theory studied
by analytic tools. This approach lies at the crossroads of information
theory, analysis of algorithms, and combinatorics.<h3>Suggested Citation</h3>Michael Drmota and Wojciech Szpankowski (2017), "Redundancy of Lossless Data Compression for Known Sources by Analytic Methods", Foundations and Trends® in Communications and Information Theory: Vol. 13: No. 4, pp 277-417. http://dx.doi.org/10.1561/0100000090
Wed, 24 May 2017 00:00:00 +0200

http://nowpublishers.com/article/Details/CIT-072
Multiterminal Secrecy by Public Discussion
<h3>Abstract</h3>This monograph describes principles of information-theoretic secrecy
generation by legitimate parties with public discussion in the presence
of an eavesdropper. The parties are guaranteed secrecy in the form of
independence from the eavesdropper’s observation of the communication.
Part I develops basic technical tools for secrecy generation, many of
which are potentially of independent interest beyond secrecy settings.
Various information theoretic and cryptographic notions of secrecy are
compared. Emphasis is placed on central themes of interactive communication
and common randomness as well as on core methods of
balanced coloring and leftover hash for extracting secret uniform randomness.
Achievability and converse results are shown to emerge from
“single shot” incarnations that serve to explain essential structure.
Part II applies the methods of Part I to secrecy generation in two
settings: a multiterminal source model and a multiterminal channel
model, in both of which the legitimate parties are afforded privileged
access to correlated observations of which the eavesdropper has only
partial knowledge. Characterizations of secret key capacity bring out
inherent connections to the data compression concept of omniscience
and, for a specialized source model, to a combinatorial problem of maximal
spanning tree packing in a multigraph. Interactive common information
is seen to govern the minimum rate of communication needed to
achieve secret key capacity in the two-terminal source model. Furthermore,
necessary and sufficient conditions are analyzed for the secure
computation of a given function in the multiterminal source model.
Based largely on known recent results, this self-contained monograph
also includes new formulations with associated new proofs. Supplementing
each chapter in Part II are descriptions of several open
problems.<h3>Suggested Citation</h3>Prakash Narayan and Himanshu Tyagi (2016), "Multiterminal Secrecy by Public Discussion", Foundations and Trends® in Communications and Information Theory: Vol. 13: No. 2-3, pp 129-275. http://dx.doi.org/10.1561/0100000072Wed, 28 Sep 2016 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-084Channel Coding Methods for Non-Volatile Memories<h3>Abstract</h3>Non-volatile memories (NVMs) have emerged as the primary replacement of hard-disk drives for a variety of storage applications, including personal electronics, mobile computing, intelligent vehicles,
enterprise storage, data warehousing, and data-intensive computing systems. Channel coding schemes are a necessary tool for ensuring target reliability and performance of NVMs. However, due to operational
asymmetries in NVMs, conventional coding approaches – commonly based on designing for the Hamming metric – no longer apply. Given the immediate need for practical solutions and the shortfalls of existing methods,
the fast-growing discipline of coding for NVMs has resulted in several key innovations that not only answer the needs of modern storage systems but also directly contribute to the analytical toolbox of coding theory at large.
This monograph discusses recent advances in coding for NVMs, covering topics such as error correction coding based on novel algebraic and graph-based methods, rank modulation, rewriting codes, and constrained coding.
Our goal for this work is multifold: to illuminate the advantages – as well as challenges – associated with modern NVMs, to present a succinct overview of several exciting recent developments in coding for memories, and,
by presenting numerous potential research directions, to inspire other researchers to contribute to this timely and thriving discipline.<h3>Suggested Citation</h3>Lara Dolecek and Frederic Sala (2016), "Channel Coding Methods for Non-Volatile Memories", Foundations and Trends® in Communications and Information Theory: Vol. 13: No. 1, pp 1-128. http://dx.doi.org/10.1561/0100000084Thu, 25 Feb 2016 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-081Multi-way Communications: An Information Theoretic Perspective<h3>Abstract</h3>Multi-way communication is a means to significantly improve the spectral
efficiency of wireless networks. For instance, in a bi-directional (or
two-way) communication channel, two users can simultaneously use
the transmission medium to exchange information, thus achieving up
to twice the rate that would be achieved had each user transmitted
separately. Multi-way communications provides an overview of the developments in this research area since it was initiated by Shannon.
The basic two-way communication channel is considered first, followed
by the two-way relay channel obtained by the deployment of an additional
cooperative relay node to improve the overall communication
performance. This basic setup is then extended to multi-user systems.
For all these setups, fundamental limits on the achievable rates are reviewed,
thereby making use of a linear high-SNR deterministic channel
model to provide valuable insights which are helpful when discussing
the coding schemes for Gaussian channel models in detail. Several tools
and communication strategies are used in the process, including (but
not limited to) computation, signal-space alignment, and nested-lattice
codes. Finally, extensions of multi-way communication channels to multiple
antenna settings are discussed.<h3>Suggested Citation</h3>Anas Chaaban and Aydin Sezgin (2015), "Multi-way Communications: An Information Theoretic Perspective", Foundations and Trends® in Communications and Information Theory: Vol. 12: No. 3-4, pp 185-371. http://dx.doi.org/10.1561/0100000081Tue, 15 Sep 2015 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-042An Approximation Approach to Network Information Theory<h3>Abstract</h3>This monograph illustrates a novel approach, which is based on changing the focus to seek approximate solutions accompanied by universal
guarantees on the gap to optimality, in order to enable progress on several key open problems in network information theory. We seek universal
guarantees that are independent of problem parameters, but perhaps dependent on the problem structure. At the heart of this approach is the
development of simple, deterministic models that capture the main features of information sources and communication channels, and are utilized
to approximate more complex models. The program advocated in this monograph is to first seek solutions for the simplified deterministic
model and then use the insights and the solution of the simplified model to connect it to the original problem. The goal of this
deterministic-approximation approach is to obtain universal approximate characterizations of the original channel capacity region and source
coding rate regions. The translation of the insights from the deterministic framework to the original problem might need non-trivial steps
either in the coding scheme or in the outer bounds. The applications of this deterministic approximation approach are demonstrated in four
central problems, namely unicast/multicast relay networks, interference channels, multiple description source coding, and joint
source-channel coding over networks. For each of these problems, it is illustrated how the proposed approach can be utilized to approximate
the solution and draw engineering insights. Throughout the monograph, many extensions and future directions are addressed, and several open
problems are presented in each chapter. The monograph is concluded by illustrating other deterministic models that can be utilized to obtain
tighter approximation results, and discussing some recent developments on utilization of deterministic models in multi-flow multi-hop wireless
networks.<h3>Suggested Citation</h3>A. Salman Avestimehr, Suhas N. Diggavi, Chao Tian and David N. C. Tse (2015), "An Approximation Approach to Network Information Theory", Foundations and Trends® in Communications and Information Theory: Vol. 12: No. 1-2, pp 1-183. http://dx.doi.org/10.1561/0100000042Fri, 04 Sep 2015 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-088Energy Efficiency in Wireless Networks via Fractional Programming Theory<h3>Abstract</h3>This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory. The definition of energy efficiency is introduced, with reference to single-user and multi-user wireless networks, and it is observed how the problem of resource allocation for energy efficiency optimization is naturally cast as a fractional program. An extensive review of the state-of-the-art in energy efficiency optimization by fractional programming is provided, with reference to centralized and distributed resource allocation schemes. A solid background on fractional programming theory is provided. The key-notion of generalized concavity is presented and its strong connection with fractional functions described. A taxonomy of fractional problems is introduced, and for each class of fractional problem, general solution algorithms are described, discussing their complexity and convergence properties. The described theoretical and algorithmic framework is applied to solve energy efficiency maximization problems in practical wireless networks. A general system and signal model is developed which encompasses many relevant special cases, such as one-hop and two-hop heterogeneous networks, multi-cell networks, small-cell networks, device-to-device systems, cognitive radio systems, and hardware-impaired networks, wherein multiple-antennas and multiple subcarriers are (possibly) employed. 
Energy-efficient resource allocation algorithms are developed, considering both centralized, cooperative schemes, as well as distributed approaches for self-organizing networks. Finally, some remarks on future lines of research are given, stating some open problems that remain to be studied. It is shown how the described framework is general enough to be extended in these directions, proving useful in tackling future challenges that may arise in the design of energy-efficient future wireless networks.<h3>Suggested Citation</h3>Alessio Zappone and Eduard Jorswieck (2015), "Energy Efficiency in Wireless Networks via Fractional Programming Theory", Foundations and Trends® in Communications and Information Theory: Vol. 11: No. 3-4, pp 185-396. http://dx.doi.org/10.1561/0100000088Thu, 11 Jun 2015 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-086Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities<h3>Abstract</h3>This monograph presents a unified treatment of single- and multi-user problems in Shannon’s information theory where we depart from the
requirement that the error probability decays asymptotically in the blocklength. Instead, the error probabilities for various problems are
bounded above by a non-vanishing constant and the spotlight is shone on achievable coding rates as functions of the growing blocklengths.
This represents the study of asymptotic estimates with non-vanishing error probabilities.<p>In Part I, after reviewing the fundamentals of information theory, we discuss Strassen’s seminal result for binary hypothesis testing
where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with growing number of
independent observations is characterized. In Part II, we use this basic hypothesis testing result to develop second- and sometimes even
third-order asymptotic expansions for point-to-point communication. In Part III, we consider network information theory problems
for which the second-order asymptotics are known. These problems include some classes of channels with random state, the multiple-encoder
distributed lossless source coding (Slepian-Wolf) problem and special cases of the Gaussian interference and multiple-access channels.
Finally, we discuss avenues for further research.</p><h3>Suggested Citation</h3>Vincent Y. F. Tan (2014), "Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities", Foundations and Trends® in Communications and Information Theory: Vol. 11: No. 1-2, pp 1-184. http://dx.doi.org/10.1561/0100000086Tue, 23 Sep 2014 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-061Rate Distortion Bounds for Voice and Video<h3>Abstract</h3>Numerous voice, still image, audio, and video compression standards have been developed over the last 25 years, and significant advances in
the state of the art have been achieved. However, in the more than 50 years since Shannon’s seminal 1959 paper, no rate distortion bounds for
voice and video have been forthcoming. In this volume, we present the first rate distortion bounds for voice and video that actually lower
bound the operational rate distortion performance of the best-performing voice and video codecs. The bounds indicate that improvements in
rate distortion performance of approximately 50% over the best-performing voice and video codecs are possible. Research directions to improve
the new bounds are discussed.<h3>Suggested Citation</h3>Jerry D. Gibson and Jing Hu (2014), "Rate Distortion Bounds for Voice and Video", Foundations and Trends® in Communications and Information Theory: Vol. 10: No. 4, pp 379-514. http://dx.doi.org/10.1561/0100000061Tue, 18 Feb 2014 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-071Two-User Gaussian Interference Channels: An Information Theoretic Point of View<h3>Abstract</h3>The purpose of this monograph is to introduce both classic and recent capacity theorems for the two-user Gaussian interference channel
including both the single-antenna case and the multiple-antenna case.<p>This monograph starts with the single-antenna case and introduces the Han and Kobayashi achievable rate region and its various subregions.
Several capacity outer bounds are then presented, which, together with the achievable rate region, yield several capacity results for the
single-antenna Gaussian interference channel, including the capacity region for strong interference and the sum-rate capacity for Z
interference, noisy interference, mixed interference and degraded interference.</p><p>For the more complex multiple-antenna case, the interference state is no longer determined solely by the interference strength as in the
single-antenna case. Instead, the structure of the interference in different multi-dimensional subspaces plays an equally important role. As
a result of this multiple-dimensional signaling, new interference states including generally strong, generally noisy and generally mixed
interference are introduced to obtain capacity theorems that generalize those for the single-antenna case.</p><h3>Suggested Citation</h3>Xiaohu Shang and Biao Chen (2013), "Two-User Gaussian Interference Channels: An Information Theoretic Point of View", Foundations and Trends® in Communications and Information Theory: Vol. 10: No. 3, pp 247-378. http://dx.doi.org/10.1561/0100000071Thu, 19 Dec 2013 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-064Concentration of Measure Inequalities in Information Theory, Communications, and Coding<h3>Abstract</h3>Concentration inequalities have been the subject of exciting developments during the last two decades, and have been intensively studied
and used as a powerful tool in various areas. These include convex geometry, functional analysis, statistical physics, mathematical
statistics, pure and applied probability theory (e.g., concentration of measure phenomena in random graphs, random matrices, and percolation),
information theory, theoretical computer science, learning theory, and dynamical systems.<p>This monograph focuses on some of the key modern mathematical tools that are used for the derivation of concentration inequalities, on
their links to information theory, and on their various applications to communications and coding. In addition to being a survey, this
monograph also includes various new results recently derived by the authors.</p><p>The first part of the monograph introduces classical concentration inequalities for martingales, as
well as some recent refinements and extensions. The power and versatility of the martingale approach is exemplified in the context of codes
defined on graphs and iterative decoding algorithms, as well as codes for wireless communication.</p><p>The second part of the monograph introduces the entropy method, an information-theoretic technique for deriving concentration inequalities
for functions of many independent random variables. The basic ingredients of the entropy method are discussed first in conjunction with the
closely related topic of logarithmic Sobolev inequalities, which are typical of the so-called functional approach to studying the
concentration of measure phenomenon. The discussion on logarithmic Sobolev inequalities is complemented by a related viewpoint based on
probability in metric spaces. This viewpoint centers around the so-called transportation-cost inequalities, whose roots are in information
theory. Some representative results on concentration for dependent random variables are briefly summarized, with emphasis on their connections
to the entropy method. Finally, we discuss several applications of the entropy method and related information-theoretic tools to problems in
communications and coding. These include strong converses, empirical distributions of good channel codes with non-vanishing error probability,
and an information-theoretic converse for concentration of measure.</p><h3>Suggested Citation</h3>Maxim Raginsky and Igal Sason (2013), "Concentration of Measure Inequalities in Information Theory, Communications, and Coding", Foundations and Trends® in Communications and Information Theory: Vol. 10: No. 1-2, pp 1-246. http://dx.doi.org/10.1561/0100000064Wed, 23 Oct 2013 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-068Coding Techniques for Repairability in Networked Distributed Storage Systems<h3>Abstract</h3>This survey comprises a tutorial on traditional erasure codes and their applications to networked distributed storage systems (NDSS), followed by a survey of novel code families tailor made for better repairability in NDSS.<h3>Suggested Citation</h3>Frédérique Oggier and Anwitaman Datta (2013), "Coding Techniques for Repairability in Networked Distributed Storage Systems", Foundations and Trends® in Communications and Information Theory: Vol. 9: No. 4, pp 383-466. http://dx.doi.org/10.1561/0100000068Thu, 06 Jun 2013 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-069Optimal Resource Allocation in Coordinated Multi-Cell Systems<h3>Abstract</h3>The use of multiple antennas at base stations is a key component in the design of cellular communication systems that can meet high-capacity demands in the downlink. Under ideal conditions, the gain of employing multiple antennas is well-recognized: the data throughput increases linearly with the number of transmit antennas if the spatial dimension is utilized to serve many users in parallel. 
The practical performance of multi-cell systems is, however, limited by a variety of nonidealities, such as insufficient channel knowledge, high computational complexity, heterogeneous user conditions, limited backhaul capacity, transceiver impairments, and the constrained level of coordination between base stations.<p>This tutorial presents a general framework for modeling different multi-cell scenarios, including clustered joint transmission, coordinated beamforming, interference channels, cognitive radio, and spectrum sharing between operators. The framework enables joint analysis and insights that are both scenario independent and dependent.</p><p>The performance of multi-cell systems depends on the resource allocation; that is, how the time, power, frequency, and spatial resources are divided among users. A comprehensive characterization of resource allocation problem categories is provided, along with the signal processing algorithms that solve them. The inherent difficulties are revealed: (a) the overwhelming spatial degrees-of-freedom created by the multitude of transmit antennas; and (b) the fundamental tradeoff between maximizing aggregate system throughput and maintaining user fairness. The tutorial provides a pragmatic foundation for resource allocation where the system utility metric can be selected to achieve practical feasibility. The structure of optimal resource allocation is also derived, in terms of beamforming parameterizations and optimal operating points.</p><p>This tutorial provides a solid ground and understanding for optimization of practical multi-cell systems, including the impact of the nonidealities mentioned above. The Matlab code is available online for some of the examples and algorithms in this tutorial.</p><h3>Suggested Citation</h3>Emil Björnson and Eduard Jorswieck (2013), "Optimal Resource Allocation in Coordinated Multi-Cell Systems", Foundations and Trends® in Communications and Information Theory: Vol. 9: No. 2–3, pp 113-381. 
http://dx.doi.org/10.1561/0100000069Mon, 28 Jan 2013 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-014Fundamental Performance Limits in Cross-layer Wireless Optimization: Throughput, Delay, and Energy<h3>Abstract</h3>In recent years, one of the most significant developments in both the theory and practice of communication and networking has been the closer coupling between the design of physical-layer functionalities such as coding and modulation, and the design of higher-layer functionalities such as contention resolution and scheduling. This closer coupling is characteristic of the <em>cross-layer</em> paradigm. It is the objective of the present survey to spell out some of the basic challenges, key communication settings, and crucial results, relevant to cross-layer analysis and design for wireless systems. This work focuses primarily on communication settings relevant to wireless cellular communications, where cross-layer design principles have arguably had the greatest impact on practical systems. In order to explore the fundamental performance limits of wireless systems operating under the cross-layer paradigm, the survey shows how information theory and network theory can be leveraged to study issues such as channel modeling, coding, source burstiness, throughput, delay, multi-user interference, multi-path fading, and energy constraints in a more coherent overall analytical and design framework.<p>The survey first examines multiaccess communication channels, the simplest example of a network setting where multiple users share a communication medium. It reviews some of the pioneering work in extending information theory to incorporate source burstiness and queueing. It then examines cross-layer design approaches for multiaccess (uplink) fading channels in wireless communications. The key concepts of stability region, throughput optimality, and delay optimality are introduced. 
Optimal algorithms which maximize throughput and minimize delay for multiaccess fading channels with random arrivals and queueing are characterized. Next, the survey focuses on a similar setting for communication over broadcast (downlink) fading channels, and introduces relevant results. Finally, it examines the fundamental performance tradeoffs between power and delay in single-user and multi-user communication over wireless channels with energy constraints.</p><h3>Suggested Citation</h3>Edmund M. Yeh (2012), "Fundamental Performance Limits in Cross-layer Wireless Optimization: Throughput, Delay, and Energy", Foundations and Trends® in Communications and Information Theory: Vol. 9: No. 1, pp 1-112. http://dx.doi.org/10.1561/0100000014Mon, 17 Dec 2012 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-041Polarization and Polar Codes<h3>Abstract</h3>This tutorial treats the fundamentals of polarization theory and polar coding. Arıkan's original results on binary source and channel polarization methods are studied. Error probability and complexity analyses are offered. The original results are generalized in several directions. Early developments in the field are discussed, and pointers to some of the important work omitted from this tutorial are given.<h3>Suggested Citation</h3>Eren Şaşoğlu (2012), "Polarization and Polar Codes", Foundations and Trends® in Communications and Information Theory: Vol. 8: No. 4, pp 259-381. http://dx.doi.org/10.1561/0100000041Thu, 18 Oct 2012 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-066Block Toeplitz Matrices: Asymptotic Results and Applications<h3>Abstract</h3>The present monograph studies the asymptotic behaviour of eigenvalues, products and functions of block Toeplitz matrices generated by the Fourier coefficients of a continuous matrix-valued function. This study is based on the concept of asymptotically equivalent sequences of non-square matrices.
The asymptotic results on block Toeplitz matrices obtained are applied to vector asymptotically wide sense stationary processes. Therefore, this monograph is a generalization to block Toeplitz matrices of the Gray monograph entitled "Toeplitz and circulant matrices: A review", which was published in the second volume of Foundations and Trends in Communications and Information Theory, and which is the simplest and most famous introduction to the asymptotic theory on Toeplitz matrices.<h3>Suggested Citation</h3>Jesús Gutiérrez-Gutiérrez and Pedro M. Crespo (2012), "Block Toeplitz Matrices: Asymptotic Results and Applications", Foundations and Trends® in Communications and Information Theory: Vol. 8: No. 3, pp 179-257. http://dx.doi.org/10.1561/0100000066Mon, 15 Oct 2012 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-048Cooperative Wireless Cellular Systems: An Information-Theoretic View<h3>Abstract</h3>In this monograph, the impact of cooperation on the performance of wireless cellular systems is studied from an information-theoretic standpoint, focusing on simple formulations typically referred to as Wynertype models. Following ongoing research and standardization efforts, the text covers two main classes of cooperation strategies. The first class is cooperation at the base station (BS) level, which is also known as Multi-Cell Processing (MCP), network Multiple-Input Multiple-Output (MIMO), or Coordinated Multi-Point transmission/reception (CoMP). With MCP, cooperative decoding, for the uplink, or encoding, for the downlink, is enabled at the BSs. MCP is made possible by the presence of an architecture of, typically wired, backhaul links connecting individual BSs to a central processor (CP) or to one another. The second class of cooperative strategies allows cooperation in the form of relaying for conveying data between Mobile Stations (MSs) and BSs in either the uplink or the downlink. Relaying can be enabled by two possible architectures. 
A first option is to deploy dedicated Relay Stations (RSs) that are tasked with forwarding uplink or downlink traffic. The second option is for the MSs to act as RSs for other MSs.<p>MCP is first studied under ideal conditions on the backhaul links, namely by assuming that all BSs are connected to a CP with unlimited-capacity links. Both Gaussian (nonfading) and flat-fading channels are analyzed, for the uplink and the downlink, and analytical insights are drawn into the performance advantages of MCP in different relevant operating regimes. Performance comparison is performed with standard Single-Cell Processing (SCP) techniques, whereby each BS decodes, in the uplink, or encodes, in the downlink, independently, as implemented with different spatial reuse factors. Then, practical constraints on the backhaul architecture enabling MCP are introduced. Specifically, three common settings are studied. In the first, all the BSs are connected to a CP via finite-capacity links. In the second, only BSs in adjacent cells are connected via (finite-capacity) backhaul links. In the third, only a subset of BSs is connected to a CP for joint encoding/decoding (clustered cooperation). Achievable rates for the three settings are studied and compared for both the uplink and the downlink.</p><p>The performance advantages of relaying are analyzed for cellular systems with dedicated RSs and with cooperative MSs. Different techniques are reviewed that require varying degrees of information about system parameters at the MSs, RSs, and BSs. Performance is investigated with both MCP and SCP, revealing a profound interplay between cooperation at the BS level and relaying. Finally, various open problems are pointed out.</p><h3>Suggested Citation</h3>Osvaldo Simeone, Nathan Levy, Amichai Sanderovich, Oren Somekh, Benjamin M. Zaidel, H. 
Vincent Poor and Shlomo Shamai (Shitz) (2012), "Cooperative Wireless Cellular Systems: An Information-Theoretic View", Foundations and Trends® in Communications and Information Theory: Vol. 8: No. 1-2, pp 1-177. http://dx.doi.org/10.1561/0100000048Fri, 17 Aug 2012 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-054Random-Set Theory and Wireless Communications<h3>Abstract</h3>This monograph is devoted to random-set theory, which allows unordered collections of random elements, drawn from an arbitrary space, to be handled. After illustrating its foundations, we focus on Random Finite Sets, i.e., unordered collections of random cardinality of points from an arbitrary space, and show how this theory can be applied to a number of problems arising in wireless communication systems. Three of these problems are: (1) neighbor discovery in wireless networks, (2) multiuser detection in which the number of active users is unknown and time-varying, and (3) estimation of multipath channels where the number of paths is not known <em>a priori</em> and which are possibly time-varying. Standard solutions to these problems are intrinsically suboptimum as they proceed either by assuming a fixed number of vector components, or by first estimating this number and then the values taken on by the components. It is shown how random-set theory provides optimum solutions to all these problems. The complexity issue is also examined, and suboptimum solutions are presented and discussed.<h3>Suggested Citation</h3>Ezio Biglieri, Emanuele Grossi and Marco Lops (2012), "Random-Set Theory and Wireless Communications", Foundations and Trends® in Communications and Information Theory: Vol. 7: No. 4, pp 317-462. 
http://dx.doi.org/10.1561/0100000054Fri, 17 Aug 2012 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-051Biometric Security from an Information-Theoretical Perspective<h3>Abstract</h3>In this review, biometric systems are studied from an information theoretical point of view. In the first part biometric authentication systems are studied. The objective of these systems is, observing correlated enrollment and authentication biometric sequences, to generate or convey as large as possible secret keys by interchanging a public message, while minimizing privacy leakage. Here privacy leakage is defined as the amount of information that this public message contains about the biometric enrollment sequence. In this setting also the secrecy leakage, that is, the amount of information the public message leaks about the secret key, should be negligible. Next identification biometric systems are investigated. These systems should be able to identify as many individuals as possible while being able to assign as large as possible secret keys to each individual and again minimize the privacy leakage. To realize these systems public reference data are stored in the database. Leakage is defined with respect to these reference data. For all these biometric systems fundamental limits are determined in the current work. Finally, a popular practical construction for biometric systems, fuzzy commitment, is analyzed with respect to both its theoretical performance and performance related to the code choice.<h3>Suggested Citation</h3>Tanya Ignatenko and Frans M.J. Willems (2012), "Biometric Security from an Information-Theoretical Perspective", Foundations and Trends® in Communications and Information Theory: Vol. 7: No. 2–3, pp 135-316. 
http://dx.doi.org/10.1561/0100000051Wed, 15 Feb 2012 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-047Interference Alignment — A New Look at Signal Dimensions in a Communication Network<h3>Abstract</h3>This monograph introduces to the reader the idea of interference alignment, traces its origins, reviews a variety of interference alignment schemes, summarizes the diverse settings where the idea of interference alignment is applicable and highlights the common principles that cut across these diverse applications. The focus is on theoretical aspects.<h3>Suggested Citation</h3>Syed A. Jafar (2011), "Interference Alignment — A New Look at Signal Dimensions in a Communication Network", Foundations and Trends® in Communications and Information Theory: Vol. 7: No. 1, pp 1-134. http://dx.doi.org/10.1561/0100000047Thu, 30 Jun 2011 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-060Raptor Codes<h3>Abstract</h3>This monograph describes the theory behind Raptor codes, and elucidates elements of the processes behind the design of two of the most prominent members of this class of codes: R10 and RaptorQ (RQ). R10 has already been adopted by a number of standards' bodies, and RQ is in the process of entering various standards at the time of writing of this monograph.<p>The monograph starts with the description of some of the transmission problems, which inspired the invention of Fountain codes. Thereafter, Luby transform codes (LT codes) and Raptor codes are introduced and insights are provided into their design. These codes are currently the most efficient realizations of Fountain codes. Different algorithms are introduced for encoding and decoding various versions of these codes, including their systematic versions. 
Moreover, a hybrid decoding algorithm called "inactivation decoding" is introduced, which is an integral part of all modern implementations of Raptor codes.</p><p>The R10 and RQ codes have been, and will continue to be, adopted into a number of standards; thus there are publicly available specifications that describe exactly how to implement these codes. However, the standards' specifications provide no insight into the rationale for the design choices made. One of the primary purposes of this document is to provide this design rationale.</p><p>We provide results of extensive simulations of R10 and RQ codes to show the behavior of these codes in many different scenarios.</p><h3>Suggested Citation</h3>Amin Shokrollahi and Michael Luby (2011), "Raptor Codes", Foundations and Trends® in Communications and Information Theory: Vol. 6: No. 3–4, pp 213-322. http://dx.doi.org/10.1561/0100000060Wed, 18 May 2011 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-052Statistical Physics and Information Theory<h3>Abstract</h3>This monograph is based on lecture notes of a graduate course, which focuses on the relations between information theory and statistical physics. The course was delivered at the Technion during the Spring of 2010 for the first time, and its target audience consists of EE graduate students in the area of communications and information theory, as well as graduate students in Physics who have a basic background in information theory. Strong emphasis is given to the analogy and parallelism between information theory and statistical physics, as well as to the insights, the analysis tools and techniques that can be borrowed from statistical physics and 'imported' to certain problem areas in information theory. This is a research trend that has been very active in the last few decades, and the hope is that by exposing the students to the meeting points between these two disciplines, their background and perspective may be expanded and enhanced. 
This monograph is substantially revised and expanded relative to an earlier version posted in arXiv (1006.1565v1 [cs.iT]).<h3>Suggested Citation</h3>Neri Merhav (2010), "Statistical Physics and Information Theory", Foundations and Trends® in Communications and Information Theory: Vol. 6: No. 1–2, pp 1-212. http://dx.doi.org/10.1561/0100000052Mon, 13 Dec 2010 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-044Combinatorial Designs for Authentication and Secrecy Codes<h3>Abstract</h3>Combinatorial design theory is a very active area of mathematical research, with many applications in communications and information theory, computer science, statistics, engineering, and life sciences. As one of the fundamental discrete structures, combinatorial designs are used in fields as diverse as error-correcting codes, statistical design of experiments, cryptography and information security, mobile and wireless communications, group testing algorithms in DNA screening, software and hardware testing, and interconnection networks. This monograph provides a tutorial on combinatorial designs, which gives an overview of the theory. Furthermore, the application of combinatorial designs to authentication and secrecy codes is described in depth. This close relationship of designs with cryptography and information security was first revealed in Shannon's seminal paper on secrecy systems. We bring together in one source foundational and current contributions concerning design-theoretic constructions and characterizations of authentication and secrecy codes.<h3>Suggested Citation</h3>Michael Huber (2010), "Combinatorial Designs for Authentication and Secrecy Codes", Foundations and Trends® in Communications and Information Theory: Vol. 5: No. 6, pp 581-675. 
http://dx.doi.org/10.1561/0100000044Sun, 30 May 2010 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-036Information Theoretic Security<h3>Abstract</h3>The topic of information theoretic security is introduced and the principal results in this area are reviewed. The basic wire-tap channel model is considered first, and then several specific types of wire-tap channels are considered, including Gaussian, multi-input multi-output (MIMO), compound, and feedback wire-tap channels, as well as the wire-tap channel with side information. Practical code design techniques to achieve secrecy for wire-tap channels are also introduced. The wire-tap formalism is then extended to the basic channels of multi-user networks, including broadcast channels, multiple-access channels (MACs), interference channels, relay channels and two-way channels. For all of these models, results on the fundamental communication limits under secrecy constraints and corresponding coding schemes are reviewed. Furthermore, several related topics including key agreement through common randomness, secure network coding, authentication, cross-layer design, and secure source coding are discussed.<h3>Suggested Citation</h3>Yingbin Liang, H. Vincent Poor and Shlomo Shamai (Shitz) (2009), "Information Theoretic Security", Foundations and Trends® in Communications and Information Theory: Vol. 5: No. 4–5, pp 355-580. http://dx.doi.org/10.1561/0100000036Mon, 15 Jun 2009 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-021Universal Estimation of Information Measures for Analog Sources<h3>Abstract</h3>This monograph presents an overview of universal estimation of information measures for continuous-alphabet sources. Special attention is given to the estimation of mutual information and divergence based on independent and identically distributed (i.i.d.) data. 
Plug-in methods, partitioning-based algorithms, nearest-neighbor algorithms as well as other approaches are reviewed, with particular focus on consistency, speed of convergence and experimental performance.<h3>Suggested Citation</h3>Qing Wang, Sanjeev R. Kulkarni and Sergio Verdú (2009), "Universal Estimation of Information Measures for Analog Sources", Foundations and Trends® in Communications and Information Theory: Vol. 5: No. 3, pp 265-353. http://dx.doi.org/10.1561/0100000021Wed, 27 May 2009 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-019Bit-Interleaved Coded Modulation<h3>Abstract</h3>The principle of coding in the signal space follows directly from Shannon's analysis of waveform Gaussian channels subject to an input constraint. The early design of communication systems focused separately on <em>modulation</em>, namely signal design and detection, and <em>error correcting codes</em>, which deal with errors introduced at the demodulator of the underlying waveform channel. The correct perspective of signal-space coding, although never out of sight of information theorists, was brought back into the focus of coding theorists and system designers by Imai's and Ungerböck's pioneering works on coded modulation. More recently, powerful families of binary codes with a good tradeoff between performance and decoding complexity have been (re-)discovered. Bit-Interleaved Coded Modulation (BICM) is a pragmatic approach combining the best out of both worlds: it takes advantage of the signal-space coding perspective, whilst allowing for the use of powerful families of binary codes with virtually any modulation format. BICM avoids the need for the complicated and somewhat less flexible design typical of coded modulation. 
As a matter of fact, most of today's systems that achieve high spectral efficiency such as DSL, Wireless LANs, WiMax and evolutions thereof, as well as systems based on low spectral efficiency orthogonal modulation, feature BICM, making BICM the <em>de-facto</em> general coding technique for waveform channels.<p>The theoretical characterization of BICM is at the basis of efficient coding design techniques and also of improved BICM decoders, e.g., those based on the belief propagation iterative algorithm and approximations thereof. In this text, we review the theoretical foundations of BICM under the unified framework of error exponents for mismatched decoding. This framework allows an accurate analysis without any particular assumptions on the length of the interleaver or independence between the multiple bits in a symbol. We further consider the sensitivity of the BICM capacity with respect to the signal-to-noise ratio (SNR), and obtain a wideband regime (or low-SNR regime) characterization. We review efficient tools for the error probability analysis of BICM that go beyond the standard approach of considering infinite interleaving and take into consideration the dependency of the coded bit observations introduced by the modulation. We also present bounds that improve upon the union bound in the region beyond the cutoff rate, and are essential to characterize the performance of modern randomlike codes used in concatenation with BICM. Finally, we turn our attention to BICM with iterative decoding, we review extrinsic information transfer charts, the area theorem and code design via curve fitting. We conclude with an overview of some applications of BICM beyond the classical coherent Gaussian channel.</p><h3>Suggested Citation</h3>Albert Guillén i Fàbregas, Alfonso Martinez and Giuseppe Caire (2008), "Bit-Interleaved Coded Modulation", Foundations and Trends® in Communications and Information Theory: Vol. 5: No. 1–2, pp 1-153. 
http://dx.doi.org/10.1561/0100000019Sun, 30 Nov 2008 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-025Channel Coding in the Presence of Side Information<h3>Abstract</h3>In this survey we review the concepts and methods of communication systems equipped with side information. We focus on the channel coding problem, where side information is available to the transmitter in either a causal or non-causal manner, and we also consider the source coding problem with side information at the receiver.<p>We first summarize the main results for channels with causal/non-causal side information and the associated capacity formulas. Next, we consider specific channel models, such as Costa's dirty-paper model, the AWGN channel model with fading and the modulo additive noise channel. Further, we provide applications of the models considered here; in particular, we present the watermarking problem and the Gaussian MIMO broadcast channel. We also consider algorithms for the calculation of the channel's capacity, and practical coding schemes for the communication systems explored in this survey. Finally, we study several related information-theoretic problems and present both the Wyner–Ziv and the Slepian–Wolf problems. The source coding problems and the channel coding problems are presented in a unified manner, and the duality between them is highlighted. We also present extensions of the MAC and broadcast channel models to the case where they are controlled by a state process, and consider several hybrid models, e.g., joint source–channel coding for the Wyner–Ziv source and the Gel'fand–Pinsker channel, and the achievable tradeoff between the message and the state information rates.</p><h3>Suggested Citation</h3>Guy Keshet, Yossef Steinberg and Neri Merhav (2008), "Channel Coding in the Presence of Side Information", Foundations and Trends® in Communications and Information Theory: Vol. 4: No. 6, pp 445-586. 
http://dx.doi.org/10.1561/0100000025Wed, 25 Jun 2008 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-028Topics in Multi-User Information Theory<h3>Abstract</h3>This survey reviews fundamental concepts of multi-user information theory. Starting with typical sequences, the survey builds up knowledge on random coding, binning, superposition coding, and capacity converses by introducing progressively more sophisticated tools for a selection of source and channel models. The problems addressed include: Source Coding; Rate-Distortion and Multiple Descriptions; Capacity-Cost; The Slepian–Wolf Problem; The Wyner-Ziv Problem; The Gelfand-Pinsker Problem; The Broadcast Channel; The Multiaccess Channel; The Relay Channel; The Multiple Relay Channel; and The Multiaccess Channel with Generalized Feedback. The survey also includes a review of basic probability and information theory.<h3>Suggested Citation</h3>Gerhard Kramer (2008), "Topics in Multi-User Information Theory", Foundations and Trends® in Communications and Information Theory: Vol. 4: No. 4–5, pp 265-444. http://dx.doi.org/10.1561/0100000028Wed, 25 Jun 2008 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-008Reliability Criteria in Information Theory and in Statistical Hypothesis Testing<h3>Abstract</h3>This survey is devoted to one of the central problems of Information Theory — the problem of determination of interdependence between coding rate and error probability exponent for different information transmission systems. The overview deals with memoryless systems of finite alphabet setting. 
It presents material complementary to the contents of the most remarkable books in Information Theory, by Feinstein, Fano, Wolfowitz, Gallager, Csiszár and Körner, Kolesnik and Poltyrev, Blahut, and Cover and Thomas, and of the papers by Dobrushin, Gelfand and Prelov.<p>We briefly formulate fundamental notions and results of Shannon theory on reliable transmission via coding and give a survey of results obtained in the last two to three decades by the authors, their colleagues and some other researchers. The paper is written with the goal of making the theory of rate-reliability accessible to a broader circle of readers. We regard this concept as useful for advancing the solution of the noted problem, in parallel with elaborating the notion of reliability–reliability dependence for statistical hypothesis testing and identification.</p><h3>Suggested Citation</h3>Evgueni A. Haroutunian, Mariam E. Haroutunian and Ashot N. Harutyunyan (2008), "Reliability Criteria in Information Theory and in Statistical Hypothesis Testing", Foundations and Trends® in Communications and Information Theory: Vol. 4: No. 2–3, pp 97-263. http://dx.doi.org/10.1561/0100000008Thu, 28 Feb 2008 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-016Cyclic Division Algebras: A Tool for Space–Time Coding<h3>Abstract</h3>Multiple antennas at both the transmitter and receiver ends of a wireless digital transmission channel may increase both data rate and reliability. Reliable high-rate transmission over such channels can only be achieved through Space–Time coding. Rank and determinant code design criteria have been proposed to enhance diversity and coding gain. The special case of the <em>full-diversity</em> criterion requires that the difference of any two distinct codewords has full rank.<p>Extensive work has been done on Space–Time coding, aiming at finding fully diverse codes with high rate. 
Division algebras have been proposed as a new tool for constructing Space–Time codes, since they are non-commutative algebras that naturally yield linear fully diverse codes. Their algebraic properties can thus be further exploited to improve the design of good codes.</p><p>The aim of this work is to provide a tutorial introduction to the algebraic tools involved in the design of codes based on cyclic division algebras. The different design criteria involved will be illustrated, including the constellation shaping, the information lossless property, the non-vanishing determinant property, and the diversity multiplexing trade-off. The final target is to give the complete mathematical background underlying the construction of the Golden code and the other Perfect Space–Time block codes.</p><h3>Suggested Citation</h3>Frédérique Oggier, Jean-Claude Belfiore and Emanuele Viterbo (2007), "Cyclic Division Algebras: A Tool for Space–Time Coding", Foundations and Trends® in Communications and Information Theory: Vol. 4: No. 1, pp 1-95. http://dx.doi.org/10.1561/0100000016Fri, 16 Nov 2007 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-026Majorization and Matrix-Monotone Functions in Wireless Communications<h3>Abstract</h3>This short tutorial presents two mathematical techniques namely <em>Majorization Theory</em> and <em>Matrix-Monotone Functions</em>, reviews their basic definitions and describes their concepts clearly with many illustrative examples. In addition to this tutorial, new results are presented with respect to Schur-convex functions and regarding the properties of matrix-monotone functions.<p>The techniques are applied to solve communication and information theoretic problems in wireless communications. 
The impact of spatial correlation in multiple antenna systems is characterized for many important performance measures, e.g., average mutual information, outage probability, error performance, minimum $\frac{E_{b}}{N_{0}}$ and wideband slope, zero-outage capacity, and capacity region. The impact of user distribution in cellular systems is characterized for different scenarios, including perfectly informed transmitters and receivers, regarding, e.g., the average sum rate, the outage sum rate, and maximum throughput. Finally, a unified framework for the performance analysis of multiple antenna systems is developed based on matrix-monotone functions. The optimization of transmit strategies for multiple antennas is carried out by optimization of matrix-monotone functions. The results within this framework resemble and complement the various results on optimal transmit strategies in single-user and multiple-user multiple-antenna systems.</p><h3>Suggested Citation</h3>Eduard Jorswieck and Holger Boche (2007), "Majorization and Matrix-Monotone Functions in Wireless Communications", Foundations and Trends® in Communications and Information Theory: Vol. 3: No. 6, pp 553-701. http://dx.doi.org/10.1561/0100000026Mon, 09 Jul 2007 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-018MIMO Transceiver Design via Majorization Theory<h3>Abstract</h3>Multiple-input multiple-output (MIMO) channels provide an abstract and unified representation of different physical communication systems, ranging from multi-antenna wireless channels to wireless digital subscriber line systems. They have the key property that several data streams can be simultaneously established.<p>In general, the design of communication systems for MIMO channels is quite involved (if one can assume the use of sufficiently long and good codes, then the problem formulation simplifies drastically). 
The first difficulty lies in how to measure the global performance of such systems, given the tradeoff in performance among the different data streams. Once the problem formulation is defined, the resulting mathematical problem is typically too complicated to be optimally solved as it is a matrix-valued nonconvex optimization problem. This design problem has been studied for the past three decades (the first papers dating back to the 1970s), motivated initially by cable systems and more recently by wireless multi-antenna systems. The approach was to choose a specific global measure of performance and then to design the system accordingly, either optimally or suboptimally, depending on the difficulty of the problem.</p><p>This text presents an up-to-date unified mathematical framework for the design of point-to-point MIMO transceivers with channel state information at both sides of the link according to an <em>arbitrary</em> cost function as a measure of the system performance. In addition, the framework embraces the design of systems with given individual performance on the data streams.</p><p>Majorization theory is the underlying mathematical theory on which the framework hinges. It allows the transformation of the originally complicated matrix-valued nonconvex problem into a simple scalar problem. In particular, the <em>additive</em> majorization relation plays a key role in the design of <em>linear</em> MIMO transceivers (i.e., a linear precoder at the transmitter and a linear equalizer at the receiver), whereas the <em>multiplicative</em> majorization relation is the basis for <em>nonlinear decision-feedback</em> MIMO transceivers (i.e., a linear precoder at the transmitter and a decision-feedback equalizer at the receiver).</p><h3>Suggested Citation</h3>Daniel P. Palomar and Yi Jiang (2007), "MIMO Transceiver Design via Majorization Theory", Foundations and Trends® in Communications and Information Theory: Vol. 3: No. 4-5, pp 331-551. 
http://dx.doi.org/10.1561/0100000018Sun, 10 Jun 2007 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-013Information Combining<h3>Abstract</h3>Consider coded transmission over a binary-input symmetric memoryless channel. The channel decoder uses the noisy observations of the code symbols to reproduce the transmitted code symbols. Thus, it combines the information about individual code symbols to obtain an overall information about each code symbol, which may be the reproduced code symbol or its a-posteriori probability. This tutorial addresses the problem of "information combining" from an information-theory point of view: the decoder combines the mutual information between channel input symbols and channel output symbols (observations) to the mutual information between one transmitted symbol and all channel output symbols. The actual value of the combined information depends on the statistical structure of the channels. However, it can be upper and lower bounded for the assumed class of channels. This book first introduces the concept of mutual information profiles and revisits the well-known Jensen's inequality. Using these tools, the bounds on information combining are derived for single parity-check codes and for repetition codes. The application of the bounds is illustrated in four examples: information processing characteristics of coding schemes, including extrinsic information transfer (EXIT) functions; design of multiple turbo codes; bounds for the decoding threshold of low-density parity-check codes; EXIT function of the accumulator.<h3>Suggested Citation</h3>Ingmar Land and Johannes Huber (2006), "Information Combining", Foundations and Trends® in Communications and Information Theory: Vol. 3: No. 3, pp 227-330. 
http://dx.doi.org/10.1561/0100000013Thu, 30 Nov 2006 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-009Performance Analysis of Linear Codes under Maximum-Likelihood Decoding: A Tutorial<h3>Abstract</h3>This article is focused on the performance evaluation of linear codes under optimal maximum-likelihood (ML) decoding. Though the ML decoding algorithm is prohibitively complex for most practical codes, their performance analysis under ML decoding makes it possible to predict their performance without resorting to computer simulations. It also provides a benchmark for testing the sub-optimality of iterative (or other practical) decoding algorithms. This analysis also establishes the goodness of linear codes (or ensembles), determined by the gap between their achievable rates under optimal ML decoding and information-theoretic limits. In this article, upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds with a focus on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we address bounds based on de Caen's inequality and their improvements, and also consider sphere-packing bounds with their recent improvements targeting codes of moderate block lengths.<h3>Suggested Citation</h3>Igal Sason and Shlomo Shamai (2006), "Performance Analysis of Linear Codes under Maximum-Likelihood Decoding: A Tutorial", Foundations and Trends® in Communications and Information Theory: Vol. 3: No. 1–2, pp 1-222. http://dx.doi.org/10.1561/0100000009Fri, 07 Jul 2006 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-010QoS-Based Resource Allocation and Transceiver Optimization<h3>Abstract</h3>The control and reduction of multiuser interference is a fundamental problem in wireless communications. 
In order to increase the spectral efficiency and to provide individual quality-of-service (QoS), it is necessary to jointly optimize the power allocation together with possible receive and transmit strategies. This often leads to complex and difficult-to-handle problem formulations. There are many examples in the literature where the special structure of the problem is exploited in order to solve special cases of this problem (e.g. multiuser beamforming or CDMA). So it is desirable to have a general theory, which can be applied to many practical QoS measures, like rates, delay, BER, etc. These measures can all be related to the signal-to-interference ratio (SIR) or the signal-to-interference-plus-noise ratio (SINR). This leads to the problem of SIR and SINR balancing, which is fundamental for many problems in communication theory.<p>In this text we derive a comprehensive theoretical framework for SIR balancing, with and without noise. The theory considers the possible use of receive strategies (e.g. interference filtering or channel assignment), which can be included in the model in an abstract way. Power allocation and receiver design are mutually interdependent, thus joint optimization strategies are derived. The main purpose of this text is to provide a better understanding of interference balancing and the characterization of the QoS feasible region. We also provide a generic algorithmic framework, which may serve as a basis for the development of new resource allocation algorithms.</p><p>We study different interference models, which are general enough to be valid for a wide range of system designs, but which are also specific enough to facilitate efficient algorithmic solutions. One important class of interference functions is based on axioms, which characterize the impact of the power allocation on the interference received by the individual users. 
Another class of interference functions is based on non-negative coupling matrices, which may be parameter-dependent in order to model the possible impact of receive strategies. Both models are studied with and without noise. We analyze the resulting QoS feasible region (the set of jointly achievable QoS) and discuss different allocation strategies for min-max fairness and sum-power minimization. Finally we study geometrical properties of the QoS region, which can be shown to be convex for log-convex interference functions.</p><h3>Suggested Citation</h3>Martin Schubert and Holger Boche (2006), "QoS-Based Resource Allocation and Transceiver Optimization", Foundations and Trends® in Communications and Information Theory: Vol. 2: No. 6, pp 383-529. http://dx.doi.org/10.1561/0100000010Fri, 07 Jul 2006 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-007IINetwork Coding Theory Part II: Multiple Source<h3>Abstract</h3>Store-and-forward had been the predominant technique for transmitting information through a network until its optimality was refuted by network coding theory. Network coding offers a new paradigm for network communications and has generated abundant research interest in information and coding theory, networking, switching, wireless communications, cryptography, computer science, operations research, and matrix theory.<p>In this issue we review network coding theory for the scenario when there are multiple source nodes each intending to transmit to a different set of destination nodes.</p><p>A companion issue reviews the foundational work that has led to the development of network coding theory and discusses the theory for the transmission from a single source node to other nodes in the network.</p><h3>Suggested Citation</h3>Raymond W. Yeung, Shuo-Yen Robert Li, Ning Cai and Zhen Zhang (2006), "Network Coding Theory Part II: Multiple Source", Foundations and Trends® in Communications and Information Theory: Vol. 2: No. 5, pp 330-381. 
http://dx.doi.org/10.1561/0100000007IIFri, 07 Jul 2006 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-007INetwork Coding Theory Part I: Single Source<h3>Abstract</h3>Store-and-forward had been the predominant technique for transmitting information through a network until its optimality was refuted by network coding theory. Network coding offers a new paradigm for network communications and has generated abundant research interest in information and coding theory, networking, switching, wireless communications, cryptography, computer science, operations research, and matrix theory.<p>We review the foundational work that has led to the development of network coding theory and discuss the theory for the transmission from a single source node to other nodes in the network. A companion issue discusses the theory when there are multiple source nodes each intending to transmit to a different set of destination nodes.</p><h3>Suggested Citation</h3>Raymond W. Yeung, Shuo-Yen Robert Li, Ning Cai and Zhen Zhang (2006), "Network Coding Theory Part I: Single Source", Foundations and Trends® in Communications and Information Theory: Vol. 2: No. 4, pp 241-329. http://dx.doi.org/10.1561/0100000007IFri, 07 Jul 2006 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-006Toeplitz and Circulant Matrices: A Review<h3>Abstract</h3>The fundamental theorems on the asymptotic behavior of eigenvalues, inverses, and products of banded Toeplitz matrices and Toeplitz matrices with absolutely summable elements are derived in a tutorial manner. Mathematical elegance and generality are sacrificed for conceptual simplicity and insight in the hope of making these results available to engineers lacking either the background or endurance to attack the mathematical literature on the subject. 
By limiting the generality of the matrices considered, the essential ideas and results can be conveyed in a more intuitive manner without the mathematical machinery required for the most general cases. As an application the results are applied to the study of the covariance matrices and their factors of linear models of discrete time random processes.<h3>Suggested Citation</h3>Robert M. Gray (2006), "Toeplitz and Circulant Matrices: A Review", Foundations and Trends® in Communications and Information Theory: Vol. 2: No. 3, pp 155-239. http://dx.doi.org/10.1561/0100000006Tue, 31 Jan 2006 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-005Geometric Programming for Communication Systems<h3>Abstract</h3>Geometric Programming (GP) is a class of nonlinear optimization with many useful theoretical and computational properties. Over the last few years, GP has been used to solve a variety of problems in the analysis and design of communication systems in several 'layers' in the communication network architecture, including information theory problems, signal processing algorithms, basic queuing system optimization, many network resource allocation problems such as power control and congestion control, and cross-layer design. We also start to understand why, in addition to how, GP can be applied to a surprisingly wide range of problems in communication systems. These applications have in turn spurred new research activities on GP, especially generalizations of GP formulations and development of distributed algorithms to solve GP in a network. This text provides both an in-depth tutorial on the theory, algorithms, and modeling methods of GP, and a comprehensive survey on the applications of GP to the study of communication systems.<h3>Suggested Citation</h3>Mung Chiang (2005), "Geometric Programming for Communication Systems", Foundations and Trends® in Communications and Information Theory: Vol. 2: No. 1–2, pp 1-154. 
http://dx.doi.org/10.1561/0100000005Fri, 15 Jul 2005 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-004Information Theory and Statistics: A Tutorial<h3>Abstract</h3>This tutorial is concerned with applications of information theory concepts in statistics, in the finite alphabet setting. The information measure known as information divergence, Kullback-Leibler distance, or relative entropy plays a key role, often with a geometric flavor as an analogue of squared Euclidean distance, as in the concepts of I-projection, I-radius and I-centroid. The topics covered include large deviations, hypothesis testing, maximum likelihood estimation in exponential families, analysis of contingency tables, and iterative algorithms with an "information geometry" background. Also, an introduction is provided to the theory of universal coding, and to statistical inference via the minimum description length principle motivated by that theory.<h3>Suggested Citation</h3>I. Csiszár and P.C. Shields (2004), "Information Theory and Statistics: A Tutorial", Foundations and Trends® in Communications and Information Theory: Vol. 1: No. 4, pp 417-528. http://dx.doi.org/10.1561/0100000004Wed, 15 Dec 2004 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-003Algebraic Number Theory and Code Design for Rayleigh Fading Channels<h3>Abstract</h3>Algebraic number theory is having an increasing impact on code design for many different coding applications, such as single-antenna fading channels and, more recently, MIMO systems.<p>Extensive work has been done on single-antenna fading channels, and algebraic lattice codes have proven to be an effective tool.
The general framework has been settled over the last ten years, and many explicit code constructions based on algebraic number theory are now available.</p><p>The aim of this work is to provide both an overview of algebraic lattice code designs for Rayleigh fading channels and a tutorial introduction to algebraic number theory. The basic facts of this mathematical field will be illustrated by many examples and by the use of free computer algebra software, in order to make the field more accessible to a large audience.</p><h3>Suggested Citation</h3>F. Oggier and E. Viterbo (2004), "Algebraic Number Theory and Code Design for Rayleigh Fading Channels", Foundations and Trends® in Communications and Information Theory: Vol. 1: No. 3, pp 333-415. http://dx.doi.org/10.1561/0100000003Tue, 14 Dec 2004 00:00:00 +0100http://nowpublishers.com/article/Details/CIT-002Transmission and Reception with Multiple Antennas: Theoretical Foundations<h3>Abstract</h3>Wireless communication system design was until recently thought to have been limited in practice by time and bandwidth. The discovery that space, obtained by increasing the number of transmit and receive antennas, can also generate degrees of freedom, and hence expand the range of available design choices, offers system designers important new opportunities. This paper focuses on the main aspects of single-user multiple-antenna theory, with the goal of presenting a comprehensive, yet compact, survey, emphasizing its mathematical aspects. After describing channel models, we compute the capacities they achieve, briefly overview "space-time" codes, and describe how suboptimum architectures can be employed to simplify the receiver.<h3>Suggested Citation</h3>E. Biglieri and G. Taricco (2004), "Transmission and Reception with Multiple Antennas: Theoretical Foundations", Foundations and Trends® in Communications and Information Theory: Vol. 1: No. 2, pp 183-332.
http://dx.doi.org/10.1561/0100000002Wed, 01 Sep 2004 00:00:00 +0200http://nowpublishers.com/article/Details/CIT-001Random Matrix Theory and Wireless Communications<h3>Abstract</h3>Random matrix theory has found many applications in physics, statistics and engineering since its inception. Although early developments were motivated by practical experimental problems, random matrices are now used in fields as diverse as the Riemann hypothesis, stochastic differential equations, condensed matter physics, statistical physics, chaotic systems, numerical linear algebra, neural networks, multivariate statistics, information theory, signal processing and small-world networks. This article provides a tutorial on random matrices, offering an overview of the theory and bringing together in one source the most significant results obtained recently. Furthermore, the application of random matrix theory to the fundamental limits of wireless communication channels is described in depth.<h3>Suggested Citation</h3>Antonia M. Tulino and Sergio Verdú (2004), "Random Matrix Theory and Wireless Communications", Foundations and Trends® in Communications and Information Theory: Vol. 1: No. 1, pp 1-182. http://dx.doi.org/10.1561/0100000001Mon, 28 Jun 2004 00:00:00 +0200
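As a self-contained illustration of one concrete fact covered in the Toeplitz and circulant matrices review listed above, the eigenvalues of an n-by-n circulant matrix are exactly the DFT of its first column. The following minimal NumPy sketch (not code from any of the monographs; the `circulant` helper is written here for illustration) checks this identity numerically:

```python
import numpy as np

def circulant(c):
    """Build the circulant matrix whose first column is c.

    Row i, column j holds c[(i - j) mod n], so each row is a
    cyclic shift of the row above it.
    """
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

# A symmetric first column (c[1] == c[-1]) makes the matrix symmetric,
# so its eigenvalues are real.
c = np.array([4.0, 1.0, 0.0, 1.0])
C = circulant(c)

eig_direct = np.sort(np.linalg.eigvals(C).real)  # eigenvalues computed directly
eig_dft = np.sort(np.fft.fft(c).real)            # DFT of the first column

print(np.allclose(eig_direct, eig_dft))  # prints True
```

Because the eigenvectors are the DFT basis regardless of c, products and inverses of circulant matrices reduce to pointwise operations on these DFT values; the review develops the asymptotic analogue of this picture for Toeplitz matrices.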