While information theory was initially developed for the study of reliable communication systems, it has found broader applications in areas as diverse as biology, cryptography, and machine learning. More recently, motivated in part by the emergence of distributed cyber-physical systems, information theory has also been used to develop the basis of a theory of coordination over networks. The central idea behind the approach is to view the transmission of data between nodes in a network as a means to coordinate their behaviors, rather than as an end in itself. From a mathematical perspective, the question is not how much data rate is required to guarantee a small probability of decoding error, but how much data rate is needed to ensure that the joint behavior of the nodes, modeled as a joint probability distribution, approximates a targeted joint behavior. Promising applications of this theory include the design of efficient algorithms and protocols to coordinate networks of distributed autonomous agents.
In this talk, we will discuss recent results regarding the design of explicit coding schemes for coordination in two-node and three-node networks. In particular, we will highlight how the problem of coordinating behaviors over networks relates to the problem of simulating processes at the output of noisy channels, also known as channel resolvability. We will leverage this relation to guide the design of practical coding schemes based on polar codes, and we will illustrate our results in the context of the coordination of autonomous robots patrolling a border.