Monday, 8 July, 08:30 | Muriel Médard |

Tuesday, 9 July, 08:30 | Alice Guionnet |

Wednesday, 10 July, 08:30 | Shannon Lecturer Erdal Arıkan |

Thursday, 11 July, 08:30 | Yoshua Bengio |

Friday, 12 July, 08:30 | George V. Moustakides |

This talk gives an account of the original ideas that motivated the development of polar coding and discusses some new ideas for exploiting channel polarization.

Erdal Arıkan was born in Ankara, Turkey, in 1958. He received the B.S. degree from the California Institute of Technology, Pasadena, CA, in 1981, and the S.M. and Ph.D. degrees from the Massachusetts Institute of Technology, Cambridge, MA, in 1982 and 1985, respectively, all in Electrical Engineering. He served as an assistant professor at the University of Illinois at Urbana-Champaign during 1986-1987. Since September 1987, he has been with the Electrical-Electronics Engineering Department of Bilkent University, Ankara, Turkey, where he is a Professor. He is the recipient of the 2010 IEEE Information Theory Society Best Paper Award, the 2013 IEEE W. R. G. Baker Award, the IEEE Turkey Section 2017 Life-Long Achievement Award, and the 2018 IEEE Hamming Medal. Arıkan is an IEEE Fellow.

In this talk, we consider two different aspects of randomness as a design principle. In the first part, we overview the intentional use of randomness for network code constructions. Random linear network coding (RLNC) allows for endless composability and, in erasure networks, achieves network capacity. In the second part of the talk, we exploit the randomness of noise in channels for decoding. Guessing random additive noise to decode (GRAND) does not use the structure of the code, but instead considers the random properties of the noise itself. GRAND provides a maximum a posteriori decoder with any code, with attractive complexity properties for many channels.
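The core GRAND idea described above can be illustrated with a minimal sketch (not from the talk): rather than exploiting code structure, the decoder guesses putative noise sequences in order of decreasing likelihood, subtracts each from the received word, and stops at the first result that is a codeword. The toy 4-bit repetition-style codebook and the restriction to a binary symmetric channel are illustrative assumptions.

```python
from itertools import combinations

# Hypothetical toy codebook: a 4-bit code with two codewords.
CODE = {(0, 0, 0, 0), (1, 1, 1, 1)}

def grand_decode(y, max_weight=4):
    """Guess noise patterns from most to least likely and return the
    first codeword found. For a BSC with crossover probability p < 1/2,
    likelihood order is simply increasing Hamming weight."""
    n = len(y)
    for w in range(max_weight + 1):
        for flips in combinations(range(n), w):
            # Invert the guessed noise: flip the chosen bit positions.
            candidate = tuple(bit ^ 1 if i in flips else bit
                              for i, bit in enumerate(y))
            if candidate in CODE:
                return candidate  # first hit = most likely codeword
    return None

# One bit flipped by the channel; decoding recovers the all-ones word.
print(grand_decode((1, 0, 1, 1)))  # (1, 1, 1, 1)
```

Because the guessing order matches the channel's noise statistics, the first codeword found is a maximum-likelihood decision, regardless of the code used, which is what makes the approach code-agnostic.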

Muriel Médard is the Cecil H. Green Professor in the Electrical Engineering and Computer Science (EECS) Department at MIT and leads the Network Coding and Reliable Communications Group at the Research Laboratory for Electronics at MIT. She has co-founded three companies to commercialize network coding, CodeOn, Steinwurf and Chocolate Cloud. She has served as editor for many publications of the Institute of Electrical and Electronics Engineers (IEEE), of which she was elected Fellow, and she has served as Editor in Chief of the IEEE Journal on Selected Areas in Communications. She was President of the IEEE Information Theory Society in 2012, and served on its board of governors for eleven years. She has served as technical program committee co-chair of many of the major conferences in information theory, communications and networking. She received the 2009 IEEE Communication Society and Information Theory Society Joint Paper Award, the 2009 William R. Bennett Prize in the Field of Communications Networking, the 2002 IEEE Leon K. Kirchmayer Prize Paper Award, the 2018 ACM SIGCOMM Test of Time Paper Award and several conference paper awards. She was co-winner of the MIT 2004 Harold E. Edgerton Faculty Achievement Award, received the 2013 EECS Graduate Student Association Mentor Award and served as Housemaster for seven years. In 2007 she was named a Gilbreth Lecturer by the U.S. National Academy of Engineering. She received the 2016 IEEE Vehicular Technology James Evans Avant Garde Award, the 2017 Aaron Wyner Distinguished Service Award from the IEEE Information Theory Society and the 2017 IEEE Communications Society Edwin Howard Armstrong Achievement Award.

The typical behaviour of random matrices is well understood in a rather large range of models, including deformed models of the form signal plus noise. Estimating the probability of deviating from these typical behaviours is a much more challenging question that we shall discuss in this talk.

Alice Guionnet entered the École Normale Supérieure in 1989. She earned her PhD in 1995 in probability theory under the supervision of Gérard Ben Arous, working on spin glass dynamics and aging. She is well known for her work on random matrices. She has established surprising links with various other fields of mathematics, such as spectral theory, operator algebras and free probability, which led her to several outstanding results. Her "single ring theorem" is a real masterpiece of analysis. One of the most important contributions of Alice Guionnet might be a series of works in which she founded the theory of "Matrix Models". She has received a number of prestigious prizes, such as the Loève Prize, the silver medal of the CNRS and the Blaise Pascal Medal, showing her impressive impact beyond probability theory. After being an invited speaker at the International Congress of Mathematicians and at the International Congress of Mathematical Physics, she was elected to the French Academy of Sciences in 2017.

One of the central goals of deep learning is to learn good representations, which ideally disentangle the underlying abstract explanatory factors of the observed data. How the goodness of these representations should be defined is an active and open area of research, but an interesting hypothesis is that we should try to define this mostly in the high-level representation space itself, rather than in data space, because the interesting factors (e.g., the word sequence, in speech) may represent very few bits out of the actual sensory signal (e.g., the acoustic speech signal). Yet, standard machine learning objectives (like maximum likelihood) are defined in the data space. This has stimulated the exploration of training objectives based on information-theoretical ideas, like mutual information and entropy. Interestingly, the tools of deep learning, in particular adversarial networks, are being used to learn these objective functions: the basic idea is that dependency can be measured by how well a classifier can separate samples from a joint distribution from samples of the marginals. Whereas classical non-parametric methods struggle to estimate quantities like entropy or mutual information when the variables are high-dimensional, a new wave of estimators has been proposed based on neural networks, which also raise other interesting questions and open new opportunities for learning high-level representations.
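The classifier-based idea in the abstract above can be sketched in a few lines (an illustrative toy, not the talk's method): train a discriminator to tell pairs drawn from the joint distribution apart from pairs drawn from the product of marginals (simulated by shuffling one coordinate). Accuracy above chance signals statistical dependence. The synthetic data, the tiny logistic-regression classifier and the interaction feature are all assumptions made for the sketch; the talk's estimators use neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated pair (X, Y): Y = X + noise, so the joint distribution
# differs from the product of the marginals. (Illustrative data.)
n = 4000
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)

# "Joint" samples pair each x with its own y; "marginal" samples pair
# x with a shuffled y, simulating draws from the product of marginals.
joint = np.column_stack([x, y])
marg = np.column_stack([x, rng.permutation(y)])

# A tiny logistic-regression discriminator trained by gradient descent.
X = np.vstack([joint, marg])
X = np.column_stack([X, X[:, 0] * X[:, 1]])   # interaction feature x*y
labels = np.concatenate([np.ones(n), np.zeros(n)])
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(400):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= 0.5 * (X.T @ (p - labels)) / len(labels)
    b -= 0.5 * np.mean(p - labels)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = np.mean((p > 0.5) == labels)
print(f"classifier accuracy: {acc:.2f}")  # above 0.5 => dependence detected
```

The classifier's output also estimates the density ratio between joint and product distributions, which is exactly the quantity inside the mutual information integral; this is what makes discriminator-based estimators like those mentioned in the abstract possible.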

Recognized as one of the world’s leading experts in artificial intelligence and a pioneer in deep learning, Yoshua Bengio studied in Montreal, earned his Ph.D. in computer science from McGill University in 1991, and did post-doctoral studies at MIT.

Since 1993, he has been a professor in the Department of Computer Science and Operations Research at the Université de Montréal, and he holds the Canada Research Chair in Statistical Learning Algorithms. In addition, he is Scientific Director of IVADO and Mila, the Quebec Artificial Intelligence Institute, the world’s largest deep learning academic research group.

An Officer of the Order of Canada, he is also a Fellow of the Royal Society of Canada, the recipient of the Marie-Victorin Prize in 2017, and was named Radio-Canada’s Scientist of the Year for 2017. In 2018, he was awarded the 50th anniversary medal of Quebec’s Ministry of International Relations and La Francophonie.

Yoshua Bengio is one of the world’s most cited computer scientists, thanks to his three books and more than 500 publications. His h-index stands at 131, with more than 149,000 Google Scholar citations. His ambition is to understand the principles that lead to intelligence through learning, as well as promote the development of artificial intelligence for the benefit of all.

The problem of rapid detection of a change in the statistical behavior of an observed data sequence finds application in a plethora of scientific fields. Quality control, detection of seismic wave onset time, epidemic detection, portfolio monitoring, health monitoring of structures, fraud detection, spectrum monitoring, and attack and router failure detection in networks are only a few examples where quickest (sequential) change detection can be adopted to mathematically formulate the corresponding application problem. In the first part of our presentation we introduce the notion of a stopping time, which is the mathematical entity we employ for implementing a sequential detector. We continue with an effort to understand the various change-imposing mechanisms that exist in nature, and we provide a high-level model for their statistical description. This in turn allows us to define proper performance measures and related optimization problems which, when solved, give rise to optimum detection strategies. In the second part, we give an overview of the most popular metrics proposed in the literature along with their optimum detectors and discuss their applicability to real problems. Finally, in the last part of our presentation we focus on modern versions of the quickest change detection problem. In particular, we consider data that are acquired from multiple sources with the change occurring simultaneously either in all or in an unknown subset of the nodes. We also consider different classes of sequential detectors, namely centralized, decentralized and the more recently investigated distributed schemes, and discuss their corresponding optimality properties.
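The stopping-time notion introduced above can be made concrete with a minimal sketch of the classical CUSUM detector (a standard textbook example, chosen here for illustration; the Gaussian pre- and post-change models and the threshold are assumptions of the sketch, not parameters from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

# Pre-change: N(0,1) for 200 samples; post-change: N(1,1) thereafter.
obs = np.concatenate([rng.normal(0.0, 1.0, 200),
                      rng.normal(1.0, 1.0, 300)])

def cusum_stopping_time(x, mu0=0.0, mu1=1.0, threshold=8.0):
    """CUSUM stopping time: stop the first time the running maximum of
    the cumulative log-likelihood ratio exceeds the threshold."""
    s = 0.0
    for t, xt in enumerate(x, start=1):
        # Per-sample log-likelihood ratio of N(mu1,1) versus N(mu0,1).
        llr = (mu1 - mu0) * (xt - (mu0 + mu1) / 2.0)
        s = max(0.0, s + llr)   # reflect at zero: forget old evidence
        if s >= threshold:
            return t            # the stopping time: declare a change
    return None                 # no alarm within the observed horizon

print("alarm raised at time", cusum_stopping_time(obs))
```

The reflection at zero is what makes the detector sequential and adaptive: evidence against a change is discarded, so the statistic reacts quickly once the post-change regime begins, while the threshold trades detection delay against the false alarm rate.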

George V. Moustakides received the diploma in Electrical and Mechanical Engineering from the National Technical University of Athens, Greece, in 1979, the MSE in Systems Engineering from the University of Pennsylvania in 1980, and the M.Sc. and Ph.D. in Electrical Engineering and Computer Science from Princeton University in 1983. Since 1988 he has been with the University of Patras, Greece, initially with the Department of Computer Engineering and Informatics and afterwards with the Department of Electrical and Computer Engineering. In 2017 he also joined the Computer Science Department at Rutgers University. In the past he held various visiting or long-term appointments with INRIA, Princeton University, the University of Pennsylvania, Columbia University, the University of Maryland, the Georgia Institute of Technology, the University of Southern California, the University of Illinois at Urbana-Champaign and Rutgers University. His interests include sequential detection, statistical signal processing and machine learning. From 2011 to 2014 and from 2016 to 2018 he served as Associate Editor for the IEEE Transactions on Information Theory.