ECE 598 - Representation of Information (Spring 2024)

Instructor: Lav Varshney (office hours: T 12:30-1:30 in 314 CSL and by appointment)

Teaching Assistant: Xiyue Zhu (office hours: W 4:00-5:00 on zoom, https://illinois.zoom.us/j/86393907066?pwd=SXpuckE0b09TdHROeWVVYnl2QkZXQT09)

Lectures: Tuesdays and Thursdays, 11:00am, 3017 Electrical and Computer Engineering Building

Catalog Description: Learning representations are critical in many branches of artificial intelligence, especially in recent applications of generative AI. At the same time, several information-theoretic principles are foundational to the representation of information. This course covers relevant information-theoretic topics in multiterminal source coding, multiterminal channel coding, universal prediction, and associative memories, as well as mathematical and computational foundations of learning representations arising in generative AI models, including variational autoencoders, autoregressive models such as Transformers, normalizing flow models, information lattice learning, invariant risk minimization, and diffusion models. Emerging connections between information theory and AI will be discussed throughout. Governance and social responsibility in generative AI will also be briefly discussed.

Prerequisites: ECE 563 and ECE 544

Textbook: T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed., Wiley, 2006, as well as many further readings and lecture notes.

Grading: homework (25%), midterm exam [take-home] (20%), final exam [take-home] (15%), project (35%), social responsibility essay (5%)

Syllabus


Homework [submit on Gradescope]

Exams [submit on Gradescope]

Project [submit on Gradescope]

Social Responsibility Essay [submit on Gradescope]


Course Schedule

Date / Topic / Readings (see also lecture slides for any embedded things) / Learning Objectives

1/16
Introduction to Information Representation
[slides]
[objectives]

1/18
Review of Lossless Source Coding
[slides]
[handwritten]
Chapters 3, 5, and 13 of Cover and Thomas
[objectives]

1/23
Review of Rate-Distortion Theory
[handwritten]
Chapter 10 of Cover and Thomas
Chapter 8 of Yeung
[objectives]

1/25
Source Coding with Coded Side Information
[handwritten]
Chapter 15.8 of Cover and Thomas
[objectives]

1/30
Source Coding with Coded Side Information
[handwritten]
Chapter 15.9 of Cover and Thomas
[objectives]

2/1
Information Bottleneck
[handwritten]
Z. Goldfeld and Y. Polyanskiy, "The information bottleneck problem and its applications in machine learning," IEEE Journal on Selected Areas in Information Theory, vol. 1, no. 1, pp. 19-38, May 2020.
A. Zaidi, I. Estella-Aguerri, and S. Shamai, "On the information bottleneck problems: Models, connections, applications, and information-theoretic views," Entropy, vol. 22, no. 2, 151, 2020.
[objectives]

2/6
Multiple Access Channel
[handwritten]
Chapters 15.1 and 15.3 of Cover and Thomas
[objectives]

2/8
Universal Multiple Access Channel
[handwritten]
Y.-S. Liu and B. L. Hughes, "A new universal random coding bound for the multiple-access channel," IEEE Transactions on Information Theory, vol. 42, no. 2, pp. 376-386, March 1996.
[objectives]

2/13
Invariant Causal Prediction and Invariant Risk Minimization
[handwritten]
https://www.inference.vc/invariant-risk-minimization/
[objectives]

2/20
Metalearning
[slides]
[objectives]

2/22
Normalizing Flows
[slides]
[handwritten]
[objectives]

2/27
Applications of Normalizing Flows and Autoencoders
[slides]
[handwritten]
[objectives]

2/29
Variational Autoencoders
D. P. Kingma and M. Welling, "An introduction to variational autoencoders," Foundations and Trends in Machine Learning, vol. 12, no. 4, pp. 307-392, 2019.