Administrivia


Course Outline

The emergence of AI systems and their ubiquitous adoption for automating tasks that involve humans in critical application domains (e.g., autonomous vehicles, medical assistants/devices, manufacturing, agriculture, and smart buildings) make it paramount that we be able to place trust in these technologies. In a broad sense, a trustworthy AI system must be dependable (i.e., ensure the safety, resilience, robustness, and security of itself and its operational environment) and reasonable (i.e., provide the reasoning behind the decisions/actions it produces). Indeed, the absence of these properties not only makes people reluctant to deploy the technology in the field despite successful demonstrations, but also leaves systems vulnerable to security attacks and crashes that ultimately impact human safety.

As Schneier observes, computers have traditionally outperformed humans only at speed, scale, and scope, whereas humans excelled at thinking, reasoning, adapting, and understanding. Artificial intelligence (AI) changes this landscape: computers can now infer relationships, discover patterns, and react and adapt to changes, while retaining their strengths in speed, scale, and scope. Although AI applications long remained impractical because of their high computational cost, recent advances in computing (high-speed networks, big-data storage, and raw computation speed) have ushered in a new era of smart systems.

Traditional dependability and interpretability techniques are no longer viable for machine learning (ML)- and artificial intelligence (AI)-driven applications. Such applications are prone to (i) existing dependability issues, such as software/hardware failures and bugs; (ii) uncertainty in the data (both training and operational/inference data), leading to biases and corner-case inference failures; and (iii) uncertainty in the machine learning models themselves and in their composition with other ML models or with the rest of the system.

Designing a dependable and interpretable system is an active area of research. In this course, we will discuss recent techniques for designing dependable and interpretable ML/AI, focusing especially on the safety, security, and reliability aspects of emerging applications. The course will draw inspiration from emerging safety-critical artificial intelligence applications such as self-driving ground and aerial vehicles (e.g., Waymo’s self-driving cars or Boeing’s autonomous systems), health applications such as medical assistants (e.g., IBM Watson) and surgical robots (e.g., RAVEN II), and machine-learning-driven computer systems (e.g., UIUC’s Symphony). Through innovative projects and research paper presentations, students will learn the challenges and opportunities in designing and validating such ML-driven autonomous systems.

Dependability framework

Prerequisites

Basic probability (ECE 313 or equivalent), machine learning (ECE 498DS, CS 446, or equivalent), and basic computer programming skills (e.g., Python, R, or Matlab) are essential.

About the Class

The class will begin with lectures intended to build up common knowledge and grounding, and will then transition to discussions of both seminal and more recent research papers that outline new challenges and opportunities in dependable ML/AI. In addition:

  • We will have guest lectures from industry and academia.
  • For classes with student-led presentations, students who are not presenting are expected to write short reviews of the papers being presented in that session.
  • Students are expected to complete data-driven assignments/projects focused on resilience assessment and design.

Evaluation

We will compute the final grade using the following table:

Activity                        Grade   Details
Paper Presentation + Reviews    25%
In-class Assignments            10%
Final Project                   40%
Class Participation             10%     May include quizzes
Final Exam (take-home)          15%

Paper Presentation & Reviews

For Reviewers

  • Description: 2 pages max. One paragraph on the core idea of the paper, followed by a list of pros and cons of the approach and any questions/criticisms/thoughts about the paper.
  • Grading Criteria: argumentative critique (pros/cons) and creative comments on how to address issues or improve the paper.
  • Due: the night before class at 10 p.m. A submission link will be posted on Piazza.

For Presenters

  • Sign-up: TBD
  • Description: 10-12 slides max (20 min for the paper, 5 min for critique, 5 min for questions): 2-3 slides on motivation and background, 3-5 slides on the core ideas of the paper, 2-3 slides on experimental data, and 3-5 slides on your thoughts/criticisms/questions/discussion points about the paper. Include slides summarizing the Piazza discussion of the paper.