Announcements
22-Aug Webpage Up!

Course Description

The accelerated integration of Artificial Intelligence (AI) and Machine Learning (ML) into critical societal systems and infrastructures (such as autonomous vehicles, healthcare, manufacturing, agriculture, and smart buildings), coupled with the explosive growth of Generative Pretrained Transformers (GPTs) and similar models with all their attendant trust and resilience issues, has brought the dependability of such models to the forefront. An AI/ML system must be dependable, ensuring the fairness, safety, resilience, robustness, and security of its operational environment, and explainable, providing the reasoning behind the decisions and actions it produces. The absence of these features not only erodes public trust in these technologies but also leaves the systems that incorporate them vulnerable to security attacks, biased outcomes, and failures that can affect human, infrastructure, and system safety. Additionally, our direct interaction with AI/ML systems and applications can exacerbate unethical and biased behavior, further undermining public trust. In this course, we will address these problems from both a research and an implementation perspective, focusing on creating and implementing AI/ML algorithms that are not only accurate but also fair, robust, privacy-preserving, transparent, and explainable. The dependability of and trust in AI/ML are increasingly linked to the success of critical applications in the real world. The availability and deployment of dependable and trustworthy AI/ML systems in the field will increase the efficiency and safety of both humans and infrastructures. In summary, this course will teach the principles and emerging practices, algorithms, design, and assessment of dependable systems through lectures, assignments, and projects, supplemented by a select list of guest speakers.

Course Highlights

This course addresses the emerging challenges of designing, implementing, and validating dependable AI systems in the rapidly expanding GPT era by allowing students to study classic and emerging AI systems and algorithms, decision-making under uncertainty, and the resulting safety, reliability, and security issues. The course organizes fragmented trustworthy-AI approaches across multiple application domains into a coherent framework. Students will focus on the dependability of AI/ML systems with applications including transportation, health, and other critical infrastructures. Through innovative projects and research paper presentations, students will learn the challenges and opportunities in designing and validating such ML-driven technologies. The course will also allow students to interact with domain experts who will demonstrate novel approaches to making real-world AI systems more reliable and trustworthy. The course is structured into five sub-modules: a. Reliability, Fairness, and Ethics; b. Robustness; c. Verification and Certification; d. Security/Privacy; and e. Explainability/Interpretability.

Course Components

The class will begin with lectures, with the intent of building up common knowledge and grounding, and will then transition to discussions of both seminal and recent research papers that outline new challenges and opportunities in designing and validating dependable AI systems. The course will include:

  • In-class lectures
  • Guest lectures from industry and academia
  • Student-led presentations
  • Group discussions
  • A project focused on designing models for resilience assessment and design

More details here

Logistics
  • Class Timings: Tue/Thu 12:30pm - 1:50pm (CT) 4070 ECEB. In-person lectures and discussion.
  • Teaching Hours: Monday/Wednesday 10 - 11 am (CT) via Zoom (https://illinois.zoom.us/j/87808605124)
  • Zoom password: Shared in first lecture/On Campuswire
  • Paper discussions and class announcements will be made on Campuswire. Students should enroll using the course enrollment code shared in the class.
  • Homework, presentations, and project materials should be submitted on Canvas.
  • Signup Deadlines: Presentation (Thursday, 31 Aug), Project (end of week 4)
  • Evaluation: Details here
  • Academic Accommodation: DRES requirements must be reported to the instructor/TA by the end of the first week (8/28/2023).

Team

Instructor: Ravishankar K. Iyer (Prof. Iyer)
Office Hours: Online (via Zoom); 10:00am - 11:00am Monday
Office: 255 Coordinated Science Laboratory (CSL)
Email: rkiyer@illinois.edu

Teaching Assistant: Anirudh Choudhary (Anirudh)
Office Hours: Online (via Zoom); 10:00am - 11:00am Wednesday
Office: 245 Coordinated Science Laboratory (CSL)
Email: ac67@illinois.edu

Academic Integrity Policy