Emerging ML models (like deep neural networks) tend to be complex, fragile, non-robust, and uninterpretable. This makes it extremely challenging to build reliable real-world systems that incorporate ML components. We need trustworthy ML as well as robust system design to achieve end-to-end correctness of systems.
In this course, we will study recent developments at the intersection of formal methods (FM), programming languages (PL), and machine learning (ML) research aimed at the development of trustworthy AI-based systems. Some of the topics planned for the course are:
Formal Verification of ML models using abstraction and constraint solvers
Symbolic explanations of deep neural networks
Training ML models with Logic and Knowledge
Learning symbolic concepts (code, program synthesis, invariants)
Proving correctness of systems with ML components
Neurosymbolic machine learning
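To give a flavor of the first topic, here is a minimal sketch of verification by abstraction: interval bound propagation through a toy ReLU network. All weights, input bounds, and the output property below are illustrative assumptions, not taken from any course material; real verifiers combine tighter abstractions with constraint solvers.

```python
# Interval abstraction sketch: propagate a box of inputs through a
# tiny ReLU network and certify an output property for the whole box.
# The network weights and the input box are hypothetical examples.

def affine_bounds(lo, hi, W, b):
    """Sound interval bounds for y = W x + b given x in [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # A positive weight pulls the lower bound from lo, a negative
        # weight from hi (and symmetrically for the upper bound).
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Toy 2-input, 2-hidden-unit, 1-output network (hypothetical weights).
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

# Input box: each coordinate perturbed by +/-0.1 around (1.0, 0.0).
lo, hi = [0.9, -0.1], [1.1, 0.1]
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)

# If the certified lower bound exceeds 0, the property "output > 0"
# holds for EVERY input in the box, not just sampled points.
verified = lo[0] > 0.0
```

Because the intervals over-approximate the network's true reachable set, a successful check is a sound proof of the property, while a failed check is inconclusive; this soundness/incompleteness trade-off is a recurring theme in the course.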
Prerequisites: Students should have taken CS173 (or a similar course) and know basic proofs using induction and proof by contradiction (as taught in CS173). Some background in computability theory (as taught in CS374) would be helpful, as would a prior course in machine learning (such as CS446).