Social Responsibility Essay
Most investigations of AI and social responsibility have focused on discriminative models rather than generative ones. Moreover, such investigations have concentrated much more on allocative harms than on representational harms. The purpose of this essay assignment, which should be 2-3 pages in the IEEE 2-column format, is to revisit some of these ideas in the context of generative models. Start by summarizing the ideas discussed in one of the two options below, and then argue how they might be modified or extended specifically for the case of generative models. Please focus more on the social dimensions of the works than on the technical ones.
Option 1
- H. Suresh and J. Guttag, "A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle," in Proc. Equity and Access in Algorithms, Mechanisms, and Optimization Conf. (EAAMO '21), Oct. 2021.
Option 2
- M. Mitchell, S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, E. Spitzer, I. D. Raji, and T. Gebru, "Model Cards for Model Reporting," in Proc. Conf. Fairness, Accountability, and Transparency (FAT* '19), pp. 220-229, Jan. 2019.
- M. Arnold, R. K. E. Bellamy, M. Hind, S. Houde, S. Mehta, A. Mojsilović, R. Nair, K. Natesan Ramamurthy, A. Olteanu, D. Piorkowski, D. Reimer, J. Richards, J. Tsay, and K. R. Varshney, "FactSheets: Increasing trust in AI services through supplier's declarations of conformity," IBM J. Res. Dev., vol. 63, no. 4/5, pp. 6:1-6:13, July-Sept. 2019.
The paper is due May 2 at 5pm via Gradescope.