The second virtual panel for the Technology Ethics Conference was introduced by Mark P. McKenna, the John P. Murphy Foundation Professor of Law at the Notre Dame Law School and the Director of the Notre Dame Technology Ethics Center, and moderated by Scott Nestler, Associate Teaching Professor in the IT, Analytics, and Operations (ITAO) department, who also serves as the Academic Director of the MS in Business Analytics program. The panelists included Kirsten Martin, the William P. and Hazel B. White Professor of Technology Ethics at the University of Notre Dame's Mendoza College of Business; Mutale Nkonde, 2020-2021 Fellow at the Notre Dame Institute for Advanced Study; Francesca Rossi, IBM Fellow and the IBM AI Ethics Global Leader; Kate Vredenburgh, Assistant Professor in the Department of Philosophy, Logic and Scientific Method at the London School of Economics; and Michael Zimmer, Ph.D., associate professor in the Department of Computer Science at Marquette University, where he also serves as the Director of Undergraduate Studies, Co-Director of the Interdisciplinary Data Science major, and Director of the Graduate Data Science Certificate. This panel focused on two major questions: What ethical obligations do developers and institutions have in accounting for bias in algorithmic decision making? And what technical, institutional, and legal responses are best suited to dealing with the problem?
Each panelist took a moment to introduce themselves and their expertise in algorithmic bias and data science. From there, Nestler asked the panelists to share their views on the ethical obligations associated with algorithmic bias in decision making. Each panelist took a unique perspective on where responsibility and accountability lie. Some argued for governmental responsibility, while others favored a more business-centered approach. Vredenburgh began by saying that governmental regulation is an important part of keeping businesses accountable. Rossi, on the other hand, argued that many actors should be involved in decision making. Multi-stakeholder initiatives to establish best practices are important and can help participants share challenges and successes, identifying what has worked and what has not. Educating both student and professional data scientists in these best practices, as well as building diverse teams whose members might catch a bias an individual scientist missed, could also be effective ways to address these issues. The keynote speaker, Cathy O'Neil, joined in to add that it should be a dance between government and business to become more accountable for these biases and the potential impacts of an algorithm.
Nestler moved the discussion toward the technical, institutional, and legal responses best suited to dealing with these problems. Nkonde expressed her view that lawyers should be involved. Martin agreed that if businesses don't make ethical judgments or decisions, the courts will get involved and force them to become more accountable. Rossi said that from a business perspective, an organization cannot rely on just one person for a decision; it needs a group of people with the power to implement those decisions across different areas of the business. A common thread among the panelists was that it takes many institutions working together to create real, positive change in algorithmic bias. McKenna closed the conversation by saying that he tells his students to just do the right thing. If they don't, someone else will make them, whether through a lawsuit or by other means.
Visit the event page for more.