Technology Ethics Conference 2020: Panel 2

Subscribe to the ThinkND podcast on Apple, Spotify, or Google.

Featured Speakers: 

  • Kirsten Martin, William P. and Hazel B. White Professor of Technology Ethics, University of Notre Dame Mendoza College of Business
  • Scott Nestler, Associate Teaching Professor in the IT, Analytics, and Operations (ITAO) Department and Academic Director of the MS in Business Analytics Program, University of Notre Dame
  • Mutale Nkonde, 2020-2021 Fellow at the Notre Dame Institute for Advanced Study, University of Notre Dame
  • Francesca Rossi, IBM Fellow and the IBM AI Ethics Global Leader, IBM Corporation
  • Kate Vredenburgh, Assistant Professor in the Department of Philosophy, Logic and Scientific Method, London School of Economics
  • Michael Zimmer, Ph.D., Associate Professor in the Department of Computer Science, Director of Undergraduate Studies, Co-Director of the Interdisciplinary Data Science Major, and Director of the Graduate Data Science Certificate, Marquette University

The second virtual panel for the Technology Ethics Conference was introduced by Mark P. McKenna, the John P. Murphy Foundation Professor of Law at the Notre Dame Law School and Director of the Notre Dame Technology Ethics Center, and moderated by Scott Nestler, Associate Teaching Professor in the IT, Analytics, and Operations (ITAO) Department and Academic Director of the MS in Business Analytics Program. The panelists included Kirsten Martin, the William P. and Hazel B. White Professor of Technology Ethics at the University of Notre Dame’s Mendoza College of Business; Mutale Nkonde, 2020-2021 Fellow at the Notre Dame Institute for Advanced Study; Francesca Rossi, IBM Fellow and the IBM AI Ethics Global Leader; Kate Vredenburgh, Assistant Professor in the Department of Philosophy, Logic and Scientific Method at the London School of Economics; and Michael Zimmer, Ph.D., Associate Professor in the Department of Computer Science at Marquette University, where he also serves as Director of Undergraduate Studies, Co-Director of the Interdisciplinary Data Science Major, and Director of the Graduate Data Science Certificate. The panel focused on two major questions: What ethical obligations do developers and institutions have to account for bias in algorithmic decision making? And what technical, institutional, and legal responses are best suited to addressing the problem?

Each panelist took a moment to introduce themselves and their expertise in algorithmic bias and data science. Nestler then asked the panelists for their views on the ethical obligations associated with algorithmic bias in decision making. Each offered a distinct perspective on where responsibility and accountability for algorithms lie: some argued for governmental responsibility, while others favored a more business-centered approach. Vredenburgh opened by saying that governmental regulation is an important part of holding businesses accountable. Rossi, on the other hand, argued that many actors should be involved in decision making: multi-stakeholder initiatives to establish best practices can surface shared challenges and successes, showing what has worked and what has not. Educating both student and professional data scientists in these best practices, and building diverse teams whose members might catch a bias an individual scientist missed, could also be effective remedies. The keynote speaker, Cathy O’Neil, added that it should be a dance between government and business, with both becoming more accountable for these biases and the potential impacts of an algorithm.

Nestler then turned the discussion to which technical, institutional, and legal responses are best suited to dealing with these problems. Nkonde shared her view that lawyers should be involved. Martin agreed: if businesses don’t make ethical judgments or decisions, the courts will step in and force them to become more accountable. Rossi said that from a business perspective, a decision cannot rest with a single person; it requires a group of people with the power to implement decisions across different areas of the business. A common thread among the panelists was that it takes many institutions working together to create real, positive change on algorithmic bias. McKenna closed the conversation by saying that he tells his students to just do the right thing; if they don’t, someone else will make them, through a lawsuit or some other means.

Visit the event page for more.


  • Multiple stakeholders need to be involved in decision making surrounding algorithmic bias (13:43)
  • Educating new and experienced developers is an important part of learning how to catch and mitigate algorithmic bias in the future (17:13)
  • Diverse teams are needed to catch biases a developer may not catch on their own (17:48)
  • There needs to be both government and business involvement in combating problems associated with algorithmic bias (28:01)
  • People who are making decisions surrounding ethics should be people who have the power to implement these decisions (41:50)

  • “Education for students is very important, but also very important for people who are producing these systems already. They need to be educated and retrained to understand what it means to take care of AI bias in their operations” (Francesca Rossi; 17:12)
  • “We try as much as possible to build diverse teams, so that in the team, there can be people with different points of view that can make team members aware of their respective biases” (Francesca Rossi; 17:48)
  • “The best legal advice is often to do the right thing” (Mark McKenna; 49:10)

Technology Ethics Center, University of Notre Dame