Who is accountable for ethical artificial intelligence? How do you build accountability into your organization’s use of AI? I was recently invited to answer those questions in a guest blog post published on the EDUCAUSE Professional Development Commons and EDUCAUSE Review.
There is more to consider when implementing AI than efficiency and time savings: there are ethical implications at every step of the process. The article includes an overview of those implications and steps organizations can take to build ethics into current and future AI projects.
“Determining who is responsible for ethical AI turns out to be more complicated than identifying the person who created the program. There are potentially multiple responsible parties, including programmers, sellers, and implementers of AI-enabled products and services. For AI to be ethical, multiple parties must fulfill their ethical obligations. … IT departments should be ready to assess and manage ethics before, during, and after AI deployment.”

Linda Fisher Thornton, “Artificial Intelligence and Ethical Accountability,” EDUCAUSE Professional Development Commons and EDUCAUSE Review.
While the article was written for higher education IT professionals, the principles apply to any IT department in any industry that is directly or indirectly (through vendors) using AI.
The article is published under a Creative Commons BY-NC-ND 4.0 International License.
Share this article with your team to establish a baseline understanding of ethical accountability for AI, and to incorporate key steps into your planning and implementation processes.
This article, “Artificial Intelligence and Ethical Accountability,” was originally published in the EDUCAUSE Professional Development Commons (blog) and EDUCAUSE Review on July 31, 2020.