Ethics is the reflection on moral judgements. It is concerned with questions like “What shall I do?”, “What do we owe to each other?”, and “What society do we want to live in?”. It refers not only to standards of valid reasoning, but also to ethical principles and values.
The term “Artificial intelligence” is colloquially used to describe machines that mimic cognitive functions usually associated with human minds, such as understanding, learning, and problem-solving.
Artificial intelligence, thus, raises a great number of philosophically and ethically interesting questions. For instance, it challenges our understanding of the very essence of intelligence. It also raises questions on moral agency and on ethical standards and guidelines for the development, application, and oversight of algorithms and machine learning systems.
What does it mean for an AI to be “ethical”? There are several ways to understand this question. Are the aims that an AI is designed for ethically legitimate? Is the design process carried out in a way that does not violate ethical standards? Does the design of the AI itself comply with ethical principles? Or does it, for instance, perpetuate and reinforce stereotypes and, thus, discrimination against certain groups?
Since machine learning systems “learn” from patterns they aggregate from past data, they infer how things are going to be from how they have (allegedly) always been – and their prognoses may be very successful. However, whatever happened in the past, or has always been a certain way, does not tell us how it should be.
Very often, algorithms are explicitly or implicitly value-laden. The values and judgements of AI designers inform the setup and therefore the outcome of an AI system. This is unavoidably so. We have to be aware of this, however, and act accordingly: designing ethical AI is an enormously complex task that places great responsibility on developers.
The more deeply an AI system affects people’s lives, their prospects, and the distribution of societal goods, the stronger the case for transparency, ethical reflection, and supervision.
WeNet is aware of the complexities of our social and moral world(s) and aspires to design a context-sensitive and diversity-sensitive AI that avoids the pitfalls of statistical discrimination and stereotyping.
Article by Dr. Karoline Reinhardt, Researcher at International Center for Ethics in the Sciences and Humanities (IZEW) at Eberhard Karls Universität Tübingen.
To stay updated on WeNet news and developments, subscribe to our newsletter: http://bit.ly/WeNet_subscribe