With the increasing use of algorithmic systems in all areas of life, the discussion about the social impact of technology and the development of a “European Path to Artificial Intelligence” has also gained momentum. Human-centered and trustworthy AI are the keywords of this debate. A large number of ethical guidelines for the design of AI have been published to pave the way.
There seems to be general agreement that AI systems must be subject to certain principles such as fairness, transparency or data protection. However, the question of how the principles to which these guidelines and directives refer are to be implemented in practice remains largely unanswered. There are, for instance, many different understandings of concepts such as transparency and justice. As a result, both AI-developing companies and users, e.g. public authorities, lack the necessary orientation and effective control over the systems. This lack of concretisation is thus one of the major obstacles to the development and deployment of public-interest-oriented artificial intelligence.
The WeNet-Consortium partner IZEW Tübingen (International Center for Ethics in the Sciences and Humanities) is a member of the interdisciplinary AI Ethics Impact Group, which has developed a framework to operationalize AI ethics.
The study “AI Ethics: From Principles to Practice” illustrates how AI ethics principles can be operationalized and transferred into practice.
From Tübingen, WeNet-Consortium member PD Dr. Jessica Heesen, together with Dr. Thilo Hagendorff and Dr. Wulf Loh, participates in the AI Ethics Impact Group and contributed to the study “AI Ethics: From Principles to Practice” their expertise in information, technology, and media ethics, as well as in the value-oriented development of AI.
To learn more, watch the video below: