Report on the Panel Discussion “Social Innovation and AI Development in Europe”

On 13 September 2022, the International Center for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen, together with the German-American Institute Tübingen (d.a.i.), hosted a hybrid panel discussion on “Social Innovation and AI Development in Europe” as part of the WeNet project activities. The panelists included Elizabeth Churchill (Google), Gianluca Misuraca (Inspiring Futures, AI4GOV, InTouchAI & Re-Imagine Europa), Daniel Gatica-Perez (EPFL), and Judith Simon (University of Hamburg). The discussion was moderated by Jessica Heesen (University of Tübingen). The panel was attended by 20 people in person, while 22 joined online.

Europe is increasingly investing in the development of Artificial Intelligence and has become the third-biggest AI economic player globally, behind the United States and China. It aims at strategic leadership and tech sovereignty, which Ursula von der Leyen, President of the European Commission, described as “the capability that Europe must have to make its own choices, based on its own values, respecting its own rules”.[1] In addition, Europe is putting increasing emphasis on social innovation, with the European Commission primarily promoting and funding research and products designed to serve society.

Yet questions remain about the European approach to AI. Where does Europe stand in AI development, and where do we want it to go? How is social innovation put into practice? How should AI be regulated? The possibilities and challenges of the European approach were the focus of the panel discussion. The panelists talked about the current state of AI development in Europe, the important norms and values on which it should be based, and how these can be carried from theory into practice.

[1] Von der Leyen, U. (2020, February 19). Shaping Europe’s digital future: op-ed by Ursula von der Leyen, President of the European Commission [Editorial]. European Commission.

AI Development in Europe Today

What distinguishes AI “made in Europe” from other approaches is not so much its technological development but the differences in access to data and in regulation. Likewise, cultural factors influence which kinds of technology are considered useful and therefore developed. Judith Simon attributed Europe’s drive for tech sovereignty and leadership in AI development to national and international security concerns, but also to an interest in science and the social good, depending on the type of technology in question.

However, with the notion of tech sovereignty also come potential dangers and challenges, such as promoting isolation or protectionism. Elizabeth Churchill emphasized the need to look at the consequences of tech sovereignty and examine the pros and cons: who will benefit, and, more importantly, who will not? Gianluca Misuraca reiterated the importance of establishing technology diplomacy to tackle questions like data ownership and interdependence with other countries.

Important Norms and Values

Human agency, justice, human-centricity, as well as transparency and explainability were the values the panelists considered most important for AI development. According to Churchill, many dangers come from systems that are not transparent and cannot be explained. Explainability also promotes trust. But “good” explanations are context-dependent, as Simon explained, and must consider both the purpose of an explanation and the person receiving it. She warned of a mismatch between what the AI community explains and what people actually want to know, and why.

The importance of justice was also discussed in depth. Daniel Gatica-Perez pointed to the need for a life-cycle assessment of AI that emphasizes the equal distribution of AI benefits at all stages – development, production, distribution, and use. Unfortunately, the risk of discrimination in AI is pervasive. Churchill pointed out that it will remain so unless certain questions are addressed: to whose benefit are systems built? Who is being left out? Who is (actively) being discriminated against? When an AI system is developed within a society, the values and problems within that society are incorporated, including inequalities. However, as Simon proposed, AI can also be used as a diagnostic tool, indicating inequalities that need to be addressed.

Putting Norms and Values into Practice: Whose responsibility?

All panelists agreed that the responsibility for developing AI for the common good and social welfare cannot lie (solely) with individuals. Churchill placed responsibility at a higher, corporate level: corporations need to examine the motivations behind their “big picture” goals and daily practices. The fact that conversations about the long-term effects of decisions and developments are starting to happen at Google and Microsoft, for example, is a good first sign for Churchill. Simon, by contrast, considered it a systemic problem, calling for alternative economic systems. She stressed that the capitalist motivations underpinning AI development have led to major problems, including the downsides of data brokerage and the undermining of democratic processes.

For Gatica-Perez, this need for high-level responsibility and grand-scale change does not fully negate the responsibility that individuals have, whether they work on AI or not. He reminded the audience of the importance of civic engagement. It is crucial to engage citizens and hear their voices, both in the democratic process and within research. Gatica-Perez’s advice: researchers need to get more involved in public dialogue. Churchill, too, brought up “digital civics” and the importance of figuring out how to get people more involved in the development of technologies, further their understanding of technology, and make them feel like they can shape the technologies that shape our world.

Regulating AI: Audience Questions

The big questions of what it means to regulate AI, whether it is possible, and how to do it in a meaningful way were raised by the audience. In response, Misuraca highlighted the need for a global approach to AI development and regulation in order to harness AI’s potential to solve some of our biggest challenges, such as climate change. Where exactly this exchange and cooperation will take place, however, is not yet clear. At the UN? Or will a new forum have to be created?

Both Gatica-Perez and Simon also pointed out that they see nothing so special about AI that it cannot be regulated. Complex political issues, including the development and proliferation of nuclear power, are regulated, so why not AI? As Simon put it, regulating AI will not be easy, and regulations will not always lead to improvements. But as Heesen summarized in her closing remark, “AI can be regulated like everything else in this world.”

About the author: Bonnie Kerkhoff graduated from McGill University in Montréal, Canada, with a B.A. in History and Political Science and is currently enrolled in the Master’s program in Peace Research and International Relations at the University of Tübingen, Germany. She is a student assistant at the International Center for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen. Her research interests include security ethics, dealing with the past, and transitional justice.