Takeaways on the “Social Innovation and AI Development in Europe” Panel

The 13 September panel discussion “Social Innovation and AI Development in Europe”, organised in Tübingen by project partner University of Tübingen and the local German-American Institute (d.a.i.), was paired with the first in-person WeNet consortium meeting since the beginning of the pandemic. The WeNet partners met in Tübingen to discuss the latest project achievements and make plans for the months remaining until the end of the project – but they also had the chance to attend the panel discussion. Below is a collection of the partners’ and co-organisers’ impressions of the event and their reflections on the themes at hand.

Connecting to WeNet’s core

Fausto Giunchiglia (University of Trento and WeNet Project Coordinator): AI development is at the heart of our project – especially the idea of striving for AI that people can use to build a better, more diversity-aware society, through technologies that are advanced yet human-centric.

Nardine Osman (IIIA-CSIC): AI is already all around us, affecting our everyday decisions – what to buy next, who to vote for, medical decisions, job recruitment – and AI algorithms will continue to grow smarter every day. They will also continue to make mistakes every now and then. Hence the AI craze of the last few years: fear of AI’s impact and of the ethical issues it raises. All of these concerns are valid and very important. So our core question is: how much can we influence the path of AI development?

The most interesting AI developments right now

Kobi Gal (Ben Gurion University): Applications to healthcare have incredible potential: using AI to help doctors communicate with patients – not only to provide the right diagnosis, but also to make sure they provide the right treatment. AI is about more than optimisation, and this is something we tend to forget: the idea is also to help people communicate with each other.

Osman: Big tech companies driving AI have their own agendas – generally strongly intertwined with politics – that do not necessarily have our best interests at heart. But we can say that in Europe today, development is focused on ethical AI: AI that respects human values, that is trustworthy, transparent and robust, and that preserves human control. That is evident if we look at where EU AI funding is going.

Ute Bechdolf (Director of d.a.i.): We have a “Cyber Valley” up on the hill here in Tübingen, and it’s a conglomerate of many companies engaged in AI, joined by the Max Planck Institute: we always try to bring our American guests there, and they’re very curious about what’s happening in this context. In more general terms, it’s interesting to see how widespread AI is now. The topic is obviously relevant even beyond its merely technological aspects: even for us, a mainstream cultural institute, AI is a key issue to talk about, as it’s deeply influencing our culture and society.

The most pressing ethical issue in AI

Jessica Heesen (Ethics Center Tübingen – who acted as panel moderator): The Ethics Center is multidisciplinary, and we try to bring forward ethics in science and technology, which is the intersection where WeNet finds itself. The theme of “explainable AI” is very important, in my opinion – as is the connected theme of transparency – but so is how we can achieve AI for human welfare, addressing social issues to create a better world. Given the monopolies in the field of AI – a lot of big players, motivated by their own goals and profit – the task now is to find solutions for the people, applied to the contexts that people act within.

Osman: Working and lobbying for ethics in AI is a must. Anybody working on AI development must ensure that AI is engineered to be aware of our human values and needs. It must understand human values and needs and reason about them, so as to make decisions aligned with them – and even explain how this alignment is achieved. This is what we refer to as value engineering in AI.

Gal: One of the greatest challenges when it comes to AI is also, undoubtedly, circumventing the inherent bias that is already present in the type of data we collect. We as humans generate data and such data reflects societal biases toward people of different gender, race and social class; and, if we’re not careful, the algorithms that we create – that are based on this data – will simply reflect these biases. One of the most interesting aspects of my research is how we can learn to overcome these biases by bringing into the fold concepts from social science and psychology, which is why I love working within the WeNet project.

Daniel Gatica-Perez (Idiap Research Institute, EPFL – one of the panellists in the discussion): I think it is important for the different actors in the domain to think at the highest possible level, beyond the specific work they do, and to frame such questions at the broadest level they can picture, because AI is multi-disciplinary and touches so many aspects of industry, daily life, research and education.

Final takeaways

Giunchiglia: The final takeaway is that we have a long way to go, but the interesting thing is that there are two ways to think about social innovation: a defensive attitude towards what we have, or a proactive attitude; the latter representing a great chance for us, through WeNet – and beyond – to try and construct a better society.

Daniele Miorandi (U-Hopper): I found the experience very interesting; we had some thoughtful discussions – a lot of food for thought – so I’m bringing home a lot of things to reflect on, which will probably shape my work in the near future at the technological level.

Gatica-Perez: What I liked about the panel was hearing the perspectives of researchers from both the EU and the US – seeing what people from different regions think about the topic – and the interesting combination of hopeful and sceptical views. Overall, the panel showed how important it is to think bigger than a single discipline in order to come up with more long-term solutions.

Interviews conducted by Laura Schelenz of the International Center for Ethics in the Sciences and Humanities of WeNet partner University of Tübingen