A computer science conference that focuses on ethical issues, the ACM Conference on Fairness, Accountability, and Transparency is a must-visit for interdisciplinary research teams that aspire to provide more inclusive and fair technological solutions. Members of the WeNet consortium joined the gathering from 27-30 January 2020 at the Barceló Sants hotel in Barcelona. A good mix of presentations, tutorials, and workshops allowed participants to engage with the latest innovations for fair, accountable, and transparent AI. These innovations range from computational methods for fairness in algorithms to improved data collection and cleaning for less biased datasets, and not least social and communication tools for mutual understanding. The latter matters greatly for interdisciplinary research teams, which are increasingly funded by the European Union.
A workshop on the topic “Lost in Translation” shed light on the challenges that arise when computer scientists and social scientists or humanities scholars tackle a shared concern. Sometimes, it comes down to very basic information that helps both “sides” understand and engage with each other: explaining one’s workflow and clarifying terms used casually in discussions. Workflows, or ways of approaching a research topic, can differ greatly between disciplines. Whereas computer scientists may start with an identified problem and follow a linear approach through data collection, model creation, testing, and adjustment, social scientists may center their work on the problem itself. They reflect on how it is understood and why it is conceived of as a problem, and engage with literature, discourses, focus groups, and correlations in order to return to the problem, re-define it, and continue. With regard to so-called “contested” terms, we can think of almost taken-for-granted words such as “algorithm.” In an interdisciplinary research consortium, different partners may have different ideas of what constitutes an algorithm. Whereas computer scientists usually refer to a computer model, social scientists may think of the designers, operators, and material involved in creating and implementing computer models.
The FAT* conference affirmed WeNet in its continuous effort to bring together disciplinary approaches and integrate ethical perspectives into the development of technology. Interestingly, diversity was not at the center of discussions at the FAT* conference, although many presentations implied the need for the representation of diverse populations in datasets or the need for diversity-awareness in technology development. More work should focus on what diversity-awareness means in this context, and on how we can make the concept of diversity explicit and operationalizable for computer science. Such work may take place at next year’s conference, which – after a change of the acronym at the Barcelona gathering – should be noted down as follows on everyone’s to-do list: FAccT 2021 in Toronto, Canada!
Article by Laura Schelenz
Research Associate, International Centre for Ethics in the Sciences and Humanities,
University of Tübingen, Germany