The WeNet App Pilot 2.0 Exit Survey: Highlights and Qualitative Insights

In November 2021, the five WeNet App Pilot 1.0 institutions – the University of Trento (Italy), the National University of Mongolia (Mongolia), the University of Aalborg (Denmark), the Universidad Católica “Nuestra Señora de la Asunción” (Paraguay) and the London School of Economics (UK) – hosted the second round of app pilots, launching an upgraded version of the technology, ‘we@’ 2.0.

This pilot is the result of close coordination between the ethics, social science and technical working groups: WP2 developed the location proximity feature, WP3 the social distance measure, WP4 the incentive system of badges and messages, WP5 the norms (question parameters), WP6 the technology infrastructure, WP7 the design (e.g., the chat flow) and protocols, and WP9 the ethical dimensions, including the sensitive and anonymous options and contact information for support services.

The 2.0 pilots ran on a common scenario – getting strangers to help each other – with over 200 student participants posing and answering each other's questions via a chatbot application delivered through Telegram.

In the 2.0 pilots, student volunteers tested a more sophisticated version of the app, which uses diversity-aware AI to capitalise on the diversity of student communities for the mutual benefit of all. As part of the study, participants completed a short questionnaire covering their interests and skills, values and personality; together these formed their personal profile, used by the WeNet technology to match questions to the most appropriate responder(s).
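
For readers curious about the mechanics, the sketch below conveys the general idea of profile-based, diversity-aware matching in plain Python. It is purely illustrative and not the actual WeNet algorithm: the Profile fields, the rank_responders function and the relevance/diversity scoring rule are assumptions made for this example only.

    from dataclasses import dataclass, field

    # Hypothetical profile structure; the real we@ profile covers interests,
    # skills, values and personality collected via the onboarding questionnaire.
    @dataclass
    class Profile:
        user_id: str
        interests: set[str] = field(default_factory=set)
        skills: set[str] = field(default_factory=set)

    def jaccard(a: set[str], b: set[str]) -> float:
        """Overlap between two attribute sets (0 = disjoint, 1 = identical)."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def rank_responders(asker: Profile, question_topics: set[str],
                        candidates: list[Profile], diversity_weight: float = 0.3):
        """Rank candidates by topical relevance, with a bonus for profiles that
        differ from the asker's (a toy stand-in for diversity-awareness)."""
        scored = []
        for c in candidates:
            if c.user_id == asker.user_id:
                continue  # never route a question back to its author
            relevance = jaccard(c.interests | c.skills, question_topics)
            diversity = 1.0 - jaccard(c.interests, asker.interests)
            scored.append((relevance + diversity_weight * diversity, c))
        return [c for _, c in sorted(scored, key=lambda s: s[0], reverse=True)]

The actual we@ matching combines several signals developed across the work packages (e.g., location proximity and social distance); the sketch only illustrates the basic trade-off between topical relevance and profile diversity that such a matcher has to balance.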

In response to participants’ feedback from the first pilot study (1.0), new features and components were integrated into the app. The participants’ evaluation of these will provide valuable material for reflection and guide consortium-wide decisions and future directions; specifically, it will determine which components and features are most valuable and therefore worth keeping, and which warrant major or minor re-evaluation and improvement in future iterations.

How does we@ compare with other apps?

The WeNet app sets out to offer a valuable and unique alternative to mainstream social networking apps and Q&A platforms. In comparison to these, it provides:

  • A tailored, user-directed experience thanks to the diversity-aware algorithms 
  • A trusted community where only students at selected pilot locations took part
  • The option of maintaining anonymity and greater privacy, e.g., no profile photos
  • A specific filter for sensitive topics, which signals to potential responders to be careful and considerate in their answers
  • No commercial underpinnings or affiliations

Highlights from the Exit Survey

Table 1. Frequency and usefulness of each question filter, by pilot location

While there are substantial variations between locations, the ‘sensitive’ and ‘anonymous’ filters were overwhelmingly rated as ‘fairly or very’ useful across all pilot sites (Table 1).


Table 2. Distribution of agreement on user experience items for each pilot location. In bold: the overall scores across locations (right-most column, ranked from highest to lowest agreement) and across items (bottom row)

Mongolian students are by far the most positive of all our samples, with 81% agreement on average across all user experience items.

The two UX items that gathered the highest enthusiasm across all pilot sites relate to altruistic behaviours, i.e., the intrinsically rewarding feeling of helping others. An encouraging result for future project developments is that between 61% (in Italy) and 89% (in Mongolia) of respondents think that using the WeNet app would benefit students. It is not surprising that ‘feeling part of a community’ and ‘getting to know other students’ fall in the bottom half of the table, given the largely one-to-one nature of the interactions and the short duration of the experiment.

In terms of the chatbot incentive system, across all pilots 70% liked the badges and 66% felt these awards encouraged their contribution. Respondents were more divided on the messages, with just over half saying they liked them.

Qualitative insights: main currents of opinion

Key words to capture the we@ experience. To ‘break the ice’ at the beginning of each focus group, participants were asked to pick three words to summarise their experience with the app. Connection, fun, educational, engaging, addictive, useful, creative, interesting, curiosity, easy are just some of the positive terms used to describe the experiment.

The qualitative findings confirm those from both the quantitative survey and the earlier pilot study (1.0), for instance the focus on helping others. Newly implemented features that gathered enthusiasm include the sensitive and anonymous filters and the long-answer badge. On the other hand, participants felt that some concepts and features need further improvement and might be dropped or modified in future versions: the concepts of social distance, proximity and similarity were somewhat ambiguous, and both the onboarding instructions and the interface need streamlining.

Content Analysis

In an analysis of the questions submitted at LSE and AAU, the main categories at both sites were community (“What’s your favourite part of London?”) and suggestions (“What are the best places to look for a flat or other accommodation?”), as in the previous pilot. Questions from the three other sites will also be analysed, as this might highlight local student cultures and concerns and contribute to our understanding of diversity within and between our national partners.

Conclusion

The second pilots unfolded while COVID-19 restrictions were still in place, which partly hindered recruitment and participation. Nevertheless, feedback from the evaluation phase has yielded useful insights and strengthened early findings, giving a stronger basis for further development in the next and final iteration later in the year.

Follow us on social media and subscribe to our newsletter for updates on our ongoing pilots!