Interpretable Models for Understanding Immersive Simulations

Kobi Gal from WeNet’s partner Ben-Gurion University (Israel) recently contributed to a paper on “Interpretable Models for Understanding Immersive Simulations”. The paper describes methods for comparatively evaluating the interpretability of models of high-dimensional time-series data inferred by unsupervised machine learning algorithms.

The time-series data used in this investigation were logs from an immersive simulation of the kind commonly used in education and healthcare training. The structures learnt by the models provide representations of participants’ activities in the simulation that are intended to be meaningful to human interpreters. To choose the model that induces the best representation, two interpretability tests were designed, each of which evaluates the extent to which a model’s output aligns with people’s expectations or intuitions about what has occurred in the simulation.
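
The paper’s own test designs are more involved, but as a rough illustration of the underlying idea, one could score how well a model’s inferred activity labels agree with the labels a human annotator would assign to the same stretch of a simulation log. The sketch below uses hypothetical, hand-made labels purely for illustration:

```python
from sklearn.metrics import adjusted_rand_score

# Hypothetical example: activity labels inferred by an unsupervised model
# for ten consecutive time steps of a simulation log ...
model_segments = [0, 0, 0, 1, 1, 2, 2, 2, 1, 1]

# ... and the activity labels a human annotator assigned to the same steps.
human_segments = [0, 0, 1, 1, 1, 2, 2, 2, 2, 1]

# The adjusted Rand index scores agreement between the two labellings,
# correcting for chance: 1.0 is perfect alignment, ~0.0 is chance level.
alignment = adjusted_rand_score(human_segments, model_segments)
print(f"model/human alignment (ARI): {alignment:.2f}")
```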

The performance of the models on these interpretability tests was then compared to their performance on statistical information criteria. The investigation showed that the models that optimise interpretability quality differ from those that optimise (statistical) information-theoretic criteria. Furthermore, the team found that a model using a fully Bayesian approach performed well on both the statistical and human-interpretability measures. The Bayesian approach is therefore a good candidate for fully automated model selection, for instance when direct empirical investigations of interpretability are costly or infeasible.
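
As a minimal sketch of what automated, statistics-driven model selection can look like, the snippet below contrasts the two routes using scikit-learn mixture models on synthetic data as a stand-in (the paper’s actual models and simulation data differ): an information criterion such as BIC requires fitting and comparing one model per candidate size, whereas a fully Bayesian fit can infer an effective model size in a single pass by shrinking the weights of unneeded components:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture, GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for feature vectors extracted from simulation logs:
# three latent "activities", each a cluster in a 5-dimensional feature space.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 5))
               for c in (0.0, 3.0, 6.0)])

# Information-criterion route: fit one model per candidate size and keep
# the one with the lowest BIC (lower = better fit/complexity trade-off).
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 8)}
best_k = min(bic, key=bic.get)
print(f"BIC selects {best_k} components")

# Fully Bayesian route: fit once with a generous upper bound; the
# Dirichlet-process prior drives the weights of unused components to ~0.
bgm = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
effective = int(np.sum(bgm.weights_ > 0.01))
print(f"Bayesian fit keeps ~{effective} effective components")
```

This mirrors why a Bayesian approach suits fully automated selection: there is no explicit search over candidates to rerun whenever evaluating each candidate (here by a score, in the paper by human-interpretability studies) is expensive.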

For more details, you can read the full paper on the Scientific Publications page of WeNet’s website.