Workshop on
Visualization for AI Explainability

October 22, 2018 at IEEE VIS in Berlin, Germany

Program Overview

2:20 -- 2:25 Welcome from the Organizers
2:25 -- 3:10 Keynote: Been Kim (Google Brain)
Towards interpretability for everyone: Testing with Concept Activation Vectors (TCAV)
The ultimate goal of interpretability is to help users gain insights into the model for more responsible use of ML. Unlike most subfields of ML, interpretable ML requires studying how humans parse complex information and exploring effective ways to communicate such information. This human aspect becomes even more critical when developing interpretability methods for non-ML experts and lay users, which is my core research agenda. I will share some interpretability methods that were designed with or without the human aspect in mind, and where they succeed or fall short. I will then take a deeper dive into one of my recent works, Testing with Concept Activation Vectors (TCAV), a post-training interpretability method for complex models such as neural networks. This method provides an interpretation of a neural net's internal state in terms of human-friendly, high-level concepts instead of low-level input features. Most importantly, I will share some open questions in interpretability that call for the visualization community's expertise.
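For attendees new to TCAV, the core construction is compact: fit a linear classifier in a chosen layer's activation space that separates examples of a concept (e.g., "striped") from random examples; the classifier's weight vector is the concept activation vector (CAV), and the TCAV score for a class is the fraction of that class's inputs whose class score increases along the CAV direction. Below is a minimal sketch of that idea, assuming layer activations and gradients have already been extracted as numpy arrays; the function and variable names are illustrative, not taken from the official TCAV implementation.

    # Minimal sketch of the concept-activation-vector idea behind TCAV.
    # Assumes activations and gradients are precomputed numpy arrays;
    # names here are illustrative, not from the official TCAV release.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def concept_activation_vector(concept_acts, random_acts):
        """Fit a linear classifier separating concept activations from
        random activations; its normalized weight vector points toward
        the concept and serves as the CAV."""
        X = np.vstack([concept_acts, random_acts])
        y = np.concatenate([np.ones(len(concept_acts)),
                            np.zeros(len(random_acts))])
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        cav = clf.coef_.ravel()
        return cav / np.linalg.norm(cav)

    def tcav_score(layer_grads, cav):
        """Fraction of examples whose class score increases along the CAV,
        i.e. whose directional derivative in the CAV direction is positive."""
        return float(np.mean(layer_grads @ cav > 0))

In practice the CAV is retrained against multiple sets of random examples and a statistical test is applied to the resulting scores, but the sketch above captures the core computation.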
3:10 -- 3:35 Session I: Neural Networks and Deep Learning
Visualising State Space Representations of Long Short-Term Memory Networks -- Emmanuel M. Smith, Jim Smith, Phil Legg and Simon Francis
Visualizing neuron activations of neural networks with the grand tour -- Mingwei Li, Zhenge Zhao and Carlos Scheidegger
Embodied Machine Learning: An educational, human MNIST classifier -- Philipp Schmitt
3:35 -- 4:00 Session II: Projections and Dimensionality Reduction
Roads from Above -- Greg More, Slaven Marusic and Caihao Cui
The Beginner's Guide to Dimensionality Reduction -- Matthew Conlen and Fred Hohman
Dimension, Distances, or Neighborhoods? Projection Literacy for the Analysis of Multivariate Data -- Dirk Streeb, Rebecca Kehlbeck, Dominik Jäckle and Mennatallah El-Assady
4:00 -- 4:20 Coffee Break with Poster Session
4:20 -- 4:45 Session III: Data Distribution and Bias
A Visual Exploration of Gaussian Processes -- Jochen Görtler, Rebecca Kehlbeck and Oliver Deussen
Towards an Interpretable Latent Space -- Thilo Spinner, Jonas Körner, Jochen Görtler and Oliver Deussen
Understanding Bias in Machine Learning -- Jindong Gu and Daniela Oelke
4:45 -- 5:10 Session IV: Machine Learning Processes and Explanation Strategies
Minions, Sheep, and Fruits: Metaphorical Narratives to Explain Artificial Intelligence and Build Trust -- Wolfgang Jentner, Rita Sevastjanova, Florian Stoffel, Daniel Keim, Jürgen Bernard and Mennatallah El-Assady
Aimacode Javascript - Minimax -- Michael Kawano
Going beyond Visualization: Verbalization as Complementary Medium to Explain Machine Learning Models -- Rita Sevastjanova, Fabian Beck, Basil Ell, Cagatay Turkay, Rafael Henkin, Miriam Butt, Daniel Keim and Mennatallah El-Assady
5:10 -- 5:55 Moderated Panel Discussion (Been, Arvind, Fred)
5:55 -- 6:00 Best submission ceremony and "Auf Wiedersehen" :)
8:00 -- ... VISxAI Eastcoast party

Posters

What is Bayesian Knowledge Tracing? -- Young Cho, Grace Mazzarella, Kelvin Tejeda, Tongyu Zhou and Iris Howley
Recsys: what is a recommendation in the Age of Machine Learning -- Iskra Velitchkova, Juan Arévalo and Marco Creatura
Understanding ML through Topological Data Analysis -- Nathaniel Saul and Dustin L Arendt
Explaining neural network concepts through an interactive visualization -- Roberto Stelling and Adriana S Vivacqua
Plainability: Explainability for 1-Dimensional Temporal Inputs -- Humberto Simon Garcia Caballero, Michel Westenberg and Binyam Gebre

Organizers (alphabetical)

Mennatallah El-Assady - University of Konstanz
Duen Horng (Polo) Chau - Georgia Tech
Adam Perer - Carnegie Mellon University
Hendrik Strobelt - IBM Research, MIT-IBM Watson AI Lab
Fernanda Viegas - Google Brain