7th Workshop on
Visualization for AI Explainability

October 13, 2024 at IEEE VIS in St. Pete Beach, Florida


The role of visualization in artificial intelligence (AI) has gained significant attention in recent years. As AI models grow more complex, the need to understand their inner workings has become critical. Visualization is a potentially powerful technique to fill this need.

The goal of this workshop is to initiate a call for 'explainables' / 'explorables' that use visualization to explain how AI techniques work. We believe the VIS community can leverage its expertise in creating visual narratives to bring new insight into the often obfuscated complexity of AI systems.

Important Dates

August 06, 2024 (anywhere on Earth): Submission Deadline
September 10, 2024: Author Notification
October 1, 2024: Camera Ready Deadline
October 13, 2024: Morning Session (ET)

Program Overview

All times in ET (UTC -4).

8:30am
Welcome from the Organizers
Session I (75 minutes)
8:35 -- 9:15
Opening Keynote: David Bau - @davidbau
Resilience and Human Understanding in AI - What is the role of human understanding in AI? As increasingly massive AI systems are deployed into an unpredictable and complex world, interpretability and controllability are the keys to achieving resilience. We discuss results in understanding and editing large-scale transformer language models and diffusion image synthesis models, and how these are part of an emerging research agenda in interpretable generative AI. Finally, we talk about the concentration of power that is emerging due to the scaling up of large-scale AI, and the kind of infrastructure that will be needed to ensure broad and democratized human participation in the future of AI.
9:15 -- 9:45
Lightning Talks I
Can Large Language Models Explain Their Internal Mechanisms?
Nada Hussein, Asma Ghandeharioun, Ryan Mullins, Emily Reif, Jimbo Wilson, Nithum Thain, Lucas Dixon
Explaining Text-to-Command Conversational Models
Petar Stupar, Gregory Mermoud, Jean-Philippe Vasseur
Where is the information in data?
Kieran Murphy, Dani S. Bassett
Explainability Perspectives on a Vision Transformer: From Global Architecture to Single Neuron
Anne Marx, Yumi Kim, Luca Sichi, Diego Arapovic, Javier Sanguino Bautiste, Rita Sevastjanova, Mennatallah El-Assady
9:45 -- 10:15
Break
Session II (75 minutes)
10:15 -- 10:45
Lightning Talks II
The Illustrated AlphaFold
Elana P Simon, Jake Silberg
A Visual Tour to Empirical Neural Network Robustness
Chen Chen, Jinbin Huang, Ethan M Remsberg, Zhicheng Liu
What Can a Node Learn from Its Neighbors in Graph Neural Networks?
Yilin Lu, Chongwei Chen, Matthew Xu, Qianwen Wang
Inside an interpretable-by-design machine learning model: enabling RNA splicing rational design
Mateus Silva Aragao, Shiwen Zhu, Nhi Nguyen, Alejandro Garcia, Susan Elizabeth Liao
10:45 -- 11:30
Closing Keynote: Adam Pearce - @adamrpearce
Why Aren't We Using Visualizations to Interact with AI? - Well-crafted visualizations are the highest-bandwidth way of downloading information into our brains. As complex machine learning models become increasingly useful and important, can we move beyond mostly using text to understand and engage with them?
11:30am
Closing

Hall of Fame

Each year we award Best Submissions and Honorable Mentions. Congrats to our winners!

VISxAI 2024
Can Large Language Models Explain Their Internal Mechanisms? Nada Hussein, Asma Ghandeharioun, Ryan Mullins, Emily Reif, Jimbo Wilson, Nithum Thain, Lucas Dixon
The Illustrated AlphaFold Elana P Simon, Jake Silberg
VISxAI 2023
Understanding and Comparing Multi-Modal Models Christina Humer, Vidya Prasad, Marc Streit, Hendrik Strobelt
Do Machine Learning Models Memorize or Generalize? Adam Pearce, Asma Ghandeharioun, Nada Hussein, Nithum Thain, Martin Wattenberg, Lucas Dixon
VISxAI 2021
Feature Sonification: An investigation on the features learned for Automatic Speech Recognition Amin Ghiasi, Hamid Kazemi, W. Ronny Huang, Emily Liu, Micah Goldblum, Tom Goldstein
VISxAI 2020
Comparing DNNs with UMAP Tour Mingwei Li and Carlos Scheidegger
How Does a Computer "See" Gender? Stefan Wojcik, Emma Remy, and Chris Baronavski
VISxAI 2018
A Visual Exploration of Gaussian Processes Jochen Görtler, Rebecca Kehlbeck and Oliver Deussen
Roads from Above Greg More, Slaven Marusic and Caihao Cui

Organizers (alphabetical)

Alex Bäuerle - Axiom Bio
Angie Boggust - Massachusetts Institute of Technology
Fred Hohman - Apple

Steering Committee
Adam Perer - Carnegie Mellon University
Hendrik Strobelt - MIT-IBM Watson AI Lab
Mennatallah El-Assady - ETH AI Center

Program Committee and Reviewers

Jane Adams
Camelia D. Brumar
Jaegul Choo
Brandon Duderstadt
Angus Forbes
Seongmin Lee
Katelyn Morrison
Rita Sevastjanova
Venkatesh Sivaraman
James Wexler
Catherine Yeh
Tim Barz-Cech
Yuexi Chen
Aeri Cho
Bhavana Doppalapudi
Jianben He
Sichen Jin
Panfeng Li
Tong Li
Huyen N. Nguyen
Haowei Ni
Yu Qin
Rubab Zahra Sarfraz
Johanna Schmidt
Ryan Yen