
Update paper (#519)
Co-authored-by: manzt <[email protected]>
github-actions[bot] and manzt authored Oct 13, 2024
1 parent fe6318a commit 023a5b8
Showing 1 changed file with 12 additions and 30 deletions.
_publications/manz-cev-2024.md (42 changes: 12 additions & 30 deletions)
@@ -1,40 +1,22 @@
 ---
-title: "A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies"
+title: A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies
 image: cev.png
 members:
 - trevor-manz
 - fritz-lekschas
 - nils-gehlenborg
 year: 2024
 type: article
-publisher: "https://ieeexplore.ieee.org/document/10672535/"
-doi: "10.1109/TVCG.2024.3456370"
-zotero-key: "EBDFTS7Q"
-preprint: "https://doi.org/10.31219/osf.io/puxnf"
+publisher: 'https://ieeexplore.ieee.org/document/10672535/'
+doi: 10.1109/TVCG.2024.3456370
+zotero-key: EBDFTS7Q
+preprint: 'https://doi.org/10.31219/osf.io/puxnf'
 cite:
-  authors: "T Manz, F Lekschas, E Greene, G Finak, N Gehlenborg"
-  published: "*IEEE Transactions on Visualization and Computer Graphics* 1-11"
+  authors: 'T Manz, F Lekschas, E Greene, G Finak, N Gehlenborg'
+  published: '*IEEE Transactions on Visualization and Computer Graphics* 1-11'
+videos: []
+other-resources: []
+awards: []
+code: 'https://github.com/OzetteTech/comparative-embedding-visualization'
 ---
-Projecting high-dimensional vectors into two dimensions for visualization,
-known as embedding visualization, facilitates perceptual reasoning and
-interpretation. Comparison of multiple embedding visualizations drives
-decision-making in many domains, but conventional comparison methods are
-limited by a reliance on direct point correspondences. This requirement
-precludes embedding comparisons without point correspondences, such as two
-different datasets of annotated images, and fails to capture meaningful
-higher-level relationships among point groups. To address these shortcomings,
-we propose a general framework to compare embedding visualizations based on
-shared class labels rather than individual points. Our approach partitions
-points into regions corresponding to three key class concepts--confusion,
-neighborhood, and relative size--to characterize intra- and inter-class
-relationships. Informed by a preliminary user study, we realize an
-implementation of our framework using perceptual neighborhood graphs to define
-these regions and introduce metrics to quantify each concept. We demonstrate
-the generality of our framework with use cases from machine learning and
-single-cell biology, highlighting our metrics' ability to draw insightful
-comparisons across label hierarchies. To assess the effectiveness of our
-approach, we conducted a user study with five machine learning researchers and
-six single-cell biologists using an interactive and scalable prototype
-developed in Python and Rust. Our metrics enable more structured comparison
-through visual guidance and increased participants’ confidence in their
-findings.
+Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparison of multiple embedding visualizations drives decision-making in many domains, but conventional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes embedding comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework to compare embedding visualizations based on shared class labels rather than individual points. Our approach partitions points into regions corresponding to three key class concepts--confusion, neighborhood, and relative size--to characterize intra- and inter-class relationships. Informed by a preliminary user study, we realize an implementation of our framework using perceptual neighborhood graphs to define these regions and introduce metrics to quantify each concept. We demonstrate the generality of our framework with use cases from machine learning and single-cell biology, highlighting our metrics' ability to draw insightful comparisons across label hierarchies. To assess the effectiveness of our approach, we conducted a user study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype developed in Python and Rust. Our metrics enable more structured comparison through visual guidance and increased participants’ confidence in their findings.

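The abstract above only sketches the label-based comparison at a high level. The snippet below is a minimal, hypothetical Python sketch of the core idea, comparing two embeddings via per-class neighborhood composition rather than point correspondences. It is not the paper's implementation; the function names (`neighborhood_composition`, `compare_embeddings`), the neighborhood size `k`, and the use of scikit-learn's k-nearest neighbors are illustrative assumptions. See the linked repository for the actual tool.

```python
# Hypothetical illustration, not the paper's method: summarize each class by
# the average label distribution of its points' k-nearest neighborhoods, then
# compare those summaries across two embeddings that share class labels.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def neighborhood_composition(points: np.ndarray, labels: np.ndarray, k: int = 15) -> dict:
    """Mean neighbor-label distribution per class (a rough 'neighborhood' summary)."""
    classes = np.unique(labels)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    _, idx = nn.kneighbors(points)         # each point's k+1 nearest, incl. itself
    neighbor_labels = labels[idx[:, 1:]]   # drop self; shape (n_points, k)
    comp = {}
    for c in classes:
        rows = neighbor_labels[labels == c]  # neighborhoods of class-c points
        comp[c] = {c2: float((rows == c2).mean()) for c2 in classes}
    return comp


def compare_embeddings(emb_a, labels_a, emb_b, labels_b, k: int = 15) -> dict:
    """Per-class L1 distance between the neighborhood compositions of two embeddings."""
    ca = neighborhood_composition(emb_a, labels_a, k)
    cb = neighborhood_composition(emb_b, labels_b, k)
    shared = sorted(set(ca) & set(cb))     # classes present in both embeddings
    return {c: sum(abs(ca[c][c2] - cb[c][c2]) for c2 in shared) for c in shared}
```

Under this sketch, a score near 0 for a class means its average local context looks the same in both embeddings, while larger values flag classes whose neighborhoods changed; no point-to-point matching between the two datasets is required.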