Incorporating Texture Information into Dimensionality Reduction for High-Dimensional Images
High-dimensional imaging is becoming increasingly relevant in many fields, from astronomy and cultural heritage to systems biology. Visual exploration of such high-dimensional data is commonly facilitated by dimensionality reduction. However, common dimensionality reduction methods do not incorporate the spatial information present in images, such as local texture features, into the construction of low-dimensional embeddings. Consequently, exploration of such data is typically split into a step focusing on the attribute space followed by a step focusing on spatial information, or vice versa. In this paper, we present a method for incorporating spatial neighborhood information into distance-based dimensionality reduction methods, such as t-Distributed Stochastic Neighbor Embedding (t-SNE). We achieve this by modifying the distance measure between the high-dimensional attribute vectors associated with each pixel such that it takes the pixel's spatial neighborhood into account. Based on a classification of methods for comparing image patches, we explore a number of different approaches and compare them from both a theoretical and an experimental point of view. Finally, we illustrate the value of the proposed methods through qualitative and quantitative evaluation on synthetic data and two real-world use cases.
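To make the core idea concrete, below is a minimal sketch in Python (NumPy, SciPy, scikit-learn) of one way a spatial-neighborhood-aware distance can be fed into t-SNE. It is not the authors' implementation: the function name, the k-by-k neighborhood-mean augmentation, and all parameters are illustrative assumptions, standing in for the patch-comparison approaches explored in the paper. The texture-aware pairwise distances are passed to scikit-learn's TSNE as a precomputed metric.

import numpy as np
from scipy.ndimage import uniform_filter
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import TSNE

def neighborhood_augmented_embedding(image, k=3, perplexity=30.0):
    """Embed the pixels of an (H, W, C) attribute image into 2D.

    Assumption for illustration: each pixel's attribute vector is
    concatenated with the mean attribute vector of its k x k spatial
    neighborhood, so that distances between pixels reflect both their
    own attributes and their local texture context.
    """
    h, w, c = image.shape
    # Channel-wise k x k neighborhood mean via a uniform (box) filter;
    # size 1 on the channel axis keeps channels separate.
    local_mean = uniform_filter(image.astype(float), size=(k, k, 1))
    # Augment each pixel's attributes with its neighborhood summary.
    features = np.concatenate([image.reshape(h * w, c).astype(float),
                               local_mean.reshape(h * w, c)], axis=1)
    # Pairwise Euclidean distances in the augmented attribute space.
    dists = squareform(pdist(features, metric="euclidean"))
    # t-SNE on the precomputed, texture-aware distance matrix.
    tsne = TSNE(metric="precomputed", init="random",
                perplexity=perplexity, random_state=0)
    return tsne.fit_transform(dists)

Note that the full N x N distance matrix (N = H * W pixels) makes this sketch practical only for small images or subsampled pixel sets; the neighborhood-mean augmentation is just one of several possible patch-comparison schemes.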
Citation
BibTeX
@inproceedings{bib:2022_pacific_vis_spidr,
  author    = {Alexander Vieth and Anna Vilanova and Boudewijn Lelieveldt and Elmar Eisemann and Thomas H{\"o}llt},
  title     = {Incorporating Texture Information into Dimensionality Reduction for High-Dimensional Images},
  booktitle = {Proceedings of the 15th IEEE Pacific Visualization Symposium},
  pages     = {11--20},
  year      = {2022},
  doi       = {10.1109/PacificVis53943.2022.00010},
}