IAN: Iterated Adaptive Neighborhoods for Manifold Learning and Dimensionality Estimation

Invoking the manifold assumption in machine learning requires knowledge of the manifold’s geometry and dimension, and theory dictates how many samples are required. However, in most applications the data are limited, sampling may not be uniform, and the manifold’s properties are unknown; neighborhoods must therefore adapt to the local structure. We introduce an algorithm for inferring adaptive neighborhoods for data given by a similarity kernel. Starting with a locally conservative neighborhood (Gabriel) graph, we sparsify it iteratively according to a weighted counterpart. In each step, a linear program yields globally minimal neighborhoods, and a volumetric statistic reveals neighbor outliers likely to violate manifold geometry. We apply our adaptive neighborhoods to nonlinear dimensionality reduction, geodesic computation, and dimension estimation. A comparison against standard algorithms based on, for example, k-nearest neighbors demonstrates the usefulness of our approach.
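The starting point of the algorithm, the Gabriel graph, connects two points whenever no third point lies inside the ball whose diameter is the segment between them. The sketch below (not the paper's implementation, which goes on to sparsify this graph iteratively via a weighted counterpart and a linear program) illustrates a brute-force O(n³) construction in NumPy; the function name `gabriel_graph` is our own.

```python
import numpy as np

def gabriel_graph(X):
    """Boolean adjacency matrix of the Gabriel graph of points X (n x d).

    Edge (i, j) is kept iff no third point k lies strictly inside the
    ball with diameter segment ij, equivalently
    d(i,k)^2 + d(j,k)^2 >= d(i,j)^2 for all k != i, j.
    """
    n = len(X)
    # Pairwise squared Euclidean distances via broadcasting.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            mask = np.ones(n, dtype=bool)
            mask[[i, j]] = False  # exclude the endpoints themselves
            if np.all(D2[i, mask] + D2[j, mask] >= D2[i, j]):
                A[i, j] = A[j, i] = True
    return A
```

For three collinear points 0, 1, 2, the middle point blocks the long edge: only (0,1) and (1,2) survive, which is the "locally conservative" behavior the abstract refers to.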

Citation

Dyballa, Luciano, and Steven W. Zucker. "IAN: Iterated Adaptive Neighborhoods for manifold learning and dimensionality estimation." Neural Computation 35.3 (2023): 453-524.

Authors from the IE Research Datalab