Invited speakers, experts in the field

“Data Fusion Using Source Separation: Role of Diversity”, Tülay Adali, Abstract

“Geometric Deep Learning”, Michael Bronstein, Abstract, Slides

“Barycentric Subspace Analysis: an extension of PCA to Manifolds”, Xavier Pennec, Abstract, Slides

“Elastic Riemannian Frameworks for High-Dimensional Signal Processing”, Anuj Srivastava, Abstract, Slides

“Subspace Approximations on the Continuum”, Michael B. Wakin, Abstract


Special session on computational imaging, featuring

“Superresolution Imaging using Piecewise Smooth Image Models”, Mathews Jacob, Abstract

“X-RAY Fluorescence Image Super-resolution using Dictionary Learning”, Qiqin Dai et al., Abstract

“LASSI: A Low-Rank and Adaptive Sparse Signal Model for Highly Accelerated Dynamic Imaging”, Saiprasad Ravishankar et al., Abstract

“Learning Parts of Touch in the Whole Hand with Sparse Dictionary Learning”, Yon Visell (University of California, Santa Barbara), Abstract


“Data Fusion Using Source Separation: Role of Diversity”, Tülay Adali

Fusion of information from multiple sets of data, in order to extract the features that are most useful and relevant for a given task, is inherent to many problems we deal with today. Data-driven methods based on source separation minimize the assumptions about the underlying relationships and enable fusion of information by letting multiple datasets fully interact and inform each other. The use of multiple types of diversity (statistical properties) enables maximal use of the available information when achieving source separation. In this talk, a number of powerful models are introduced for the fusion of both multiset data (data of the same nature) and multimodal data, and the importance of diversity in fusion is demonstrated with a number of practical examples in medical imaging and video processing.

“Geometric Deep Learning”, Michael Bronstein

The past decade of computer vision research has witnessed the re-emergence of “deep learning”, and in particular convolutional neural network techniques, which make it possible to learn task-specific features from examples and have achieved breakthrough performance in a wide range of applications. However, in the geometry processing and computer graphics communities, these methods are practically unknown. One of the reasons stems from the fact that 3D shapes (typically modeled as Riemannian manifolds) are not shift-invariant spaces, so the very notion of convolution is rather elusive. In this talk, I will show some recent works from our group that try to bridge this gap. Specifically, I will show the construction of intrinsic convolutional neural networks on meshes and point clouds, with applications such as finding dense correspondences between deformable shapes and shape retrieval.

“Barycentric Subspace Analysis: an extension of PCA to Manifolds”, Xavier Pennec

In this talk I address the generalization of Principal Component Analysis (PCA) to Riemannian manifolds and potentially more general stratified spaces. Tangent PCA is often sufficient for analyzing data that are sufficiently centered around a central value (unimodal or Gaussian-like data), but fails for multimodal or large-support distributions (e.g. uniform on closed compact subspaces). Instead of a covariance matrix analysis, Principal Geodesic Analysis (PGA) and Geodesic PCA (GPCA) propose to minimize the distance to geodesic subspaces (GS), which are spanned by the geodesics going through a point with tangent vectors in a restricted linear subspace of the tangent space. Other methods, like Principal Nested Spheres (PNS), are restricted to simpler manifolds but emphasize the need for nestedness of the resulting principal subspaces.
In this work, we first propose a new and more general family of subspaces in manifolds that we call barycentric subspaces. They are implicitly defined as the locus of points which are weighted means of $k+1$ reference points. As this definition relies on points and not on tangent vectors, it can also be extended to geodesic spaces which are not Riemannian. For instance, in stratified spaces, it naturally allows principal subspaces that span several strata, which is not the case with PGA. We show that barycentric subspaces locally define a submanifold of dimension $k$ which generalizes geodesic subspaces. Like PGA, barycentric subspaces can naturally be nested, which allows the construction of inductive forward nested subspaces approximating data points that contain the Fréchet mean. However, it also allows the construction of backward flags which may not contain the mean. Second, we rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (hierarchies of properly embedded linear subspaces of increasing dimension). To that end, we propose an extension of the unexplained variance criterion that generalizes nicely to flags of barycentric subspaces in Riemannian manifolds. This results in a particularly appealing generalization of PCA on manifolds, which we call Barycentric Subspace Analysis (BSA).
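One common formalization of the "weighted means of $k+1$ reference points" idea is via exponential barycenters (notation mine; the treatment of the cut locus and degenerate weight configurations is in the full paper):

```latex
\mathrm{EBS}(x_0,\dots,x_k) \;=\; \Bigl\{\, x \in M \;:\; \exists\, \lambda \in \mathbb{R}^{k+1},\ \textstyle\sum_i \lambda_i \neq 0,\ \ \sum_{i=0}^{k} \lambda_i \,\log_x(x_i) = 0 \,\Bigr\}
```

where $\log_x$ denotes the Riemannian logarithm at $x$. For weights normalized to sum to one, a point of this locus is exactly a weighted exponential barycenter of the reference points, which reduces to the usual affine span in the Euclidean case.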

“Elastic Riemannian Frameworks for High-Dimensional Signal Processing”, Anuj Srivastava

Success in high-dimensional signal processing often relies on our ability to represent signals as elements of low-dimensional, nonlinear manifolds. These representations can then be used for statistical modeling, estimation, testing, and classification of signals. The key steps in the search for low-dimensional representations include: (1) understanding geometries of signal spaces, (2) registration of signals in space and time, and (3) PCA-type dimension reduction in quotient spaces of signal spaces. These steps depend critically on finding metrics that are invariant to nuisance variables and that enable efficient algorithms. I will describe a comprehensive framework, termed elastic functional data analysis (EFDA), that allows for joint registration and statistical analysis of signals with both attractive theoretical and computational properties. I will illustrate this framework for: (1) Elastic signal registration, (2) Elastic functional PCA, (3) Elastic regression models, (4) Elastic trend estimation models, and (5) Elastic shape analysis of objects.
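A central tool in this elastic framework is the square-root velocity function (SRVF), under which the elastic metric becomes the ordinary L2 metric. A rough numerical sketch (the discretization, signals, and helper names are mine, not from the talk):

```python
import numpy as np

def trapezoid(y, t):
    """Trapezoidal integration (kept explicit to avoid version-specific numpy aliases)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def srvf(f, t):
    """Square-root velocity function: q(t) = sign(f'(t)) * sqrt(|f'(t)|)."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

t = np.linspace(0.0, 1.0, 2001)
f1 = np.sin(2 * np.pi * t)
f2 = np.sin(2 * np.pi * t**2)          # a time-warped version of f1

q1, q2 = srvf(f1, t), srvf(f2, t)
# L2 distance between SRVFs; minimizing this over warpings yields the elastic distance
d = np.sqrt(trapezoid((q1 - q2) ** 2, t))
# the SRVF preserves "energy": the integral of q^2 equals the total variation of f
tv1 = trapezoid(q1 ** 2, t)            # total variation of sin(2*pi*t) on [0,1] is 4
```

In the full framework the registration step searches over warping functions; here only the fixed-parametrization L2 distance is shown.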

“Subspace Approximations on the Continuum”, Michael B. Wakin

Nonlinear manifold models arise in situations where a small number of continuous-valued parameters capture the degrees of freedom in a signal. Examples of such parameters include the frequency of a sinusoid, the position of a target in a radar image, etc. Linear subspace models form the foundation of many classical signal processing techniques and play an important role in modern sparsity-based signal processing. Subspaces are commonly used to simplify manifold models by providing local, tangent-like approximations. Rather than fitting low-dimensional subspaces to small regions of a manifold, we consider the potential benefits of using a higher-dimensional subspace approximation over a larger region of the manifold. As we discuss, in some cases such constructions can be remarkably effective for capturing the energy in the signals of interest while remaining nearly orthogonal to signals out of this range. Consequently, these subspace approximations have many possible applications, including modeling and reconstruction of multiband signals, and through-the-wall radar imaging. This is joint work with Zhihui Zhu, Mark Davenport, and Justin Romberg.
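The "higher-dimensional subspace over a larger region" idea can be illustrated on the sinusoid manifold: sweep the frequency over a band, and a modest-dimensional subspace captures nearly all the energy while remaining nearly orthogonal to out-of-band signals. A toy numerical illustration (all parameters mine):

```python
import numpy as np

N = 256
t = np.arange(N) / N
# sample the signal manifold: sinusoids swept across a narrow frequency band
freqs = np.linspace(10.0, 14.0, 400)
X = np.concatenate([
    np.stack([np.cos(2 * np.pi * f * t) for f in freqs]),
    np.stack([np.sin(2 * np.pi * f * t) for f in freqs]),
])

U, sv, Vt = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
k = int(np.searchsorted(energy, 0.99)) + 1   # dimensions needed for 99% energy

B = Vt[:k]                                   # orthonormal basis of the subspace
g = np.cos(2 * np.pi * 40.0 * t)             # an out-of-band sinusoid
leak = np.linalg.norm(B @ g) / np.linalg.norm(g)
```

The required dimension grows roughly with the time-bandwidth product of the band rather than with the number of sampled signals, while `leak` stays small.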

“Superresolution Imaging using Piecewise Smooth Image Models”, Mathews Jacob

It is often challenging to achieve high spatial and temporal resolution in imaging applications involving dynamically changing image content (e.g. cardiac MRI). This talk will focus on novel algorithms, which model the image data as piecewise smooth signals, to improve the state of the art in MRI. We will introduce off-the-grid methods, based on recent advances in structured matrix recovery algorithms, that exploit additional structure in multidimensional imaging problems which current total variation regularization methods fail to exploit. The talk will also demonstrate the application of this framework to dynamic imaging.
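A minimal numpy illustration of why structured matrices enable off-the-grid recovery (example values mine; this shows only the rank property that such algorithms exploit, not the recovery algorithm itself): the Fourier samples of a stream of K off-grid Diracs, the kind of signal that arises as the derivative of a piecewise-constant model, form a Hankel matrix of rank exactly K.

```python
import numpy as np

# Fourier samples of K Diracs at off-grid locations
K = 3
taus = np.array([0.12, 0.35, 0.78])    # locations in [0, 1), not on any grid
amps = np.array([1.0, 2.0, 1.5])
m = np.arange(64)
xhat = (amps[None, :] * np.exp(-2j * np.pi * np.outer(m, taus))).sum(axis=1)

# the Hankel (structured) matrix built from these samples has rank exactly K
H = np.array([xhat[i:i + 33] for i in range(32)])   # 32 x 33 Hankel matrix
sv = np.linalg.svd(H, compute_uv=False)
```

The singular values drop to numerical zero after the K-th, so a low-rank structured matrix prior identifies the signal without discretizing the locations onto a grid.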

“X-RAY Fluorescence Image Super-resolution using Dictionary Learning”, Qiqin Dai, Emeline Pouyet, Oliver Cossairt, Marc Walton, Francesca Casadio and Aggelos Katsaggelos

X-Ray fluorescence (XRF) scanning of works of art is becoming an increasingly popular non-destructive analytical method. High quality XRF spectra are necessary to obtain significant information on both major and minor elements used for characterization and provenance analysis. However, there is a trade-off between the spatial resolution of an XRF scan and the Signal-to-Noise Ratio (SNR) of each pixel’s spectrum, due to the limited scanning time. In this paper, we propose an XRF image super-resolution method to address this trade-off, thus obtaining a high spatial resolution XRF scan with high SNR. We use a sparse representation of each pixel based on a dictionary trained from the spectrum samples of the image, while imposing a spatial smoothness constraint on the sparse coefficients. We then increase the spatial resolution of the sparse coefficient map using a conventional super-resolution method. Finally, the high spatial resolution XRF image is reconstructed from the high spatial resolution sparse coefficient map and the trained spectrum dictionary.
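The shape of this pipeline (code spectra against a dictionary, upscale the coefficient maps, then re-synthesize) can be sketched on toy data. Everything below is a stand-in: the atoms are synthetic, plain least squares replaces the smoothness-constrained sparse coding, and nearest-neighbor replication replaces the conventional super-resolution step.

```python
import numpy as np

rng = np.random.default_rng(0)
L_spec, n_atoms, h, w = 32, 2, 8, 8     # spectrum length, atoms, low-res grid
e = np.arange(L_spec)
# two synthetic spectral "atoms" (Gaussian emission lines), unit-normalized
D = np.stack([np.exp(-(e - 8)**2 / 8.0), np.exp(-(e - 20)**2 / 8.0)], axis=1)
D /= np.linalg.norm(D, axis=0)

A = rng.random((n_atoms, h * w))        # low-res coefficient maps (ground truth)
Y = D @ A                               # observed low-res XRF spectra, one per pixel

# coding step: represent each pixel's spectrum in the dictionary
A_hat, *_ = np.linalg.lstsq(D, Y, rcond=None)
A_maps = A_hat.reshape(n_atoms, h, w)

# upscale the coefficient maps (nearest-neighbor stands in for a real SR method)
A_hr = A_maps.repeat(2, axis=1).repeat(2, axis=2)

# reconstruct the high-resolution XRF image from dictionary + upscaled codes
X_hr = D @ A_hr.reshape(n_atoms, -1)
```

The key design point survives even in this toy: super-resolution is performed on the small coefficient maps rather than on the full hyperspectral cube.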

“LASSI: A Low-Rank and Adaptive Sparse Signal Model for Highly Accelerated Dynamic Imaging”, Saiprasad Ravishankar, Brian E. Moore, Raj Rao Nadakuditi and Jeffrey A. Fessler

Sparsity-based approaches have been popular in many image processing and imaging applications. Recent research has shown the usefulness of sparsity or low-rank techniques for solving inverse problems such as those in dynamic imaging. In particular, the imaged temporal data sequence is modeled as a sum of low-rank and sparse components that are estimated from measurements. In this work, we instead decompose the temporal image sequence into a low-rank component and a component whose spatiotemporal patches are assumed sparse in some adaptive dictionary domain. We present a methodology to jointly estimate the underlying signal components and the spatiotemporal dictionary from highly undersampled measurements. Our numerical experiments demonstrate the promising performance of our scheme for dynamic magnetic resonance image reconstruction from undersampled k-t space data.
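The baseline low-rank plus sparse decomposition can be sketched with simple alternating updates. This is only the classical L+S baseline on fully sampled synthetic data; LASSI itself replaces the elementwise shrinkage below with sparse coding of spatiotemporal patches in a jointly learned dictionary, and works from undersampled k-t measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, r = 64, 40, 2                      # pixels, frames, true rank
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, T))
S_true = np.zeros((n, T))
mask = rng.random((n, T)) < 0.05         # 5% sparse dynamic component
S_true[mask] = 5.0 * rng.standard_normal(mask.sum())
X = L_true + S_true                      # "dynamic sequence", pixels x time

def soft(v, tau):
    """Elementwise soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

L = np.zeros_like(X)
S = np.zeros_like(X)
for _ in range(30):
    # low-rank update: truncated SVD of the residual
    U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
    L = (U[:, :r] * sv[:r]) @ Vt[:r]
    # sparse update: shrinkage (stand-in for adaptive dictionary-sparse coding)
    S = soft(X - L, 2.0)
```

The alternation separates a slowly varying background (L) from localized dynamics (S), which is the structure LASSI then refines with the learned dictionary.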

“Learning Parts of Touch in the Whole Hand with Sparse Dictionary Learning”, Yitian Shao and Yon Visell

Touching and manipulating objects elicits spatiotemporal patterns of vibration in the whole hand that can be compared to the retinal images received by the eye during vision. However, little is known about the perceptual information that may be contained in these signals. Motivated by the successful application of sparse coding ideas in other areas of perceptual neuroscience, we asked to what extent touch-elicited vibrations can be considered the sum of their parts, and how these parts might relate to the functional and structural specializations of the hand. To address this, we employed a wearable sensor array to record a database of vibrations elicited during natural manual interactions, and formulated a sparse dictionary learning problem that aimed to model the latent structure in these data in an additive way, using a variation on non-negative matrix factorization with true (l0-norm) sparseness constraints. We validated our approach using classification and clustering tasks. This analysis yielded “parts of touch”: cohesive representations of transient, touch-elicited vibration patterns that were localized mainly in the fingers, and that reflected the most salient anatomical and functional specializations of the hand.
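The core idea of non-negative factorization with an l0 constraint can be sketched on synthetic data. This is a simplified stand-in for the authors' method: standard multiplicative NMF updates followed by a hard top-k projection of each code column, rather than an l0 constraint enforced throughout the optimization; all sizes and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, K, k0 = 20, 200, 5, 2     # feature dim, samples, parts, sparsity level
# synthetic "vibration" data: nonnegative mixtures of k0 out of K parts
W_true = rng.random((d, K))
H_true = np.zeros((K, n))
for j in range(n):
    idx = rng.choice(K, size=k0, replace=False)
    H_true[idx, j] = rng.random(k0)
X = W_true @ H_true

# standard multiplicative NMF updates (Lee & Seung)
W = rng.random((d, K))
H = rng.random((K, n))
for _ in range(300):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

# hard l0 projection: keep only the k0 largest coefficients in each column
kth = np.sort(H, axis=0)[K - k0]
H = np.where(H >= kth, H, 0.0)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because every sample is an additive, non-negative combination of at most k0 parts, the factorization tends to recover interpretable part-like atoms, which is the sense in which touch-elicited vibrations are "the sum of their parts".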