Abstracts

Multivariate self-similarity: wavelet eigen-analysis

Patrice Abry

Self-similarity has been used successfully to model scale-free dynamics in numerous real-world applications of very different natures, and wavelet representations were shown to permit an efficient analysis of scale-free dynamics and a robust estimation of the scaling parameters. However, most successes have remained confined to univariate analysis, while most modern applications entail the joint analysis of collections of time series. A multivariate self-similar model has recently been proposed.
The present talk will detail the relevance and benefits of an eigen-multivariate wavelet representation for multivariate self-similarity and its interest for real-world applications.
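To fix ideas, here is a minimal sketch of the kind of wavelet estimator involved, assuming numpy and PyWavelets (pywt): per-scale wavelet spectrum matrices are formed from the detail coefficients of the component series, and scaling exponents are read off log-log regressions of their eigenvalues. The talk's actual estimator may differ in its details.

```python
# Minimal sketch of wavelet eigen-analysis for multivariate self-similarity.
# Assumes numpy and PyWavelets (pywt); generic illustration, not necessarily
# the exact procedure discussed in the talk.
import numpy as np
import pywt

def wavelet_spectra(X, wavelet="db3", levels=8):
    """X: (P, N) array, one row per time series. Returns, for each octave j,
    the P x P empirical cross-spectrum of the wavelet detail coefficients."""
    coeffs = [pywt.wavedec(x, wavelet, level=levels) for x in X]
    S = []
    for j in range(1, levels + 1):
        D = np.array([c[-j] for c in coeffs])   # detail coefficients at scale 2^j
        S.append(D @ D.T / D.shape[1])          # empirical wavelet spectrum S(2^j)
    return S

def eigen_scaling_exponents(X):
    """Regress log2 of the eigenvalues of S(2^j) against j: for a P-variate
    self-similar process the slopes behave like 2*h_m + 1, yielding the
    collection of scaling exponents h_1, ..., h_P."""
    S = wavelet_spectra(X)
    eig = np.array([np.linalg.eigvalsh(Sj) for Sj in S])   # shape (J, P)
    j = np.arange(1, len(S) + 1)
    slopes = np.polyfit(j, np.log2(eig), 1)[0]             # one slope per eigenvalue
    return (slopes - 1) / 2

X = np.cumsum(np.random.default_rng(0).normal(size=(3, 4096)), axis=1)  # 3 random walks
print(eigen_scaling_exponents(X))   # estimates should cluster near H = 0.5
```

Regressing the eigenvalues, rather than individual entries of S(2^j), is the point of the eigen-analysis: when the observed components are mixtures, entrywise scaling exponents are unreliable, whereas the eigenvalue scaling recovers the underlying exponents.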

 

Multiscale and nonlinear properties of cardiac signals: a wavelet-based study of their variability

Françoise Argoul

Physiological systems are regulated spatially and temporally by integrated and intermingled networks of neurons. This multiscale and multifrequency regulation leads to seemingly random signals, which are now studied under the heading of Network Physiology. The cardiac system is one of the most fascinating physiological systems, because it is modulated both by local and distant internal factors (the central and autonomic nervous systems, heart tissue and vessel mechanics) and by external environmental factors (exercise and rest, wake and sleep, circadian rhythms). Despite many experimental and theoretical efforts grounded in nonlinear-systems and fractal concepts over the past decades, the exact role of nerve-tissue rhythm alterations in cardiac rhythm variability remains an open question.

Heart rate variability (HRV) is defined by clinicians from beat-to-beat interval signals. HRV exhibits temporal self-similar properties across a broad range of scales, long-range correlations and nonlinear properties, indicating fractal or multifractal structures. These fractal and nonlinear scaling features change not only with the electrical and mechanical properties of the heart's excitable and muscular tissues but also with modifications of the sympathetic-parasympathetic stimuli. The complexity of heart rate fluctuations cannot be adequately described by a single scaling parameter; a large set of scaling parameters is needed. Even more puzzling is the fact that these signals are non-stationary, which makes their study a complicated task. To decipher the phase interaction of the different frequency components of a cardiac signal, complex wavelet transforms offer the possibility of continuously following the temporal fluctuations of the instantaneous frequency and its phase. Here, we compare discrete beat-to-beat interval signals with these continuous frequency and phase signals, their scale invariances and their complexity, through the computation of their D(h) spectra with the wavelet transform modulus maxima (WTMM) method.
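As an illustration of the phase-following step, here is a minimal sketch using PyWavelets' complex Morlet CWT on a toy HRV-like signal; the WTMM / D(h) machinery itself is more involved and is not reproduced here.

```python
# Instantaneous phase/frequency of a cardiac-like signal from a complex
# Morlet CWT (assumes numpy and PyWavelets). Toy illustration only.
import numpy as np
import pywt

fs = 4.0                                   # sampling rate (Hz), e.g. a resampled RR series
t = np.arange(0, 300, 1 / fs)
x = np.sin(2 * np.pi * 0.1 * t * (1 + 0.2 * np.sin(2 * np.pi * 0.01 * t)))  # toy FM signal

scales = np.geomspace(4, 256, 64)
W, freqs = pywt.cwt(x, scales, "cmor1.5-1.0", sampling_period=1 / fs)

ridge = np.abs(W).argmax(axis=0)           # strongest scale at each time (crude ridge)
phase = np.angle(W[ridge, np.arange(len(t))])
inst_freq = np.gradient(np.unwrap(phase)) * fs / (2 * np.pi)   # instantaneous frequency, Hz
```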

Wavelet-based techniques have been proposed for the identification, classification and analysis of arrhythmic electrocardiogram signals, to distinguish normal sinus rhythm from atrial flutter, paroxysmal atrial fibrillation (AF), chronic AF and ventricular fibrillation (VF). Atrial fibrillation is an arrhythmia associated with the asynchronous contraction of the atrial muscle fibres. It is the most prevalent cardiac arrhythmia in the Western world and is associated with significant morbidity. The best available therapeutic strategies combine pharmacological and surgical means, but even when successful, they do not always prevent long-term relapses. Initial surgical success becomes all the harder to sustain as the arrhythmia maintains itself and the pathology evolves into sustained or chronic AF. This raises the crucial open issue of deciphering the mechanisms that govern the onset of AF as well as its perpetuation. Here, we propose a wavelet-based multiscale strategy to analyze the electrical activity of human hearts recorded by catheter electrodes positioned in the coronary sinus (CS) during episodes of chronic AF.

Because the clinical outcome of the “purely” myocardial approach remains suboptimal despite significant technological improvements in ablation procedures, with better mapping and energy delivery systems, there has been increasing evidence that dysfunction of the autonomic nervous system, which encompasses the sympathetic, parasympathetic and intrinsic neural networks, could also be involved in the pathogenesis of AF. Using a complex wavelet decomposition of AF signals to capture their local phase dynamics, we propose to quantify the nature and strength of the nonlinearities involved in these signals, and we introduce a new marker to follow the evolution of this disease.

 

Multiscale Deep Learning

Richard Baraniuk

Deep (neural) networks have been applied productively in a wide range of machine learning and signal processing problems. But a fundamental question remains: why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive. This talk overviews some recent progress on two fronts. First, we build a bridge between deep networks and approximation theory via spline functions. Our key result is that a large class of deep networks can be written as a composition of max-affine spline operators (MASOs), which provide a powerful portal for analyzing the geometry of learning and prediction. Second, we demonstrate that deep networks can be endowed with a natural multiscale structure that enables a multiresolution analysis of their predictions, for example of the decision boundary in classification tasks.
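A small illustration of the MASO viewpoint, as a toy numpy example rather than the paper's code: each ReLU unit is a max of two affine functions, so on the activation region containing an input x the whole network coincides with a single affine map A x + b, which can be read off the activation pattern.

```python
# Sketch of the max-affine spline view: a ReLU network is piecewise affine,
# and on the activation region containing x it equals A x + b exactly.
# We recover (A, b) for a toy two-layer net; illustration only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

def local_affine_map(x):
    """The activation pattern of x selects one affine piece of the spline."""
    mask = (W1 @ x + b1 > 0).astype(float)     # which ReLUs are 'on'
    A = W2 @ (mask[:, None] * W1)              # local slope matrix
    b = W2 @ (mask * b1) + b2                  # local offset
    return A, b

x = rng.normal(size=4)
A, b = local_affine_map(x)
assert np.allclose(net(x), A @ x + b)          # the net is affine on this region
```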

 

Sicut in caelo et in terra -- Data representations for seismic and gravitational waves

Eric Chassande-Mottin

In order to improve the resolution in the reconstruction of geological layers from seismic data, J. Morlet, A. Grossmann and colleagues proposed, in the late 1970s, an alternative to the short-time Fourier representation, later called the wavelet transform. Similar representations and ideas are used today in a completely different context, that of gravitational wave detection by the advanced LIGO and advanced Virgo interferometric detectors. This presentation will review data analysis issues in the field of gravitational astronomy, highlighting similarities with those encountered in seismology.

 

Coherent states: analysis, geometry and physics. In the footsteps of Alex Grossmann and Yves Meyer (in the age of machine learning)

Ronald Coifman

We will relate the quest for effective analytic functional representations to current trends in empirical machine learning and deep neural networks, and to the work of Grossmann and Meyer, who were seeking informative waveforms for physics. Their goal was to provide a language for efficiently expressing acoustic fields and oscillating states.

Our current ability to build purely empirical models directly from observations (by machine learning optimizations) is a continuation of some of the old work by Alex and his collaborators.

One of our goals is to describe the integration of classical methods with more recent developments in scientific machine learning.

 

Lovely bones: A mathematical and biological dialog.

Ingrid Daubechies

In collaboration with biological morphologists, we have defined new distances between pairs of two-dimensional surfaces (embedded in three-dimensional space) that use both local structures and global information in the surfaces. These distances are motivated by the need of biological morphologists to compare different phenotypical structures, and to study the relationships of living or extinct animals with their surroundings and with each other. Many existing algorithms for morphological correspondence make use of carefully defined anatomical correspondence points (landmarks), on bones for example. Our approach does not require any such preliminary marking of special features or landmarks by the user.
We then show how to use and "clean up" these distances to study collections of teeth and bones, refining the results further.

 

From linear algebra for the comparison of biological sequences to the study of the evolution of the apple genome

Claudine Landés

This talk will proceed in two parts. The first part will present Alex Grossmann's contributions to the comparison of biological sequences, in particular in the study of the HIV1-HIV2 viruses and of the evolution of DNA topoisomerases (topoisomerases maintain the DNA molecule in a correct topology during the biological processes of the cell cycle). In the second part, I will present my ongoing research at the IRHS (Institut de Recherche en Horticulture et Semences) on the apple genome and its evolutionary history.
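As a generic, textbook-style illustration of how sequence comparison can be cast in matrix terms (not a reconstruction of Grossmann's actual constructions), a dot plot represents two sequences as a 0/1 match matrix whose diagonal runs reveal conserved regions:

```python
# Dot plot: M[i, j] = 1 where residues match; conserved regions appear as
# diagonal runs, scored here by summing each diagonal (assumes numpy only).
import numpy as np

def dot_plot(s, t):
    a = np.frombuffer(s.encode(), dtype=np.uint8)
    b = np.frombuffer(t.encode(), dtype=np.uint8)
    return (a[:, None] == b[None, :]).astype(int)

def diagonal_scores(M):
    """Number of matches on each alignment offset (each diagonal of M)."""
    n, m = M.shape
    return [np.trace(M, offset=k) for k in range(-n + 1, m)]

M = dot_plot("ATGGCGTATTAG", "ATGGCTTATTAG")
print(max(diagonal_scores(M)))   # strongest diagonal: best ungapped alignment offset
```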

 

Quantizations of the quadratic Monge-Kantorovitch distance

Thierry Paul

We will present two analogues, quantum and partially quantum, of the Wasserstein metric of order two, and show how they make it possible to metrize the set of quantum states in a way that is more precise and meaningful, at scales of order the square root of the Planck constant, than the usual Schatten topologies (including the trace and Hilbert-Schmidt ones). The quantum analogue provides a quantum counterpart of the Knott-Smith-Brenier theorem, a step towards a quantum optimal transport theory. The partially quantum ones compare quantitatively the quantum evolution and the underlying classical one in situations of low-regularity potentials and initial data, unreachable, to our knowledge, by standard compactness methods and semiclassical microlocal analysis.
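For reference, the classical definition, together with a hedged sketch of the quantum cost in the spirit of the Golse-Mouhot-Paul construction (the talk's precise definitions may differ):

```latex
% Classical quadratic Monge--Kantorovich (Wasserstein-2) distance between
% probability measures \mu, \nu on R^d:
W_2(\mu,\nu)^2 \;=\; \inf_{\pi \in \Pi(\mu,\nu)}
  \int_{\mathbb{R}^d \times \mathbb{R}^d} |x-y|^2 \, d\pi(x,y).

% Quantum analogue (Golse--Mouhot--Paul flavour): couplings become density
% operators \Pi on L^2(\mathbb{R}^d_x) \otimes L^2(\mathbb{R}^d_y) with
% partial traces \rho and \sigma, and the quadratic cost becomes the operator
C \;=\; |x-y|^2 + \bigl|(-i\hbar\nabla_x) - (-i\hbar\nabla_y)\bigr|^2,
\qquad
MK_\hbar(\rho,\sigma)^2 \;=\; \inf_{\Pi} \operatorname{Tr}(\Pi\, C),
% comparable to W_2 up to corrections of order \sqrt{\hbar}.
```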

 

Multiscale dissection of some natural systems: the Antarctic cryosphere and solar pulsations, and insights into the Earth's climate

Sylvie Roques

The aim of this talk is to illustrate the multiscale nature of some natural systems in the universe sciences. We will first present an application of the Empirical Mode Decomposition (EMD) to the detection of the Antarctic circumpolar wave, one of the strongest manifestations of Southern Ocean variability. This variability is analysed from the data of coastal meteorological stations on the Antarctic continent, which have provided temperature time series since 1955. To date, this is the first detection of this wave from ground measurements. Then, we will analyse solar activity through the sunspot data on the photosphere over a period of two centuries (1796-1996). A “matching pursuit” algorithm, compared with the EMD of the Antarctic temperature data, will allow us to suggest a large-scale temperature variation at the South Pole linked to the solar cycle. Finally, we will present how a multiscale analysis of these natural systems provides some answers about the physical quantities acting on the Earth's climate.
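For concreteness, matching pursuit is the greedy decomposition sketched below (numpy only; the dictionary actually used for the sunspot series is not specified in this abstract, so a DCT dictionary stands in):

```python
# Minimal matching pursuit over an arbitrary dictionary with unit-norm
# columns. Illustration only; the talk's dictionary is not specified here.
import numpy as np

def matching_pursuit(x, D, n_atoms=10):
    """Greedy MP: D is (n_samples, n_atoms_total) with unit-norm columns.
    Returns the coefficient vector and the final residual."""
    r = x.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ r                    # correlations with the residual
        k = np.argmax(np.abs(corr))       # best-matching atom
        coef[k] += corr[k]
        r -= corr[k] * D[:, k]            # deflate the residual
    return coef, r

N = 512
n = np.arange(N)
D = np.cos(np.pi * (n[:, None] + 0.5) * np.arange(N)[None, :] / N)  # DCT-II atoms
D /= np.linalg.norm(D, axis=0)
x = 3.0 * D[:, 7] + 1.5 * D[:, 40] + 0.05 * np.random.default_rng(1).normal(size=N)
coef, r = matching_pursuit(x, D, n_atoms=5)   # recovers atoms 7 and 40
```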

 

Has the Earth's magnetic field periodically collapsed since its last reversal, 780,000 years ago?

Ginette Saracco

The hypothesized contribution of the precession of the Earth's axis to the geodynamo energy budget was recently reinforced by experimental and numerical geodynamo models, triggering new research on the possible influence of orbital periodicities, e.g. axial precession (≈26,000 years) and obliquity (≈42,000 years), in the paleomagnetic field spectrum (e.g. Thouveny et al. 2008; Saracco et al. 2009). Oceanic sedimentary sequences contain records of geomagnetic paleointensity, as well as cosmogenic nuclide production records, that provide appropriate time series of the variations of the geomagnetic dipole moment, allowing us to extract its frequency-modulation laws with algorithms based on the phase of a linear time-frequency method such as the complex wavelet transform.
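The phase-based extraction alluded to can be summarized by the standard ridge formula (a simplified statement of the method):

```latex
% Ridge/phase demodulation with a complex (e.g. Morlet) wavelet: along a
% ridge a(t) of the transform W_x, the instantaneous frequency of the
% analysed component is the phase derivative
f(t) \;=\; \frac{1}{2\pi}\,
  \frac{\partial}{\partial t}\,\arg W_x\bigl(a(t),\,t\bigr),
% from which the frequency-modulation laws of the dipole-moment series
% are read off.
```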
Beyond the fundamental interest for mechanical constraints on the geodynamo, the interest of such studies lies in the fact that, at present and over the last 3 millennia, the dipole moment has been decaying by about 30% (from 11.5 × 10²² A·m² to less than 8 × 10²² A·m²), i.e. at a rate equivalent to that which led to the full cancellation of the dipole at the time of excursions and reversals of the geomagnetic field.

Since the last excursion (Laschamp) occurred 41,500 years ago, i.e. about one obliquity period ago, the question raised here takes on its full importance in view of the changes that a reversal of the magnetic field may bring about at the scale of the Earth.

 

Blind deconvolution of galaxy survey images with shape constraint

Jean-Luc Starck

Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. This task is particularly difficult when the PSF varies spatially, as is the case for large surveys such as LSST or Euclid. It becomes a formidable challenge when the PSF field is unknown.

The first step is therefore to estimate the PSF field accurately. In practice, isolated stars provide a measurement of the PSF at given locations in the telescope's field of view. We thus propose an algorithm to recover the PSF field from the measurements available at these few locations. This amounts to solving an inverse problem, which we regularize using mathematical concepts such as optimal transport, graph theory and sparsity. We also show how a shape constraint can be added to the inverse problem, which significantly improves the shape measurements of the reconstructed galaxies.
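As a minimal stand-in for the regularized deconvolution step (not the talk's actual algorithm, which adds optimal-transport, graph and shape-constraint priors and handles a spatially varying, unknown PSF), an ISTA iteration for l1-regularized deconvolution with a known, constant PSF reads:

```python
# ISTA for  min_x 0.5 * ||h * x - y||^2 + lam * ||x||_1
# with a known, spatially constant PSF h (FFT circular convolution).
# Assumes numpy only; a sketch, not the method presented in the talk.
import numpy as np

def ista_deconvolve(y, h, lam=1e-2, n_iter=200):
    H = np.fft.rfft2(h, s=y.shape)               # PSF transfer function
    Hc = np.conj(H)
    step = 1.0 / np.max(np.abs(H)) ** 2          # 1/L, L = Lipschitz constant
    x = np.zeros_like(y)
    for _ in range(n_iter):
        resid = np.fft.irfft2(H * np.fft.rfft2(x), s=y.shape) - y
        grad = np.fft.irfft2(Hc * np.fft.rfft2(resid), s=y.shape)   # H^T (Hx - y)
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)    # soft threshold
    return x
```

In practice the sparsity is enforced in a wavelet (e.g. starlet) domain rather than directly on the pixels, but the proximal-gradient structure is the same.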

 

Interference Imaging and Lippmann Photography

Martin Vetterli

Gabriel Lippmann invented an original color photography process in the late 19th century, for which he obtained the Nobel Prize in Physics in 1908 (https://www.nobelprize.org/nobel_prizes/physics/laureates/1908/).

The method relies on an interferential process and, as such, provides a recording of the entire color spectrum, rather than the trichromatic recording common today. Because of the complexity of the acquisition and the subtlety of the viewing process, Lippmann photography remains an all too rare curiosity.

Luckily, Lausanne has a large collection of Lippmann plates at the “Musée de l’Elysée” (http://www.elysee.ch/en/homepage/). By serendipity, we came across this collection while working on a high-resolution scanning project at the museum, and we got totally fascinated by Lippmann photography! First, can we produce a digital twin of a Lippmann photograph? This requires a high-dimensional acquisition, which is possible thanks to technology from a spinoff of our lab (https://www.artmyn.com/#home).

Second, we needed to understand the Lippmann process in depth, from acquisition to rendering. This led to a precise modeling of the recording of the standing waves in the light-sensitive medium, as well as of the rendering by white light exciting the recorded interference patterns. A physical model combined with mathematical analysis leads to a new and full understanding of the Lippmann process, which was verified by experimental acquisitions (with the help of Felipe Alves) and by imaging the resulting plates with X-ray imaging (CIME) and tomography (PSI). Some phenomena absent from previous analyses can be explained and verified with this new model.
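The core of that physical model can be summarized by the classical Lippmann/Wiener analysis (a simplification of the full model discussed in the talk):

```latex
% Simplified core of the Lippmann recording model (the full model presented
% in the talk refines this). A mirror at depth z = 0 turns each spectral
% component of wavenumber k into a standing wave sin^2(kz), so the exposure
% at depth z inside the emulsion is
I(z) \;\propto\; \int S(k)\,\bigl(1 - \cos(2kz)\bigr)\,dk,
% i.e. the developed plate stores a cosine transform of the full spectrum
% S(k); re-illumination with white light reads the interference pattern
% back out and reproduces the original colours.
```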

Third, we designed and implemented a “digital Lippmann camera” which mimics what the original, analog Lippmann photography acquires. This is an alternative to a multispectral camera, but based on interferometry. Interesting trade-offs between the number of channels, pixels and exposure time can be explored.

Last, we consider Lippmann printing with femtosecond lasers. The analog version is cumbersome, and the digital twin could allow us to render original Lippmann photographs in their full glory.

In conclusion, a chance encounter with Lippmann's powerful historical color-acquisition process led us to an exploration at the intersection of physics, applied mathematics, computer vision and computer graphics, resulting in a digital version of Lippmann photography.
