Convergence properties of gradient descent noise reduction

D. Ridout, Kevin Judd

Research output: Contribution to journal › Article › peer-review

22 Citations (Web of Science)

Abstract

Gradient descent noise reduction is a technique that attempts to recover the true signal, or trajectory, from noisy observations of a non-linear dynamical system whose dynamics are known. This paper provides the first rigorous proof that the algorithm recovers the original trajectory for a broad class of dynamical systems under certain conditions. The proof is obtained using ideas from linearisation theory. Since the algorithm was first introduced it has been recognised that it can fail to recover the true trajectory, and it has been suggested that this is a practical or numerical limitation caused by near tangencies between stable and unstable manifolds. This paper demonstrates, through numerical experiments and details of the proof, that the situation is worse than expected: near tangencies impose essential limitations on noise reduction, not merely practical or numerical ones. That is, gradient descent noise reduction will sometimes fail to recover the true trajectory even with unlimited, perfect computation. On the other hand, the numerical experiments suggest that the algorithm always recovers a trajectory that is entirely consistent with the evidence provided by the observations; that is, it attains the best that can be achieved given the observations. It is argued that near tangencies therefore impose the same limitations on any noise-reduction algorithm.
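The paper itself contains no code, but the following sketch may help make the setting concrete. It assumes the standard formulation of gradient descent noise reduction: starting from the noisy observations, one performs gradient descent on the "indeterminism", the summed squared mismatch between each state and the image of its predecessor under the known dynamics. The logistic map, the step size, the iteration count and all variable names below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical illustration of gradient descent noise reduction for a known
# one-dimensional map. The logistic map is used only as an example system.

def f(x, a=4.0):
    """Known dynamics: the logistic map."""
    return a * x * (1.0 - x)

def df(x, a=4.0):
    """Derivative of the map, needed for the gradient of the cost."""
    return a * (1.0 - 2.0 * x)

def indeterminism(x):
    """Cost measuring how far x is from being a true trajectory of f."""
    e = x[1:] - f(x[:-1])
    return 0.5 * np.sum(e ** 2)

def gradient(x):
    """Gradient of the indeterminism with respect to each state x_t."""
    e = x[1:] - f(x[:-1])        # e_t = x_{t+1} - f(x_t)
    g = np.zeros_like(x)
    g[:-1] -= df(x[:-1]) * e     # dependence of e_t on x_t through f
    g[1:] += e                   # dependence of e_t on x_{t+1}
    return g

def gd_noise_reduction(y, steps=20000, eta=0.02):
    """Start from the noisy observations and descend the indeterminism."""
    x = y.copy()
    for _ in range(steps):
        x -= eta * gradient(x)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Generate a true trajectory and add observational noise.
    true = np.empty(200)
    true[0] = 0.3
    for t in range(199):
        true[t + 1] = f(true[t])
    y = true + 0.01 * rng.standard_normal(true.size)

    x = gd_noise_reduction(y)
    print("RMS error before:", np.sqrt(np.mean((y - true) ** 2)))
    print("RMS error after: ", np.sqrt(np.mean((x - true) ** 2)))
    print("indeterminism after:", indeterminism(x))
```

On a simple low-dimensional example like this the descent typically drives the indeterminism down and reduces the observational noise. The paper's point is that, in systems with near tangencies between stable and unstable manifolds, the trajectory recovered in this way can remain consistent with the observations yet still differ from the true trajectory, no matter how long the descent is run.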
Original language: English
Pages (from-to): 26-47
Journal: Physica D: Nonlinear Phenomena
Volume: 165
Issue number: 165
Publication status: Published - 2002
