Proceedings Paper

Learning Filter Functions in Regularisers by Minimising Quotients

Publisher

Springer International Publishing AG
DOI: 10.1007/978-3-319-58771-4_41

Keywords

Regularisation learning; Non-linear eigenproblem; Sparse regularisation; Generalised inverse power method

Funding

  1. Leverhulme Trust
  2. Newton Trust
  3. Israel Science Foundation [718/15]
  4. NIHR Cambridge Biomedical Research Centre
  5. Leverhulme Trust project
  6. EPSRC [EP/M00483X/1]
  7. EPSRC centre [EP/N014588/1]
  8. Cantab Capital Institute for the Mathematics of Information
  9. CHiPS (Horizon RISE project grant)

Abstract

Learning approaches have recently become very popular in the field of inverse problems. A large variety of methods has been established in recent years, ranging from bi-level learning to high-dimensional machine learning techniques. Most learning approaches, however, only aim at fitting parametrised models to favourable training data whilst ignoring misfit training data completely. In this paper, we follow up on the idea of learning parametrised regularisation functions by quotient minimisation as established in [3]. We extend the model therein to include higher-dimensional filter functions to be learned and allow for fit- and misfit-training data consisting of multiple functions. We first present results that resemble the behaviour of well-established derivative-based sparse regularisers, such as total variation or higher-order total variation, in one dimension. Our second and main contribution is the introduction of novel families of non-derivative-based regularisers. This is accomplished by learning favourable scales and geometric properties while at the same time avoiding unfavourable ones.
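To illustrate the quotient-minimisation idea at a toy level, the sketch below learns a small one-dimensional convolution filter by minimising the ratio of its sparse (l1) response on favourable training signals to its response on unfavourable ones, with the filter constrained to the unit sphere. Everything specific here is an assumption made for illustration: the l1/l1 form of the quotient, the filter size, the toy signals, and the plain projected numerical-gradient descent, which stands in for the generalised inverse power method mentioned in the keywords. This is not the authors' implementation.

import numpy as np

def quotient(w, fits, misfits, eps=1e-12):
    # Ratio of total l1 filter response on favourable vs. unfavourable signals
    # (assumed form of the learning objective, for illustration only).
    num = sum(np.abs(np.convolve(u, w, mode="valid")).sum() for u in fits)
    den = sum(np.abs(np.convolve(u, w, mode="valid")).sum() for u in misfits)
    return num / (den + eps)

def learn_filter(fits, misfits, size=3, steps=500, lr=1e-2, seed=0):
    # Projected numerical-gradient descent on the quotient; a simplified
    # stand-in for the generalised inverse power method used in the paper.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(size)
    w /= np.linalg.norm(w)
    h = 1e-6
    for _ in range(steps):
        q0 = quotient(w, fits, misfits)
        g = np.zeros_like(w)
        for i in range(size):
            e = np.zeros(size)
            e[i] = h
            g[i] = (quotient(w + e, fits, misfits) - q0) / h
        w = w - lr * g
        w /= np.linalg.norm(w)  # keep ||w||_2 = 1; the quotient is scale invariant
    return w

# Toy usage: piecewise-constant signals as favourable (fit) data,
# oscillatory signals as unfavourable (misfit) data.
t = np.linspace(0.0, 1.0, 128)
fits = [np.where(t > 0.5, 1.0, 0.0),
        np.where((t > 0.3) & (t < 0.7), 1.0, 0.0)]
misfits = [np.sin(20 * np.pi * t), np.cos(14 * np.pi * t)]
w = learn_filter(fits, misfits)
print("learned filter:", np.round(w, 3))

With piecewise-constant favourable signals and oscillatory unfavourable ones, a near zero-mean finite-difference filter is a plausible minimiser in this toy setting, which is in the spirit of the total-variation-like behaviour the abstract describes for the one-dimensional case.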

