A short course at the Department of Mathematics

Massimo Fornasier

(Technische Universität München)
Personal website
Bio He received his PhD in Computational Mathematics in 2003 from the University of Padova. After spending the period from 2003 to 2006 as a postdoctoral researcher at the University of Vienna and the University of Rome "La Sapienza", he joined the Johann Radon Institute for Computational and Applied Mathematics of the Austrian Academy of Sciences, where he worked as a Senior Scientist until March 2011. From 2006 to 2007 he was an Associate Researcher at the Program in Applied and Computational Mathematics of Princeton University, USA. In 2011 he was appointed, through a "chiara fama" (distinguished-scholar) call, Full Professor of Applied Numerical Analysis at the Technische Universität München, Germany, a position he still holds. He is the author of around seventy scientific papers. He has supervised the studies and research of 7 PhD students and 14 postdoctoral researchers.

Prerequisites

  • Recommended: Analysis 1, Analysis 2, Linear Algebra 1, Linear Algebra 2, Linear Algebra for Informatics, Analysis for Informatics, Discrete Probability Theory, Introduction to Probability Theory.
  • Suggested: Algorithmic Discrete Mathematics, Introduction to Nonlinear Optimization

Intended Learning Outcomes

After successful completion of the module, students are able to understand and apply the basic notions, concepts, and methods of computational linear algebra, convex optimization, and differential geometry for data analysis. In particular, they master the use of the singular value decomposition and of random matrices for low-dimensional data representations. They know the fundamentals of sparse recovery problems, including compressed sensing, low-rank matrix recovery, and dictionary learning algorithms. They understand the representation of data as clusters around manifolds in high dimensions, and they know how to use methods for constructing local charts for the data.
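
As a small taste of the first of these tools, the following sketch shows how the singular value decomposition yields a low-dimensional representation of data; the matrix sizes and the synthetic data are illustrative only, not taken from the course material.

```python
import numpy as np

# Hypothetical data: 200 points in R^50 lying close to a 3-dimensional subspace.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 50))
X += 0.01 * rng.standard_normal(X.shape)  # small noise

# Center the columns, then take the SVD (the computational core of PCA).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# The rank-3 truncation captures almost all of the variance:
# the data are effectively 3-dimensional despite living in R^50.
X3 = (U[:, :3] * s[:3]) @ Vt[:3, :]
rel_err = np.linalg.norm(Xc - X3) / np.linalg.norm(Xc)
```

The sharp drop between the third and fourth singular values is what signals the approximate low-rank structure discussed in the first part of the course.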

Content

  1. Representations of data as matrices
    • Many data vectors form a matrix
    • Review of basic linear algebra
    • Linear dependence and concept of rank
    • Approximate linear dependence with varying degree of approximation: Singular Value Decomposition / Principal Component Analysis
    • Redundancy of data representations: orthonormal bases, frames, and dictionaries
    • Fourier basis as singular vectors of spatial shift
    • Fast Fourier Transform
  2. Linear dimension reduction
    • Johnson-Lindenstrauss (JL) Lemma
    • Review of basic probability, random matrices
    • Random Matrices satisfying JL with high probability
    • Fast JL embeddings
    • Sparsity, low rank as structured signal models
    • Compressed sensing
    • Matrix completion and low rank matrix recovery
    • Optimization review
    • Dictionary Learning
  3. Non-linear dimension reduction
    • Manifolds as data models
    • Review of differential geometry
    • ISOMAP
    • Diffusion maps
    • Importance of nearest-neighbor search, use of JL embeddings
  4. Outlook: Data Analysis and Machine Learning
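
The Johnson-Lindenstrauss Lemma listed under topic 2 states that a random linear map into a much lower dimension preserves pairwise distances up to a small distortion, with high probability. A minimal sketch with a Gaussian random matrix follows; all dimensions are illustrative choices, not values from the course.

```python
from itertools import combinations

import numpy as np

# n points in R^d, randomly projected down to R^k with k << d.
rng = np.random.default_rng(1)
n, d, k = 50, 10_000, 500

X = rng.standard_normal((n, d))
A = rng.standard_normal((k, d)) / np.sqrt(k)  # entries N(0, 1/k)
Y = X @ A.T                                   # embedded points in R^k

# Measure how well the random projection preserves pairwise distances.
dist_ratios = [
    np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
    for i, j in combinations(range(n), 2)
]
max_distortion = max(abs(r - 1) for r in dist_ratios)
```

Note that the target dimension k needed by the lemma depends only logarithmically on the number of points n, not on the ambient dimension d.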
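
In the same spirit, the compressed sensing topic asks to recover a sparse vector from far fewer random measurements than its ambient dimension. The toy experiment below uses orthogonal matching pursuit, one standard greedy recovery algorithm; the sizes, the random measurement model, and the choice of algorithm are illustrative assumptions, not prescriptions from the syllabus.

```python
import numpy as np

# Recover an s-sparse x in R^d from m < d random linear measurements.
rng = np.random.default_rng(2)
d, m, s = 400, 100, 5

A = rng.standard_normal((m, d)) / np.sqrt(m)   # random measurement matrix
x = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=s) * (1 + rng.random(s))
y = A @ x                                      # noiseless measurements

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit the coefficients by least squares.
S, r = [], y.copy()
for _ in range(s):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(d)
x_hat[S] = coef
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

With Gaussian measurements and m on the order of s log d, such greedy (and convex) methods recover the sparse vector exactly with high probability, which is the phenomenon the compressed sensing lectures make precise.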

Schedule

  • Friday 2018/10/26 14.00-18.00 Room B103
  • Friday 2018/11/30 14.00-18.00 Room B103
  • Friday 2018/12/21 14.00-18.00 Room B103

Details

  • Call: PDF
  • Poster: PDF
  • Venue: Polo Scientifico e Tecnologico F. Ferrari – Room B103
  • Language: English
  • Credits:
    • for students of the Department of Mathematics: 3CFU
    • for students of other Departments: please ask your Department's administration for information
  • Admission: course open to a maximum of 30 Master's (LM) students plus PhD students
  • How to apply: fill in the form you find here
  • Deadline: 15/10/2018 23:59 (Italian time)

Admission list