Mahdi Mohammadigohari

Machine Learning Theory • Statistical Learning Theory • Functional Analysis

PhD in Computer Science (Machine Learning Theory, defense scheduled July 2026)
Free University of Bozen–Bolzano, Italy

PhD in Mathematics (Functional Analysis)
Islamic Azad University of Mashhad, Iran

Email: mahdi.mohammadigohari@unibz.it | mahdi.mohammadigohari@gmail.com

About

I develop theoretical foundations for modern machine learning from a function-space and statistical perspective. My work focuses on understanding generalization in deep architectures beyond parameter-based analyses, using kernel methods, operator-theoretic techniques, and hierarchical constructions of hypothesis spaces.

A central question in my research is how compositional depth affects regularity and statistical complexity. My work shows that, under suitable constructions, deep models can remain both expressive and statistically stable, yielding dimension-free and depth-uniform generalization guarantees.

More broadly, I aim to bridge functional analysis and statistical learning theory with modern deep learning, and to develop mathematically grounded models for complex learning systems.

Research

Brownian Kernel Ladders (BKL)
[Project page] I introduced Brownian Kernel Ladders, a hierarchical function-space framework for deep learning based on recursive integral RKHS constructions. The framework defines an intrinsic notion of complexity independent of parameterization, allowing deep models to be analyzed directly at the level of function spaces.
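One way to get intuition for a ladder of recursively constructed RKHSs is the classical chain obtained by repeatedly applying the integration operator (Tf)(t) = ∫₀ᵗ f(s) ds to the Brownian motion kernel k(s, t) = min(s, t): each rung yields a strictly more regular function space. The sketch below is only an illustrative toy in this spirit; the grid discretization and the specific recursion are my assumptions here, not the BKL construction itself, which is detailed in the forthcoming manuscript.

```python
# Illustrative toy ladder (NOT the BKL construction): iterate the integration
# operator on the Brownian motion kernel min(s, t), discretized on a grid.
import numpy as np

def brownian_kernel(grid):
    """Gram matrix of the Brownian motion kernel k(s, t) = min(s, t)."""
    return np.minimum.outer(grid, grid)

def integrate_kernel(K, h):
    """One ladder step: k_next(s, t) = int_0^s int_0^t k(u, v) du dv,
    discretized with left Riemann sums of step h."""
    C = np.tril(np.ones(K.shape))      # cumulative-sum (integration) matrix
    return h * h * (C @ K @ C.T)

n = 200
h = 1.0 / n
grid = np.linspace(h, 1.0, n)

K = brownian_kernel(grid)
for level in range(3):
    # Samples from N(0, K) become smoother at each rung: total variance
    # (the trace) shrinks while sample-path regularity increases.
    print(f"level {level}: trace = {np.trace(K) * h:.4f}")
    K = integrate_kernel(K, h)
```

The trace at level 0 approximates ∫₀¹ t dt = 1/2 and drops sharply at each rung, reflecting the gain in regularity as the ladder is climbed.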

A key result is that, despite increasing compositional depth, the associated hypothesis classes exhibit dimension-free and depth-uniform generalization behavior, with Gaussian complexity scaling as O(n^{-1/2}) in the sample size n. This shows that depth need not lead to statistical overfitting, contrary to what parameter-counting analyses suggest.
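The n^{-1/2} rate can be checked empirically for any bounded kernel: for the unit ball {f : ‖f‖_H ≤ 1}, the supremum in the Gaussian complexity has the closed form sup_f (1/n) Σᵢ gᵢ f(xᵢ) = (1/n) √(gᵀKg), and Jensen's inequality gives the bound √(tr K)/n ≤ n^{-1/2} when k(x, x) ≤ 1. The sketch below uses a Gaussian RBF kernel purely as a stand-in for the BKL kernels; the function names are my own.

```python
# Monte-Carlo estimate of the empirical Gaussian complexity of the unit RKHS
# ball, using the identity sup_{||f||<=1} (1/n) sum_i g_i f(x_i)
#                          = (1/n) sqrt(g^T K g).
# The RBF kernel here is an illustrative stand-in, not a BKL kernel.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_complexity(K, n_draws=2000, rng=rng):
    """Estimate E_g [ (1/n) sqrt(g^T K g) ] over standard Gaussian draws g."""
    n = K.shape[0]
    g = rng.standard_normal((n_draws, n))
    quad = np.clip(np.einsum("bi,ij,bj->b", g, K, g), 0.0, None)
    return np.mean(np.sqrt(quad)) / n

for n in (100, 400, 1600):
    x = rng.uniform(0.0, 1.0, size=n)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # bounded RBF kernel
    print(f"n = {n:5d}: G_n estimate = {gaussian_complexity(K):.4f}, "
          f"bound n^(-1/2) = {n ** -0.5:.4f}")
```

Since the kernel is bounded by 1 on the diagonal, tr K = n and every estimate sits below the n^{-1/2} bound, halving each time n quadruples.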

The construction also provides a geometric interpretation of depth through progressive regularity transformations, offering a new perspective on how expressive hierarchical models can remain statistically stable.

A detailed manuscript is currently in preparation.

Operator-Theoretic Generalization
I developed a Koopman-operator framework for analyzing generalization in deep multi-task learning. The approach yields improved spectral scaling and a unified treatment of injective and non-injective architectures.
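For readers unfamiliar with the operator-theoretic toolkit, the standard finite-dimensional approximation of a Koopman operator is extended dynamic mode decomposition (EDMD): regress a dictionary of observables at the next time step onto the same dictionary at the current step. The sketch below applies EDMD to the logistic map as a generic illustration only; the multi-task generalization framework above uses its own construction.

```python
# Generic EDMD sketch (illustrative only): approximate the Koopman operator
# of the logistic map x_{t+1} = r x_t (1 - x_t) on a monomial dictionary.
import numpy as np

rng = np.random.default_rng(1)
r = 3.6

# Generate trajectory data (x_t, x_{t+1}) of the logistic map.
x = np.empty(2000)
x[0] = rng.uniform(0.1, 0.9)
for t in range(1999):
    x[t + 1] = r * x[t] * (1.0 - x[t])

def dictionary(x, degree=6):
    """Monomial observables psi_k(x) = x^k, k = 0..degree."""
    return np.vander(x, degree + 1, increasing=True)

Psi_now = dictionary(x[:-1])
Psi_next = dictionary(x[1:])

# Least-squares Koopman matrix: Psi_next ~ Psi_now @ K_edmd.
K_edmd, *_ = np.linalg.lstsq(Psi_now, Psi_next, rcond=None)

# Spectrum of the finite-dimensional Koopman approximation.
eigvals = np.linalg.eigvals(K_edmd)
print(np.sort(np.abs(eigvals))[::-1])
```

Because the constant observable is in the dictionary, the approximation recovers the Koopman eigenvalue 1 exactly; the remaining spectrum governs how fast observables decorrelate under the dynamics.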

Current Focus

I am currently interested in the interplay between function-space representations and optimization in deep learning, including how hierarchical kernel constructions relate to implicit bias and learnability in modern architectures.

Publications

Machine Learning Theory

Mathematics / Functional Analysis

Education

Teaching

Courses taught include Linear Algebra, Real Analysis, Probability Theory, Statistical Inference, and Numerical Analysis.

Technical Skills

Python, R, MATLAB, TensorFlow, scikit-learn, LaTeX, Git

Contact

Email: mahdi.mohammadigohari@unibz.it