Modern deep learning theory is largely built around parameter-based analyses, where model complexity grows with depth and width. This creates a fundamental tension: deep models are highly expressive, and parameter-based complexity measures predict that they should generalize poorly, yet in practice they often generalize well.
Brownian Kernel Ladders (BKL) is a function-space framework designed to address this tension. Instead of analyzing neural networks through their parameters, the framework constructs a hierarchy of function spaces and studies deep models directly at the level of functions.

---

BKL builds a sequence of function spaces recursively. At each level, a new space is obtained by integrating Brownian kernels over the unit sphere of the previous layer. This produces a hierarchical ladder of spaces that encode compositional structure.
The construction induces an intrinsic notion of complexity, independent of parameterization, which allows the analysis of deep models without reference to specific architectures.
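
To give a schematic picture of the recursion, the ladder can be sketched as follows; the notation, the choice of $\min(s,t)$ as the Brownian kernel, and the integrating measure are my assumptions for illustration rather than the framework's precise definitions:

$$
\mathcal{H}_1 = \mathcal{H}_{k}, \qquad k(s,t) = \min(s,t),
$$

$$
K_{\ell+1}(x, x') = \int_{\mathbb{S}(\mathcal{H}_\ell)} k\bigl(g(x),\, g(x')\bigr)\, d\mu_\ell(g),
\qquad
\mathcal{H}_{\ell+1} = \mathcal{H}_{K_{\ell+1}},
$$

where $\mathcal{H}_K$ denotes the reproducing kernel Hilbert space of a kernel $K$, $\mathbb{S}(\mathcal{H}_\ell)$ is the unit sphere of the level-$\ell$ space, and $\mu_\ell$ is a probability measure on that sphere.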

---

For the BKL function class, the Gaussian complexity satisfies

$$
O\!\left(n^{-1/2}\right)
$$

independently of both the depth and the ambient dimension (up to logarithmic factors).
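
For reference, the quantity being bounded is the standard empirical Gaussian complexity of a function class $\mathcal{F}$ on a sample $x_1, \dots, x_n$,

$$
\mathcal{G}_n(\mathcal{F}) = \mathbb{E}_{g_1, \dots, g_n \sim \mathcal{N}(0,1)} \left[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} g_i \, f(x_i) \right],
$$

so the statement above reads as $\mathcal{G}_n(\mathcal{F}_{\mathrm{BKL}}) = O(n^{-1/2})$, where $\mathcal{F}_{\mathrm{BKL}}$ denotes the BKL function class (notation mine).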
This result shows that increasing depth does not necessarily increase statistical complexity. In particular, it provides a setting in which deep models remain both expressive and statistically stable.

---

The key insight is that depth acts as a form of geometric regularization. Each layer transforms the function space in a way that refines regularity while preserving control over complexity.
This leads to a perspective in which depth is not a source of overfitting, but rather a mechanism for structured representation.
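
To make the "preserving control over complexity" part of this claim tangible, the toy script below is a crude numerical companion: it builds a two-level Monte Carlo approximation of a ladder kernel on $[0,1]$, using the Brownian covariance kernel $\min(s,t)$ at level one and averaging it over random unit-norm level-one functions at level two. Everything here (the base kernel, the nonnegative random coefficients, the grid) is my own simplification for illustration and is not the framework's actual construction.

```python
# Toy Monte Carlo sketch (my own simplification, NOT the paper's construction):
# a two-level "ladder" kernel on [0, 1].  Level 1 is the Brownian kernel
# k1(s, t) = min(s, t); level 2 averages k1 over random unit-norm, nonnegative
# combinations of level-1 features, as a crude stand-in for integrating over
# the unit sphere of the previous level.
import numpy as np

rng = np.random.default_rng(0)

def brownian_kernel(s, t):
    """Brownian-motion covariance kernel: k(s, t) = min(s, t) for s, t >= 0."""
    return np.minimum(s, t)

# Evaluation grid and level-1 Gram matrix.
x = np.linspace(0.0, 1.0, 50)
K1 = brownian_kernel(x[:, None], x[None, :])            # shape (50, 50)

# Random functions g_m(t) = sum_j a_mj * k1(t, z_j) with unit RKHS norm,
# i.e. a^T K1 a = 1.  Nonnegative coefficients keep g_m(t) in [0, 1], so
# min(g_m(t), g_m(t')) is again a Brownian-kernel evaluation (an assumption
# made purely to keep this toy example well defined).
M = 1000                                                 # Monte Carlo samples
A = rng.exponential(size=(M, x.size))                    # nonnegative weights
norms = np.sqrt(np.einsum("mi,ij,mj->m", A, K1, A))      # RKHS norms of g_m
A /= norms[:, None]                                      # unit-sphere scaling
G = A @ K1                                               # g_m(x_i), shape (M, 50)

# Level-2 kernel: Monte Carlo average of the Brownian kernel of g_m values.
K2 = np.minimum(G[:, :, None], G[:, None, :]).mean(axis=0)

# Both kernels stay uniformly bounded by 1 on [0, 1]: the level-2 kernel does
# not blow up even though it aggregates over the whole level-1 unit sphere.
print("max K1 =", K1.max(), " max K2 =", round(K2.max(), 4))
```

The printed maxima show the level-two kernel staying in the same bounded range as the level-one kernel it is built from, which is the toy analogue of depth not inflating complexity.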

---

Traditional bounds based on parameter norms or network size typically grow with depth, suggesting that deeper models should be harder to control statistically.
In contrast, the BKL framework shows that when models are analyzed at the level of function spaces, depth can be decoupled from statistical complexity.
This provides an alternative theoretical explanation for the empirical success of deep architectures.

---

A full manuscript is currently in preparation. Ongoing work includes extensions to approximation theory and the study of optimization and implicit bias within the BKL framework.

---

I am particularly interested in connections between hierarchical kernel constructions and modern optimization phenomena, including implicit regularization in deep learning.
If you are interested in related questions or potential collaborations, feel free to reach out.