Abstract
Deep neural networks (DNNs) have been widely applied to solve complex real-world problems. However, selecting an optimal network structure remains challenging. This study addresses the challenge by linking neuron selection in DNNs to knot placement in spline basis expansion models, introducing a difference penalty that automates knot selection and bypasses the complexities of traditional neuron selection. We name the method Deep P-Spline (DPS). The method extends the model class considered in conventional DNN modeling. It also admits a latent variable formulation, fitted with the Expectation Conditional Maximization (ECM) algorithm, for efficient network structure tuning. From a non-parametric regression perspective, DPS is shown to overcome the curse of dimensionality, enabling effective handling of high-dimensional inputs. The methodology is applied to computer experiments and image data analysis, where regression problems with many inputs are common. Numerical results validate the effectiveness of the model, highlighting its potential in advanced nonlinear regression tasks.
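For context, the difference penalty referred to above follows the classical P-spline construction of Eilers and Marx: a generous set of knots is placed, and overfitting is controlled by penalizing finite differences of adjacent B-spline coefficients rather than by selecting knots directly. A minimal sketch of this standard penalized least-squares objective (the notation is assumed for illustration and is not necessarily the exact DPS formulation) is
\[
\hat{\theta} \;=\; \arg\min_{\theta}\; \|y - B\theta\|_2^2 \;+\; \lambda \,\|D_d \theta\|_2^2 ,
\]
where $B$ is the B-spline basis matrix, $D_d$ is the $d$-th order difference matrix acting on the coefficients (e.g., $(D_2\theta)_j = \theta_j - 2\theta_{j+1} + \theta_{j+2}$), and $\lambda \ge 0$ tunes smoothness. For fixed $\lambda$, the minimizer has the closed form $\hat{\theta} = (B^\top B + \lambda D_d^\top D_d)^{-1} B^\top y$, so increasing $\lambda$ effectively deactivates redundant knots without discrete knot (or, by the analogy drawn here, neuron) selection.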