The Generalization Performance of the Random Fourier Features Method
We investigate risk bounds for support vector machines (SVMs) that use the random Fourier features method as an approximate model in classification tasks, under three different problem setups: (i) the soft-margin formulation with the same regularization parameter in the approximate and the accurate models; (ii) the same upper bound on the 2-norm of the normal vectors in both models; and (iii) a freely chosen bound on the 2-norm of the normal vector in the approximate model. We also provide a series of simulation results on synthetic data exhibiting the large performance gap between the accurate and the approximate models that can occur when the regularization parameter is not controlled appropriately.
Our work shows the importance of controlling the regularization parameters when comparing the generalization performance of the random Fourier features method with that of the accurate model. It provides guidance for numerical experimentation, as well as a rigorous interpretation of the intuition that the expected risk of the kernel SVM with the random Fourier features method may exceed that of the hypothesis learned by the accurate model by no more than O(1/√N), where N is the number of random features.
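To make the setting concrete, the following is a minimal sketch of the random Fourier features approximation for the Gaussian (RBF) kernel, in the style of Rahimi and Recht; the function name, the bandwidth parameter gamma, and the sizes chosen below are illustrative assumptions, not part of the work described above:

```python
import numpy as np

def random_fourier_features(X, N, gamma, rng):
    """Map rows of X to N random Fourier features whose inner products
    approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density (a Gaussian),
    # phases drawn uniformly on [0, 2*pi).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, N))
    b = rng.uniform(0.0, 2.0 * np.pi, size=N)
    return np.sqrt(2.0 / N) * np.cos(X @ W + b)

# Illustrative check: the feature inner products approach the exact kernel
# as the number of features N grows.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
gamma = 0.5
Z = random_fourier_features(X, 2000, gamma, rng)
K_approx = Z @ Z.T
K_exact = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
max_err = np.max(np.abs(K_approx - K_exact))  # shrinks as N grows
```

An SVM trained on the features Z plays the role of the approximate model in the abstract, while an SVM trained with the exact kernel K_exact is the accurate model; the abstract's results concern how the regularization in these two problems must be matched.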
Joint work with Yitong Sun