Multi-Task Learning For Option Pricing
Multi-task learning is a process used to learn domain-specific bias. It consists of simultaneously training models on different tasks drawn from the same domain and forcing them to exchange domain information. This transfer of knowledge is achieved by imposing constraints on the parameters that define the models, and it can lead to improved generalization performance. In this paper, we explore a particular multi-task learning method that forces the model parameters to lie on an affine manifold, defined in parameter space, that embeds domain information. We apply this method to the prediction of the prices of call options on the S&P index over a period ranging from 1987 to 1993. An analysis of variance of the results shows significant improvements in generalization performance.
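As a minimal sketch of the kind of constraint described above, one common way to express an affine-manifold restriction on task parameters is the following; the notation (\theta_t, \theta_0, U, \alpha_t, T, d, k) is illustrative and not taken from the paper. The parameter vector of the model for task t is written as

    \theta_t = \theta_0 + U \alpha_t, \qquad t = 1, \dots, T,
    \theta_0 \in \mathbb{R}^{d}, \quad U \in \mathbb{R}^{d \times k}, \quad \alpha_t \in \mathbb{R}^{k}, \quad k \ll d,

where \theta_0 and the basis U are shared across all tasks and carry the domain information, while the low-dimensional coordinates \alpha_t are task-specific; training all tasks jointly estimates the shared quantities from data of the whole domain.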