Monday, March 1, 2010

Model selection in Online learning

Reading the paper "On the Generalization Ability of On-Line Learning
Algorithms", I felt that i.i.d assumption is not a big deal. Recently I've read this excellent post Adaptivity in online models? and tuned myself. The point is that i.i.d assumption is not just for ensuring generalization ability of learning. Another usefulness of i.i.d assumption is in model selection. In regularized error minimization for machine learning algorithms we have a trading parameter between error and regularization. Having i.i.d assumptions for samples, we can employ cross validation method to find a value for his parameter. In other words, if we want to choose an appropriate value for a regularization parameter we can split the samples and use a validation set to find a good approximation of parameter.

So, the question is: how can we choose such parameters in online learning algorithms, having no i.i.d. assumption and not having all samples in advance? At first glance, adaptive adjustment of the parameter is the obvious solution. Are there other methods, or can current methods be improved, considering the inherent limitations of the online learning setting?
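As one illustration of the "adaptive adjustment" idea, here is a sketch (my own toy example, not from the paper or the post): online gradient descent whose step size decays as eta_t = eta0 / sqrt(t), a standard schedule that is tuned as the stream arrives and needs neither a held-out validation set nor an i.i.d. assumption.

```python
import numpy as np

def online_gd(stream, d, eta0=0.1):
    """Online gradient descent on squared loss with the adaptive step
    size eta_t = eta0 / sqrt(t): the parameter is adjusted per round
    from the data seen so far, with no validation split."""
    w = np.zeros(d)
    losses = []
    for t, (x, y) in enumerate(stream, start=1):
        pred = w @ x
        losses.append((pred - y) ** 2)       # loss suffered this round
        grad = 2.0 * (pred - y) * x          # gradient of squared loss
        w -= (eta0 / np.sqrt(t)) * grad      # decaying, adaptive step
    return w, losses

# toy stream: even a fixed (non-i.i.d.) presentation order is fine here
rng = np.random.default_rng(2)
w_star = np.array([1.0, -1.0, 0.5])
xs = rng.standard_normal((500, 3))
stream = [(x, x @ w_star) for x in xs]
w, losses = online_gd(stream, d=3)
```

On this toy stream the per-round loss shrinks as the schedule anneals, but the deeper question in the post remains: such schedules fix one tuning rule in advance, rather than selecting among models the way cross validation does.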
