Thursday, March 18, 2010

Online Learning with Unbounded and Non-convex Loss Function

Actually, this is a new thought that attracted me. In all of the online learning algorithms, we assume that the loss function is bounded. Also, if we do not use perturbation in our algorithm, the loss function should be convex in its first argument. So the question is whether there is any way to relax these assumptions, more specifically:

1) How can we get rid of the boundedness constraint?
2) How can we use a non-convex loss function in learning?

The motivation for the second problem is the well-known 0-1 loss function. We know that by adding some randomization to our prediction, this loss function behaves like the absolute loss function. So, can we generalize this idea to other non-convex losses or maybe to some other soft constraints? Are there any real applications for such loss functions?
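To make the randomization trick concrete, here is a minimal sketch (the function name and parameters are my own, just for illustration): if we predict 1 with probability p and 0 otherwise, then for a true label y in {0, 1} the expected 0-1 loss is p(1 - y) + (1 - p)y = |p - y|, which is exactly the absolute loss of the "soft" prediction p.

```python
import random

def expected_zero_one_loss(p, y, trials=100000, seed=0):
    """Estimate the expected 0-1 loss of a randomized predictor that
    outputs 1 with probability p, against a true label y in {0, 1}."""
    rng = random.Random(seed)
    mistakes = 0
    for _ in range(trials):
        pred = 1 if rng.random() < p else 0
        mistakes += 1 if pred != y else 0
    return mistakes / trials

# By the identity above, this estimate should be close to |p - y|,
# i.e., the absolute loss of the soft prediction p.
print(expected_zero_one_loss(0.3, 1))  # close to |0.3 - 1| = 0.7
```

So the non-convex 0-1 loss, in expectation over the predictor's internal randomness, turns into the convex absolute loss, which is what lets the standard convex analysis go through.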

I will post more info regarding this problem as I enrich my knowledge of it.
