Friday, December 2, 2011

Online Convex Optimization without Projection

In this post I would like to briefly discuss our recent work in Online Convex Optimization (OCO), namely OCO with Long Term Constraints. Classical gradient descent methods usually require a projection step in each iteration in order to get back to the feasible domain. For simple convex domains such as the Euclidean ball or the simplex, projection is a well studied problem and efficient algorithms exist; for a variety of other applications, however, it is a non-trivial and computationally burdensome step. One prominent example is semidefinite optimization, where projecting an arbitrary symmetric matrix back onto the PSD matrices requires the computation of a complete eigenvalue decomposition. Compared with the linear-time update of first-order gradient methods, this projection dominates the cost of each iteration, so it is natural to try to avoid it. In OCO this is even more crucial, since each step is supposed to run very efficiently.

We tackled this problem by posing a new OCO problem. The intuition is to replace the complex convex domain with a simpler one that admits an efficient projection algorithm, and to study how this relaxation affects the overall performance of the algorithm. The solution we propose is interesting in its own right, and it leaves some open problems that need thorough investigation.
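To make the cost difference concrete, here is a minimal sketch in Python/NumPy (not the algorithm from our paper): it contrasts projection onto the PSD cone, which needs a full eigendecomposition, with projection onto a Frobenius-norm ball, which is a simple rescaling, inside a standard projected online gradient descent loop. The function names, the choice of the Frobenius ball as the "simpler" domain, and the step size are illustrative assumptions on my part.

import numpy as np

def project_psd(X):
    # Projection onto the PSD cone: requires a full eigenvalue decomposition,
    # roughly O(n^3) for an n x n symmetric matrix.
    vals, vecs = np.linalg.eigh(X)
    return (vecs * np.maximum(vals, 0.0)).dot(vecs.T)

def project_frobenius_ball(X, radius=1.0):
    # Projection onto a Frobenius-norm ball: a simple O(n^2) rescaling.
    norm = np.linalg.norm(X)
    return X if norm <= radius else X * (radius / norm)

def projected_ogd(gradients, project, eta=0.1):
    # Standard projected online gradient descent: after each gradient step,
    # project the iterate back onto the chosen feasible domain.
    n = gradients[0].shape[0]
    X = np.zeros((n, n))
    iterates = []
    for g in gradients:
        X = project(X - eta * g)
        iterates.append(X)
    return iterates

Swapping project_psd for project_frobenius_ball changes only the per-step cost of this loop; the question discussed in the post is how replacing the true domain with such a simpler set affects the overall performance of the algorithm.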
