Friday, February 19, 2010

Theories in Statistical Learning Theory

In my view, the two most important questions regarding learning algorithms concern generalization ability and stability analysis. When I started working on learning algorithms, I asked myself this question: "Why can a learning algorithm predict future samples merely by finding a classifier that minimizes the empirical error on the training samples?". I have encountered many theories and concepts that look for an answer to this question, and I want to deepen my understanding of them. Please note that not all of the following concepts are aimed exactly at answering this question, but they are all related to it in some way.
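The principle the question above refers to is empirical risk minimization (ERM): pick, from a fixed hypothesis class, the classifier with the smallest error on the training samples. Here is a minimal sketch; the 1-D threshold classifiers and the toy data are illustrative assumptions of mine, not from this post.

```python
def empirical_error(h, samples):
    """Fraction of training samples the classifier h labels wrongly."""
    return sum(h(x) != y for x, y in samples) / len(samples)

def erm(hypotheses, samples):
    """ERM: return the hypothesis with minimal empirical error."""
    return min(hypotheses, key=lambda h: empirical_error(h, samples))

# Toy 1-D data (hypothetical): label is +1 roughly when x >= 0.5,
# with one noisy point (0.3, +1).
train = [(0.1, -1), (0.2, -1), (0.4, -1), (0.6, 1), (0.7, 1), (0.3, 1)]

# Hypothesis class (hypothetical): threshold classifiers h_t(x) = sign(x - t).
hypotheses = [lambda x, t=t: 1 if x >= t else -1
              for t in [0.0, 0.25, 0.5, 0.75]]

best = erm(hypotheses, train)
print(empirical_error(best, train))  # best achievable here is 1/6
```

The question of the post is exactly why a classifier chosen this way, using only the training samples, should also do well on unseen samples; the concepts listed below (VC dimension, Rademacher complexity, etc.) are tools for answering that.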

1) No free lunch theorem
2) PAC learning (realizable and agnostic)
3) Occam learning
4) VC dimension
5) Covering numbers
6) Growth functions
7) Rademacher complexity

Watch for my future posts, where I will survey these concepts.

1 comment:

  1. I should mention something, or rather correct myself ;-). In this post I am speaking in the context of learning theory, so by "two main important questions regarding learning algorithms" I mean only within that context. Stated generally, the claim would be wrong.
