Today I read a paper titled “Pattern Recognition for Conditionally Independent Data”.
The abstract is:
In this work we consider the task of relaxing the i.i.d. assumption in pattern recognition (or classification), aiming to make existing learning algorithms applicable to a wider range of tasks.
Pattern recognition is the task of guessing the discrete label of an object based on a set of given examples (pairs of objects and labels).
We consider the case of deterministically defined labels.
Traditionally, this task is studied under the assumption that examples are independent and identically distributed.
However, it turns out that many results of pattern recognition theory carry over to a weaker assumption: namely, that the objects are conditionally independent and identically distributed given the labels, while the only assumption on the distribution of labels is that the rate of occurrence of each label stays above some positive threshold.
We find a broad class of learning algorithms for which estimates of the probability of a classification error achieved under the classical i.i.d. assumption can be generalised to similar estimates for the case of conditionally i.i.d. examples.
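
To make the contrast concrete, here is a minimal sketch in Python. This is my own illustration, not code from the paper: the Gaussian objects, the sign-based deterministic label rule, and the sticky Markov chain generating the labels are all assumptions chosen only to make the two sampling schemes visible side by side.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_iid(n):
    """Classical setting: (object, label) pairs drawn i.i.d. from one
    fixed joint distribution; the label is a deterministic function
    (here, the sign) of the object."""
    x = rng.normal(size=n)          # objects ~ N(0, 1), independently
    y = (x > 0).astype(int)         # deterministic label rule
    return x, y

def sample_conditionally_iid(n, stickiness=0.9):
    """Conditionally i.i.d. setting: labels come from a dependent
    process (a sticky two-state Markov chain, assumed here purely for
    illustration) in which each label occurs at a positive rate; given
    its label, each object is an independent draw from the same
    class-conditional distribution as in sample_iid."""
    y = np.empty(n, dtype=int)
    y[0] = rng.integers(2)
    for t in range(1, n):
        # labels are dependent: repeat the previous label with
        # probability `stickiness`, otherwise switch to the other one
        y[t] = y[t - 1] if rng.random() < stickiness else 1 - y[t - 1]
    # class-conditional draws: |N(0,1)| for label 1 and -|N(0,1)| for
    # label 0, matching the conditionals induced by sample_iid, so the
    # deterministic rule "label = sign of object" still holds
    x = np.abs(rng.normal(size=n)) * np.where(y == 1, 1.0, -1.0)
    return x, y
```

The paper's claim, as I read the abstract, is that for a broad class of learning algorithms, error estimates proven under the first sampling scheme carry over to similar estimates under the second, since the objects are still i.i.d. once you condition on the labels.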