
Normally, when using an ensemble method such as bagging or boosting for binary classification, there is a requirement that each weak classifier have accuracy better than 50%.

In the multiclass classification setting, this is often infeasible. Is there a way to improve upon multiclass classification with ensembles?

To make this concrete with an example: say I have a problem with 1000 classes, and I train 50 models, each with 10% accuracy, which is 100x better than random guessing.

Is there a way to combine these models to form a better classification algorithm?

chessprogrammer

1 Answer


Any classifier that performs slightly better than random guessing is a weak classifier. In a system with N classes, a random-guessing actor has an expected accuracy of 1/N, i.e., (100/N)%.

For example, with 1000 classes, a random guesser has an expected accuracy of 0.1%, so any classifier with accuracy above 0.1% qualifies as a weak classifier in this example.
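
To illustrate why clearing the 1/N baseline matters, here is a minimal simulation sketch in NumPy. It combines 50 synthetic models, each with 10% accuracy on 1000 classes (the numbers from the question), by plurality vote. The voting rule and the independence of model errors are assumptions made for the sketch, not something established above:

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes = 1000   # N classes; random-guess baseline is 1/N = 0.1%
n_models = 50      # number of weak models in the ensemble
n_samples = 2000   # evaluation points
acc = 0.10         # per-model accuracy, 100x the random baseline

true_labels = rng.integers(0, n_classes, size=n_samples)

# Simulate each model: correct with probability `acc`, otherwise a
# uniformly random class (so model errors are independent by construction).
is_correct = rng.random((n_models, n_samples)) < acc
random_guess = rng.integers(0, n_classes, size=(n_models, n_samples))
preds = np.where(is_correct, true_labels, random_guess)

# Combine by plurality vote: count votes per class for every sample,
# then pick the class with the most votes.
vote_counts = np.apply_along_axis(
    np.bincount, 0, preds, minlength=n_classes
)  # shape: (n_classes, n_samples)
ensemble_pred = vote_counts.argmax(axis=0)

print(f"single-model accuracy: {is_correct.mean():.3f}")
print(f"ensemble accuracy:     {(ensemble_pred == true_labels).mean():.3f}")
```

With these numbers the true class receives about 5 of the 50 votes on average, while the remaining wrong votes scatter thinly across 999 classes, so the simulated ensemble accuracy typically lands around 90%, far above any single model's 10%. Note that this gain depends on the independence assumption; real models with correlated errors would see a smaller improvement.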