Abstract:
Although Binary Relevance (BR) is an adaptable and conceptually simple multi-label learning technique, its
inability to exploit label dependencies, together with other problems inherent in multi-label data, makes it difficult for BR to generalize well
when classifying real-world multi-label examples such as annotated images. Thus, to strengthen the generalization ability of
Binary Relevance, this study used Multi-label Linear Discriminant Analysis (MLDA) as a preprocessing technique to address
the label dependencies, the curse of dimensionality, and the label over-counting inherent in multi-labeled images. Binary Relevance
with K Nearest Neighbor as the base learner was then fitted, and its classification performance was evaluated on
1000 randomly selected images, with a label cardinality of 2.149, of the five most frequent categories in the
Microsoft Common Objects in Context 2017 (MS COCO 2017) dataset, namely "person", "chair", "bottle", "dining table", and "cup".
Experimental results showed that the micro-averaged precision, recall, and f1-score of Multi-label Linear Discriminant Analysis
followed by Binary Relevance K Nearest Neighbor (MLDA-BRKNN) improved by more than 30% on the
classification of the 1000 annotated images when compared with the micro-averaged precision, recall, and f1-score
of Binary Relevance K Nearest Neighbor (BRKNN), which served as the reference classifier in this study.
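The baseline BRKNN pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's experiment: scikit-learn's OneVsRestClassifier is used to implement Binary Relevance (one independent binary KNN classifier per label), MLDA preprocessing is omitted because it is not part of scikit-learn, and the dataset, split sizes, and value of k are illustrative assumptions.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_score, recall_score, f1_score

# Synthetic stand-in for image feature vectors with 5 possible labels
# (mirroring the 5 MS COCO categories in the study, but not real image data).
X, Y = make_multilabel_classification(n_samples=1000, n_features=20,
                                      n_classes=5, random_state=42)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3,
                                                    random_state=42)

# Binary Relevance: one independent binary classifier per label,
# each a K Nearest Neighbor base learner (k=5 is an assumed setting).
brknn = OneVsRestClassifier(KNeighborsClassifier(n_neighbors=5))
brknn.fit(X_train, Y_train)
Y_pred = brknn.predict(X_test)

# Micro-averaged metrics, as reported in the abstract.
print("micro precision:", precision_score(Y_test, Y_pred, average="micro"))
print("micro recall:   ", recall_score(Y_test, Y_pred, average="micro"))
print("micro f1:       ", f1_score(Y_test, Y_pred, average="micro"))
```

Micro averaging pools true positives, false positives, and false negatives across all five labels before computing each metric, so frequent labels weigh more heavily, which matches the evaluation protocol stated in the abstract.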