Distributed Semi-Supervised Metric Learning
Over the past decade, many pairwise-constraint-based metric learning algorithms have been developed to automatically learn application-specific metrics from data under similarity/dissimilarity data-pair constraints (weak labels). However, these existing methods are designed for the centralized setting, in which all the data and constraints are assumed to be gathered at a single source, and the algorithms use the whole body of data and constraint information during learning. In many real applications, however, large amounts of data (and constraints) are generated and stored dispersedly at geographically distributed nodes over networks, so it may be impractical to centralize all the data at one fusion node. Moreover, in such cases it is often hard to have every data pair labeled, because the number of data pairs is huge, which leaves many pairs unlabeled. Given these situations, in this paper we propose two types of distributed semi-supervised metric learning frameworks, a diffusion type and an alternating-direction-method-of-multipliers (ADMM) type, both of which make use of labeled and unlabeled data pairs. The proposed frameworks can easily be used to extend centralized metric learning methods with different objective functions to distributed settings. In particular, we apply our frameworks to a well-behaved centralized semi-supervised metric learning method called SERAPH and obtain two new distributed semi-supervised metric learning algorithms. Our simulation results show that the metrics learned by the proposed distributed algorithms are very close to those of the corresponding centralized method in most cases.
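To make the pairwise-constraint setting concrete, the following is a minimal, generic sketch of Mahalanobis metric learning from similarity/dissimilarity pairs via projected gradient descent. It is not the SERAPH method or the distributed algorithms of this paper; all function names, the hinge-style loss, and the toy data are illustrative assumptions.

```python
import numpy as np

def mahalanobis_sq(M, x, y):
    """Squared Mahalanobis distance (x - y)^T M (x - y) under a PSD matrix M."""
    d = x - y
    return float(d @ M @ d)

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping eigenvalues."""
    M = (M + M.T) / 2.0
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def metric_learning_step(M, similar, dissimilar, lr=0.05, margin=1.0):
    """One projected-gradient step on an illustrative hinge-style pairwise loss:
    shrink distances for similar pairs, grow distances for dissimilar pairs
    that violate the margin."""
    grad = np.zeros_like(M)
    for x, y in similar:
        d = x - y
        grad += np.outer(d, d)          # pulls similar pairs together
    for x, y in dissimilar:
        d = x - y
        if mahalanobis_sq(M, x, y) < margin:
            grad -= np.outer(d, d)      # pushes violating dissimilar pairs apart
    return project_psd(M - lr * grad)

# Toy constraints: similar pairs differ along the first axis,
# the dissimilar pair differs along the second axis.
similar = [(np.array([0.0, 0.0]), np.array([0.1, 0.0])),
           (np.array([1.0, 1.0]), np.array([1.1, 1.0]))]
dissimilar = [(np.array([0.0, 0.0]), np.array([0.0, 0.5]))]

M = np.eye(2)
for _ in range(20):
    M = metric_learning_step(M, similar, dissimilar)
# M now down-weights the direction along which similar pairs differ
# and up-weights the direction separating the dissimilar pair.
```

A distributed variant, as in the frameworks described above, would keep the pair lists at separate network nodes and let each node run such local updates while exchanging intermediate metrics with its neighbors (diffusion) or enforcing consensus through ADMM.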