Robust Sparse Coding for Mobile Image Labeling on the Cloud
With the rapid development of mobile services and online social networking, a large number of mobile images are generated and shared on social networks every day. The visual content of these images contains rich knowledge for many applications, such as social categorization and recommendation. Mobile image labeling has therefore been proposed to understand this visual content and has received intensive attention in recent years. In this paper, we present a novel mobile image labeling scheme on the cloud, in which mobile images are first efficiently transmitted to the cloud via Hamming compressed sensing, so that the heavy computation required for image understanding is offloaded to the cloud for quick response to user queries. On the cloud, we design a sparse correntropy framework for robustly learning the semantic content of mobile images, based on which relevant tags are assigned to the query images. The proposed framework (called maximum correntropy-based mobile image labeling) is highly insensitive to noise and outliers, and is optimized by a half-quadratic optimization technique. We theoretically show that our image labeling approach is more robust than sparse coding methods based on the squared loss, absolute loss, Cauchy loss, and many other robust loss functions. To further characterize the proposed algorithm, we also derive its robustness and generalization error bounds. Finally, we conduct experiments on the PASCAL VOC'07 data set and empirically demonstrate the effectiveness of the proposed robust sparse coding method for mobile image labeling.
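The abstract mentions correntropy-based sparse coding solved by half-quadratic optimization. As a minimal sketch of this general idea (not the authors' exact formulation): the correntropy/Welsch loss on the reconstruction residual can be handled by alternating between (a) computing half-quadratic auxiliary weights `w_i = exp(-r_i^2 / (2*sigma^2))` from the current residual, which downweight outlier entries, and (b) solving the resulting weighted lasso subproblem, here with ISTA. All parameter names (`lam`, `sigma`, iteration counts) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def correntropy_sparse_code(D, y, lam=0.1, sigma=1.0,
                            outer_iters=20, inner_iters=100):
    """Sketch of half-quadratic sparse coding under a correntropy (Welsch) loss.

    Alternates between updating per-entry weights from the residual r = y - D x
    and solving the induced weighted lasso subproblem with ISTA. Hypothetical
    parameter choices; not the paper's exact algorithm.
    """
    n_features, n_atoms = D.shape
    x = np.zeros(n_atoms)
    for _ in range(outer_iters):
        r = y - D @ x
        # Half-quadratic auxiliary weights: near 1 for small residuals,
        # near 0 for gross outliers, which is what gives the robustness.
        w = np.exp(-r**2 / (2.0 * sigma**2))
        # Lipschitz constant of the weighted quadratic term D^T diag(w) D.
        L = np.linalg.norm((D * w[:, None]).T @ D, 2) + 1e-12
        for _ in range(inner_iters):
            grad = D.T @ (w * (D @ x - y))          # gradient of weighted LS term
            x = soft_threshold(x - grad / L, lam / L)
    return x
```

Because the weights vanish for large residuals, a few grossly corrupted entries of `y` contribute almost nothing to the fit, unlike under a squared loss where a single outlier can dominate the solution.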