
Article Details

The Unified Framework for AUC Maximizer

Jun, Jong-Jun (Department of Statistics, Seoul National University); Kim, Yong-Dai (Department of Statistics, Seoul National University); Han, Sang-Tae (Department of Informational Statistics, Hoseo University); Kang, Hyun-Cheol (Department of Informational Statistics, Hoseo University); Choi, Ho-Sik (Department of Informational Statistics, Hoseo University)
  • Abstract

    The area under the curve (AUC) is commonly used as a summary measure of the receiver operating characteristic (ROC) curve, which displays the performance of a set of binary classifiers over all feasible ratios of the costs associated with the true positive rate (TPR) and the false positive rate (FPR). In the bipartite ranking problem, where one must compare two observations and decide which is "better", the AUC measures the probability that the ranking score of a randomly chosen sample from one class exceeds that of a randomly chosen sample from the other class; hence, the function that maximizes the AUC of a bipartite ranking problem differs from the function that maximizes accuracy (equivalently, minimizes the misclassification error rate) in a binary classification problem. In this paper, we develop a unified framework for AUC maximizers that includes both support vector machines, based on maximizing a large margin, and logistic regression, based on estimating the posterior probability. Moreover, we develop an efficient algorithm for the proposed unified framework. Numerical results show that the proposed unified framework can treat various methodologies successfully.
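
    The pairwise definition used in the abstract (AUC as the probability that a randomly chosen sample from one class outranks a randomly chosen sample from the other) can be sketched as a short computation. This is an illustrative sketch only; the function name `empirical_auc` and the scores are hypothetical, not from the paper:

    ```python
    def empirical_auc(pos_scores, neg_scores):
        """Empirical AUC: the fraction of (positive, negative) score pairs
        ranked correctly, counting ties as one half.  This is exactly the
        pairwise quantity the abstract describes."""
        wins = 0.0
        for p in pos_scores:
            for n in neg_scores:
                if p > n:
                    wins += 1.0      # correctly ordered pair
                elif p == n:
                    wins += 0.5      # tie counts as half
        return wins / (len(pos_scores) * len(neg_scores))

    # Hypothetical ranking scores, one list per class.
    pos = [0.9, 0.8, 0.4]
    neg = [0.7, 0.3, 0.2]
    print(empirical_auc(pos, neg))  # → 0.8888888888888888 (8 of 9 pairs correct)
    ```

    Because this empirical AUC is a discontinuous step function of the scores, it cannot be optimized directly by gradient methods, which is why smooth or convex surrogates such as the hinge loss (SVM) and the logistic loss are used; unifying such surrogates is the subject of this paper.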


  • Keywords

    ROC curve, AUC, bipartite ranking problem

  • References (15)

    1. Agarwal, S., Graepel, T., Herbrich, R., Har-Peled, S. and Roth, D. (2005). Generalization bounds for the area under the ROC curve, Journal of Machine Learning Research, 6, 393–425
    2. Bach, F., Heckerman, D. and Horvitz, E. (2006). Considering cost asymmetry in learning classifiers, Journal of Machine Learning Research, 7, 1713–1741
    3. Bartlett, P. and Tewari, A. (2007). Sparseness vs estimating conditional probabilities: Some asymptotic results, Journal of Machine Learning Research, 8, 775–790
    4. Brefeld, U. and Scheffer, T. (2005). AUC maximizing support vector learning, In Proceedings of the ICML 2005 Workshop on ROC Analysis in Machine Learning
    5. Clémençon, S., Lugosi, G. and Vayatis, N. (2006). From ranking to classification: A statistical view, From Data and Information Analysis to Knowledge Engineering, 214–221
    6. Clémençon, S., Lugosi, G. and Vayatis, N. (2008). Ranking and empirical minimization of U-statistics, The Annals of Statistics, 36, 844–874
    7. Cortes, C. and Mohri, M. (2004). AUC optimization vs. error rate minimization, In Flach, F. et al. (Eds.), Advances in Neural Information Processing Systems, 16, MIT Press, Cambridge
    8. Cortes, C. and Vapnik, V. (1995). Support-vector networks, Machine Learning, 20, 273–297
    9. Freund, Y., Iyer, R., Schapire, R. E. and Singer, Y. (2003). An efficient boosting algorithm for combining preferences, Journal of Machine Learning Research, 4, 933–969
    10. Friedman, J. (2008). Fast sparse regression and classification, Technical Report, Stanford University
    11. Joachims, T. (2002). Optimizing search engines using clickthrough data, Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD)
    12. Kim, J. (2004). ROC and cost graphs for general cost matrix where correct classifications incur nonzero costs, Communications of the Korean Statistical Society, 11, 21–30
    13. Kim, Y., Kim, K. and Song, S. (2005). Comparison of boosting and SVM, Journal of Korean Data & Information Science Society, 16, 999–1012
    14. Liu, Y. and Zhang, H. H. (2009). The large margin unified machines: A bridge between hard and soft classification, The 1st Institute of Mathematical Statistics Asia Pacific Rim Meeting & 2009 Conference of the Korean Statistical Society
    15. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society: Series B, 58, 267–288

 Other Papers by the Authors

  • Kim, Yongdai (18)

    1. 2004 "Comparison of Interval Estimation for Relative Risk Ratio with Rare Events" Communications of the Korean Statistical Society 11 (1): 181–187
    2. 2005 "Analysis of a Frailty Model with Many Ties" The Korean Journal of Applied Statistics 18 (1): 67–81
    3. 2005 "Comparison of Boosting and SVM" Journal of the Korean Data & Information Science Society 16 (4): 999–1012
    4. 2007 "Machine Learning and Statistics" Communications of the Korean Institute of Information Scientists and Engineers 25 (3): 90–95
    5. 2008 "The Doubly Regularized Quantile Regression" Communications of the Korean Statistical Society 15 (5): 753–764
    6. 2009 "An Algorithm for Support Vector Machines with a Reject Option Using Bundle Method" Communications of the Korean Statistical Society 16 (6): 997–1004
    7. 2010 "Recent Research Trends in Uncertainty Reduction Techniques for Climate Change Impact Assessment" Water for Future (Korea Water Resources Association) 43 (9): 32–36
    8. 2012 "A Case Study on Building M&S Metamodels via Sequential Response Surface Analysis" Journal of the Korean Society for Quality Management 40 (1): 49–59
    9. 2012 "Evaluating Interval Estimates for Comparing Two Proportions with Rare Events" The Korean Journal of Applied Statistics 25 (3): 435–446
    10. 2013 "Revisiting the Bradley-Terry model and its application to information retrieval" Journal of the Korean Data & Information Science Society 24 (5): 1089–1099
  • Han, Sang-Tae (29)

  • Kang, Hyun-Cheol (56)

  • Choi, Ho-Sik (8)
