© Springer International Publishing AG 2016.

In many machine learning applications there exists prior knowledge that the response variable should be increasing (or decreasing) in one or more of the features. This is the knowledge of 'monotone' relationships. This paper presents two new techniques for incorporating monotone knowledge into non-linear kernel support vector machine classifiers. Incorporating monotone knowledge is useful because it can improve predictive performance and satisfy user requirements. While this is relatively straightforward for linear margin classifiers, for kernel SVM it is more challenging to achieve efficiently. We apply the new techniques to real datasets and investigate the impact of monotonicity and sample size on predictive accuracy. The results show that the proposed techniques can significantly improve accuracy when the unconstrained model is not already fully monotone, which often occurs at smaller sample sizes. In contrast, existing techniques demonstrate a significantly lower capacity to increase monotonicity or achieve the resulting accuracy improvements.
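The notion of monotonicity discussed above can be checked empirically on any fitted classifier. The sketch below (not the paper's constraint technique, just a generic illustration using scikit-learn) estimates how often an unconstrained RBF-kernel SVM's decision function is non-decreasing in a chosen feature; the dataset, `delta` step size, and `monotone_fraction` helper are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative synthetic data: the response tends to increase in feature 0.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0.5).astype(int)

# Unconstrained non-linear kernel SVM, as in the paper's baseline setting.
clf = SVC(kernel="rbf").fit(X, y)

def monotone_fraction(model, X, feature, delta=0.05):
    """Fraction of sample points at which increasing `feature` by `delta`
    does not decrease the SVM decision function (an empirical, hypothetical
    measure of monotonicity in that feature)."""
    X_up = X.copy()
    X_up[:, feature] += delta
    return float(np.mean(model.decision_function(X_up) >= model.decision_function(X)))

frac = monotone_fraction(clf, X, feature=0)
print(f"decision function non-decreasing in feature 0 at {frac:.0%} of points")
```

A fraction well below 100% would indicate the unconstrained model is not fully monotone, which is the situation where the abstract reports the proposed constrained techniques help most.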
|Name|Lecture Notes in Computer Science|
|Conference|12th International Conference on Advanced Data Mining and Applications|
|Period|12/12/16 → 15/12/16|