Synopsis
This operator is an SVM learner based on the Java implementation of libsvm.
Description
Applies the <a href="http://www.csie.ntu.edu.tw/~cjlin/libsvm">libsvm</a> learner by Chih-Chung Chang and Chih-Jen Lin. The SVM is a powerful method for both classification and regression. This operator supports the SVM types C-SVC and nu-SVC for classification tasks as well as epsilon-SVR and nu-SVR for regression tasks. Additionally, the one-class type makes it possible to learn from examples of a single class and later test whether new examples match the known ones.
In contrast to other SVM learners, libsvm also supports internal multiclass learning and probability estimation based on Platt scaling, which yields proper confidence values after the learned model is applied to a classification data set.
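Platt scaling maps an SVM's raw decision value f(x) to a probability through a fitted sigmoid 1/(1 + exp(A*f + B)), where A and B are estimated from held-out decision values and labels. The following is a minimal sketch of the mapping step only; the coefficients below are hypothetical illustrations, not values produced by libsvm:

```java
public class PlattScaling {
    // Hypothetical sigmoid coefficients; libsvm fits A and B by
    // regularized maximum likelihood on (decision value, label) pairs.
    private final double a;
    private final double b;

    public PlattScaling(double a, double b) {
        this.a = a;
        this.b = b;
    }

    // Map a raw SVM decision value to a calibrated probability in (0, 1).
    public double probability(double decisionValue) {
        return 1.0 / (1.0 + Math.exp(a * decisionValue + b));
    }

    public static void main(String[] args) {
        // A negative A means larger decision values give higher probabilities.
        PlattScaling platt = new PlattScaling(-2.0, 0.0);
        System.out.println(platt.probability(0.0)); // prints 0.5 (decision boundary)
        System.out.println(platt.probability(2.5)); // clearly above 0.9
    }
}
```

The sigmoid guarantees outputs strictly between 0 and 1, which is what makes the resulting confidences usable as probability estimates.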
Input
- training set: the input ExampleSet (required)
Output
- model: the learned SVM model
- exampleSet: the input ExampleSet, passed through unchanged
Parameters
- svm type: SVM for classification (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR) and distribution estimation (one-class)
- kernel type: The type of the kernel function.
- degree: The degree for a polynomial kernel function.
- gamma: The parameter gamma for polynomial, rbf, and sigmoid kernel functions (0 means 1/#examples).
- coef0: The parameter coef0 for polynomial and sigmoid kernel functions.
- C: The cost parameter C for c_svc, epsilon_svr, and nu_svr.
- nu: The parameter nu for nu_svc, one_class, and nu_svr.
- cache size: Cache size in megabytes.
- epsilon: Tolerance of the termination criterion.
- p: The epsilon in the loss function of epsilon-SVR.
- class weights: The weights w for all classes (first column: class name, second column: weight); the parameter C of each class is set to w * C. Classes without an explicit weight use a weight of 1.
- shrinking: Whether to use the shrinking heuristics.
- calculate confidences: Indicates if proper confidence values should be calculated.
- confidence for multiclass: Indicates whether the class with the highest confidence should be selected in the multiclass setting. Otherwise a binary majority vote over all 1-vs-1 classifiers is used; in that case the selected class need not be the one with the highest confidence.
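When "confidence for multiclass" is disabled, the prediction is decided by a binary majority vote over all 1-vs-1 classifiers, as in libsvm's internal multiclass scheme. A stdlib sketch of that voting step, assuming the pairwise decisions are supplied directly rather than computed by actual SVMs:

```java
public class OneVsOneVote {
    // Given one winner per unordered class pair, return the class with the
    // most votes. Ties fall to the smaller class index, which is an
    // assumption of this sketch, not necessarily libsvm's tie-breaking rule.
    public static int vote(int numClasses, int[][] pairwiseWinners) {
        int[] votes = new int[numClasses];
        for (int[] pair : pairwiseWinners) {
            votes[pair[2]]++; // pair = {classA, classB, winner}
        }
        int best = 0;
        for (int c = 1; c < numClasses; c++) {
            if (votes[c] > votes[best]) best = c;
        }
        return best;
    }

    public static void main(String[] args) {
        // Three classes => three 1-vs-1 classifiers: (0,1), (0,2), (1,2).
        int[][] decisions = { {0, 1, 1}, {0, 2, 2}, {1, 2, 1} };
        System.out.println(vote(3, decisions)); // prints 1 (two votes for class 1)
    }
}
```

Note that with k classes the scheme trains k*(k-1)/2 binary classifiers, and the vote winner can differ from the class with the highest calibrated confidence, which is exactly why the parameter exists.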