SMO-MKL

The objective in Multiple Kernel Learning (MKL) is to jointly learn the kernel and the SVM parameters. We focus on the case where the kernel is learnt as a linear combination of given base kernels with non-negative weights. The kernel weights are regularised using the p-norm, where p is strictly greater than one. If you would like to learn sparse kernel combinations, choose values of p close to one, such as 1.1 or 1.33. The formulation can also handle certain Bregman divergences.
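Schematically, the learning problem described above can be written as follows, with K_k the given base kernels, d_k their non-negative weights, Y the diagonal matrix of labels, A the SVM dual feasible set, and λ a regularisation constant; this is only a sketch of the general shape of the objective, and the exact formulation is given in the accompanying paper and README:

```latex
\min_{d \ge 0} \; \frac{\lambda}{2}\,\lVert d \rVert_p^2
  \;+\; \max_{\alpha \in A} \;
  \mathbf{1}^{\top}\alpha
  \;-\; \frac{1}{2}\,\alpha^{\top} \mathbf{Y}
  \Big( \sum_{k} d_k \mathbf{K}_k \Big)
  \mathbf{Y}\,\alpha
```

Setting p close to one drives many of the d_k to zero, which is what yields the sparse kernel combinations mentioned above.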

The following is very efficient code for training p-norm MKL using Sequential Minimal Optimization (SMO). The code is built on top of the LibSVM code base and therefore has very similar usage. Please go through the included README for detailed usage instructions, as well as the LibSVM FAQ and COPYRIGHT notices. On standard hardware, the code can train on the Sonar data set with a hundred thousand kernels in under seven minutes for pre-computed RBF kernels, and in under thirty minutes for kernels computed on the fly. Similarly, training on the Adult data set, with fifty thousand training points and fifty kernels computed on the fly, takes less than half an hour.

Download source code (now includes 32/64-bit precompiled Windows binaries, support for regression, a substantially improved README and illustrative toy examples; many minor bugs have also been fixed).

The code is in C++ and should compile on 32/64-bit Windows and Linux machines. It is made available as-is for non-commercial research purposes. Please contact Manik Varma [manik@microsoft.com] and S. V. N. Vishwanathan [vishy@stat.purdue.edu] if you have any questions or feedback.
