Here is a helpful note on the subject (it worked for me when installing the Python DAWG package): http://www.xavierdupre.fr/blog/2013-07-07_nojs.html
By Katja on Wednesday, June 4 2014, 16:37
By Katja on Thursday, May 1 2014, 18:30
Source: Janez Demšar. Statistical Comparisons of Classifiers over Multiple Data Sets. The Journal of Machine Learning Research, Volume 7, 2006, pp. 1-30. http://machinelearning.wustl.edu/mlpapers/paper_files/Demsar06.pdf
Suppose we have k classifiers and conduct N-fold cross-validation (say, 10-fold) to compare their performance and choose the best one.
The solution proposed in the paper is as follows: first, test the null hypothesis that all the classifiers perform equally well, i.e., that there is no difference in their performance. Once it is rejected, the next step is to take the best classifier (the one with the maximum score) and make k-1 pairwise comparisons with the remaining k-1 classifiers.
For the first step, the Friedman nonparametric test can be used (when applying cross-validation, we usually have only a small number of different test sets, so parametric tests can't be used: their normality assumptions are violated).
NOTE: when conducting cross-validation, ensure that all the classifiers are tested on the same test sets (of the train/test split).
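A minimal sketch of this setup, assuming scikit-learn is available (the dataset and the three classifiers below are just placeholders): a single KFold object with a fixed seed guarantees that every classifier is evaluated on exactly the same train/test splits.

```python
# Evaluate k classifiers on identical cross-validation folds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, random_state=0)
classifiers = [LogisticRegression(max_iter=1000),
               DecisionTreeClassifier(random_state=0),
               GaussianNB()]

# One KFold object with a fixed random_state: same splits for every classifier.
kf = KFold(n_splits=10, shuffle=True, random_state=0)
scores = np.zeros((len(classifiers), kf.get_n_splits()))  # k x N score matrix
for i, clf in enumerate(classifiers):
    for j, (train, test) in enumerate(kf.split(X)):
        clf.fit(X[train], y[train])
        scores[i, j] = clf.score(X[test], y[test])
print(scores.shape)  # (3, 10)
```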
The modified (Iman-Davenport) statistic

F_F = (N-1) * chi2_F / (N*(k-1) - chi2_F),

where chi2_F is the Friedman chi-square statistic, is distributed according to the F-distribution with df1 = k-1 and df2 = (k-1)(N-1).
If you're using Python, SciPy has an implementation of Friedman's chi-square statistic (scipy.stats.friedmanchisquare); you only have to wrap it with the modification in the equation above. If you have k classifiers and have conducted N-fold cross-validation, pass these k arrays, each consisting of N scores, to the SciPy function. Then modify the output (the chi-square value) and you have the test statistic. Finally, check the F-statistic against the critical value.
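For illustration, here is a sketch of this whole first step, assuming SciPy (scipy.stats.friedmanchisquare for the chi-square statistic and scipy.stats.f for the critical value); the score arrays are made up:

```python
# Friedman test plus the Iman-Davenport correction on invented scores.
from scipy.stats import friedmanchisquare, f

# k = 3 classifiers, N = 10 folds: one array of accuracies per classifier.
a = [0.82, 0.79, 0.85, 0.80, 0.81, 0.84, 0.78, 0.83, 0.80, 0.82]
b = [0.75, 0.72, 0.76, 0.74, 0.73, 0.77, 0.79, 0.75, 0.74, 0.76]
c = [0.70, 0.68, 0.72, 0.69, 0.71, 0.73, 0.67, 0.70, 0.69, 0.71]
k, N = 3, 10

chi2, _ = friedmanchisquare(a, b, c)            # Friedman chi-square statistic
f_stat = (N - 1) * chi2 / (N * (k - 1) - chi2)  # Iman-Davenport modification
crit = f.ppf(0.95, k - 1, (k - 1) * (N - 1))    # critical value at alpha = 0.05
print(f_stat > crit)  # True here: reject "all classifiers perform equally"
```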
Suppose we want to check whether the classifier with the maximal score is significantly better than the others. One way to do it, as described in the paper mentioned above, is the Bonferroni-Dunn test. The paper proposes a slightly modified version of the test.
First, we calculate the Critical Difference (CD) value (such that if the difference between two classifiers' average ranks is higher than CD, it is significant):

CD = q_alpha * sqrt(k*(k+1) / (6*N))

The q_alpha values can be found in the table given in the paper.
The control classifier is our "best" classifier.
Note that here we compare not the scores but the ranks of the classifiers. To calculate them, we do the following. For each of the cross-validation folds, we test the classifiers and get the scores. We rank the classifiers' scores (SciPy has a rankdata function, scipy.stats.rankdata) for each of the N test sets, so that there is a rating of the classifiers for each test set. Then, for each of the k classifiers, we calculate the average rank across the N test sets. These average ranks are what we compare using the CD.
So, if the difference between the rank of our control (best) classifier and the rank of another classifier is higher than CD, we can conclude that our classifier is significantly better than that one.
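The ranking-and-CD procedure above can be sketched like this, assuming SciPy's rankdata; the score matrix is made up, and q_alpha = 2.241 is my reading of the paper's table for k = 3 classifiers at alpha = 0.05:

```python
# Average ranks and Critical Difference on an invented k x N score matrix.
import numpy as np
from scipy.stats import rankdata

scores = np.array([  # rows: classifiers, columns: cross-validation folds
    [0.82, 0.79, 0.85, 0.80, 0.81, 0.84, 0.78, 0.83, 0.80, 0.82],
    [0.75, 0.72, 0.76, 0.74, 0.73, 0.77, 0.79, 0.75, 0.74, 0.76],
    [0.70, 0.68, 0.72, 0.69, 0.71, 0.73, 0.67, 0.70, 0.69, 0.71],
])
k, N = scores.shape

# Rank within each fold; negate so that rank 1 goes to the highest score.
ranks = np.apply_along_axis(rankdata, 0, -scores)
avg_ranks = ranks.mean(axis=1)                # average rank per classifier

q_alpha = 2.241                               # assumed from the paper's table
cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
best = avg_ranks.argmin()                     # control classifier: lowest rank
significant = np.abs(avg_ranks - avg_ranks[best]) > cd
print(avg_ranks, cd, significant)
```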
By Katja on Thursday, May 1 2014, 17:01
Here's how it worked for me on Windows 7.
0. You should have Python and MS Visual Studio installed (mine were Python 3.3.5 and Visual Studio 2012).
1. Go here: http://www.chokkan.org/software/crfsuite/ and download the CRFsuite source package.
Download libLBFGS: https://github.com/downloads/chokkan/liblbfgs/liblbfgs-1.10.tar.gz (the link to the libLBFGS page is on the CRFsuite website).
Download SWIG for Windows: http://prdownloads.sourceforge.net/swig/swigwin-3.0.0.zip
2. Go to the downloaded CRFsuite package.
Move the line %include "crfsuite_api.hpp" (which is at the end of export.i) to the place just below the
3. Add environment variables:
4. Go to the downloaded libLBFGS package.
Open the libLBFGS project in your Visual Studio (I tried it in VS2012) and build it (Release mode). Then go to the Release folder and take lbfgs.lib. Go to the include folder inside the libLBFGS project and take the lbfgs.h file. Copy these two files to the win32\liblbfgs folder inside the CRFsuite project. (Note: the CRFsuite project won't compile until these files are put there!) Then build the CRFsuite project: a crf.lib file will be created in the Release folder. Copy it into your ...\crfsuite-0.12\lib folder and rename it to crfsuite.lib.
5. Go here:
Copy the two files export.i and crfsuite.cpp from ...\crfsuite-0.12\swig to this folder.
Instead of prepare.sh I used the following commands (with the files export.i, export_wrap.cpp, export_wrap.h and crfsuite.cpp in the current folder):
...\swigwin-3.0.0\swig.exe -c++ -python -I../../include -o export_wrap.cpp export.i
(where ...\swigwin-3.0.0\swig.exe is the path to swig.exe in the downloaded SWIG package)
python setup.py build_ext --include-dirs=...\crfsuite-0.12\include --library-dirs=...\crfsuite-0.12\lib
(replace "..." with the path to the downloaded CRFsuite package)
python setup.py install
6. Go ahead with your Python project, and good luck! :)