Python, linguistics and other stuff


Wednesday, June 4 2014

Building C extensions for Python, 64-bit Windows 7

Here is a helpful note on the subject (it worked for me when installing the DAWG package for Python): http://www.xavierdupre.fr/blog/2013-07-07_nojs.html

Thursday, May 1 2014

Assessing statistical significance of cross-validation results

Source: Janez Demšar. Statistical Comparisons of Classifiers over Multiple Data Sets. The Journal of Machine Learning Research, Volume 7, 2006, pp. 1-30. http://machinelearning.wustl.edu/mlpapers/paper_files/Demsar06.pdf

Suppose we have k classifiers and conduct N-fold cross-validation (say, 10-fold) to compare their performance and choose the best one.

The solution proposed in the paper is the following: first, test the null hypothesis that the classifiers perform equally well and there is no difference in their performance. Once it is rejected, the next step is to take the best classifier (the one with the maximum score) and make k-1 pairwise comparisons with the remaining k-1 classifiers.

1.

For the first step, the Friedman nonparametric test can be used (when applying cross-validation, we usually have a small number of test sets, so parametric tests can't be used: the normality assumptions are violated).

NOTE: when conducting cross-validation, make sure that all the classifiers are tested on the same test sets (i.e., the same train/test splits).
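A minimal sketch of how to do that with a recent scikit-learn (the dataset and the list of classifiers below are made up purely for illustration): fixing the splits up front guarantees that every classifier sees exactly the same folds.

import numpy as np
from sklearn.base import clone
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # toy data, just for illustration
classifiers = [LogisticRegression(max_iter=1000),
               DecisionTreeClassifier(random_state=0),
               GaussianNB()]

kf = KFold(n_splits=10, shuffle=True, random_state=0)
splits = list(kf.split(X))  # freeze the folds and reuse them for every classifier

# scores[i, j] = score of classifier i on test fold j
scores = np.empty((len(classifiers), len(splits)))
for i, clf in enumerate(classifiers):
    for j, (train, test) in enumerate(splits):
        model = clone(clf).fit(X[train], y[train])
        scores[i, j] = model.score(X[test], y[test])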

The modified statistic

F_F = (N - 1) * χ²_F / (N (k - 1) - χ²_F)

(where χ²_F is Friedman's chi-square statistic) is distributed according to the F-distribution with df1 = k-1, df2 = (k-1)(N-1).

If you're using Python, SciPy has an implementation of Friedman's chi-square statistic (scipy.stats.friedmanchisquare); you only have to wrap it with the modification from the equation above. If you have k classifiers and have conducted N-fold cross-validation, pass the k arrays, each consisting of N scores, to that function. Then plug the resulting chi-square into the formula, and there you are with the test value. Finally, check the F-statistic against the critical value of the F-distribution.
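Putting it together, a sketch (the fold scores below are made up; k = 3 classifiers, N = 10 folds):

import numpy as np
from scipy.stats import friedmanchisquare, f

# scores[i] holds the N cross-validation scores of classifier i (made-up numbers)
scores = np.array([
    [0.81, 0.79, 0.84, 0.80, 0.78, 0.83, 0.82, 0.80, 0.79, 0.81],
    [0.76, 0.80, 0.78, 0.75, 0.73, 0.77, 0.76, 0.74, 0.75, 0.76],
    [0.79, 0.77, 0.80, 0.82, 0.76, 0.80, 0.79, 0.77, 0.78, 0.79],
])
k, N = scores.shape

chi2, _ = friedmanchisquare(*scores)         # Friedman's chi-square statistic
f_f = (N - 1) * chi2 / (N * (k - 1) - chi2)  # the modified statistic from above

critical = f.ppf(0.95, k - 1, (k - 1) * (N - 1))  # alpha = 0.05
print(f_f > critical)  # True for this data: reject the hypothesis of equal performance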

2.

Suppose now we want to check whether the classifier with the maximal score is significantly better than the others. One of the ways to do it, as described in the paper mentioned above, is to conduct the Bonferroni-Dunn test. The paper proposes a slightly modified version of the test.

First, we calculate the Critical Difference (CD) value (such that if the difference between two classifiers' ranks is higher than CD, it is significant):

CD = q_α * sqrt(k (k + 1) / (6 N))

The q_α values can be found in the table of critical values for the two-tailed Bonferroni-Dunn test, which is also given in the paper:

[table of q_α critical values from the paper]
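If you don't want to retype the table, these critical values can be reproduced as two-tailed quantiles of the standard normal distribution with a Bonferroni correction over the k-1 comparisons, assuming SciPy:

from scipy.stats import norm

k, alpha = 3, 0.05
q_alpha = norm.ppf(1 - alpha / (2 * (k - 1)))  # 2.2414 for k = 3, alpha = 0.05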

The control classifier is our "best" classifier.

Note that here we compare not the scores, but the ranks of the classifiers. To calculate them, we should do the following.

For each of the cross-validation folds, we test the classifiers and get the scores. We rank the classifiers' scores for each of the N test sets (SciPy has a rankdata function in scipy.stats), so that there is a ranking of the classifiers for each test set. Then for each of the k classifiers we calculate the average rank across the N test sets. These average ranks are the values to be compared using CDs.

So, if the difference between the rank of our control (best) classifier and the rank of another classifier is higher than CD, we have shown that our classifier is significantly better than that one.
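Here is a sketch of the whole ranking-and-CD procedure (reusing the made-up scores array from the Friedman sketch above):

import numpy as np
from scipy.stats import norm, rankdata

# scores[i, j] = score of classifier i on test fold j (made-up numbers)
scores = np.array([
    [0.81, 0.79, 0.84, 0.80, 0.78, 0.83, 0.82, 0.80, 0.79, 0.81],
    [0.76, 0.80, 0.78, 0.75, 0.73, 0.77, 0.76, 0.74, 0.75, 0.76],
    [0.79, 0.77, 0.80, 0.82, 0.76, 0.80, 0.79, 0.77, 0.78, 0.79],
])
k, N = scores.shape

# rank the classifiers on each fold (rank 1 = best score on that fold),
# then average the ranks over the N folds
ranks = np.apply_along_axis(lambda fold: rankdata(-fold), 0, scores)
avg_ranks = ranks.mean(axis=1)

alpha = 0.05
q_alpha = norm.ppf(1 - alpha / (2 * (k - 1)))  # Bonferroni-Dunn critical value
cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))

control = avg_ranks.argmin()  # the control ("best") classifier has the lowest average rank
for i in range(k):
    if i != control:
        significant = avg_ranks[i] - avg_ranks[control] > cd
        print(i, avg_ranks[i], significant)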

Installing CRFsuite Python bindings for Windows

Here's how it worked on my Windows 7.

0. You should have Python and MS Visual Studio installed. (Mine were Python 3.3.5 and Visual Studio 2012.)

1. Go here: http://www.chokkan.org/software/crfsuite/ and download CRFsuite source package (direct link: https://github.com/downloads/chokkan/crfsuite/crfsuite-0.12.tar.gz).

Download libLBFGS: https://github.com/downloads/chokkan/liblbfgs/liblbfgs-1.10.tar.gz (the link to the libLBFGS page is on the CRFsuite website).
Download SWIG for Windows: http://prdownloads.sourceforge.net/swig/swigwin-3.0.0.zip

2. Go to the downloaded CRFsuite package.
In the file swig\export.i, move the line %include "crfsuite_api.hpp" (it sits at the end of the file) up, so that it is just below the other %include statements.

3. Add environment variables:

LD_LIBRARY_PATH ...\crfsuite-0.12\lib
PYTHON_INCLUDE C:\Python33\include
PYTHON_LIB C:\Python33\libs\python33.lib
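They can be set persistently from a command prompt with setx, for example (replace "..." with the path to the unpacked CRFsuite package, adjust the Python paths to your installation, and open a new command prompt afterwards for the variables to take effect):

setx LD_LIBRARY_PATH "...\crfsuite-0.12\lib"
setx PYTHON_INCLUDE "C:\Python33\include"
setx PYTHON_LIB "C:\Python33\libs\python33.lib"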


4. Go to the downloaded libLBFGS package.
Open the libLBFGS project in your Visual Studio (I tried it in VS2012) and build it in Release mode.
Then go to the Release folder and take lbfgs.lib. Go to the include folder inside the libLBFGS project and take the lbfgs.h file. Copy these two files to the folder win32\liblbfgs inside the CRFsuite project. (Note: the CRFsuite project won't compile until these files are put there!)
Then build the CRFsuite project: a crf.lib file will be created in the Release folder. Copy it into your ...\crfsuite-0.12\lib folder and rename it to crfsuite.lib.

5. Go to ...\crfsuite-0.12\swig\python.
Copy two files, export.i and crfsuite.cpp, from ...\crfsuite-0.12\swig to this folder.
Instead of prepare.sh I used the following commands, run from the ...\crfsuite-0.12\swig\python directory (the first command generates the wrapper sources export_wrap.cpp and export_wrap.h next to export.i and crfsuite.cpp):

...\swigwin-3.0.0\swig.exe -c++ -python -I../../include -o export_wrap.cpp export.i

(...\swigwin-3.0.0\swig.exe is the path to the downloaded SWIG package)

python setup.py build_ext --include-dirs=...\crfsuite-0.12\include --library-dirs=...\crfsuite-0.12\lib

(replace "..." with the path to the downloaded CRFsuite package)

python setup.py install
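Once it installs, a quick smoke test to check that the bindings can be imported (the bundled bindings expose a module named crfsuite):

python -c "import crfsuite"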

6. Go ahead with your Python project, and good luck! :)