The following post discusses how to use a Bayesian hierarchical test (and the Python module that implements it) to compare classifiers assessed via m runs of k-fold cross-validation. The Bayesian correlated t-test, like the frequentist correlated t-test, can only analyze cross-validation results on a single dataset.
In particular, the Bayesian correlated t-test makes inference about the mean difference of accuracy between two classifiers on the $i$-th dataset ($\mu_i$) by exploiting three pieces of information: the sample mean ($\bar{x}_i$), the variability of the data (the sample standard deviation $\hat{\sigma}_i$) and the correlation due to the overlapping training sets ($\rho$). This test can only be applied to a single dataset.
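As a recap of that result (the derivation is given in the first reference below), the posterior of $\mu_i$ is a Student distribution that combines all three pieces of information: it is centered on $\bar{x}_i$, has $n-1$ degrees of freedom, and has a scale inflated by the correlation,

$$\mu_i \sim \mathrm{St}\!\left(\bar{x}_i,\ \left(\frac{1}{n}+\frac{\rho}{1-\rho}\right)\hat{\sigma}_i^2,\ n-1\right),$$

where $n = mk$ is the number of cross-validation measures available for the dataset.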
There is no direct NHST that extends the above statistical comparison to multiple datasets, i.e., one that takes as input the $m$ runs of $k$-fold cross-validation results for each dataset and returns as output a statistical decision about which classifier is better across all the datasets.
The usual NHST procedure employed for such an analysis has two steps (sketched in code below):
(1) compute the mean *difference* of accuracy $\bar{x}_i$ for each dataset;
(2) perform an NHST on these mean differences to establish whether the two classifiers have different performance.
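For concreteness, here is a minimal sketch of this two-step procedure, using the Wilcoxon signed-rank test as one common choice for step (2); the data file is the same one loaded later in this notebook.

import numpy as np
from scipy.stats import wilcoxon

# per-fold accuracy differences: one row per dataset, one column per repetition
scores = np.loadtxt('Data/diffNbcHnb.csv', delimiter=',')
mean_diffs = scores.mean(axis=1)      # step (1): one mean difference per dataset
stat, p_value = wilcoxon(mean_diffs)  # step (2): an NHST on the mean differences
print(p_value)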
This two-step procedure discards two pieces of information: the correlation $\rho$ and the sample standard deviation $\hat{\sigma}_i$ of each dataset. The standard deviation is informative about the accuracy of $\bar{x}_i$ as an estimator of $\mu_i$, and it can vary widely across datasets, since each dataset has its own size and complexity.
The aim of this post is to present an extension of the Bayesian correlated t-test that makes inference on multiple datasets and, at the same time, accounts for all the available information (mean, standard deviation and correlation).
In Bayesian estimation, this can be obtained by defining a hierarchical model. Hierarchical models are among the most powerful and flexible tools in Bayesian analysis.
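In outline (the full specification, including the priors on all hyperparameters, is given in the second reference below), the hierarchical model ties the datasets together as follows: the vector of cross-validation differences $x_i$ observed on the $i$-th dataset is modelled as multivariate normal with mean $\mu_i$ and a covariance matrix $\Sigma_i$ that has $\sigma_i^2$ on the diagonal and $\rho\sigma_i^2$ off the diagonal, while the per-dataset means $\mu_i$ are drawn from a common Student distribution with location $\mu_0$, scale $\sigma_0$ and $\nu$ degrees of freedom:

$$x_i \sim \mathrm{MVN}(\mathbf{1}\mu_i,\ \Sigma_i), \qquad \mu_i \sim \mathrm{St}(\mu_0,\ \sigma_0,\ \nu).$$

Inference on $\mu_0$ is what tells us how the two classifiers compare across the whole collection of datasets.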
The code, this notebook and the dataset can be downloaded from our GitHub repository.
Bayesian Hierarchical Test
The function hierarchical in the module bayesiantests compares the performance of two classifiers that have been assessed by m runs of k-fold cross-validation on q datasets. It returns the probabilities that, based on the measured performance, one model is better than the other, vice versa, or that the two are within the region of practical equivalence.
This notebook demonstrates the use of the module.
We will load the 10-fold, 10-run cross-validation results of the naive Bayes classifier (NBC) and HNB on 54 UCI datasets from the file Data/diffNbcHnb.csv. The file contains the differences of accuracy NBC-HNB over the $10 \times 10 = 100$ repetitions (columns) and the 54 datasets (rows), for a total of 5400 values.
import numpy as np

# load the 54 x 100 matrix of accuracy differences (NBC - HNB)
scores = np.loadtxt('Data/diffNbcHnb.csv', delimiter=',')
names = ("HNB", "NBC")
print(scores)
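A quick sanity check on the dimensions described above:

print(scores.shape)  # expected (54, 100): 54 datasets, 100 cross-validation repetitions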
To analyse these data, we will use the function hierarchical in the module bayesiantests, which accepts the following arguments:
- scores: a 2-d array of differences.
- rope: the region of practical equivalence. We consider two classifiers equivalent if the difference in their performance is smaller than rope.
- rho: the correlation due to cross-validation.
- names: the names of the two classifiers; if x is a vector of differences, positive values mean that the second (right) model had the higher score.
The hierarchical function uses Stan through the Python module pystan.
Summarizing probabilities
The function hierarchical(scores, rope, rho, verbose, names=names) computes the Bayesian hierarchical test and returns the probabilities that the difference (the score of the second classifier minus the score of the first) is negative, within the rope, or positive.
import bayesiantests as bt
rope = 0.01  # we consider two classifiers equivalent when the difference of accuracy is less than 1%
rho = 1/10   # we are performing 10-fold, 10-run cross-validation
pleft, prope, pright = bt.hierarchical(scores, rope, rho)
The first value (pleft) is the probability that the difference of accuracy is negative (and therefore in favor of HNB). The third value (pright) is the probability that the difference of accuracy is positive (and therefore in favor of NBC). The second value (prope) is the probability that the two classifiers are practically equivalent, i.e., that the difference lies within the rope.
In the above case, HNB performs better than naive Bayes with probability 0.9965, and the two are practically equivalent with probability 0.002. We can therefore conclude with high probability that HNB is better than NBC.
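If we want to turn these posterior probabilities into a decision, a simple rule is to act only when one of them exceeds a threshold; the 95% cut-off below is an illustrative choice of ours, not part of the module.

# illustrative decision rule on the returned posterior probabilities
threshold = 0.95  # arbitrary cut-off, chosen here only for illustration
if pleft > threshold:
    print(names[0], "is better")
elif pright > threshold:
    print(names[1], "is better")
elif prope > threshold:
    print("practically equivalent")
else:
    print("no decision")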
If we add the arguments verbose and names, the function also prints out the probabilities.
pl, pe, pr=bt.hierarchical(scores,rope,rho, verbose=True, names=names)
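As a sanity check of the sign convention described earlier (a variant we sketch here, not part of the original analysis), negating the score matrix and swapping the names should simply exchange the left and right probabilities, up to Monte Carlo error:

# flip the sign convention: the left/right probabilities should (approximately) swap
pl2, pe2, pr2 = bt.hierarchical(-scores, rope, rho, verbose=True, names=names[::-1])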
The posterior distribution can then be plotted:

- using the function hierarchical_MC(scores, rope, rho, names=names) we generate the samples of the posterior;
- using the function plot_posterior(samples, names=('C1', 'C2')) we then plot the posterior in the probability simplex.
%matplotlib inline
import matplotlib.pyplot as plt

# generate posterior samples and plot them in the probability simplex
samples = bt.hierarchical_MC(scores, rope, rho, names=names)
fig = bt.plot_posterior(samples, names)
plt.savefig('triangle_hierarchical.png', facecolor="black")
plt.show()
Checking sensitivity to the prior
The function hierarchical also allows testing the effect of the prior hyperparameters. We point to the last reference for a discussion of prior sensitivity.
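As an illustration of such a sensitivity check, one can re-run the test while varying a prior hyperparameter and compare the resulting probabilities. Note that the keyword argument name below (upperAlpha, for the upper bound on a Gamma-prior parameter) is an assumption about the module's signature; check bayesiantests.py for the exact argument names.

# re-run the test under different prior hyperparameters and compare the output;
# `upperAlpha` is an assumed keyword name -- verify it against bayesiantests.py
for alpha_hi in (1.5, 2.0, 3.0):
    pl, pe, pr = bt.hierarchical(scores, rope, rho, upperAlpha=alpha_hi)
    print(alpha_hi, pl, pe, pr)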
References
@article{bayesiantests2016,
  author = {Benavoli, Alessio and Corani, Giorgio and Demsar, Janez and Zaffalon, Marco},
  title = {Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis},
  journal = {ArXiv e-prints},
  archivePrefix = {arXiv},
  eprint = {1606.04316},
  url = {https://arxiv.org/abs/1606.04316},
  year = {2016},
  month = jun
}

@techreport{corani2016unpub,
  author = {Corani, Giorgio and Benavoli, Alessio and Demsar, Janez and Mangili, Francesca and Zaffalon, Marco},
  title = {Statistical comparison of classifiers through Bayesian hierarchical modelling},
  institution = {IDSIA},
  url = {http://ipg.idsia.ch/preprints/corani2016b.pdf},
  year = {2016}
}