In a recent paper, we considered the problem of gambling on a quantum experiment, enforcing rational behaviour through a few rules. In the classical case, these rules yield, via duality theorems, the Bayesian theory of probability. In the quantum setting, they yield the Bayesian theory generalised to the space of Hermitian matrices. In a nutshell, we obtained that quantum mechanics is the Bayesian theory extended to the complex numbers.
Bayesian hypothesis testing in machine learning
Hypothesis testing in machine learning - for instance, to establish whether the performance of two algorithms differs significantly - is usually performed using null hypothesis significance tests (NHST). Yet the NHST methodology has well-known drawbacks. For instance, the claimed statistical significance does not necessarily imply practical significance. Moreover, NHST cannot verify the null hypothesis and thus cannot recognize equivalent classifiers. We have developed Bayesian counterparts of the tests most commonly adopted in machine learning, such as the correlated t-test and the signed-rank test, which solve all these problems.
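As an illustration, a minimal sketch of a Bayesian correlated t-test for comparing two classifiers over cross-validation folds. The posterior of the mean accuracy difference is a Student-t distribution whose scale carries the Nadeau-Bengio correlation correction for overlapping training sets; the function name, the ROPE default, and the interface are illustrative choices, not a fixed API.

```python
import numpy as np
from scipy import stats

def bayesian_correlated_ttest(diffs, rho, rope=(-0.01, 0.01)):
    """Posterior probabilities that the mean accuracy difference is
    below, inside, or above a region of practical equivalence (ROPE).

    diffs : per-fold differences between two classifiers
    rho   : correlation induced by overlapping training sets,
            typically n_test / (n_test + n_train)
    """
    diffs = np.asarray(diffs, dtype=float)
    n = len(diffs)
    mean, var = diffs.mean(), diffs.var(ddof=1)
    # Posterior of the mean difference: Student t with the
    # Nadeau-Bengio correction inflating the scale.
    scale = np.sqrt((1.0 / n + rho / (1.0 - rho)) * var)
    post = stats.t(df=n - 1, loc=mean, scale=scale)
    p_left = post.cdf(rope[0])
    p_rope = post.cdf(rope[1]) - post.cdf(rope[0])
    p_right = 1.0 - post.cdf(rope[1])
    return p_left, p_rope, p_right
```

Because the test returns the probability that the difference lies inside the ROPE, it can positively support the hypothesis that two classifiers are practically equivalent - something NHST cannot do.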
Filtering and Control with sets of distributions
Can we solve the filtering problem knowing only a few moments of the noise terms? By exploiting filtering based on sets of distributions, we can solve this problem without introducing additional assumptions on the noise distributions (e.g., Gaussianity) or on the form of the estimator (e.g., linearity). In the figure, first-order moments are used, via Sum-of-Squares optimization, to outer-approximate the true state uncertainty set X (blue) of a polynomial dynamical system.
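To give the flavour of moment-only reasoning, here is a much simpler one-dimensional stand-in for the Sum-of-Squares approach in the figure: for a scalar linear system, the first two moments of the state can be propagated exactly, and Chebyshev's inequality then yields an interval that contains the state with guaranteed probability for any noise distribution with those moments. All names and the linear-scalar setting are illustrative assumptions.

```python
import math

def propagate_moments(a, mean0, var0, w_mean, w_var, steps):
    """Exact first- and second-moment recursion for x_{k+1} = a*x_k + w_k
    with independent noise; no distributional assumption is needed."""
    m, v = mean0, var0
    for _ in range(steps):
        m = a * m + w_mean
        v = a * a * v + w_var
    return m, v

def chebyshev_interval(mean, var, eps=0.05):
    """Interval containing the state with probability >= 1 - eps for ANY
    distribution with the given mean and variance (Chebyshev's bound)."""
    r = math.sqrt(var / eps)
    return mean - r, mean + r
```

The SOS machinery generalizes this idea to polynomial dynamics and multivariate sets, replacing the crude Chebyshev interval with tighter outer approximations of the uncertainty set.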
Nonparametric Bayesian tests
Bayesian methods are ubiquitous in machine learning, bioinformatics, etc. Nevertheless, the analysis of empirical results is typically performed with frequentist tests. This means dealing with null hypothesis significance tests (NHST) and p-values, even though the shortcomings of such methods are well known. We are currently developing nonparametric Bayesian versions of the most widely used frequentist tests (Wilcoxon signed-rank test, Wilcoxon rank-sum test, Friedman test, etc.).
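A minimal sketch of the idea behind such nonparametric Bayesian tests, here a Bayesian analogue of the sign test built on the Bayesian bootstrap: a Dirichlet(1,...,1) posterior is placed on the observed paired differences, and each posterior sample gives a value of P(X > Y). The function name and interface are illustrative.

```python
import numpy as np

def bayesian_sign_test(x, y, n_samples=10000, seed=0):
    """Posterior probability that P(X > Y) exceeds 1/2, via the
    Bayesian bootstrap: Dirichlet(1,...,1) weights over the observed
    paired differences play the role of a nonparametric posterior."""
    rng = np.random.default_rng(seed)
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = len(d)
    # Each row of w is one posterior sample of the data distribution.
    w = rng.dirichlet(np.ones(n), size=n_samples)
    p_x_wins = w @ (d > 0)            # posterior samples of P(X > Y)
    return float((p_x_wins > 0.5).mean())
```

Unlike the frequentist sign test, the output is a posterior probability that can be read directly as evidence for either hypothesis, rather than a p-value.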
How can we model prior ignorance about statistical parameters? The most natural approach is by using a set of prior distributions M or, equivalently, the upper and lower expectations that are generated by M. Using this model, we can develop parametric and nonparametric Bayesian near-ignorance models.
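A concrete instance of such a set-of-priors model is Walley's Imprecise Dirichlet Model (IDM), where M is the set of all Dirichlet priors with fixed total prior strength s. Posterior inferences then come as lower and upper expectations; a minimal sketch for the probability of a single category:

```python
def idm_bounds(count, total, s=2.0):
    """Imprecise Dirichlet Model (Walley): lower and upper posterior
    expectations of a category's probability, obtained from the set of
    all Dirichlet priors with total prior strength s."""
    lower = count / (total + s)
    upper = (count + s) / (total + s)
    return lower, upper
```

Before any data, the bounds are the vacuous interval [0, 1], expressing complete prior ignorance; as observations accumulate, the interval shrinks around the empirical frequency.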