Quantum theory (QT) has been confirmed by numerous experiments, yet we still cannot fully grasp the meaning of the theory. As a consequence, the quantum world appears to us paradoxical. In a recent work, we shed new light on QT by deriving it from two main postulates: (i) the theory should be logically consistent; (ii) inferences in the theory should be computable in polynomial time. The first postulate is what we require of every well-founded mathematical theory. The computation postulate defines the physical component of the theory. We show that the computation postulate is the only true divide between QT, seen as a generalised theory of probability, and classical probability. All quantum paradoxes, and entanglement in particular, arise from the clash of trying to reconcile a computationally intractable, somewhat idealised, theory (classical physics) with a computationally tractable theory (QT) or, in other words, from regarding physics as fundamental rather than computation.
Bayesian hypothesis testing in machine learning
Hypothesis testing in machine learning - for instance, to establish whether the performance of two algorithms differs significantly - is usually performed using null hypothesis significance tests (NHST). Yet the NHST methodology has well-known drawbacks. For instance, the claimed statistical significance does not necessarily imply practical significance. Moreover, NHST cannot verify the null hypothesis and thus cannot recognize equivalent classifiers. We have developed Bayesian counterparts of the tests most commonly adopted in machine learning, such as the correlated t-test and the signed-rank test, which solve all these problems.
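As a rough illustration of the Bayesian counterpart of the correlated t-test, the sketch below places a Student-t posterior over the mean score difference of two classifiers compared via cross-validation, using the correlation-corrected variance. The function name, the ROPE bounds, and the choice of rho (often approximated as test-set fraction) are illustrative assumptions, not the exact published procedure.

```python
import numpy as np
from scipy import stats

def bayesian_correlated_ttest(diffs, rho, rope=(-0.01, 0.01)):
    """Sketch of a Bayesian correlated t-test.

    diffs : per-fold score differences between two classifiers.
    rho   : correlation between folds (often taken as the test-set
            fraction of the data in cross-validation).
    rope  : region of practical equivalence for the mean difference.

    Returns posterior probabilities that the mean difference lies
    below, inside, or above the ROPE.
    """
    n = len(diffs)
    mean = np.mean(diffs)
    # variance corrected for the correlation induced by overlapping
    # training sets across folds (Nadeau-Bengio style correction)
    var = np.var(diffs, ddof=1) * (1.0 / n + rho / (1.0 - rho))
    post = stats.t(df=n - 1, loc=mean, scale=np.sqrt(var))
    p_left = post.cdf(rope[0])
    p_rope = post.cdf(rope[1]) - p_left
    p_right = 1.0 - post.cdf(rope[1])
    return p_left, p_rope, p_right
```

Unlike an NHST p-value, the three returned probabilities directly quantify how likely each practical conclusion is, including equivalence (the ROPE), so the test can positively support the null.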
Filtering and control with sets of distributions
Can we solve the filtering problem knowing only a few moments of the noise terms? By exploiting filtering based on sets of distributions, we can solve this problem without introducing additional assumptions on the distributions of the noises (e.g., Gaussianity) or on the form of the estimator (e.g., a linear estimator). In the figure, first-order moments are used, via Sum-of-Squares optimization, to outer-approximate the true state uncertainty set X (blue) of a polynomial dynamical system.
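A toy one-dimensional illustration of the underlying idea (not the Sum-of-Squares machinery used in the figure): if only the mean and variance of the noise are known, the first two moments of a linear system's state can be propagated exactly, and a distribution-free Chebyshev bound then gives an outer set guaranteed to contain the state with prescribed probability. All names and the scalar linear model are assumptions for illustration.

```python
import math

def propagate_moments(a, mean0, var0, w_mean, w_var, steps):
    """Exact propagation of the first two moments through the scalar
    linear system x_{k+1} = a * x_k + w_k, assuming only E[w] and
    Var[w] are known (no distributional shape such as Gaussianity)."""
    m, v = mean0, var0
    for _ in range(steps):
        m = a * m + w_mean
        v = a * a * v + w_var
    return m, v

def chebyshev_interval(mean, var, coverage=0.95):
    """Distribution-free outer interval containing the state with
    probability >= coverage, via Chebyshev's inequality."""
    kappa = math.sqrt(1.0 / (1.0 - coverage))
    half = kappa * math.sqrt(var)
    return mean - half, mean + half
```

The interval is conservative (an outer approximation), which is exactly the price of dropping all assumptions beyond the known moments.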
Nonparametric Bayesian tests
Bayesian methods are ubiquitous in machine learning, bioinformatics, etc. Nevertheless, the analysis of empirical results is typically performed with frequentist tests. This means dealing with null hypothesis significance tests (NHST) and p-values, even though the shortcomings of such methods are well known. We are currently developing nonparametric Bayesian versions of the most widely used frequentist tests (Wilcoxon signed-rank test, Wilcoxon rank-sum test, Friedman test, etc.).
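A minimal sketch of the nonparametric flavour of such tests: a Bayesian-bootstrap analogue of the sign test, which places a Dirichlet(1, ..., 1) posterior over the observed differences and returns the posterior probability that one algorithm outperforms the other. The function name and the flat Dirichlet prior are illustrative assumptions, not the exact tests under development.

```python
import numpy as np

def bayesian_bootstrap_sign_test(diffs, n_samples=20000, seed=0):
    """Nonparametric Bayesian analogue of the sign test.

    Uses the Bayesian bootstrap: each posterior sample is a Dirichlet
    weight vector over the observed differences, and we report the
    posterior probability that the weighted mean difference is > 0."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    # one Dirichlet(1,...,1) weight vector per posterior sample
    w = rng.dirichlet(np.ones(len(diffs)), size=n_samples)
    means = w @ diffs
    return float(np.mean(means > 0.0))
```

The output is a direct posterior probability (e.g., "algorithm A is better with probability 0.97") rather than a p-value, and no parametric form is assumed for the distribution of the differences.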
How can we model prior ignorance about statistical parameters? The most natural approach is to use a set of prior distributions M or, equivalently, the upper and lower expectations generated by M. Using this model, we can develop parametric and nonparametric Bayesian near-ignorance models.
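A concrete instance of such a set-of-priors model is Walley's Imprecise Dirichlet Model for a Bernoulli parameter: the set of all Beta(s*t, s*(1-t)) priors with t in (0, 1). The sketch below computes the resulting lower and upper posterior expectations; the function name is an assumption, but the bounds are the standard IDM formulas.

```python
def idm_bounds(k, n, s=2.0):
    """Lower/upper posterior expectations of a Bernoulli parameter
    under the Imprecise Dirichlet Model (set of Beta(s*t, s*(1-t))
    priors, t in (0, 1)), after observing k successes in n trials.
    s is the prior strength (s = 1 or s = 2 are common choices)."""
    lower = k / (n + s)
    upper = (k + s) / (n + s)
    return lower, upper
```

Before any data (n = 0) the bounds are the vacuous interval (0, 1), which is exactly what prior near-ignorance means; as n grows, the interval shrinks and the data dominate the choice of prior.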