I want to share with you my excitement about this new work: “Computational Complexity and the Nature of Quantum Mechanics”, by Alessio Benavoli, Alessandro Facchini and Marco Zaffalon. In a previous paper, we derived the axioms of QM from the same rationality principles that underlie the subjective foundation of probability. We were able to show the way QM is …Continue reading Entanglement!
I am happy to be part of the Technical Program Committee of ISIPTA 2019, the 20th-anniversary edition of the world’s main forum on imprecise probabilities. If we had to describe its central theme in one sentence, it would be: “There’s more to uncertainty than probabilities.” Indeed, a wide range of other …Continue reading ISIPTA 2019
I am co-editing a Special Issue on “Imprecise Probabilities, Logic and Rationality” in the International Journal of Approximate Reasoning (IJAR, Elsevier). This special issue intends to contribute to the state of the art on the interactions and connections between imprecise probabilities and logic, and more generally with formal theories of rationality, the hope being that this cross-disciplinary view will lead …Continue reading Special issue on “Imprecise Probabilities, logic and Rationality”
I have implemented a Python library for modelling, inference and updating with Almost Desirable Gambles (ADG) models. It is designed to be both user-friendly and flexible, and it works with continuous, discrete and mixed variables. Here you can find some additional info, setup instructions and four example notebooks: https://github.com/PyRational/PyRational/blob/master/notebooks/index.ipynb The notebooks (and the related examples) are very simple; their purpose …Continue reading PyRational
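PyRational’s actual API is documented in the linked notebooks. Purely as a conceptual illustration of the kind of inference involved — and not PyRational code — checking whether a finite set of almost-desirable gambles avoids sure loss can be cast as a small linear program; the function and example gambles below are made up for this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def avoids_sure_loss(gambles):
    """Conceptual sketch (NOT PyRational's API): a finite set of gambles
    on a finite possibility space avoids sure loss iff no convex
    combination of them is uniformly negative.

    `gambles` is an (m, k) array-like: m gambles over k outcomes.
    Solve:  min t   s.t.  sum_i lam_i * g_i(w) <= t  for every outcome w,
                          lam >= 0,  sum(lam) = 1.
    Sure loss occurs exactly when the optimum t* is negative.
    """
    G = np.asarray(gambles, dtype=float)
    m, k = G.shape
    c = np.zeros(m + 1)
    c[-1] = 1.0                                  # minimise t
    A_ub = np.hstack([G.T, -np.ones((k, 1))])    # G^T lam - t <= 0
    b_ub = np.zeros(k)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                            # sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.fun >= -1e-9                      # t* < 0  =>  sure loss

# Two toy gambles on a two-outcome space:
print(avoids_sure_loss([[1, -1], [-1, 1]]))   # no convex mix is uniformly negative
print(avoids_sure_loss([[1, -2], [-2, 1]]))   # the 0.5/0.5 mix loses everywhere
```

The same min–max structure (a price/mixture dual pair) underlies most desirability checks, which is why LP solvers do the heavy lifting in this area.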
Janez Demsar has reimplemented our library for Bayesian hypothesis testing to compare competing algorithms in ML, and it can now be installed directly with pip. Hereafter, a brief description. Baycomp is a library for the Bayesian comparison of classifiers. Its functions compare two classifiers on a single data set or on multiple data sets. They compute three probabilities: the probability that …Continue reading Baycomp
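As a rough illustration of what “three probabilities” means — and not baycomp’s actual implementation, which uses the Bayesian tests developed in our papers — the sketch below estimates, by Monte Carlo from a plain Student-t posterior on the mean paired difference, the probability that one classifier is practically worse, equivalent (within a “rope” of practical equivalence), or better. The accuracy values and the rope width are made up.

```python
import numpy as np

def three_probabilities(x, y, rope=0.01, n_samples=100_000, seed=0):
    """Sketch (NOT baycomp's code): posterior probabilities that classifier
    x is practically worse than, equivalent to, or better than y, using a
    flat-prior Student-t posterior on the mean of the paired differences."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = len(d)
    rng = np.random.default_rng(seed)
    # Posterior of the mean: t(n-1), located at the sample mean,
    # scaled by the standard error.
    samples = d.mean() + d.std(ddof=1) / np.sqrt(n) * rng.standard_t(n - 1, n_samples)
    p_left = np.mean(samples < -rope)            # y practically better
    p_rope = np.mean(np.abs(samples) <= rope)    # practically equivalent
    p_right = np.mean(samples > rope)            # x practically better
    return p_left, p_rope, p_right

# Hypothetical accuracies of two classifiers over 10 folds:
acc_a = [0.82, 0.80, 0.85, 0.83, 0.81, 0.84, 0.82, 0.80, 0.83, 0.85]
acc_b = [0.78, 0.79, 0.80, 0.81, 0.77, 0.80, 0.79, 0.78, 0.80, 0.81]
p_left, p_rope, p_right = three_probabilities(acc_a, acc_b)
print(p_left, p_rope, p_right)   # the three probabilities sum to 1
```

Reporting all three numbers, rather than a single p-value, is the point: it separates “no practical difference” from “not enough evidence”.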
In a previous post we derived the Covariance Inequality from a Bayesian (imprecise-probability) perspective. There is another, more elegant way to derive this inequality: $$\mathrm{Cov}(X,Y)^2\leq \mathrm{Var}(X)\,\mathrm{Var}(Y)$$ To do that, we introduce again our favorite subject, Alice. Let us summarize the problem again. Assume that there are two real variables $X,Y$ and that Alice only …Continue reading Heisenberg uncertainty principle: a Bayesian perspective part I cont.
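For comparison, the textbook route to this inequality (not the Alice-based derivation that the post develops) is a one-line application of Cauchy–Schwarz: writing $U=X-\mathbb{E}[X]$ and $V=Y-\mathbb{E}[Y]$, the bound $\mathbb{E}[UV]^2\leq \mathbb{E}[U^2]\,\mathbb{E}[V^2]$ gives

$$\mathrm{Cov}(X,Y)^2=\mathbb{E}[UV]^2\;\leq\;\mathbb{E}[U^2]\,\mathbb{E}[V^2]=\mathrm{Var}(X)\,\mathrm{Var}(Y).$$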
While I was presenting at QPL, I was asked by the audience whether, and how, we can derive the “Heisenberg inequality” as a consequence of our subjective (gambling) formulation of QM. This is not complicated, since the Heisenberg inequality is just the QM version of the Covariance Inequality, which states that for any two random variables $X$ and …Continue reading Heisenberg uncertainty principle: a Bayesian perspective part I
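For context, the standard operator form of the statement meant here is Robertson’s uncertainty relation: for observables $A$ and $B$ with standard deviations $\sigma_A,\sigma_B$ in a given state,

$$\sigma_A\,\sigma_B\;\geq\;\tfrac{1}{2}\,\big|\langle [A,B]\rangle\big|,$$

which for position and momentum ($[\hat{x},\hat{p}]=i\hbar$) yields the familiar $\sigma_x\sigma_p\geq \hbar/2$. Structurally, its proof is the same Cauchy–Schwarz argument that yields the covariance inequality, which is what makes the correspondence in the post natural.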
I thank the organizers of ISIPTA 2017 for having invited me. Here you can find the link to my keynote talk: …Continue reading Keynote Bayes+Hilbert=QM
QM is based on four main axioms, which were derived after a long process of trial and error. The motivations for these axioms are not always clear, and the basic axioms of QM often appear counter-intuitive even to experts. In a recent paper, we have shown that: It is possible to derive quantum mechanics …Continue reading Bayes+Hilbert=QM
The following post discusses how to use a Bayesian hierarchical test (and the Python module that implements it) to compare classifiers assessed via m runs of k-fold cross-validation. With the Bayesian correlated t-test, as with the frequentist correlated t-test, we can only analyze cross-validation results on a single dataset. In particular, the Bayesian correlated t-test …Continue reading Hierarchical test to compare classifiers
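The correlated t-test mentioned above corrects the usual variance term for the correlation among cross-validation results caused by overlapping training sets. A common sketch of the single-dataset Bayesian version uses the correction $\rho=1/k$ for $k$-fold CV, inflating the variance factor from $1/n$ to $1/n+\rho/(1-\rho)$; the function below is an illustrative simplification (check the papers for the exact model), and the data are synthetic.

```python
import numpy as np

def correlated_t_posterior(diffs, k, rope=0.01, n_samples=100_000, seed=0):
    """Sketch of a Bayesian correlated t-test on ONE data set.

    `diffs`: the n = m*k accuracy differences from m-run k-fold CV.
    The correlation between results is approximated by rho = 1/k (the
    fraction of data in each test fold), which inflates the variance
    term 1/n to 1/n + rho/(1-rho).  Returns (p_left, p_rope, p_right).
    """
    d = np.asarray(diffs, dtype=float)
    n = len(d)
    rho = 1.0 / k
    scale = d.std(ddof=1) * np.sqrt(1.0 / n + rho / (1.0 - rho))
    rng = np.random.default_rng(seed)
    samples = d.mean() + scale * rng.standard_t(n - 1, n_samples)
    return (np.mean(samples < -rope),
            np.mean(np.abs(samples) <= rope),
            np.mean(samples > rope))

# Synthetic differences from 3 runs of 10-fold CV (30 values):
rng = np.random.default_rng(1)
diffs = rng.normal(0.02, 0.05, size=30)
probs = correlated_t_posterior(diffs, k=10)
print(probs)
```

Note how the $\rho/(1-\rho)$ term keeps the posterior wide even as $n$ grows with more runs — repeated runs on the same folds do not add independent information, which is exactly the limitation that motivates the hierarchical test across multiple datasets.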