## Heisenberg uncertainty principle: a Bayesian perspective part I cont.

In a previous post we derived the Covariance Inequality from a Bayesian (imprecise probability) perspective. There is another, more elegant way to derive this inequality: $$\mathrm{Cov}(X,Y)^2\leq \mathrm{Var}(X)\,\mathrm{Var}(Y)$$ To do that, we introduce again our favorite subject, Alice. Let us summarize the problem again. Assume that there are two real variables …
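As a quick numerical sanity check of the inequality above, here is a small self-contained sketch; the synthetic data and the 0.7 coefficient are made up for illustration and are not from the post:

```python
import random

# Numerical sanity check of the Covariance Inequality:
#   Cov(X, Y)^2 <= Var(X) * Var(Y)
# using plain sample moments on synthetic data.

random.seed(0)
n = 10_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
# Y is correlated with X, plus independent noise (illustrative choice).
ys = [0.7 * x + random.gauss(0.0, 0.5) for x in xs]

mx = sum(xs) / n
my = sum(ys) / n
var_x = sum((x - mx) ** 2 for x in xs) / n
var_y = sum((y - my) ** 2 for y in ys) / n
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# The sample moments always satisfy the inequality (Cauchy-Schwarz).
assert cov_xy ** 2 <= var_x * var_y
```

The inequality holds for any joint distribution; it is just Cauchy–Schwarz applied to the centered variables.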


## Heisenberg uncertainty principle: a Bayesian perspective part I

While I was presenting at QPL, I had a question from the audience about whether/how we can derive the “Heisenberg inequality” as a consequence of our subjective (gambling) formulation of QM. This is not complicated, since the Heisenberg inequality is just the QM version of the Covariance Inequality, which states that for any …


## Keynote Bayes+Hilbert=QM

I thank the organizers of ISIPTA 2017 for having invited me. Here you can find the link to my Keynote talk:


## Bayes+Hilbert=QM

QM is based on four main axioms, which were derived after a long process of trial and error. The motivations for the axioms are not always clear and even to experts the basic axioms of QM often appear counter-intuitive. In a recent paper [1], we have shown that: It is …


## Hierarchical test to compare classifiers

The following post discusses how to use a Bayesian hierarchical test (and the Python module that implements it) to compare classifiers assessed via m runs of k-fold cross-validation. With the Bayesian correlated t-test, and also with the frequentist correlated t-test, we can only analyze cross-validation results on a single dataset. In …
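For a single dataset, the correlated t-test mentioned above inflates the variance of the fold-wise score differences to account for their correlation. A minimal sketch, assuming the standard Nadeau–Bengio correction factor 1/(k−1) for k-fold cross-validation; the score differences below are synthetic:

```python
import math
import statistics

# Sketch of the correlated t-test for m-runs k-fold cross-validation
# results, which the hierarchical test generalises to many datasets.

def corrected_t(diffs, k):
    """t statistic for per-fold score differences from k-fold CV,
    with the variance inflated by the test/train size ratio
    1/(k-1) to account for the correlation between folds."""
    j = len(diffs)
    mean = statistics.fmean(diffs)
    var = statistics.variance(diffs)  # unbiased sample variance
    corrected_var = var * (1.0 / j + 1.0 / (k - 1))
    return mean / math.sqrt(corrected_var)

# 10 runs of 10-fold CV -> 100 accuracy differences (synthetic).
diffs = [0.01 + 0.005 * math.sin(i) for i in range(100)]
t = corrected_t(diffs, k=10)
```

The corrected statistic is always smaller in magnitude than the naive one, which is how the correction counters the optimism of treating correlated folds as independent.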


## Special Issue on Bayesian Nonparametrics on IJAR

The Special Issue on Bayesian Nonparametrics that I co-edited together with Antonio Lijoi and Antonietta Mira is closed, with 10 interesting papers accepted. The aim of this Special Issue is twofold. On the one hand, it aims to give a broad overview of popular models used in BNP, and of the related computational …


## “Reality is not what it seems” by Carlo Rovelli

A very pleasant way to start the day, and a joy to read. It is really very well written, maybe a bit too simple, but a captivating book. I recommend it.


## The importance of the region of practical equivalence (ROPE)

The difference between two classifiers (algorithms) can be very small; however, no two classifiers have perfectly equivalent accuracies. In a null hypothesis significance test (NHST), the null hypothesis is that the classifiers are equal. However, the null hypothesis is practically always false! By rejecting the null …
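To make the ROPE idea concrete, here is a toy Monte Carlo sketch: given a hypothetical (made-up) normal posterior over the accuracy difference, we report the posterior mass to the left of, inside, and to the right of a ROPE of ±0.01:

```python
import random

# Instead of testing "difference == 0", compute the posterior
# probability that the accuracy difference lies inside a region of
# practical equivalence (ROPE), here [-0.01, 0.01]. The normal
# posterior below is a made-up stand-in, not a real result.

random.seed(1)
rope = (-0.01, 0.01)
samples = [random.gauss(0.005, 0.004) for _ in range(100_000)]

p_left = sum(s < rope[0] for s in samples) / len(samples)
p_rope = sum(rope[0] <= s <= rope[1] for s in samples) / len(samples)
p_right = sum(s > rope[1] for s in samples) / len(samples)
```

With this posterior most of the mass falls inside the ROPE, so we would declare the two classifiers practically equivalent even though the point estimate of the difference is nonzero, which is exactly the conclusion an NHST cannot reach.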


## Bayesian Signed-Rank Test for comparing algorithms in machine learning

This post is about Bayesian nonparametric tests for comparing algorithms in ML. This time we discuss the Python module signrank in bayesiantests (see our GitHub repository). It computes the Bayesian equivalent of the Wilcoxon signed-rank test. It returns the probabilities that, based on the measured performance, one model is better …
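To convey the idea without relying on the module's exact API (which may differ from this sketch), here is a from-scratch illustration: a Dirichlet posterior over the observed score differences, with a pseudo-observation at zero playing the role of the prior, classifying the Walsh averages against a ROPE. The input differences are synthetic:

```python
import random

# From-scratch sketch of the idea behind the Bayesian signed-rank
# test. This is NOT the bayesiantests API, just an illustration.

def bayesian_signrank(diffs, rope=0.01, n_samples=2000, seed=0):
    rng = random.Random(seed)
    z = [0.0] + list(diffs)          # pseudo-observation at zero
    wins = [0, 0, 0]                 # counts: left, rope, right
    for _ in range(n_samples):
        # Dirichlet(1, ..., 1) weights via normalised Gamma draws.
        g = [rng.gammavariate(1.0, 1.0) for _ in z]
        s = sum(g)
        w = [x / s for x in g]
        th = [0.0, 0.0, 0.0]
        # Posterior mass of the Walsh averages in each region.
        for wi, zi in zip(w, z):
            for wj, zj in zip(w, z):
                m = (zi + zj) / 2.0  # Walsh average
                if m < -rope:
                    th[0] += wi * wj
                elif m > rope:
                    th[2] += wi * wj
                else:
                    th[1] += wi * wj
        wins[max(range(3), key=lambda i: th[i])] += 1
    return [c / n_samples for c in wins]

# Synthetic score differences favouring the second algorithm.
diffs = [0.02, 0.03, 0.015, -0.005, 0.025, 0.04, 0.01, 0.02]
p_left, p_rope, p_right = bayesian_signrank(diffs)
```

The three returned numbers are the posterior probabilities that the first algorithm is better, that the two are practically equivalent, and that the second is better; with the differences above, most of the probability lands on the second algorithm.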


## 28 October, USA general election situation

These are the updated probabilities obtained by running the Python code that computes the Bayesian posterior distribution over the electoral votes using near-ignorance priors. The worst-case and best-case distributions for Clinton are in red and blue, respectively. Winning probability above 0.99 (for both the worst and the best scenario). Electoral votes between …
