A very pleasant way to start the day. A joy to read. It is really very well written, maybe a bit too simple, but a catching book. I recommend it.

Continue reading “Reality is not what it seems” by Carlo Rovelli

This post is about Bayesian nonparametric tests for comparing algorithms in machine learning. This time we discuss the Python module signrank in bayesiantests (see our GitHub repository). It computes the Bayesian equivalent of the Wilcoxon signed-rank test and returns the probabilities that, based on the measured performance, one model is better than another or vice versa …
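The idea behind the test can be sketched without the library: place a Dirichlet-process prior over the observed score differences (with a pseudo-observation at zero carrying the prior strength), sample posterior weights, and measure how much posterior mass the pairwise Walsh averages put on each side of a region of practical equivalence (rope). The function and parameter names below are illustrative, not the actual bayesiantests API; this is a minimal Monte Carlo sketch of the construction:

```python
import random

def bayes_signrank(diffs, rope=0.01, prior_strength=0.5, nsamples=2000, seed=0):
    """Illustrative sketch of a Bayesian signed-rank test.

    diffs: per-dataset score differences between two models.
    Returns (p_left, p_rope, p_right): fraction of posterior samples in
    which each region holds the largest probability mass.
    """
    rng = random.Random(seed)
    z = [0.0] + list(diffs)                    # pseudo-observation at 0 carries the prior
    alpha = [prior_strength] + [1.0] * len(diffs)
    left = rope_n = right = 0
    for _ in range(nsamples):
        # one Dirichlet draw of weights over the (pseudo-)observations
        g = [rng.gammavariate(a, 1.0) for a in alpha]
        s = sum(g)
        w = [x / s for x in g]
        # posterior mass of the pairwise Walsh averages in each region
        pl = pr = 0.0
        for i in range(len(z)):
            for j in range(len(z)):
                m = 0.5 * (z[i] + z[j])
                if m < -rope:
                    pl += w[i] * w[j]
                elif m > rope:
                    pr += w[i] * w[j]
        pe = 1.0 - pl - pr
        if pl > max(pe, pr):
            left += 1
        elif pr > max(pl, pe):
            right += 1
        else:
            rope_n += 1
    return left / nsamples, rope_n / nsamples, right / nsamples
```

If one model is consistently better by more than the rope width, nearly all posterior samples fall in the "right" region.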

Continue reading Bayesian Signed-Rank Test for comparing algorithms in machine learning

These are the updated probabilities obtained by running the Python code that computes the Bayesian posterior distribution over the electoral votes using near-ignorance priors. The worst-case and best-case distributions for Clinton are in red and blue, respectively. Winning probability above 0.99 (for both the worst and best scenario). Electoral votes between 322 and 335 (mean of …

Continue reading 28 October, USA general election situation

If you are interested in the quantum-mechanics version of the Fréchet bounds, I have just edited the Fréchet inequalities page on Wikipedia to show that similar bounds can also be obtained in quantum mechanics for separable density matrices. These bounds were derived in our paper. It is worth pointing out that entangled states …
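For reference, the classical Fréchet inequalities bound the probability of a conjunction using only the marginals: max(0, P(A) + P(B) − 1) ≤ P(A ∧ B) ≤ min(P(A), P(B)). The quantum version discussed in the post extends these to separable density matrices; a minimal sketch of the classical bounds:

```python
def frechet_bounds(pa, pb):
    """Classical Fréchet bounds on P(A and B) given the marginals P(A), P(B)."""
    lower = max(0.0, pa + pb - 1.0)
    upper = min(pa, pb)
    return lower, upper

lo, hi = frechet_bounds(0.7, 0.6)   # lower is about 0.3, upper is 0.6
```

The bounds are tight: for any marginals there exist joint distributions attaining both endpoints.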

Continue reading Quantum Fréchet bounds

Bayesian Sign Test

Module signtest in bayesiantests computes the probabilities that, based on the measured performance, one model is better than another, or vice versa, or that they are within the region of practical equivalence. This notebook demonstrates the use of the module. We will load the classification accuracies of the naive Bayesian classifier and AODE …
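The sign test is the simpler of the two: count how many score differences fall left of, inside, and right of the rope, put a Dirichlet posterior over those three counts, and report how often each region carries the largest probability. The sketch below is illustrative (names, prior placement, and defaults are assumptions, not the bayesiantests API, which by default places the prior pseudo-observation in the rope):

```python
import random

def bayes_sign_test(diffs, rope=0.01, prior=1.0, nsamples=5000, seed=0):
    """Illustrative sketch of a Bayesian sign test.

    Returns [p_left, p_rope, p_right]: the posterior probability that each
    region (second model worse / practically equivalent / better) is the
    most probable outcome.
    """
    rng = random.Random(seed)
    counts = [prior, prior, prior]          # symmetric Dirichlet prior (an assumption here)
    for d in diffs:
        if d < -rope:
            counts[0] += 1
        elif d > rope:
            counts[2] += 1
        else:
            counts[1] += 1
    wins = [0, 0, 0]
    for _ in range(nsamples):
        # one Dirichlet draw via normalized Gamma variates
        g = [rng.gammavariate(c, 1.0) for c in counts]
        wins[g.index(max(g))] += 1
    return [w / nsamples for w in wins]
```

With twenty differences all clearly favouring the second model, the posterior probability of "right" is close to one.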

Continue reading Bayesian Sign Test

The worst-case probability of Clinton winning the election is back above 90% (precisely 93%). You can try it yourself by running this code in my GitHub repository.

Continue reading Fresh forecast for US2016 election

I have run again the Bayesian algorithm that uses a near-ignorance prior model to compute the US2016 election forecast. This is the current situation for Clinton (worst case in red, best case in blue). The probability range of winning the election (by getting the majority of the electoral votes) is [0.68, 0.91]. The posterior distributions obtained using the …
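The worst-case/best-case range comes from the near-ignorance prior: instead of a single prior, the prior pseudo-counts are allowed to sit entirely with either candidate, producing a lower and an upper posterior winning probability. A minimal single-state sketch (function and parameter names are illustrative; the real model aggregates states into electoral votes, see the repository):

```python
import random

def state_win_range(k, n, s=2.0, nsamples=20000, seed=0):
    """Illustrative near-ignorance (imprecise Beta) forecast for one state.

    k: poll respondents supporting the candidate, n: total respondents,
    s: prior strength. The s prior pseudo-counts are placed entirely
    against the candidate (worst case) or entirely for her (best case),
    and P(vote share > 0.5) is estimated by Monte Carlo under each.
    """
    rng = random.Random(seed)

    def p_win(a, b):
        # fraction of Beta(a, b) posterior samples with vote share above 1/2
        return sum(rng.betavariate(a, b) > 0.5 for _ in range(nsamples)) / nsamples

    worst = p_win(k, n - k + s)        # all prior mass against the candidate
    best = p_win(k + s, n - k)         # all prior mass for the candidate
    return worst, best
```

With little data the two bounds are far apart; as polls accumulate they converge, which is why the reported range [0.68, 0.91] narrows over the campaign.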

Continue reading US2016 election forecast

The tutorial went very well. It was a nice experience and we received very positive feedback. If you are interested in the content, please visit this page.

Continue reading ECML 2016 tutorial on Bayesian vs. Frequentist tests for comparing algorithms

I have run again the Python code that computes the worst-case (red) and best-case (blue) posterior distributions for Clinton winning the US general election, using fresh (September) poll data. At the moment there is quite large uncertainty, but it is still in favour of Clinton: the probability of winning is between 0.78 and 0.95. …

Continue reading Clinton vs. Trump, 23rd September 2016

Working on the slides for our tutorial at ECML 2016 (Riva del Garda): G. Corani, A. Benavoli, J. Demsar, "Comparing competing algorithms: Bayesian versus frequentist hypothesis testing".

Schedule:
09:00, 15 min: Introduction (Motivations and Goals)
09:15, 60 min: Null hypothesis significance tests in machine learning (NHST testing: methods and drawbacks)
10:15, 25 min: …

Continue reading 19 September Tutorial at ECML