I have run the Python code again that computes the worst-case (red) and best-case (blue) posterior distributions for Clinton winning the US general election, using fresh (September) poll data. At the moment there is quite a large uncertainty, but it is still in favour of Clinton: the probability of winning is between 0.78 and 0.95. …
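The worst-case and best-case probabilities come from a set of priors rather than a single one. Here is a minimal sketch of the idea (my own illustration, with made-up two-party counts and a hypothetical prior strength `s`, not the post's actual code): push the whole prior strength towards either candidate and read off the two posterior probabilities.

```python
# Sketch: lower/upper posterior P(Clinton share > 0.5) from a near-ignorance
# set of Beta priors Beta(s*t, s*(1-t)), t in (0, 1). The extremes t -> 0
# and t -> 1 give the lower and upper probabilities. Counts are hypothetical.
import random

def prob_share_above_half(a, b, n=100000, seed=0):
    """Monte Carlo estimate of P(theta > 0.5) for theta ~ Beta(a, b)."""
    rng = random.Random(seed)
    return sum(rng.betavariate(a, b) > 0.5 for _ in range(n)) / n

def win_prob_bounds(clinton, trump, s=2.0):
    lower = prob_share_above_half(clinton, trump + s)  # prior strength to Trump
    upper = prob_share_above_half(clinton + s, trump)  # prior strength to Clinton
    return lower, upper

lo, up = win_prob_bounds(520, 480)  # hypothetical poll counts
print(lo, up)
```

The gap between `lo` and `up` shrinks as the sample grows, which is how the 0.78–0.95 interval will tighten with more polls.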

Continue reading Clinton vs. Trump, 23rd September 2016

Working on the slides for our tutorial at ECML 2016 (Riva del Garda): G. Corani, A. Benavoli, J. Demsar, "Comparing competing algorithms: Bayesian versus frequentist hypothesis testing".

Schedule:

Time   Duration  Content                                                 Details
09:00  15 min    Introduction                                            Motivations and Goals
09:15  60 min    Null hypothesis significance tests in machine learning  NHST testing (methods and drawbacks)
10:15  25 min    …

Continue reading 19 September Tutorial at ECML

We continue our adventure in the Bayesian USA 2016 election forecast through near-ignorance priors. Today I will show how to compute the lower and upper probabilities of Clinton winning the 2016 general election. First, we load the lower and upper probabilities of Clinton winning in every single state (see bayesian-winning-lower-and-upper/) as well as …
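To give a feel for the state-to-nation step, here is a hedged sketch (my illustration, not the post's code) with made-up per-state bounds, a toy majority threshold, and independence assumed across states. Since winning the election is monotone in each state's win probability, plugging in all the lower (resp. upper) per-state probabilities bounds the overall probability from below (resp. above).

```python
# Toy electoral-college bounds: (electoral votes, lower prob, upper prob)
# per hypothetical "state"; in reality there are 51 races and a 270 threshold.
import random

states = [(6, 0.55, 0.70), (29, 0.45, 0.60), (3, 0.80, 0.95)]
needed = 20  # hypothetical majority threshold for this toy example

def win_prob(probs, votes, needed, n=50000, seed=0):
    """Monte Carlo probability of reaching `needed` electoral votes."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        ev = sum(v for v, p in zip(votes, probs) if rng.random() < p)
        wins += ev >= needed
    return wins / n

votes = [v for v, _, _ in states]
lower = win_prob([lo for _, lo, _ in states], votes, needed)
upper = win_prob([up for _, _, up in states], votes, needed)
print(lower, upper)
```

In this toy example only the 29-vote state can decide the outcome, so the bounds land near its own 0.45–0.60 interval.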

Continue reading General Poll for US Presidential Election 2016

This post is about how to perform a Bayesian analysis of election polls for the USA 2016 presidential election. In previous posts, I have discussed how to make a poll for a single state (Nevada as an example). Here we will use some simple Python functions to compute the probability of Clinton winning in all 51 …

Continue reading Bayesian winning lower and upper probabilities in all 51 States

In a previous post, we saw how to perform polls for a single state using poll data from KTNV/Rasmussen. Here we are going to see how to combine polls from different sources. Let us consider the Nevada polls again.

Poll           Date     Sample  MoE  Clinton (D)  Trump (R)  Johnson (L)  Spread
0  RCP Average  7/7 – …
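Covariance intersection fuses two estimates whose correlation is unknown, which suits polls that may share respondents or methodology. A minimal scalar sketch (assumed, with hypothetical Nevada shares and MoE-derived variances; the post's own implementation may differ):

```python
# Scalar covariance intersection: fuse estimates (a, Pa) and (b, Pb)
# without knowing their correlation, using a weight w in [0, 1].
def covariance_intersection(a, Pa, b, Pb, w):
    Pinv = w / Pa + (1 - w) / Pb              # fused inverse variance
    P = 1.0 / Pinv
    x = P * (w * a / Pa + (1 - w) * b / Pb)   # fused mean
    return x, P

# Hypothetical Clinton shares from two polls; variance from the margin of
# error via var ~ (MoE / 1.96)**2 for a 95% interval.
x, P = covariance_intersection(0.44, (0.04 / 1.96) ** 2,
                               0.46, (0.03 / 1.96) ** 2, w=0.5)
print(x, P)
```

The fused mean always lies between the two inputs, and the fused variance is consistent whatever the true correlation between the polls; the weight `w` can be tuned (e.g., to minimise the fused variance).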

Continue reading Combining polls data from different sources using covariance intersection

I will show how to apply the models described in a-description-of-bayesian-near_ignorance_prior to predict the USA 2016 election results in Nevada. The poll data are from www.realclearpolitics.com, in particular the KTNV/Rasmussen poll (see below). In a future post, I will discuss how to take the three polls into account. We start by importing the data. In [4]: import pandas …

Continue reading Nevada data poll with near-ignorance priors and Python

Election Poll for a single state. In this and following posts, I'll present a way to compute Bayesian predictions for the result of the USA 2016 election based on election poll data and near-ignorance prior models. The model is described in detail here: A. Benavoli and M. Zaffalon. "Prior near ignorance for inferences in the k-parameter …
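For a single candidate's vote share, near-ignorance Dirichlet priors yield posterior bounds in closed form. A small sketch of that update (my illustration with made-up counts and a hypothetical prior strength `s`; see the referenced paper for the actual model):

```python
# Imprecise-Dirichlet-style posterior bounds on a candidate's share:
# with a near-ignorance set of Dirichlet priors of total strength s,
# the posterior expectation ranges over
#   [count / (total + s), (count + s) / (total + s)].
def posterior_mean_bounds(count, total, s=2.0):
    return count / (total + s), (count + s) / (total + s)

lo, up = posterior_mean_bounds(440, 1000)  # hypothetical Clinton count
print(lo, up)  # the interval width s/(total+s) shrinks as the poll grows
```

The interval width is `s / (total + s)`, so prior ignorance matters a lot for small polls and washes out as the sample size grows.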

Continue reading A description of a Bayesian near-ignorance model for USA election polls

Also for the 2016 USA election, I will periodically post election polls for the battle for the White House, using Bayesian methods based on near-ignorance prior probabilities, which automatically allow us to examine swing scenarios (e.g., a percentage of voters decide to change their vote). This is the first result, using June polls. In future posts, …

Continue reading Battle for White House 2016

I have just completed the Wikipedia page for the Imprecise Dirichlet process, https://en.wikipedia.org/wiki/Imprecise_Dirichlet_process … Any useful contribution/modification is welcome.

Continue reading Wikipedia

This post shows how to use the IDP statistical package to compare sports performance. As a case study, I have considered (just for fun) a comparison of my climbing performance in two consecutive editions (2013 and 2014) of the "Tre Valli Bresciane" cycling race. The following table reports my ascent times on 6 different climbs on …
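To illustrate the flavour of an IDP-style paired comparison, here is a hedged sketch (not the IDP package's API, and the ascent times are made up): estimate lower and upper posterior probabilities that P(2014 faster than 2013) exceeds 1/2 by drawing Bayesian-bootstrap Dirichlet weights over the paired outcomes, with one prior pseudo-observation pushed to each extreme.

```python
# IDP-flavoured paired comparison via Dirichlet (Bayesian bootstrap) weights:
# the single prior pseudo-observation is placed at the pessimistic extreme
# (no improvement) for the lower bound and at the optimistic extreme for
# the upper bound. All times below are hypothetical.
import random

t2013 = [12.1, 8.4, 15.0, 9.7, 20.3, 11.2]   # hypothetical ascent minutes
t2014 = [11.5, 8.6, 14.2, 9.1, 19.8, 10.9]
faster = [int(b < a) for a, b in zip(t2013, t2014)]  # 1 if 2014 was faster

def prob_improved(indicators, s=1.0, pessimistic=True, n=20000, seed=0):
    rng = random.Random(seed)
    extreme = 0 if pessimistic else 1      # where the prior pseudo-mass goes
    data = indicators + [extreme]
    alphas = [1.0] * len(indicators) + [s]
    hits = 0
    for _ in range(n):
        g = [rng.gammavariate(a, 1.0) for a in alphas]  # Dirichlet sample
        tot = sum(g)
        p = sum(wi * xi for wi, xi in zip(g, data)) / tot
        hits += p > 0.5
    return hits / n

lower = prob_improved(faster, pessimistic=True)
upper = prob_improved(faster, pessimistic=False)
print(lower, upper)
```

With only six climbs the prior pseudo-observation still matters, so the lower and upper probabilities stay visibly apart; that gap is the honest price of near-prior-ignorance on a small sample.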

Continue reading Comparing climbing performance