These are the updated probabilities obtained by running the Python code that computes the Bayesian posterior distribution over the electoral votes using near-ignorance priors. The worst-case and best-case distributions for Clinton are in red and blue, respectively. Winning probability above 0.99 (for both the worst and the best scenario). Electoral votes between 322 and 335 (mean of …

Continue reading 28 October, USA general election situation

Bayesian Sign Test

Module signtest in bayesiantests computes the probabilities that, based on the measured performance, one model is better than the other, or vice versa, or that the two are within the region of practical equivalence. This notebook demonstrates the use of the module. We will load the classification accuracies of the naive Bayes classifier and AODE …
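
The idea behind the sign test can be sketched with a Dirichlet posterior over the three outcomes (first model better, practical equivalence, second model better). This plain-NumPy version only illustrates the idea and is not the signtest API from bayesiantests; the accuracy differences and the ROPE width below are invented for illustration.

```python
import numpy as np

def sign_test(diffs, rope=0.01, prior=1.0, n_samples=50_000, seed=0):
    """Return (P(left), P(rope), P(right)) for the sign of the differences,
    via Monte Carlo samples from a Dirichlet posterior over the 3 regions."""
    diffs = np.asarray(diffs, dtype=float)
    counts = np.array([
        np.sum(diffs < -rope),           # first model better
        np.sum(np.abs(diffs) <= rope),   # practically equivalent
        np.sum(diffs > rope),            # second model better
    ])
    rng = np.random.default_rng(seed)
    theta = rng.dirichlet(counts + prior, size=n_samples)
    winners = theta.argmax(axis=1)       # which region dominates in each draw
    return np.bincount(winners, minlength=3) / n_samples

# invented accuracy differences (first classifier minus second) on 6 datasets
p_left, p_rope, p_right = sign_test([-0.05, -0.03, 0.002, -0.04, -0.06, 0.01])
```

With mostly negative differences, most posterior mass falls on the first model being better, which is exactly the kind of summary the module reports.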

Continue reading Bayesian Sign Test

I have run the Python code again, using fresh (September) poll data, to compute the worst-case (red) and best-case (blue) posterior distributions for Clinton winning the USA general election. At the moment there is quite a large uncertainty, but it is still in favour of Clinton: the probability of winning is between 0.78 and 0.95. …

Continue reading Clinton vs. Trump 23rd September 2016

We continue our adventure in the Bayesian USA 2016 election forecast through near-ignorance priors. Today I will show how to compute the lower and upper probabilities for Clinton of winning the 2016 general election. First, we load the lower and upper probabilities for Clinton of winning in every single State (see bayesian-winning-lower-and-upper/), as well as …
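
The aggregation step described in the excerpt (from per-State winning probabilities to a distribution over Clinton's electoral votes) can be sketched by Monte Carlo. The three States and their probabilities below are illustrative placeholders, not the post's actual numbers, and a full analysis would cover all 51 States.

```python
import numpy as np

# (electoral votes, P(Clinton wins the State)) -- illustrative values only
states = {
    "Nevada":  (6,  0.60),
    "Florida": (29, 0.55),
    "Ohio":    (18, 0.45),
}

def electoral_vote_samples(states, n=100_000, seed=1):
    """Sample Clinton's electoral-vote total under independent State outcomes."""
    rng = np.random.default_rng(seed)
    votes = np.array([v for v, _ in states.values()])
    probs = np.array([p for _, p in states.values()])
    wins = rng.random((n, len(states))) < probs   # one Bernoulli draw per State
    return wins.astype(int) @ votes               # Clinton's total in each draw

samples = electoral_vote_samples(states)
mean_votes = samples.mean()   # with all 51 States one would also report P(votes >= 270)
```

Running this with lower and upper per-State probabilities yields the worst-case and best-case distributions over electoral votes.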

Continue reading General Poll for US Presidential Election 2016

This post is about how to perform a Bayesian analysis of election polls for the USA 2016 presidential election. In previous posts, I have discussed how to make a poll for a single State (Nevada as an example). Here we will use some simple Python functions to compute the probability for Clinton of winning in all 51 …
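
For a single State, the near-ignorance bounds can be sketched with a set of Beta priors Beta(s·t, s·(1−t)), with fixed prior strength s and t ranging over (0, 1): the posterior probability that Clinton's share exceeds 0.5 is minimised as t → 0 and maximised as t → 1. This is only a sketch of the construction; the two-party counts and the value of s below are invented.

```python
from scipy.stats import beta

def winning_prob_bounds(n_clinton, n_trump, s=2.0):
    """Lower/upper posterior P(theta > 0.5) over the prior set
    Beta(s*t, s*(1-t)), t in (0, 1), with fixed prior strength s."""
    lower = beta.sf(0.5, n_clinton,     n_trump + s)   # t -> 0: prior against Clinton
    upper = beta.sf(0.5, n_clinton + s, n_trump)       # t -> 1: prior for Clinton
    return lower, upper

lo, hi = winning_prob_bounds(260, 240)   # invented counts from one State's poll
```

Repeating this for every State gives the per-State lower and upper winning probabilities that feed the general-election computation.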

Continue reading Bayesian winning lower and upper probabilities in all 51 States

In a previous post, we saw how to perform polls for a single State using poll data from KTNV/Rasmussen. Here we are going to see how to combine polls from different sources. Let us consider again the Nevada polls. Poll Date Sample MoE Clinton (D) Trump (R) Johnson (L) Spread 0 RCP Average 7/7 – …
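
The fusion step can be sketched in the scalar case: covariance intersection combines two estimates whose cross-correlation is unknown, and by design the fused variance is never more optimistic than the best single source. The poll means and variances below are invented, and the inverse-variance weight is just one common heuristic choice for omega, not the post's exact rule.

```python
def covariance_intersection(x1, v1, x2, v2, omega=None):
    """Scalar covariance intersection of two (mean, variance) estimates
    with unknown cross-correlation; omega must lie in [0, 1]."""
    if omega is None:
        # heuristic: weight each source by its inverse variance
        omega = (1 / v1) / (1 / v1 + 1 / v2)
    inv_v = omega / v1 + (1 - omega) / v2
    v = 1.0 / inv_v
    x = v * (omega * x1 / v1 + (1 - omega) * x2 / v2)
    return x, v

# two invented poll estimates of Clinton's share in Nevada
x, v = covariance_intersection(0.46, 0.03**2, 0.44, 0.04**2)
```

The fused mean lands between the two polls, while the fused variance stays at least as large as the smaller input variance, which is the consistency guarantee that motivates covariance intersection when poll samples may overlap.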

Continue reading Combining polls data from different sources using covariance intersection

I will show how to apply the models described in a-description-of-bayesian-near_ignorance_prior to predict USA 2016 election results in Nevada. The poll data are from www.realclearpolitics.com, in particular the KTNV/Rasmussen poll (see below). In a future post, I will discuss how to take all three polls into account. We start by importing the data. In [4]: import pandas …

Continue reading Nevada data poll with near-ignorance priors and Python

Election Poll for a single state In this and following posts, I’ll present a way to compute Bayesian predictions for the result of the USA 2016 election based on election poll data and near-ignorance prior models. This model is described in detail here: A. Benavoli and M. Zaffalon. “Prior near ignorance for inferences in the k-parameter …

Continue reading A description of a Bayesian near-ignorance model for USA election polls

This post shows how to use the IDP statistical package to compare sport performance. As a case study, I have considered (just for fun) a comparison of my climbing performance in two consecutive editions (2013 and 2014) of the “Tre Valli Bresciane” cycling race. The following table reports my ascent times on 6 different climbs on …

Continue reading Comparing climbing performance

Battle for White House 2012 – 2 weeks before election The statistical analysis has been performed using the most recent (2 weeks before the election) polling data from realclearpolitics. The dataset can be downloaded here, while the Matlab code can be downloaded here. The minimum sample size is around 500 people. The analysis employs an imprecise …

Continue reading Battle for White House 2012