The problem is to model prior ignorance about statistical parameters through a set of prior distributions M or, equivalently, through the upper and lower expectations generated by M. The lower and upper expectations of a bounded real-valued function g on a possibility space, denoted by LE(g) and UE(g), are respectively the infimum and the supremum of the expectations Ep(g) over the probability measures p in M (if M is assumed to be closed and convex, it is fully determined by its upper and lower expectations). In choosing a set M to model prior near-ignorance, the main aim is to generate upper and lower expectations satisfying LE(g) = inf g and UE(g) = sup g on a specific class of bounded real-valued functions of interest g. This means that the only available information about E(g) is that it belongs to [inf g, sup g], which is equivalent to stating a condition of complete prior ignorance about the value of g.

Modelling a state of prior ignorance about the value w of a random variable W is not the only requirement for M: the set should also lead to non-vacuous posterior inferences. Posterior inferences are vacuous if the lower and upper expectations of every gamble of interest g coincide with the infimum and the supremum of g, respectively. In that case our prior beliefs do not change with experience, i.e., there is no learning from data. The issue is thus to define a set M of distributions that models prior near-ignorance but does not lead to vacuous inferences. Using such a model, we can develop parametric and nonparametric Bayesian near-ignorance models.
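One well-known parametric instance of this idea is the imprecise Beta model for Bernoulli data, in which M is the set of Beta(st, s(1-t)) priors with t ranging over (0, 1) and s > 0 a fixed strength parameter. The sketch below (with an assumed illustrative value s = 2; the function name posterior_bounds is ours) shows that the prior bounds on the chance of success are the vacuous interval [0, 1], i.e. near-ignorance about the gamble g(theta) = theta, while the posterior bounds after observing k successes in n trials are non-vacuous:

```python
# Sketch: imprecise Beta model, a standard near-ignorance prior set for
# Bernoulli data. The gamble of interest is g(theta) = theta.
# Under a Beta(s*t, s*(1-t)) prior, the posterior expectation of theta
# after k successes in n trials is (k + s*t) / (n + s); taking the
# infimum (t -> 0) and supremum (t -> 1) over the set M gives the bounds.

s = 2.0  # prior "strength" hyperparameter (illustrative choice)

def posterior_bounds(k, n, s=s):
    """Lower/upper posterior expectation of theta over the prior set M."""
    lower = k / (n + s)        # limit of (k + s*t)/(n + s) as t -> 0
    upper = (k + s) / (n + s)  # limit of (k + s*t)/(n + s) as t -> 1
    return lower, upper

# Prior (k = 0, n = 0): bounds are [0, 1], i.e. vacuous (near-ignorance).
print(posterior_bounds(0, 0))   # (0.0, 1.0)

# After 7 successes in 10 trials the interval shrinks: learning occurs.
print(posterior_bounds(7, 10))  # (0.5833..., 0.75)
```

The width of the posterior interval is s / (n + s), which shrinks to zero as n grows, so the model starts from (near-)ignorance yet still learns from data.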