Statistical inference often begins with a model. In the classical setting, we might look at a histogram of data and say, "This looks like a bell curve." We then assume the data comes from a Normal distribution and focus our efforts on finding the best mean ($\mu$) and variance ($\sigma^2$).
But what if we don't want to make that assumption? What if the data has two peaks, or a skew that no standard distribution captures perfectly?
This is the domain of Density Estimation, where we are concerned with making inferences about an unknown distribution on the basis of an observed sample. Instead of fitting parameters to a fixed curve, we want the data to tell us what the curve should look like.
In this post, we will explore the Dirichlet Process (DP), the most popular prior model used for this task in the Bayesian framework.
The Bayesian Nonparametric Approach
In Bayesian inference, if we want to estimate an unknown parameter, we must place a prior on it.
- If the unknown is a number (like a coin bias), we might use a Beta distribution.
- If the unknown is a vector, we might use a Multivariate Normal.
In density estimation, our unknown parameter is the distribution itself. Since a distribution is a function, it is an infinite-dimensional object. To perform Bayesian inference here, we need a probability model defined over the space of all possible probability measures.
This is called a Bayesian Nonparametric (BNP) prior. The Dirichlet Process, introduced by Ferguson in 1973, is the fundamental building block of BNP.
Defining the Dirichlet Process
Let's denote our unknown random probability measure as $F$.
We say that $F$ follows a Dirichlet Process with precision parameter $M$ and base measure $\alpha_0$, denoted as:

$$F \sim \mathrm{DP}(M, \alpha_0)$$
But what does it mean for a "measure" to be random? Ferguson provided a definition based on finite partitions of the space.
Definition: Let $\mathcal{X}$ be our sample space (e.g., the real line). A random probability measure $F$ is a DP if, for every finite partition of the space into sets $B_1, \dots, B_k$, the vector of probabilities assigned to these sets follows a finite Dirichlet distribution:

$$\big(F(B_1), \dots, F(B_k)\big) \sim \mathrm{Dirichlet}\big(M\alpha_0(B_1), \dots, M\alpha_0(B_k)\big)$$
Here:
- $\alpha_0$ (Base Measure): This is our "best guess" or centering distribution. It determines where the mass is located on average.
- $M$ (Precision Parameter): This controls how tightly the random measure concentrates around $\alpha_0$.
This definition is powerful because it reduces an infinite-dimensional problem back to a familiar finite-dimensional distribution—the Dirichlet distribution.
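To make this concrete, here is a minimal Python sketch (the three-set partition and the value $M = 10$ are arbitrary choices of mine) that draws the vector $\big(F(B_1), F(B_2), F(B_3)\big)$ for a partition of the real line, taking $\alpha_0 = N(0, 1)$:

```python
import numpy as np
from scipy.stats import norm

# Partition the real line into B1 = (-inf, -1], B2 = (-1, 1], B3 = (1, inf).
M = 10.0
p = np.array([
    norm.cdf(-1),                # alpha_0(B1) under alpha_0 = N(0, 1)
    norm.cdf(1) - norm.cdf(-1),  # alpha_0(B2)
    1.0 - norm.cdf(1),           # alpha_0(B3)
])

# Ferguson's definition: (F(B1), F(B2), F(B3)) ~ Dirichlet(M * alpha_0(B_i)).
draws = np.random.dirichlet(M * p, size=5)
print(draws)               # each row is one random measure evaluated on the partition
print(draws.sum(axis=1))   # each row sums to 1
```

Each row of `draws` is a different random measure's mass on the three sets; a small $M$ makes the rows scatter widely around `p`, while a large $M$ pins them close to it.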
Key Properties
To build intuition, let's look at the mean and variance of this process. For any specific set $B$:

$$E\big[F(B)\big] = \alpha_0(B), \qquad \mathrm{Var}\big(F(B)\big) = \frac{\alpha_0(B)\big(1 - \alpha_0(B)\big)}{M + 1}$$
These equations reveal the roles of our parameters:
- The Mean: On average, the random measure looks exactly like the base measure $\alpha_0$.
- The Variance: The variance decreases as $M$ increases.
  - If $M \to \infty$, the variance goes to zero and the random measure becomes identical to $\alpha_0$.
  - If $M$ is small, $F$ can deviate significantly from $\alpha_0$, allowing the model to adapt more freely to the data (a numerical check follows below).
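As a quick sanity check of these formulas, note that Ferguson's definition implies the marginal $F(B) \sim \mathrm{Beta}\big(M\alpha_0(B),\, M(1 - \alpha_0(B))\big)$, which we can simulate directly (here $\alpha_0(B) = 0.3$ and $M = 5$ are arbitrary choices):

```python
import numpy as np

M, p = 5.0, 0.3   # precision M and alpha_0(B), chosen arbitrarily

# Marginally, F(B) ~ Beta(M * p, M * (1 - p)).
samples = np.random.beta(M * p, M * (1 - p), size=1_000_000)

print(samples.mean())          # ~ 0.3, matching E[F(B)] = alpha_0(B)
print(samples.var())           # ~ 0.035
print(p * (1 - p) / (M + 1))   # exact formula: 0.3 * 0.7 / 6 = 0.035
```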
A Surprising Discreteness
Perhaps the most important property of the DP is the nature of the measures it generates. Even if our base measure $\alpha_0$ is smooth and continuous (like a Normal distribution), any realization $F$ drawn from a DP is discrete.
Specifically, with probability 1, $F$ can be written as an infinite weighted sum of point masses:

$$F = \sum_{h=1}^{\infty} w_h \, \delta_{\theta_h}$$
- The locations $\theta_h$ are points drawn from the base measure $\alpha_0$.
- The weights $w_h$ sum to 1.
- $\delta_{\theta_h}$ represents a "Dirac mass", or a spike of probability, at $\theta_h$.
This means a distribution drawn from a DP doesn't look like a smooth curve; it looks like a staircase (a cumulative distribution function of discrete points). This discreteness is actually a feature, not a bug—it naturally leads to clustering, making the DP excellent for mixture models.
Constructing a DP: The Stick-Breaking Process
We know $F$ is discrete, but how do we actually generate it? How do we determine those infinite weights $w_h$?
Sethuraman (1994) provided a constructive definition known as Stick-Breaking. It gives us an explicit recipe to simulate $F$.
Imagine a stick of unit length (representing total probability 1). We want to break it into infinitely many pieces to get our weights $w_1, w_2, \dots$:

- Break the first piece: Generate a random fraction $v_1$ from a Beta distribution, $v_1 \sim \mathrm{Beta}(1, M)$.
  - The first weight is $w_1 = v_1$.
  - The remaining stick has length $1 - v_1$.
- Break the second piece: Generate another fraction $v_2 \sim \mathrm{Beta}(1, M)$.
  - The second weight is a fraction of what was left: $w_2 = v_2(1 - v_1)$.
  - The remaining stick is now $(1 - v_1)(1 - v_2)$.
- Repeat infinitely: In general, the weight $w_h$ is the fraction $v_h$ of the remainder left by the previous breaks:

$$w_h = v_h \prod_{\ell < h} (1 - v_\ell), \qquad v_h \overset{\text{iid}}{\sim} \mathrm{Beta}(1, M)$$

Finally, we assign each weight $w_h$ to a random location $\theta_h$ drawn independently from $\alpha_0$.
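Here is a minimal sketch of the construction in Python. Since we cannot store infinitely many weights, the loop truncates once the leftover stick drops below a small tolerance `eps`; the function name and the truncation rule are my own choices, not part of Sethuraman's definition:

```python
import numpy as np

def stick_breaking(M, base_sampler, eps=1e-6, rng=None):
    """Draw one (truncated) realization F = sum_h w_h * delta_{theta_h} of DP(M, alpha_0)."""
    rng = rng or np.random.default_rng()
    weights, locations = [], []
    remaining = 1.0                          # length of the unbroken stick
    while remaining > eps:
        v = rng.beta(1.0, M)                 # v_h ~ Beta(1, M): fraction to break off
        weights.append(v * remaining)        # w_h = v_h * prod_{l<h} (1 - v_l)
        locations.append(base_sampler(rng))  # theta_h ~ alpha_0
        remaining *= 1.0 - v
    return np.array(weights), np.array(locations)

# Base measure alpha_0 = N(0, 1):
w, theta = stick_breaking(M=5.0, base_sampler=lambda rng: rng.standard_normal())
print(len(w), w.sum())   # number of atoms kept; total weight is at least 1 - eps
```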
Interactive Simulation
Below is a simulation of the Dirichlet Process using the Stick-Breaking construction.
We assume the base measure is a Standard Normal distribution, $\alpha_0 = N(0, 1)$.
- The White Line represents the CDF of the base measure $\alpha_0$.
- The Grey Lines are 15 different realizations of the random measure $F$.
Figure: 15 samples from $\mathrm{DP}(M, \alpha_0)$ with $\alpha_0 = N(0, 1)$.
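A static version of this figure can be reproduced with matplotlib along the following lines (a self-contained sketch; the seed, grid, and $M = 5$ are arbitrary choices, and the base CDF is drawn in black rather than white since matplotlib defaults to a white background):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

rng = np.random.default_rng(0)   # arbitrary seed
M = 5.0                          # try 1 or 500 to see the effects described below

grid = np.linspace(-3, 3, 400)
for _ in range(15):
    # One truncated stick-breaking draw of F (as in the sketch above).
    weights, atoms, remaining = [], [], 1.0
    while remaining > 1e-4:
        v = rng.beta(1.0, M)
        weights.append(v * remaining)
        atoms.append(rng.standard_normal())   # theta_h ~ alpha_0 = N(0, 1)
        remaining *= 1.0 - v
    w, theta = np.array(weights), np.array(atoms)
    # CDF of the discrete measure: total weight of atoms at or below each x.
    cdf = (w[None, :] * (theta[None, :] <= grid[:, None])).sum(axis=1)
    plt.step(grid, cdf, color="grey", alpha=0.6)

plt.plot(grid, norm.cdf(grid), color="black", lw=2)  # CDF of the base measure
plt.title(f"15 draws from DP(M={M:g}, N(0, 1))")
plt.show()
```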
What to Look For
Try changing the precision parameter M:
- M = 1 (Low Precision): The stick is broken into a few large chunks almost immediately; often one or two weights will be massive. This results in a "blocky" CDF that looks nothing like the smooth Normal curve. The variance is high.
- M = 500 (High Precision): The stick is broken into tiny splinters, and it takes many, many atoms to sum to 1. Because the atoms are drawn from $\alpha_0$, their cumulative behavior creates a very smooth curve that hugs the white line tightly (quantified below).
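This contrast can be made quantitative: since $E[1 - v_h] = M/(M + 1)$ for $v_h \sim \mathrm{Beta}(1, M)$, the expected leftover stick after $h$ breaks is $(M/(M+1))^h$. A back-of-the-envelope computation of how many atoms it takes, in expectation, to cover 99% of the mass:

```python
import math

# Solve (M / (M + 1))**h <= 0.01 for h, the number of breaks needed
# before less than 1% of the stick remains (in expectation).
for M in (1, 5, 50, 500):
    h = math.ceil(math.log(0.01) / math.log(M / (M + 1)))
    print(f"M = {M:4d}: ~{h} atoms cover 99% of the mass")
```

Roughly 7 atoms suffice for $M = 1$, while $M = 500$ needs over 2,000, which is why the high-precision draws look so smooth.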
This visualization demonstrates the "large weak support" of the DP: while every individual sample is jagged and discrete, the process places probability mass everywhere that $\alpha_0$ does. Given enough data, a DP model can approximate essentially any distribution supported where $\alpha_0$ puts mass.