Carl Dehlin
Apr 2, 2024

Robust estimation and the RANSAC algorithm

This is an introduction to robust estimation methods and the RANSAC algorithm for fitting a plane to a point cloud. The target audience is engineers and software developers working with 3D data and parameter estimation.

When dealing with 3D data, it is common to extract structural geometric information such as planes. Planes are abundant in man-made environments, and finding them helps solve many downstream tasks.

To estimate a plane in a point cloud, we first have to ask ourselves: what is the best plane? Different answers to that question lead to different algorithms for solving the problem.

The first thing to notice is that, in general, a point cloud contains many planes. Which are the true planes? To model this problem, imagine a graphics pipeline where you describe a scene mathematically. To get a “measurement” of this scene, we can either render it onto the screen as an image or sample from it to get a point cloud. Whatever measurement procedure we decide to use, we should add noise to it to make it realistic.

Now, given a measurement of the scene, let's say that we want to extract the plane that “fits as many points as possible”. We will refer to this particular plane as the dominant plane, and the points that do not fit it as outliers. In statistics, outliers are data points that are not explained by your model. Hence, when fitting a plane to a set of points, the points that actually belong to the plane (in the sense that they were created by sampling from it and adding noise) are inliers to the plane model, while any point not belonging to the plane cannot be described by that generative model of sampling from the plane and adding noise.

When working with data containing outliers, least squares estimation of model parameters breaks down, since it assumes that all points are inliers. To deal with outliers, one has to use so-called robust estimation methods.
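
To see why, it helps to look at the non-robust baseline first. Below is a minimal sketch of a least squares plane fit in Python using NumPy (the function name and signature are my own, not from any particular library). It minimizes the sum of squared point-to-plane distances over all points, so a single distant outlier can pull the estimated plane arbitrarily far from the true one.

```python
import numpy as np

def fit_plane_lsq(points: np.ndarray) -> tuple[np.ndarray, float]:
    """Least squares plane fit of an (N, 3) array. Returns (unit normal n,
    offset d) such that the plane is the set of points x with n @ x + d = 0."""
    centroid = points.mean(axis=0)
    # The normal is the direction of least variance: the right-singular
    # vector belonging to the smallest singular value of the centered data.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid
```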

RANSAC

RANSAC (RANdom SAmple Consensus) is a robust, randomized algorithm for estimating a set of parameters from data containing outliers.

The algorithm has two key characteristics:

  1. It is robust
  2. It is randomized

It is robust in the sense that it can deal with data containing outliers; in fact, it can handle an arbitrarily large fraction of outliers (at the cost of more iterations, as we will see below). The fact that it is randomized means that you will get different results every time you run it (unless you reset the seed of the pseudo-random generator). That might seem problematic at first, but going from deterministic to randomized algorithms can have a substantial impact on computational complexity: an algorithm that takes exponential time in a deterministic setting can become feasible when randomized. It is worth noting that a randomized algorithm with a fixed seed becomes deterministic in practice. The point is that using a (pseudo-)random number generator decouples our algorithms from the inductive bias of any fixed enumeration order.

For example, let's say that you want to write a deterministic algorithm for finding the plane that contains the maximum number of points within a predefined distance along the plane's normal vector. The naive way is to walk through every subset of points, fit a plane to each subset using least squares, compute a score, and pick the best result. The score can be very simple, such as counting the number of points satisfying the predefined point-to-plane distance criterion (see the sketch below). This algorithm is obviously infeasible, since the set of all subsets of a set of size $N$ (a.k.a. the power set) has $2^N$ elements.
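
For concreteness, here is a sketch of such a score in Python; the threshold parameter and names are my own choices:

```python
import numpy as np

def count_inliers(points: np.ndarray, normal: np.ndarray, d: float,
                  threshold: float) -> int:
    """Count the points whose orthogonal distance to the plane
    n @ x + d = 0 is at most `threshold` (assumes a unit normal)."""
    distances = np.abs(points @ normal + d)
    return int((distances <= threshold).sum())
```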

Taking a step back, ask yourself this: what is the minimum subset size needed to estimate a plane in the first place? You only need 3 points! If we walk through all subsets of size 3, we need on the order of $N^3$ iterations to cover every triplet. In other words, the algorithm has cubic time complexity. This is on par with a naive implementation of dense matrix multiplication (although our plane estimation algorithm is much slower, since the inner loop performs a model fit and a scoring pass rather than a single multiply-add operation). For smaller point clouds this will work fine. But how small?
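
Three non-collinear points determine a plane exactly, so no least squares is needed for a minimal sample. A sketch:

```python
import numpy as np

def plane_from_triplet(p0: np.ndarray, p1: np.ndarray, p2: np.ndarray):
    """Exact plane through three points. Returns (unit normal n, offset d),
    or None if the points are (nearly) collinear."""
    normal = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(normal)
    if norm < 1e-12:  # degenerate sample: the points do not span a plane
        return None
    normal = normal / norm
    return normal, -normal @ p0
```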

Modern iOS devices equipped with a LiDAR sensor provide 50K points per frame. That would require about 100 trillion iterations ($50{,}000^3 \approx 10^{14}$) with the naive estimation algorithm described above! Hence, this is not a feasible solution either.

This is where randomization comes in: what if, instead of walking through the subsets deterministically, we pick random triplets? It is quite easy to derive the probability of finding the dominant plane after $K$ iterations. Assume that the probability of picking a point on the dominant plane is $P$; setting $P = 50\%$ means that the dominant plane contains half of the points in the scene. The probability of selecting an all-inlier triplet is approximately $P^3$ (this is the probability with replacement, which is nearly the same as without replacement when the number of points is large). The probability of selecting a triplet containing at least one outlier is therefore $1 - P^3$, and the probability of selecting only such triplets in $K$ iterations is $(1 - P^3)^K$. This gives the probability of selecting at least one all-inlier triplet within $K$ iterations:

$$P_K = 1 - (1 - P^3)^K$$

Turning it around: how many iterations do we need to guarantee a certain probability $Q$ of success? You can easily work out the math yourself (solve $P_K \geq Q$ for $K$), but here is the answer:

$$K = \left\lceil \frac{\log (1 - Q)}{\log(1 - P^3)} \right\rceil$$
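
In code, this is a one-liner, taking the ceiling to stay on the safe side:

```python
import math

def ransac_iterations(q: float, p: float, sample_size: int = 3) -> int:
    """Smallest K such that the probability of drawing at least one
    all-inlier sample of size `sample_size` reaches q, given inlier
    ratio p."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p ** sample_size))
```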

Evaluating this for $Q=99\%$ and $P=50\%$ gives $K=35$. That is a lot less than 100 trillion iterations! Even better, the success probability increases sharply with the number of iterations: setting $Q = 99.999\%$ gives $K=87$, and additionally lowering the inlier probability to $P=25\%$ gives $K=732$. Hence, you can find a dominant plane covering only 25% of the scene with almost 100% certainty in less than 1000 iterations. This is what makes RANSAC, despite its simplicity, such a powerful algorithm!
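
Putting the pieces together, here is a minimal sketch of a RANSAC plane fitter, reusing the helper functions above (again, the names and default parameters are my own choices, not a reference implementation):

```python
import numpy as np

def ransac_plane(points: np.ndarray, threshold: float,
                 q: float = 0.99999, p: float = 0.25,
                 rng: np.random.Generator | None = None):
    """Estimate the dominant plane in an (N, 3) point cloud.

    threshold: inlier point-to-plane distance
    q, p:      target success probability and assumed inlier ratio,
               used to choose the number of iterations
    """
    rng = rng or np.random.default_rng()
    best_plane, best_score = None, -1
    for _ in range(ransac_iterations(q, p)):  # e.g. 732 iterations
        idx = rng.choice(len(points), size=3, replace=False)
        plane = plane_from_triplet(*points[idx])
        if plane is None:  # collinear sample, draw again
            continue
        score = count_inliers(points, *plane, threshold)
        if score > best_score:
            best_plane, best_score = plane, score
    return best_plane, best_score
```

A common refinement, not shown here, is to refit the plane with least squares on the winning inlier set, which reduces the noise of the three-point estimate.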

Let's discuss how our 3D software can speed up your operations.
Contact me