Since we're averaging everything into a single estimate, this approach works poorly with only relative sensors. Notice how in the picture the two measurements don't overlap, so it makes no sense to place the estimate somewhere in between. Unless an absolute measurement arrives to reduce the uncertainty, the estimate drifts.
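A minimal sketch of that drift, assuming 1-D motion with zero-mean Gaussian odometry noise (all numbers are illustrative): integrating relative measurements turns the noise into a random walk whose error standard deviation grows like √n.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D dead reckoning: each step we command 1.0 m of motion,
# but the odometry reading is corrupted by zero-mean Gaussian noise.
step_noise_std = 0.05
n_steps = 1000

true_pos = np.cumsum(np.ones(n_steps))                  # ground truth
measured_steps = 1.0 + rng.normal(0, step_noise_std, n_steps)
estimated_pos = np.cumsum(measured_steps)               # integrate relative sensor

# The error is a random walk: its variance grows linearly with the
# number of steps, so the estimate drifts without bound unless an
# absolute measurement resets it.
error = estimated_pos - true_pos
print(f"error after {n_steps} steps: {error[-1]:+.2f} m")
print(f"predicted error std: {step_noise_std * np.sqrt(n_steps):.2f} m")
```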

The robot knows what its environment looks like but not where it is within it, so there's no starting point for the Kalman Filter.
Initializing with a wide Gaussian distribution doesn't help either; the estimate only becomes more uncertain over time.

It's similar to being lost in the wilderness, where you can estimate your location by triangulating against known landmarks.
Here, we use a LiDAR and compare what we see against the floorplan.

<aside> 🧠
Limitations of these models in general:
- You can't determine your location if multiple rooms have identical layouts (perceptual aliasing). This remains a hard, open problem and is worth exploring.
- Susceptible to occlusions.
- Sensitive to orientation: the model offers good granularity, but the objective isn't smooth when you optimize for the correct localization.

</aside>
The Likelihood Field Model converts the floorplan into a distance map: every cell stores its distance to the nearest obstacle. This representation offers several advantages.
For each LiDAR bearing measurement (see the sketch right after this list):
1. Project the beam endpoint into map coordinates using the robot's pose and the beam's range and bearing.
2. Look up the endpoint's distance to the nearest obstacle in the precomputed distance map.
3. Score the beam under a zero-mean Gaussian on that distance; the per-beam scores are multiplied (or summed in log space) to score the pose.
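A minimal sketch of that loop, assuming a precomputed distance field (its construction is shown after the example below). The function name, parameters, and the sigma value are illustrative, and the full likelihood field model adds uniform mixture terms for random and max-range readings that are omitted here:

```python
import numpy as np

def beam_likelihood(pose, ranges, bearings, dist_field, resolution, sigma_hit=0.2):
    """Score a LiDAR scan against a precomputed likelihood field.

    pose       -- (x, y, theta) in map coordinates (meters, radians)
    ranges     -- measured range for each beam (meters)
    bearings   -- beam angle relative to the robot heading (radians)
    dist_field -- 2-D array: distance (in cells) to the nearest obstacle
    resolution -- meters per grid cell
    """
    x, y, theta = pose
    log_q = 0.0
    for z, phi in zip(ranges, bearings):
        # 1. Project the beam endpoint into map coordinates.
        ex = x + z * np.cos(theta + phi)
        ey = y + z * np.sin(theta + phi)
        # 2. Look up the endpoint's distance to the nearest obstacle.
        i, j = int(ey / resolution), int(ex / resolution)
        if not (0 <= i < dist_field.shape[0] and 0 <= j < dist_field.shape[1]):
            continue  # endpoint fell off the map; skip this beam
        d = dist_field[i, j] * resolution
        # 3. The closer the endpoint is to an obstacle, the more likely
        #    the measurement: score it under a zero-mean Gaussian on d.
        log_q += -0.5 * (d / sigma_hit) ** 2
    return log_q
```

Because the field varies smoothly with position, the score changes gradually as the pose hypothesis moves, which is exactly what makes it easier to optimize than a binary wall/no-wall lookup.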
<aside> 🧠
Likelihood Field Example
Consider a 5x5 grid representing a room with walls (marked as 1):
1 1 1 1 1
1 0 0 0 1
1 0 0 0 1
1 0 0 0 1
1 1 1 1 1
The likelihood field (distance to nearest wall) would look like:
0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 1.0 0.0
0.0 1.0 2.0 1.0 0.0
0.0 1.0 1.0 1.0 0.0
0.0 0.0 0.0 0.0 0.0
Each value is the distance to the nearest wall. The closer a value is to 0, the closer that cell is to a wall, and the more likely a beam endpoint there is a real wall hit. This creates a smooth gradient that's easier to optimize against than binary wall/no-wall values.
</aside>
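The field in the example is just a Euclidean distance transform of the occupancy grid, computed once per map. A minimal sketch using SciPy, with the grid values mirroring the example above:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Occupancy grid from the example: 1 = wall, 0 = free space.
grid = np.array([
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
])

# distance_transform_edt gives, for every nonzero cell, the Euclidean
# distance to the nearest zero cell -- so invert the grid to measure
# distance to the nearest wall instead.
dist_field = distance_transform_edt(1 - grid)
print(dist_field)
# [[0. 0. 0. 0. 0.]
#  [0. 1. 1. 1. 0.]
#  [0. 1. 2. 1. 0.]
#  [0. 1. 1. 1. 0.]
#  [0. 0. 0. 0. 0.]]
```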
Key benefits of this approach: