After watching this video to get started and reading
this article for additional information, I just implemented EPA in 2d.
I have one remark about the video (about the 2d part): I don't think it's possible to get a concave polytope, because GJK should always return a simplex whose points are on the perimeter of the Minkowski difference. And when EPA adds a new point, it uses the GJK support function, which also returns a point on the perimeter.
The problems I ran into when implementing EPA (in 2d) were different:
- GJK can return a simplex where one of the points is the origin. But you can early-out in this case, because it means the shapes are just touching and there is no overlap to "correct" (see the first sketch after this list).
- (I guess) floating-point precision errors in two places:
1. When I use the support function to search for a new point in the direction of the normal of the closest edge to the origin.
If there is no farther point, the support function returns one of the two points of the closest edge, which we dot with the normal of the edge to get the distance from the origin to the edge (we do that in every case, but it is only a problem when there is no new point). Depending on which point was returned, that distance might not exactly equal the minimum distance computed before, so we can fail to finish the loop and add a point that was already in the polytope. To solve this you can either use a tolerance value (0.00001f was not enough in my case) or compare against both points of the edge and pass if one of them has the right distance (in my case distance - minDistance always returns 0.0f); see the second sketch after this list.
2. When searching for the closest edge while the origin lies on one of the simplex edges.
I have an assert in the code to make sure the distanceFromOriginToEdge is >= 0, but when the origin is on one of the edges, it is sometimes less than zero (by a very tiny amount). In practice it should not matter as long as the normal points toward the outside of the polytope; the second sketch below just relaxes the assert with a small tolerance.
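Here is a minimal sketch of the early-out for the touching case. The `Vec2` type, the function name and the tolerance are my own (hypothetical, not from the video or the article):

```cpp
struct Vec2 { float x, y; }; // hypothetical minimal vector type

static float lengthSq(const Vec2& v) { return v.x * v.x + v.y * v.y; }

// If any simplex vertex is (numerically) the origin, the shapes are just
// touching: the penetration depth is zero and there is nothing to correct,
// so EPA can be skipped entirely.
bool isTouchingContact(const Vec2* simplex, int count)
{
    const float epsilon = 1e-6f; // assumed tolerance, tune for your units
    for (int i = 0; i < count; ++i)
        if (lengthSq(simplex[i]) < epsilon * epsilon)
            return true;
    return false;
}
```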
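And here is a sketch of how the two precision workarounds fit into the EPA loop. All the names (`findClosestEdge`, `penetrationVector`, `supportFn`) are mine, it assumes a counter-clockwise polytope, and it illustrates the workarounds described above rather than the exact code from the video or article:

```cpp
#include <cassert>
#include <cfloat>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; }; // same hypothetical Vec2 as above

static float dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

struct ClosestEdge { int index; Vec2 normal; float distance; };

// Issue 2: when the origin lies exactly on a simplex edge, the computed
// distance can come out as a tiny negative number, so the sanity assert
// needs a small tolerance instead of a strict >= 0 check.
static ClosestEdge findClosestEdge(const std::vector<Vec2>& polytope)
{
    ClosestEdge best = { -1, { 0.0f, 0.0f }, FLT_MAX };
    for (int i = 0; i < (int)polytope.size(); ++i)
    {
        Vec2 a = polytope[i];
        Vec2 b = polytope[(i + 1) % polytope.size()];
        Vec2 e = { b.x - a.x, b.y - a.y };
        Vec2 n = { e.y, -e.x };          // outward normal for a CCW polytope
        float len = std::sqrt(dot(n, n));
        n.x /= len; n.y /= len;
        float d = dot(a, n);             // distance from the origin to the edge
        assert(d >= -1e-5f);             // tolerate tiny negatives from rounding
        if (d < best.distance)
            best = { i, n, d };
    }
    return best;
}

// Issue 1: termination. If the support function cannot find a farther point
// it returns one of the closest edge's own endpoints, and the projection of
// that endpoint may not exactly match edge.distance. So besides a tolerance,
// we also compare the new distance against the projections of both endpoints.
template <typename SupportFn>
Vec2 penetrationVector(std::vector<Vec2>& polytope, SupportFn supportFn)
{
    for (;;)
    {
        ClosestEdge edge = findClosestEdge(polytope);
        Vec2 p = supportFn(edge.normal); // Minkowski-difference support point,
                                         // the same function GJK uses
        float d = dot(p, edge.normal);

        Vec2 a = polytope[edge.index];
        Vec2 b = polytope[(edge.index + 1) % polytope.size()];
        // Exact comparison is intentional: if p *is* one of the endpoints,
        // its projection matches one of these two values bit for bit.
        bool onEdge = (d == dot(a, edge.normal)) || (d == dot(b, edge.normal));

        if (onEdge || d - edge.distance < 1e-4f)
            return { edge.normal.x * edge.distance, edge.normal.y * edge.distance };

        // Otherwise insert the new point between the edge's endpoints and loop.
        polytope.insert(polytope.begin() + edge.index + 1, p);
    }
}
```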
I have three questions about floating-point numbers, not directly related to EPA or GJK:
1. Can I assume that, given the same floating-point inputs, a set of operations will always give me the same result on the same computer?
2. Is there a way to compute the error range of floating-point computations? For example, if I know that I have 3 multiplies, 4 adds, and 1 divide that should add up to an expected value, can I know "how far" from that value I might end up? I suppose it depends on whether the operands are small numbers (0.000001f), big numbers (100000.0f), or a combination of both.
3. Is it worth using doubles instead of floats for computations if the inputs are floats cast to doubles?