Distance to the sine curve
Sometimes we want to know the distance from a point to a particular shape or surface. In the language of math, what we're looking for is a "signed distance function" (SDF) for that object. This is a function
f(x,y) that takes the coordinates of a point and returns the distance to the closest point on the object. Some objects have a well-defined inside and outside; in these cases the corresponding SDF will be negative in the inner region and positive in the outer region.
Signed distance functions have been derived for a variety of objects. There are SDFs for 2-dimensional shapes like the circle, square, line segment, and parabola, as well as 3-dimensional shapes like the cube, sphere, or capsule. Inigo Quilez maintains a gallery of 2D SDFs and 3D SDFs, complete with code samples.
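As a concrete example of how simple the simplest cases are, here is the classic SDF for a circle, written here in Python for illustration (the gallery's versions are in GLSL):

```python
import math

def sdf_circle(x, y, r):
    # Distance from (x, y) to the origin, minus the radius:
    # negative inside the circle, zero on it, positive outside.
    return math.hypot(x, y) - r
```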
For some of these objects the code for the corresponding SDF is quite simple, but for others it can be very complicated. Worse, it's often not obvious whether an object will have an elegant formulation as an SDF. As an example, the exact SDF for a heart shape is about 4 lines of code, whereas the one for the ellipse is 27 lines of complicated algebra. Sometimes the SDF for an object cannot even be stated in closed form. Instead, slow iterative methods like root finding must be used.
In this article I'll discuss ways of computing the distance to the curve
sin(w*x), where w controls the frequency. I'll walk through the classical approach of root finding, and then introduce my own closed-form approximation that has high accuracy.
The classical approach
The problem of obtaining the distance to the sine curve can be posed like so. Given the point
(s,t), we want to find the point
(x,y) that minimizes the following function, the squared Euclidean distance:
f(x,y) = (x-s)^2 + (y-t)^2
(x,y) must also satisfy this constraint which forces it to lie on the sine curve:
g(x,y) = sin(w*x) - y = 0
To solve this problem, we can use Lagrange multipliers. This is a technique that allows us to expand the problem into a system of three equations. The solutions to those equations will correspond to either an optimum, or a point on the constraint where
f is momentarily unchanging. To use this technique, we introduce a new variable
λ, and the function:
F(x,y,λ) = f(x,y) - λ*g(x,y)
We then take the derivative of F with respect to each of its three variables and require that the resulting gradient equals the zero vector. Solutions to this system of equations will correspond to possible optima.
∇F(x,y,λ) = (0,0,0)
The intuition here is that if some point is an optimum of
f(x,y) and satisfies
g(x,y) = 0, then the gradients of
f and
g should point in the same direction. That is, there exists a
λ such that
∇f(x,y) - λ*∇g(x,y) = (0,0). The first two equations of the system guarantee this case, and the last is just the original constraint.
Applying this to our definitions of
f(x,y) and
g(x,y) will give us the following system of equations:
2x - 2s - λ*w*cos(w*x) = 0
2y - 2t + λ = 0
y - sin(w*x) = 0
After some substitution and simplification (solve the second equation for λ = 2t - 2y, substitute y = sin(w*x) from the third, and apply the identity 2*sin(w*x)*cos(w*x) = sin(2w*x)), we get the following formula:
w*sin(2w*x) - 2t*w*cos(w*x) + 2x - 2s = 0
Yikes! That's not an easy formula to solve. Thankfully, we can leverage some knowledge about the sine wave to make solving this easier. Firstly, we can exploit the symmetries of the sine wave to constrain the location of the roots. The sine wave can be seen as a single curve, the principal half-cycle, mirrored and repeated through space:
Before we do any calculation, we can map our given point
(s,t) to live within the x range
[0, π/w]. If we do this, we can assume that the closest point will be on the principal half-cycle. The following code can do this:
q = π/w
cell = round(s/q)*q
sgn = sign(cos(s*w))
s = (s-cell)*sgn
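The same symmetry reduction, sketched in Python (the function name is my own; the logic mirrors the pseudocode above):

```python
import math

def fold_to_principal(s, w):
    # Snap s to the nearest multiple of pi/w, then mirror it so that
    # the closest point on the curve lies on the principal half-cycle.
    q = math.pi / w
    cell = round(s / q) * q
    sgn = math.copysign(1.0, math.cos(s * w))
    return (s - cell) * sgn
```

For example, with w = 1 the points 0.3 and π - 0.3 are mirror images of each other, and both fold to the same location.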
From here we can use a root finding method to search within the interval
[0, π/w]. Once we find the roots, we need to pick the one that corresponds to the true closest point, since not every root is a minimum. This can easily be done by comparing distances.
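A minimal version of this root-finding approach, in Python. It brackets sign changes of the optimality condition derived earlier on a sampled grid, refines each bracket with bisection, then keeps the closest candidate; a real implementation would restrict the search to the folded interval and use a faster iteration, so treat this purely as a sketch:

```python
import math

def dist_to_sine_rootfind(s, t, w, n=64, iters=40):
    # Optimality condition from the Lagrange multiplier derivation:
    # w*sin(2wx) - 2tw*cos(wx) + 2x - 2s = 0
    g = lambda x: w*math.sin(2*w*x) - 2*t*w*math.cos(w*x) + 2*x - 2*s
    dist = lambda x: math.hypot(x - s, math.sin(w*x) - t)
    lo, hi = -math.pi/w, math.pi/w
    xs = [lo + (hi - lo)*i/n for i in range(n + 1)]
    best = min(xs, key=dist)              # fallback: closest grid sample
    for a, b in zip(xs, xs[1:]):
        if g(a) * g(b) <= 0:              # sign change brackets a root
            for _ in range(iters):        # refine by bisection
                m = 0.5*(a + b)
                if g(a) * g(m) <= 0:
                    b = m
                else:
                    a = m
            x = 0.5*(a + b)
            if dist(x) < dist(best):      # keep the truly closest root
                best = x
    return dist(best)
```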
This method works, and you can see it employed on a few shadertoys that compute the signed distance to the sine curve. Here is one by iq and one by fabrice. Unfortunately, the root finding step is iterative. Depending on one's use-case, these kinds of iterative methods can be too slow to use.
In the next section, I will describe my method that produces an extremely accurate estimate for the closest point on the sine curve. This approximation can then be refined with a single optimization step to produce a visually perfect distance function.
Approximation with Chebyshev polynomials
Consider the following equality:
T(cos(x)) = cos(x*2)
A function that satisfies this constraint exists. Surprisingly, it is a polynomial! A parabola, in fact:
T(x) = 2x^2 - 1
This is what's called the second Chebyshev polynomial of the first kind. There are many of these polynomials, one for each integer
n ≥ 0:
T_n(cos(x)) = cos(x*n)
The Chebyshev polynomials reveal a surprising connection between polynomials and the trigonometric functions. The intuition is that if you drew
n principal half-cycles of a cosine curve onto the surface of a half-cylinder and projected them down to the ground, you would get a Chebyshev polynomial.
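The defining identity is easy to check numerically. A small Python sketch, using the standard recurrence for Chebyshev polynomials of the first kind:

```python
import math

def chebyshev_T(n, x):
    # Evaluate the Chebyshev polynomial of the first kind T_n(x)
    # via the recurrence T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x).
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2*x*t1 - t0
    return t1
```

For n = 2 this reproduces 2x^2 - 1, and chebyshev_T(n, math.cos(x)) agrees with math.cos(n*x) for any n.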
Earlier I mentioned that the distance to the parabola is a well-known SDF. It has a closed-form solution, requiring that we solve a cubic equation. What we need for this method is the closest point on the parabola, not just the distance. Thankfully, deriving the SDF for the parabola also produces the closest point as a byproduct. This is lucky: some SDFs have no obvious way to recover the closest point on the object, only the minimum distance.
We're going to use the closest point on the parabola to get a good approximation for the closest point on the cosine wave. We can imagine that our given point
(s,t) sits on this half-cylinder with the cosine curve. What we can do is map it down to the ground, get the closest point on the curve
T_2(x), then map the resulting point back to the half-cylinder and clamp the y-coordinate to the cosine wave. Care must be taken before mapping back, since the closest point on the parabola may be outside the bounds of the cylinder, so we need to clamp it within the x range [-1/w, 1/w].
The following code will do this mapping for us:
s_2 = sin(s*w)/w
(c_x, c_y) = closest point on parabola 2(w*x)^2 - 1 to (s_2, t)
c_x = asin(clamp(c_x*w,-1.,1.))/w
c_y = -cos(2w*c_x)
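Here is the same mapping as a self-contained Python sketch. The exact method computes the closest point on the parabola by solving a cubic; to keep this illustration short I've substituted a brute-force search for that step, so only the structure of the mapping should be taken literally:

```python
import math

def closest_on_sine_approx(s, t, w, samples=2000):
    # Approximate closest point on y = -cos(2*w*x) via the parabola trick.
    # Project the query point down onto the ground plane:
    s2 = math.sin(s * w) / w
    # Closest point on the parabola y = 2*(w*x)^2 - 1 to (s2, t).
    # (Brute-force search stands in for the exact cubic solve.)
    parab = lambda x: 2.0*(w*x)**2 - 1.0
    xs = [(-1.0 + 2.0*i/samples)/w for i in range(samples + 1)]
    cx = min(xs, key=lambda x: (x - s2)**2 + (parab(x) - t)**2)
    # Lift back up to the half-cylinder and snap onto the cosine curve.
    cx = math.asin(max(-1.0, min(1.0, cx * w))) / w
    return cx, -math.cos(2.0*w*cx)
```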
If we define our sine SDF to be the distance between
(s,t) and the resultant
(c_x,c_y) we get the following:
This diagram illustrates the contour lines of the SDF. Ideally, they should be equally spaced and continuous, but on the right side the approximation has broken down. Thankfully, we can just use the left side since this is still a full principal half-cycle. But the top-left part of the curve is also messed up. To account for this, we can create a second approximation for the top-left side by flipping the coordinates, then take the closest point between the two approximations.
We now have an extremely clean approximation for the closest point on the principal half-cycle. Using the trick in the previous section, we can mirror and repeat this section across the plane to produce an SDF that works for the whole range.
There is a small quirk with this method. Because we are using the 2nd Chebyshev polynomial, the estimated closest point is actually for the curve
sin(2w*x). We get a doubling of the frequency. This is easily solved with a change of variables in the right places.
Finalizing with Newton's method
What we gained from the last section is an approximation for the closest point on the sine curve. This approximation is great for high frequencies, but begins to break down as the frequency goes below 1. Namely, areas opposite to the peaks begin to overestimate the distance. Thankfully the overestimation isn't too bad, but we would still like to improve the accuracy.
I have found that performing a single step of Newton's method produces an estimate that is visually indistinguishable from an SDF found with root-finding.
To do this Newton's method optimization, we will return to the function
F we obtained using the Lagrange multiplier method. We can use the following update rule to refine our estimate:
(x,y,λ) -= inverse(∇²F(x,y,λ)) * ∇F(x,y,λ)
∇²F is the Hessian of F: the 3x3 matrix of its second partial derivatives.
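One such Newton step, written out in Python. The gradient and Hessian below are the partial derivatives of F(x,y,λ) = (x-s)^2 + (y-t)^2 - λ*(sin(w*x) - y), and the 3x3 solve is done with Cramer's rule to avoid pulling in a linear algebra library:

```python
import math

def newton_refine(x, y, lam, s, t, w):
    # One Newton step on grad F = 0, where
    # F(x, y, lam) = (x-s)^2 + (y-t)^2 - lam*(sin(w*x) - y).
    c, sn = math.cos(w*x), math.sin(w*x)
    grad = [2*(x - s) - lam*w*c, 2*(y - t) + lam, -(sn - y)]
    H = [[2 + lam*w*w*sn, 0.0, -w*c],   # Hessian of F
         [0.0,            2.0,  1.0],
         [-w*c,           1.0,  0.0]]

    def det3(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

    d = det3(H)
    step = []
    for i in range(3):                   # Cramer's rule for H*step = grad
        m = [row[:] for row in H]
        for r in range(3):
            m[r][i] = grad[r]
        step.append(det3(m) / d)
    return x - step[0], y - step[1], lam - step[2]
```

Repeated application converges rapidly toward the true closest point; one step is enough for visual purposes.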
There is a little bit of a hiccup here. We have approximations for
x and y, but not
λ. It turns out we can't make an arbitrary choice here: it's possible for the Newton's method update to overshoot if
λ is too big, or to not do enough if it is too small. There may be a way to figure this out mathematically, but I have found that
λ = s-t is an adequate estimate.
Performing one Newton's method update improves the estimate greatly. Doing more iterations will increase the accuracy quadratically.
I'm not sure if approximating the distance to the sine curve with Chebyshev polynomials has been done before. If you've done this before or are aware of a paper where this method is explained and analyzed in more detail, you can reach me on mastodon, or send an email to "blackle" at this domain.
Below is an interactive shadertoy widget illustrating the fruits of this method. You can click on it anywhere to get the closest point and distance to your cursor. Press play to see the frequency change dynamically. The shader itself can be found here.