BenEngebreth.org

HelioLinC: A variation in 6 dimensions

I've referenced the Holman et al. paper many times; it's the original HelioLinC implementation and introduced the use of heliocentric transformation and trajectory propagation for linking. I've long been fascinated by the ideas in that paper and finally took the time to implement my own version of HelioLinC using a slightly different approach. The Holman paper propagates and clusters in angular space; I thought I could approach the problem in cartesian space instead. When I started I thought this might be a novel technique, but by the time I finished I realized I was not the first person to have the idea. After finishing my implementation I dug into Rubin/LSST's HelioLinC3D and saw that they eventually cluster in 6D too, even though they named their technique HelioLinC3D. Ah well, I learned a lot implementing my own version of HelioLinC and I think it will be a useful tool to employ in future work - though maybe not with TESS, as I'll explain. To begin, I'll describe my algorithm and show some initial results.

Algorithm

I always appreciate a high-level overview, so let me begin with the simplest description of my approach that I can articulate. I'll unpack some of the details later.

  1. Acquire observations in RA and DEC (detections).
  2. Form two-detection combinations (tracklets) from all detection pairs whose implied motion is within a specified angular rate limit and whose total angular span is within a specified limit.
  3. Guess a heliocentric range (and optionally a range-rate), then transform all detections to heliocentric positions at that distance.
  4. Calculate an initial orbit for all tracklets by solving Lambert's problem (two position vectors and time of flight).
  5. Propagate all tracklet initial orbits to a common epoch; choose a time that does not coincide with an observation time.
  6. Scale the velocity component of the propagated states to the magnitude of the range guess while preserving the relative variation of the velocity vector magnitudes with respect to one another.
  7. Cluster the full 6D propagated position and scaled velocity states together.
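The heliocentric transformation in step 3 is the geometric heart of the approach: each line of sight is intersected with a sphere of the guessed radius centered on the Sun. Here's a minimal sketch in Python, assuming the observer's heliocentric position is already known; the function name and interface are my own illustration, not any published implementation:

```python
import numpy as np

def heliocentric_position(ra_deg, dec_deg, observer, range_guess):
    """Project a line-of-sight detection onto the sphere of radius
    `range_guess` (AU) centered on the Sun.

    ra_deg, dec_deg : detection astrometry in degrees
    observer        : heliocentric observer position (3-vector, AU)
    range_guess     : assumed heliocentric distance of the object (AU)
    """
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    # Unit vector from the observer toward the detection.
    u = np.array([np.cos(dec) * np.cos(ra),
                  np.cos(dec) * np.sin(ra),
                  np.sin(dec)])
    o = np.asarray(observer, dtype=float)
    # Solve |o + rho*u| = range_guess for the topocentric distance rho:
    # rho^2 + 2(o.u)*rho + (|o|^2 - R^2) = 0, keeping the positive root.
    # (The discriminant goes negative if the line of sight never reaches
    # the guessed sphere; a real implementation would handle that case.)
    b = np.dot(o, u)
    rho = -b + np.sqrt(b * b - np.dot(o, o) + range_guess ** 2)
    return o + rho * u
```

For an observer at 1 AU looking radially outward, the projected point lands 1.5 AU down the line of sight, at 2.5 AU from the Sun.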

Fundamentally, if you guess the right range and range-rate for each SSO, this approach is not just a good search technique but a complete numeric solution to the astrodynamic problem of moving object detection. For this study I'm going to illustrate the utility of the algorithm for a simplified case where I guess only one range (2.5 AU) and use a range-rate guess of zero. What's interesting is how well this simplistic version works - at least for the observations and cadence of this study.
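To make the tracklet-building step (2) concrete, here's a simple sketch. The brute-force pairing, names, and limits are my own illustration of the idea, not the implementation described in this post:

```python
import numpy as np

def angular_separation(ra1, dec1, ra2, dec2):
    """Angular separation in degrees between two sky positions (degrees)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    # Spherical law of cosines; fine for a sketch, though a numerically
    # stabler formula would be preferable for very small separations.
    c = (np.sin(dec1) * np.sin(dec2) +
         np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def make_tracklets(detections, max_rate, max_span):
    """Pair detections into two-detection tracklets.

    detections : list of (time_days, ra_deg, dec_deg)
    max_rate   : maximum implied motion in degrees/day
    max_span   : maximum total angular separation in degrees
    """
    tracklets = []
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            t1, ra1, dec1 = detections[i]
            t2, ra2, dec2 = detections[j]
            dt = abs(t2 - t1)
            if dt == 0:  # same frame: one object can't appear twice
                continue
            sep = angular_separation(ra1, dec1, ra2, dec2)
            if sep <= max_span and sep / dt <= max_rate:
                tracklets.append((i, j))
    return tracklets
```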

Observations

With this initial work I was trying to figure out how well a HelioLinC technique might work with TESS, so my test data here looks like my usual TESS observation strategy for fast moving objects. With TESS, because you can essentially define your own cadence, I usually work with 18 frames of observations over 3 days when I'm not looking for slow and dim objects. That's what the data below represents, except it comes from my JPL Small Body Identification API Python library rather than TESS. This way I can start with perfect data and slowly corrupt it to see how HelioLinC handles it. Click on the image for a larger view.

Figure 1. The plot on the left shows Solar System Objects retrieved from JPL's Small Body Identification API for a 2 degree field of view centered around RA=315.4084, DEC=-28.9571. There are 18 observation times evenly spaced over 3 days starting on Julian date: 2458349.6206. There are 3,131 total sources in the left panel. The right plot is the same field of view but I've added normally distributed noise to the SSO detections (σ=2") and inserted fake detections at a rate of 4x the number of true sources. There are 15,655 sources in the right panel. My testing consists of a range of cases starting with the perfect data on the left and concluding with the noisy detections with many false positive (fake) detections on the right.

State space: perfect data

Now let's look at the results of the algorithm described above using a fixed 2.5 AU range guess (range-rate guess equals zero). The state space plots below show the common epoch propagated state vectors for the case of noiseless observation data with no fake detections inserted (i.e. perfect data). This corresponds to the left panel observation data in Figure 1. Click on the image for a larger view.

Figure 2. The plot on the left is a 3D plot of the common epoch propagated positions of all tracklets. The plot on the right is a 3D plot of the common epoch propagated velocities of all tracklets. Both plots use a bit of alpha blending, which makes points plotted on top of one another appear darker as more and more of them pile up at the same location. The obvious clusters are not artificially highlighted; they're the common position and velocity states that tracklets are propagated to, and they correspond to real objects sharing common orbits. Note: the clustering state space looks like this because the velocity vectors need to be normalized so they have the same scale as the position vectors. I'll describe that process below.

State space: imperfect data

Real world data is of course not perfect. The figure below is the same as Figure 2, but this data adds noise to the SSO data retrieved from JPL (σ=2") and inserts fake detections at 4x the number of real SSO detections. This corresponds to the observation data in the right panel of Figure 1 above. Clusters might not look distinct in this view, but remember that clustering happens in 6 dimensions not 3. I'm showing the 6D state space as two separate 3D views; it's a useful representation, but not the same thing. In 6D space there are still well separated clusters. Click on the image for a larger view.

Figure 3. Once again, the plot on the left is a 3D plot of the common epoch propagated positions for all tracklets. The plot on the right is a 3D plot of the common epoch propagated velocities for all tracklets. This time I've added σ=2" noise to the true position data from JPL and inserted random fake detections at a 4x multiple of the true objects in the field of view.

Summary of results

My initial takeaway from these tests is that HelioLinC works extremely well when you have relatively precise astrometric data - even if that data contains a lot of false positive detections. HelioLinC does not appear to work as well when the astrometric precision degrades. With less precision the number of links HelioLinC generates starts to increase substantially and the number of confusions (detections attributed to the wrong object) becomes significant. This may not bode well for TESS data with its 21" pixels.

Above we saw how the HelioLinC common epoch propagated state space looks with the best and the worst dataset I tested. The table below summarizes the results over these and other intermediate test scenarios across various metrics. The Figure 1 left panel corresponds to the first row of the table. The right panel of Figure 1 is the last row of the table. There are 195 real SSOs and 3,131 real source detections in the data. Fakes are inserted as an additional multiple of those 3,131 detections. For instance, 4x fakes means there are 3,131 real + 4 x 3,131 fakes = 15,655 real+fake sources in the data.

Metrics for HelioLinC performance over various data quality scenarios

Noise σ    Inserted    Generated   6+ Detection    Fully Matched   Confused   Long
(arcsec)   Fakes       Links       Matched Links   Links           Links      Links
0          0               350         195             163             0        0
0          2x              352         195             163             8        0
0          4x              361         195             163            14        0
1          0               419         194             138            26        0
1          2x              467         194             138            77        0
1          4x              504         194             136           127        1
2          0             1,569         182              19            85        0
2          2x            1,602         182              19           145        0
2          4x            1,702         182              19           280        0

Fixed parameters: clustering tolerance=0.003, range guess=2.5 AU, range-rate guess=0

Noteworthy algorithmic details

Scaling propagated orbit data for the clustering state space

The trick to getting my implementation working well was figuring out how to scale the propagated orbit data for clustering. You can't just cluster orbital elements as-is, for instance, because their elements have different units and vary over different scales - some values are interval-bound and/or periodic while others can range from negative infinity to positive infinity. Position and velocity work better, but they still need to be scaled to the same range of values.

I started by clustering position in 3D cartesian space alone because it was easy and obvious and it worked okay. I then tried clustering 3D position with some of the plane information from the orbital elements I was also generating (scaled appropriately: radians/2π x range guess). That worked a little better. Next I clustered 3D positions with 3D velocity unit vectors scaled up to the range guess magnitude. That was not much different than clustering on position with the orbit plane information (unsurprisingly, I later concluded). Finally, I realized I could cluster in position and velocity without losing any information at all by normalizing the velocity vectors relative to the maximum velocity magnitude in the propagated velocity set. Then I could scale those max-normalized velocity vectors (now varying from 0 to 1 in length) to my range guess and cluster over position and velocity in full 6D.
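The max-normalization described above can be sketched in a few lines. The function name and interface are hypothetical, my own shorthand for the idea:

```python
import numpy as np

def scaled_6d_states(positions, velocities, range_guess):
    """Build the 6D clustering state space from propagated states.

    positions  : (N, 3) common-epoch heliocentric positions (AU)
    velocities : (N, 3) common-epoch velocities (any consistent unit)
    range_guess: the assumed heliocentric range (AU)

    Velocities are divided by the maximum velocity magnitude in the set
    (preserving their relative variation) and then scaled up to the range
    guess, so both halves of the state vary over the same scale.
    """
    positions = np.asarray(positions, dtype=float)
    velocities = np.asarray(velocities, dtype=float)
    v_mag = np.linalg.norm(velocities, axis=1)
    v_scaled = velocities / v_mag.max() * range_guess
    return np.hstack([positions, v_scaled])
```

After this step the largest velocity vector has the same magnitude as the range guess, and every other velocity keeps its relative length, so no information is lost.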

Clustering algorithm choice: A hybrid approach

I tried DBSCAN clustering first. It's fast and does an excellent job of finding clusters with just one tuneable tolerance parameter. The problem with DBSCAN was that it would generate too many long links under certain conditions. I define a long link as a link that contains more detections than the number of observation frames you have. You can't specify a maximum length constraint with DBSCAN, and I couldn't figure out a way to structure the clustering state space to impose one.

The clustering algorithm I arrived at after a lot of fiddling was a hybrid one. I used a KD-tree to find an initial cluster but then filtered the clustered RVs (propagated positions and velocities in the clustering state space) in a two-step process to avoid non-physical clusters and long links. First I selected the 'best' RVs for a given frame combination because a real SSO can't be in two places at the same time. For example, if there were three t=0 to t=1 tracklets in a cluster, I chose the tracklet that was closest to the RV I was clustering around and discarded the others. That mostly fixed the number of long links I had to deal with. Next, when all of the clustering above was complete, I broke up the handful of long clusters that remained with K-means clustering.
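A sketch of the KD-tree query plus the 'best RV per frame pair' filter, using scipy's cKDTree. The names and the greedy seeding loop are my own simplification, and I've omitted the final K-means breakup of long clusters:

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_tracklets(states, frame_pairs, tol):
    """Greedy KD-tree clustering with a physical-consistency filter.

    states      : (N, 6) scaled position+velocity states, one per tracklet
    frame_pairs : length-N list of (frame_i, frame_j) tuples identifying
                  which two frames each tracklet was built from
    tol         : clustering radius in the 6D state space

    Within each cluster, only the tracklet closest to the seed state is
    kept for any given frame pair - a real object can't be in two places
    at the same time.
    """
    tree = cKDTree(states)
    unassigned = set(range(len(states)))
    clusters = []
    for seed in range(len(states)):
        if seed not in unassigned:
            continue
        members = [m for m in tree.query_ball_point(states[seed], tol)
                   if m in unassigned]
        # Keep the best (closest-to-seed) tracklet per frame pair.
        best = {}
        for m in members:
            d = np.linalg.norm(states[m] - states[seed])
            pair = frame_pairs[m]
            if pair not in best or d < best[pair][0]:
                best[pair] = (d, m)
        kept = sorted(m for _, m in best.values())
        clusters.append(kept)
        unassigned -= set(kept)
    return clusters
```

Tracklets displaced by the filter remain unassigned and can seed or join later clusters, which is roughly how duplicate same-frame-pair tracklets get separated instead of inflating one link.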

Future study

This work just scratches the surface of the capabilities (and limitations) of HelioLinC and raises many questions. Briefly, here are a few.

When do you need to guess range-rates?

For this set of test data, guessing a singular and constant heliocentric range to transform to works remarkably well. In the initial Holman paper they guess a handful of ranges and also vary the range-rate within those range guesses. Is there anything about this 6D formulation that makes guessing one constant range value work better or did I just choose a random time and place in the sky that doesn't require variation in the range guess?

At a fundamental physical level, why does this work so well?

Related to the range-rate question: astrodynamically, why does HelioLinC work so well? When we guess a range of 2.5 AU for an object at a distance of 4 AU, why do those 2.5 AU orbits for a 4 AU object cluster as well as they do? I'd like to have a better intuition for why this is and for the circumstances under which it might not hold. This is complete speculation, but it almost feels like you could discard the range guess because the true parameter that matters is range-rate. If the range doesn't change much over your observation period (range-rate is small or the observation period is short), then your range guess can be wildly off. If range changes a lot (range-rate is large or the observation period is long), perhaps no fixed range guess will work. The Holman paper alluded to something like this by demonstrating that clusters were more sensitive to range-rate guesses than range guesses.

NEOs

Following on from both of the previous questions, how well will HelioLinC work with NEOs with rapidly varying ranges and is there a way to tune HelioLinC for NEOs specifically? In a separate test of a handful of objects in differing orbit categories I found that the NEO was the hardest object to detect with HelioLinC - even when I was varying range-rate. You can still link it, but you just can't as easily cluster all of the SSO detections into one link. You end up with a handful of links containing sequential, non-overlapping or minimally overlapping detections for the one object instead of one link containing all detections.

Tracklet selection without short duration repeat observation

I tested a lot of physically improbable tracklets in this study. That's partly by design - I'm trying to avoid using a repeat observation shortly after a first observation to build my tracklets. On the positive side it demonstrates that poor tracklet selections don't cluster - as one would hope. Still, I'd like to figure out a way to filter the tracklet combinations I test to reduce the number of combinations that must be solved, propagated and clustered. That is a goal of all surveys though, so I expect this will be very difficult at best.

Conclusion

It's good to finally have HelioLinC as a tool in my toolbelt. I'm excited to see how it can be tuned for specific categories of object detection. As I mentioned above, I'm not terribly optimistic it will be useful for my work with TESS, but there are other datasets I've recently discovered that I think it might be well suited to. I hope to present a study of that soon.

Published: 2/20/2022