DM-42039: Add TMA mount tracking errors to TMA mount motion profile #74

Merged: 21 commits, Feb 26, 2024

Commits
552b2a7
Initial commit
craiglagegit Dec 11, 2023
d141a13
Fixing linter errors.
craiglagegit Dec 11, 2023
198dfda
Change from snake case to camel case
mfisherlevine Dec 11, 2023
f71d2f2
Remove use of "I" in comment
mfisherlevine Dec 11, 2023
76b7361
WIP
mfisherlevine Dec 11, 2023
d310c6b
Changes to use MTMount demandPosition instead of MTPtg
craiglagegit Feb 14, 2024
5386052
Adding routine to filter out non-physical values
craiglagegit Feb 15, 2024
a98e8b3
Improve docstrings and minor changes
mfisherlevine Feb 15, 2024
98e8fb6
Changing filter algorithm to an interpolation
craiglagegit Feb 15, 2024
8a85851
Modifying filter routine to capture the number of points filtered
craiglagegit Feb 17, 2024
8525ead
Update filterBadValues and add tests
mfisherlevine Feb 21, 2024
c2c784a
Switch to using averaged replacements from extrapolations
mfisherlevine Feb 21, 2024
8cf2f7c
Only use tracking period when calculating residuals
mfisherlevine Feb 21, 2024
43f3c7d
Clip data to tracking period, fix flake8
mfisherlevine Feb 21, 2024
b0dd4c0
Fixup to docstrings, log messages and kwargs
mfisherlevine Feb 23, 2024
9735783
Make maxConsecutiveValues configurable
mfisherlevine Feb 23, 2024
240a819
Use last two values for correction, not first two
mfisherlevine Feb 23, 2024
8642430
Change to replacing with last two known-good values
mfisherlevine Feb 24, 2024
1f7d697
Check non-default maxDelta too
mfisherlevine Feb 24, 2024
b3ebb91
Add note about why we filter residuals
mfisherlevine Feb 24, 2024
56ca653
Remove functionally duplicate line
mfisherlevine Feb 26, 2024
200 changes: 191 additions & 9 deletions python/lsst/summit/utils/tmaUtils.py
@@ -43,6 +43,7 @@
getDayObsForTime,
getDayObsStartTime,
getDayObsEndTime,
clipDataToEvent,
)

__all__ = (
@@ -55,6 +56,7 @@
'getSlewsFromEventList',
'getTracksFromEventList',
'getTorqueMaxima',
'filterBadValues',
)

# we don't want to use `None` for a no data sentinel because dict.get('key')
@@ -63,6 +65,10 @@
# means that we've not yet looked for the data.
NO_DATA_SENTINEL = "NODATA"

# The known time difference between the TMA demand position and the TMA
# position when tracking. 20Hz data times three points = 150ms.
TRACKING_RESIDUAL_TAIL_CLIP = -0.15 # seconds


def getSlewsFromEventList(events):
"""Get the slew events from a list of TMAEvents.
@@ -116,9 +122,20 @@ def getTorqueMaxima(table):
print(f"Max negative {axis:9} torque during seqNum {minPos:>4}: {minVal/1000:>7.1f}kNm")


def getAzimuthElevationDataForEvent(client, event, prePadding=0, postPadding=0):
def getAzimuthElevationDataForEvent(client,
event,
prePadding=0,
postPadding=0,
):
"""Get the data for the az/el telemetry topics for a given TMAEvent.

The error between the actual and demanded positions is calculated and added
to the dataframes in the az/elError columns. For TRACKING type events, this
error should be extremely close to zero, whereas for SLEWING type events,
this error represents how far the TMA is from the demanded position, and is
therefore arbitrarily large, tending to zero as the TMA gets closer to
tracking the sky.

Parameters
----------
client : `lsst_efd_client.efd_helper.EfdClient`
@@ -150,11 +167,96 @@ def getAzimuthElevationDataForEvent(client, event, prePadding=0, postPadding=0):
prePadding=prePadding,
postPadding=postPadding)

azValues = azimuthData['actualPosition'].values
elValues = elevationData['actualPosition'].values
azDemand = azimuthData['demandPosition'].values
elDemand = elevationData['demandPosition'].values

# the error between the actual and demanded positions, converted from
# degrees to arcseconds
azError = (azValues - azDemand) * 3600
elError = (elValues - elDemand) * 3600

azimuthData['azError'] = azError
elevationData['elError'] = elError

return azimuthData, elevationData


def plotEvent(client, event, fig=None, prePadding=0, postPadding=0, commands={},
azimuthData=None, elevationData=None):
def filterBadValues(values, maxDelta=0.1, maxConsecutiveValues=3):
"""Filter out bad values from a dataset, replacing them in-place.

This function replaces non-physical points in the dataset with the average
of the two preceding known-good values. No more than
``maxConsecutiveValues`` successive points (3 by default) are replaced. The
input must contain at least 3 points.
Review comment: I think anyone reading this in the future will wonder why do
this at all instead of filtering/masking or highlighting the bad points in a
plot? And I expect the answer will be some form of "we have a good reason to
do it this way", but if you already have one or more in mind perhaps it's
worth noting here.

Reply (Contributor): Yup, agreed. Though I think perhaps it's more worth
noting where it's used than in the function which does it. Maybe some
combination of both. Will look at sorting that out and make sure it's clear
one way or another.

Parameters
----------
values : `list` or `np.ndarray`
The dataset containing the values to be filtered.
maxDelta : `float`, optional
The maximum allowed difference between consecutive values. Values that
differ from the previous known-good value by more than ``maxDelta`` are
considered bad and replaced with the average of the two preceding
known-good values.
maxConsecutiveValues : `int`, optional
The maximum number of consecutive values to replace. Defaults to 3.

Returns
-------
nBadPoints : `int`
The number of bad values that were replaced.
"""
# Find non-physical points and replace them with the average of the last
# two known-good values. No more than maxConsecutiveValues successive data
# points can be replaced.
badCounter = 0
consecutiveCounter = 0

log = logging.getLogger(__name__)

median = np.nanmedian(values)
# if either of the first two points is more than maxDelta away from
# the median, replace it with the median
for i in range(2):
if abs(values[i] - median) > maxDelta:
log.warning(f"Replacing bad value of {values[i]} at index {i} with {median=}")
values[i] = median
badCounter += 1

# from the third element of the array (index 2) onwards, walk through and
# calculate the difference between each element and the last known-good
# value. If the difference is greater than maxDelta, replace the element
# with the average of the previous two known-good values, i.e. ones which
# have not themselves been replaced.
lastGoodValue1 = values[1] # the most recent good value
lastGoodValue2 = values[0] # the second most recent good value
replacementValue = (lastGoodValue1 + lastGoodValue2) / 2.0 # in case the first value checked in the loop needs replacing
for i in range(2, len(values)):
if abs(values[i] - lastGoodValue1) >= maxDelta:
if consecutiveCounter < maxConsecutiveValues:
consecutiveCounter += 1
badCounter += 1
log.warning(f"Replacing value at index {i} with {replacementValue}")
values[i] = replacementValue
else:
log.warning(f"More than {maxConsecutiveValues} consecutive replacements at index {i}."
" Stopping replacements until the next good value.")
else:
lastGoodValue2 = lastGoodValue1
lastGoodValue1 = values[i]
replacementValue = (lastGoodValue1 + lastGoodValue2) / 2.0
consecutiveCounter = 0
return badCounter
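
A minimal sketch of the contract (in-place mutation, returning the number of
replaced points); the example values are made up:

```python
import numpy as np

from lsst.summit.utils.tmaUtils import filterBadValues

# tracking residuals in arcsec, with one unphysical spike at index 3
residuals = np.array([0.01, -0.02, 0.01, 5.3, 0.00, -0.01])
nReplaced = filterBadValues(residuals, maxDelta=0.1)
assert nReplaced == 1
# the spike is replaced in-place with the average of the last two
# known-good values: (0.01 + -0.02) / 2 = -0.005
assert np.isclose(residuals[3], -0.005)
```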


def plotEvent(client,
event,
fig=None,
prePadding=0,
postPadding=0,
commands={},
azimuthData=None,
elevationData=None,
doFilterResiduals=False,
maxDelta=0.1):
"""Plot the TMA axis positions over the course of a given TMAEvent.

Plots the axis motion profiles for the given event, with optional padding
@@ -167,6 +269,18 @@ def plotEvent(client, event, fig=None, prePadding=0, postPadding=0, commands={},
strings, with values as astropy.time.Time objects at which the command was
issued.

Due to a problem with the way the data is uploaded to the EFD, there are
occasional points in the tracking error plots that are very much larger than
the typical mount jitter. These points are unphysical, since it is not
possible for the mount to move that fast. Because they are not true mount
problems, and we don't want them to distract from any real ones, they can be
filtered out via the ``doFilterResiduals`` kwarg, which replaces these
non-physical points with the average of the two preceding known-good points.
If the first two points are bad, they are replaced with the median of the
dataset. The maximum difference between the model and the actual data, in
arcseconds, to allow before filtering a data point can be set with the
``maxDelta`` kwarg.

Parameters
----------
client : `lsst_efd_client.efd_helper.EfdClient`
Expand All @@ -190,7 +304,12 @@ def plotEvent(client, event, fig=None, prePadding=0, postPadding=0, commands={},
elevationData : `pd.DataFrame`, optional
The elevation data to plot. If not specified, it will be queried from
the EFD.

doFilterResiduals : `bool`, optional
Enables filtering of unphysical data points in the tracking residuals.
maxDelta : `float`, optional
The maximum difference between the model and the actual data, in
arcseconds, to allow before filtering the data point. Ignored if
``doFilterResiduals`` is `False`.

Returns
-------
fig : `matplotlib.figure.Figure`
@@ -213,11 +332,19 @@ def tickFormatter(value, tick_number):
" Pass in a figure with fig = plt.figure(figsize=(10, 8)) to avoid this warning.")

fig.clear()
ax1, ax2 = fig.subplots(2,
sharex=True,
gridspec_kw={'wspace': 0,
'hspace': 0,
'height_ratios': [2.5, 1]})
ax1p5 = None # needs to always be defined
if event.type.name == 'TRACKING':
ax1, ax1p5, ax2 = fig.subplots(3,
sharex=True,
gridspec_kw={'wspace': 0,
'hspace': 0,
'height_ratios': [2.5, 1, 1]})
else:
ax1, ax2 = fig.subplots(2,
sharex=True,
gridspec_kw={'wspace': 0,
'hspace': 0,
'height_ratios': [2.5, 1]})

if azimuthData is None or elevationData is None:
azimuthData, elevationData = getAzimuthElevationDataForEvent(client,
@@ -258,6 +385,55 @@ def tickFormatter(value, tick_number):
ax2.xaxis.set_major_locator(mdates.AutoDateLocator())
ax2.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M:%S'))

if event.type.name == 'TRACKING':
# returns a copy
clippedAzimuthData = clipDataToEvent(azimuthData, event, postPadding=TRACKING_RESIDUAL_TAIL_CLIP)
clippedElevationData = clipDataToEvent(elevationData, event, postPadding=TRACKING_RESIDUAL_TAIL_CLIP)

azError = clippedAzimuthData['azError'].values
elError = clippedElevationData['elError'].values
elVals = clippedElevationData['actualPosition'].values
if doFilterResiduals:
# Filtering out bad values
nReplacedAz = filterBadValues(azError, maxDelta)
Review comment: Would it be useful to label/highlight/plot the replaced
points in a different colour?

Reply (Contributor): Errr, maybe in theory, but it's awkward, and will also
ruin the scale of the plot - we just want to see what remains, I think.
nReplacedEl = filterBadValues(elError, maxDelta)
clippedAzimuthData['azError'] = azError
clippedElevationData['elError'] = elError
# Calculate RMS
az_rms = np.sqrt(np.mean(azError * azError))
el_rms = np.sqrt(np.mean(elError * elError))

# Calculate Image impact RMS
# We are less sensitive to Az errors near the zenith
image_az_rms = az_rms * np.cos(elVals[0] * np.pi / 180.0)
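# e.g. at 60 deg elevation, cos(60 deg) = 0.5, so a 0.1 arcsec azimuth
# error contributes only 0.05 arcsec of image motion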
image_el_rms = el_rms
image_impact_rms = np.sqrt(image_az_rms**2 + image_el_rms**2)
ax1p5.plot(clippedAzimuthData['azError'],
label='Azimuth tracking error',
c=lineColors[colorCounter])
colorCounter += 1
ax1p5.plot(clippedElevationData['elError'],
label='Elevation tracking error',
c=lineColors[colorCounter])
colorCounter += 1
ax1p5.axhline(0.01, ls='-.', color='black')
ax1p5.axhline(-0.01, ls='-.', color='black')
ax1p5.yaxis.set_major_formatter(FuncFormatter(tickFormatter))
ax1p5.set_ylabel('Tracking error (arcsec)')
ax1p5.set_xticks([]) # remove x tick labels on the hidden upper x-axis
ax1p5.set_ylim(-0.05, 0.05)
ax1p5.set_yticks([-0.04, -0.02, 0.0, 0.02, 0.04])
ax1p5.legend()
ax1p5.text(0.1, 0.9,
f'Image impact RMS = {image_impact_rms:.3f} arcsec', transform=ax1p5.transAxes)
if doFilterResiduals:
ax1p5.text(
0.1,
0.8,
f'{nReplacedAz} bad azimuth values and {nReplacedEl} bad elevation values were replaced',
transform=ax1p5.transAxes
)

if prePadding or postPadding:
# note the conversion to utc because the x-axis from the dataframe
# already got automagically converted when plotting before, so this is
Expand All @@ -267,6 +443,9 @@ def tickFormatter(value, tick_number):
# extend lines down across lower plot, but do not re-add label
ax2_twin.axvline(event.begin.utc.datetime, c='k', ls='--', alpha=0.5)
ax2_twin.axvline(event.end.utc.datetime, c='k', ls='--', alpha=0.5)
if ax1p5:
ax1p5.axvline(event.begin.utc.datetime, c='k', ls='--', alpha=0.5)
ax1p5.axvline(event.end.utc.datetime, c='k', ls='--', alpha=0.5)

for command, commandTime in commands.items():
# if commands weren't found, the item is set to None. This is common
Expand All @@ -279,6 +458,9 @@ def tickFormatter(value, tick_number):
# extend lines down across lower plot, but do not re-add label
ax2_twin.axvline(commandTime.utc.datetime, c=lineColors[colorCounter],
ls='--', alpha=0.75)
if ax1p5:
ax1p5.axvline(commandTime.utc.datetime, c=lineColors[colorCounter],
ls='--', alpha=0.75)
colorCounter += 1

# combine the legends and put inside the plot
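As a usage sketch of the new plotting options (the client/event setup below
is illustrative: `makeEfdClient`, the `TMAEventMaker` constructor signature,
and the dayObs/seqNum values are assumptions, not part of this diff):

```python
import matplotlib.pyplot as plt

from lsst.summit.utils.efdUtils import makeEfdClient
from lsst.summit.utils.tmaUtils import TMAEventMaker, plotEvent

client = makeEfdClient()  # assumed helper for constructing an EfdClient
eventMaker = TMAEventMaker(client)  # assumed constructor signature
event = eventMaker.getEvent(20240226, 123)  # hypothetical dayObs/seqNum

fig = plt.figure(figsize=(10, 8))
# for TRACKING events, a third panel now shows the az/el tracking errors;
# doFilterResiduals replaces the unphysical EFD spikes before plotting
plotEvent(client, event, fig=fig, doFilterResiduals=True, maxDelta=0.1)
plt.show()
```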
84 changes: 84 additions & 0 deletions tests/test_tmaUtils.py
@@ -47,6 +47,7 @@
AxisMotionState,
getAxisAndType,
_initializeTma,
filterBadValues,
)
from utils import getVcr

Expand Down Expand Up @@ -346,6 +347,89 @@ def test_helperFunctions(self):
self.assertEqual(slews, foundSlews)
self.assertEqual(tracks, foundTracks)

def test_filterBadValues(self):
Review comment: I think you should have at least one test with a non-default
maxDelta.

# NB: if you add enough spurious values that the median is no longer
# the value around which your "good" values are oscillating, the first
# two points will get replaced, and this can be very confusing!
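# For example, np.nanmedian([1.0, 0.96, 2.95, 3.0, 2.9, 2.95, 3.05, 2.95])
# is 2.95, so the leading 1.0 and 0.96 would both be flagged and replaced.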

# test no bad values
# mean = median = 1.0
values = np.array([1.0, 0.96, 1.0, 1.04, 0.95, 1.0, 1.05, 1.0, 1.05, 1.0, 0.95])
mean = np.mean(values)
nReplaced = filterBadValues(values)
self.assertEqual(nReplaced, 0)
self.assertEqual(np.mean(values), mean)

# test with one bad value
values = np.array([1.0, 0.96, 1.0, 1.04, 2.95, 1.0, 1.05, 1.0, 1.05, 1.0, 0.95])
nReplaced = filterBadValues(values)
self.assertEqual(nReplaced, 1)
Review comment: Similarly, there ought to be at least one check that the
expected elements were replaced with the expected values.

Reply (Contributor): Yup, if I'd done that I would have caught the bug you
pointed out 🙂

# test with two consecutive bad values
values = np.array([1.0, 0.96, 1.0, 1.04, 2.95, 3.0, 1.05, 1.0, 1.05, 1.0, 0.95])
nReplaced = filterBadValues(values)
self.assertEqual(nReplaced, 2)

# test with three consecutive bad values
values = np.array([1.0, 0.96, 1.0, 1.04, 2.95, 3.0, 4.05, 1.0, 1.05, 1.0, 0.95])
nReplaced = filterBadValues(values)
self.assertEqual(nReplaced, 3)

# test with three consecutive bad values and another at the end
values = np.array([1.0, 0.96, 1.0, 1.04, 2.95, 3.0, 4.05, 1.0, 1.05, 1.0, 3.95])
nReplaced = filterBadValues(values)
self.assertEqual(nReplaced, 4)

# test with more than three consecutive bad values
values = np.array([1.0, 0.96, 1.0, 1.04, 2.95, 3.0, 4.05, 5.0, 1.05, 1.0, 0.95])
nReplaced = filterBadValues(values)
self.assertEqual(nReplaced, 3)
self.assertIn(5.0, values) # check the last bad value is still there specifically

# test with more than three consecutive bad values and another bad
# value at the end
values = np.array([1.0, 0.96, 1.0, 1.04, 2.95, 3.0, 4.05, 5.0, 1.05, 1.0, 2.95])
nReplaced = filterBadValues(values)
self.assertEqual(nReplaced, 4)

# test with bad values in first two positions
values = np.array([2.0, 1.96, 1.0, 1.04, 0.95, 1.0, 1.05, 1.0, 1.05, 1.0, 0.95]) # median = 1.0
nReplaced = filterBadValues(values)
self.assertEqual(nReplaced, 2)

# test with bad values in first two positions and one in the middle
values = np.array([2.0, 1.96, 1.0, 1.04, 0.95, 5.0, 1.04, 1.0, 1.05, 1.0, 0.95])
nReplaced = filterBadValues(values)
self.assertEqual(nReplaced, 3)

# check that the last two good values are always used for correction,
# including when there are more than three consecutive bad values.
values = np.array([1.0, 0.96, 1.0, 1.02, 2.95, 3.0, 4.05, 5.0, 1.05, 1.0, 2.95])
expected = np.array([1.0, 0.96, 1.0, 1.02, 1.01, 1.01, 1.01, 5.0, 1.05, 1.0, 1.025])
nReplaced = filterBadValues(values)
residuals = np.abs(values - expected)
self.assertEqual(nReplaced, 4)
self.assertTrue(np.all(residuals < 1e-6))

# check with one good point after an overflowing run of bad to make
# sure the correction is always applied with good values, not the naive
# average of the last two even if they might be bad
values = np.array([1.0, 0.96, 1.0, 1.02, 2.95, 3.0, 4.05, 5.0, 1.05, 2.95, 1.])
expected = np.array([1.0, 0.96, 1.0, 1.02, 1.01, 1.01, 1.01, 5.0, 1.05, 1.035, 1.0])
nReplaced = filterBadValues(values)
residuals = np.abs(values - expected)
self.assertEqual(nReplaced, 4)
self.assertTrue(np.all(residuals < 1e-6))

# check with non-default maxDelta
values = np.array([1.0, 0.96, 1.0, 1.02, 2.95, 3.0, 4.05, 5.0, 1.05, 1.0, 2.95])
nReplaced = filterBadValues(values, maxDelta=10)
self.assertEqual(nReplaced, 0)

values = np.array([1.0, 1.0, 1.0, 1.1, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, ])
nReplaced = filterBadValues(values, maxDelta=0.01)
self.assertEqual(nReplaced, 1)

@vcr.use_cassette()
def test_getEvent(self):
# test the singular event getter, and what happens if the event doesn't