[uss_qualifier] Add prod system version checking #532

Merged: BenjaminPelletier merged 6 commits into interuss:main from BenjaminPelletier:prod-version on Mar 7, 2024.
Changes from 1 commit (6 commits total, all by BenjaminPelletier):

- ae3362a Add prod system version checking
- c1b04fb Add note per comments
- 5c075d2 Merge remote-tracking branch 'interuss/main' into prod-version
- 27cb61e Remove default value for non-optional parameter
- e27847d Merge remote-tracking branch 'interuss/main' into prod-version
- 191b53e make format
monitoring/uss_qualifier/scenarios/astm/utm/versioning/evaluate_system_versions.md (53 additions, 0 deletions):
# ASTM F3548-21 evaluate system versions test scenario

## Overview

ASTM F3548-21 GEN0305 requires that the USS's test system (provided per GEN0300) use the currently deployed software version except when testing an update. This scenario checks the test and production versions of all participants' systems and ensures that no more than one participant's test system (presumably the participant who is testing an update) has a version different from their production system.
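The rule described above can be sketched as a standalone function (a simplified illustration with made-up names, not the actual uss_qualifier implementation):

```python
from typing import Dict, List, Tuple


def evaluate_versions(
    test_env: Dict[str, str], prod_env: Dict[str, str]
) -> Tuple[List[str], List[str]]:
    """Split participants into (matched, failing) per the GEN0305 rule sketch.

    test_env/prod_env map participant ID to the reported system version.
    """
    matched: List[str] = []
    mismatched: List[str] = []
    for participant, test_version in test_env.items():
        if participant not in prod_env:
            # No way to determine the production version; skip comparison
            continue
        if test_version == prod_env[participant]:
            matched.append(participant)
        else:
            mismatched.append(participant)
    # Exactly one mismatch is allowed: that participant is presumed to be
    # testing a new software version per GEN0305.
    if len(mismatched) == 1:
        matched.append(mismatched.pop())
    # Any remaining mismatched participants indicate a failed check
    return matched, mismatched
```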
## Resources

### test_env_version_providers

A [`VersionProvidersResource`](../../../../resources/versioning/client.py) containing the means by which to query test-environment system versions for each applicable participant.

### prod_env_version_providers

A [`VersionProvidersResource`](../../../../resources/versioning/client.py) containing the means by which to query production-environment system versions for each applicable participant.

### system_identity

A [`SystemIdentityResource`](../../../../resources/versioning/system_identity.py) indicating the identity of the system for which to query the version from all providers.

## Evaluate versions test case

### Get test environment test versions test step

Each version provider is queried for the version of its system (identified by system_identity) in the test environment.

#### ⚠️ Valid response check

If a valid response is not received from a version provider, that provider will have failed to meet **[versioning.ReportSystemVersion](../../../../requirements/versioning.md)**.

### Get production environment versions test step

Each version provider is queried for the version of its system (identified by system_identity) in the production environment.

#### ⚠️ Valid response check

If a valid response is not received from a version provider, that provider will have failed to meet **[versioning.ReportSystemVersion](../../../../requirements/versioning.md)**.

### Evaluate current system versions test step

#### ⚠️ At most one participant is testing a new software version check

Per GEN0305, a participant may temporarily have a different software version in the test environment than in production in order to test that new software version. But since the purpose of testing is to ensure continued compliance and interoperation with other participants' systems that have already been demonstrated functional, if two or more participants have differing software versions between the test and production environments, at least one of those participants will have failed to meet **[astm.f3548.v21.GEN0305](../../../../requirements/astm/f3548/v21.md)**.

#### ⚠️ Test software version matches production check

For participants not testing a new software version, their test-environment software version must match their production-environment software version; otherwise, that participant does not meet **[astm.f3548.v21.GEN0305](../../../../requirements/astm/f3548/v21.md)**.

### Evaluate system version consistency test step

#### ⚠️ Software versions are consistent throughout test run check
If the system version reported by a participant at one point during the test run differs from the version reported by that participant at another point, that participant cannot meet **[astm.f3548.v21.GEN0305](../../../../requirements/astm/f3548/v21.md)**: the test-environment and production-environment versions cannot be meaningfully compared when the version in at least one of those environments does not hold a consistent value.
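The consistency requirement can be sketched as a standalone helper (illustrative names only; the actual scenario compares all version queries recorded during the test run, as shown in the implementation):

```python
from typing import Dict, Iterable, List, Set, Tuple


def find_inconsistent_participants(
    observations: Iterable[Tuple[str, str]],
) -> List[str]:
    """Return participants that reported more than one distinct version.

    Each observation is a (participant_id, reported_version) pair collected
    at some point during the test run.
    """
    seen: Dict[str, Set[str]] = {}
    for participant, version in observations:
        seen.setdefault(participant, set()).add(version)
    # A participant is inconsistent if it reported two or more versions
    return sorted(p for p, versions in seen.items() if len(versions) > 1)
```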
monitoring/uss_qualifier/scenarios/astm/utm/versioning/evaluate_system_versions.py (208 additions, 0 deletions):
```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, Optional, Tuple

from implicitdict import ImplicitDict
from monitoring.monitorlib.clients.versioning.client import VersionQueryError
from monitoring.monitorlib.fetch import QueryType, Query
from monitoring.uss_qualifier.configurations.configuration import ParticipantID
from monitoring.uss_qualifier.resources.versioning import SystemIdentityResource
from monitoring.uss_qualifier.resources.versioning.client import (
    VersionProvidersResource,
)
from monitoring.uss_qualifier.scenarios.scenario import TestScenario
from monitoring.uss_qualifier.suites.suite import ExecutionContext
from uas_standards.interuss.automated_testing.versioning.api import GetVersionResponse


@dataclass
class _VersionInfo(object):
    participant_id: ParticipantID
    version: str
    query: Query


class EvaluateSystemVersions(TestScenario):
    def __init__(
        self,
        system_identity: SystemIdentityResource,
        test_env_version_providers: VersionProvidersResource,
        prod_env_version_providers: VersionProvidersResource,
    ):
        super(EvaluateSystemVersions, self).__init__()
        self._test_env_version_providers = test_env_version_providers.version_providers
        self._prod_env_version_providers = (
            prod_env_version_providers.version_providers
            if prod_env_version_providers
            else []
        )
        self._system_identity = system_identity.system_identity

    def run(self, context: ExecutionContext):
        self.begin_test_scenario(context)
        self.begin_test_case("Evaluate versions")

        test_env_versions, prod_env_versions = self._get_versions()
        self._evaluate_versions(test_env_versions, prod_env_versions)
        self._evaluate_consistency(context, test_env_versions)

        self.end_test_case()
        self.end_test_scenario()

    def _get_versions(
        self,
    ) -> Tuple[Dict[ParticipantID, _VersionInfo], Dict[ParticipantID, _VersionInfo]]:
        test_env_versions: Dict[ParticipantID, _VersionInfo] = {}
        prod_env_versions: Dict[ParticipantID, _VersionInfo] = {}

        for (test_step, version_providers, env_versions) in (
            (
                "Get test environment test versions",
                self._test_env_version_providers,
                test_env_versions,
            ),
            (
                "Get production environment versions",
                self._prod_env_version_providers,
                prod_env_versions,
            ),
        ):
            self.begin_test_step(test_step)

            for version_provider in version_providers:
                with self.check(
                    "Valid response", participants=[version_provider.participant_id]
                ) as check:
                    try:
                        resp = version_provider.get_version(self._system_identity)
                        self.record_query(resp.query)
                        env_versions[version_provider.participant_id] = _VersionInfo(
                            participant_id=version_provider.participant_id,
                            version=resp.version,
                            query=resp.query,
                        )
                    except VersionQueryError as e:
                        for q in e.queries:
                            self.record_query(q)
                        check.record_failed(
                            summary="Error querying version",
                            details=str(e),
                            query_timestamps=[q.request.timestamp for q in e.queries],
                        )

            self.end_test_step()
        return test_env_versions, prod_env_versions

    def _evaluate_versions(
        self,
        test_env_versions: Dict[ParticipantID, _VersionInfo],
        prod_env_versions: Dict[ParticipantID, _VersionInfo],
    ):
        self.begin_test_step("Evaluate current system versions")

        mismatched_participants = []
        matched_participants = []
        for participant_id in test_env_versions:
            if participant_id not in prod_env_versions:
                # Participant didn't provide a means of determining the version of their production system
                continue

            if (
                test_env_versions[participant_id].version
                == prod_env_versions[participant_id].version
            ):
                matched_participants.append(participant_id)
            else:
                mismatched_participants.append(participant_id)
        for participant_id in matched_participants:
            with self.check(
                "Test software version matches production",
                participants=participant_id,
            ) as check:
                check.record_passed()

        if len(mismatched_participants) == 1:
            self.record_note(
                "Participant testing new software", mismatched_participants[0]
            )
            # Move technically-mismatched participant over to matched participants to prepare for one-at-a-time check
            matched_participants.append(mismatched_participants[0])
            mismatched_participants.clear()

        # Record appropriate failures for participants with mismatched software versions (when there are 2 or more)
        mismatch_timestamps = []
        for participant_id in mismatched_participants:
            timestamps = [
                test_env_versions[participant_id].query.timestamp,
                prod_env_versions[participant_id].query.timestamp,
            ]
            with self.check(
                "Test software version matches production", participants=participant_id
            ) as check:
                check.record_failed(
                    summary="Test environment software version does not match production",
                    details=f"{participant_id} indicated version '{test_env_versions[participant_id].version}' in the test environment and version '{prod_env_versions[participant_id].version}' in the production environment.",
                    query_timestamps=timestamps,
                )
            mismatch_timestamps.extend(timestamps)

        # Record one-at-a-time check result
        if mismatched_participants:
            with self.check(
                "At most one participant is testing a new software version",
                participants=mismatched_participants,
            ) as check:
                check.record_failed(
                    summary="Test environment software version does not match production",
                    details=f"At most, only one participant may be testing a software version that differs from production, but {', '.join(mismatched_participants)} all had differing versions between environments.",
                    query_timestamps=mismatch_timestamps,
                )
        else:
            with self.check(
                "At most one participant is testing a new software version",
                participants=matched_participants,
            ) as check:
                check.record_passed()

        self.end_test_step()

    def _evaluate_consistency(
        self,
        context: ExecutionContext,
        test_env_versions: Dict[ParticipantID, _VersionInfo],
    ):
        self.begin_test_step("Evaluate system version consistency")
        for q in context.sibling_queries():
            if (
                "query_type" not in q
                or q.query_type != QueryType.InterUSSVersioningGetVersion
                or "participant_id" not in q
            ):
                continue
            if (
                q.participant_id in test_env_versions
                and q.request.url
                == test_env_versions[q.participant_id].query.request.url
            ):
                try:
                    resp = ImplicitDict.parse(q.response.json, GetVersionResponse)
                except (ValueError, KeyError):
                    # Something was wrong with the response payload; ignore this query
                    continue
                with self.check(
                    "Software versions are consistent throughout test run",
                    participants=q.participant_id,
                ) as check:
                    if (
                        resp.system_version
                        != test_env_versions[q.participant_id].version
                    ):
                        check.record_failed(
                            summary="Version of software under test changed during test run",
                            details=f"When queried for the version of the '{self._system_identity}' system, earlier response indicated '{resp.system_version}' but later response indicated '{test_env_versions[q.participant_id].version}'",
                            query_timestamps=[
                                q.request.timestamp,
                                test_env_versions[q.participant_id].query.timestamp,
                            ],
                        )

        self.end_test_step()
```
Review conversation (resolved):

**mickmis:** I'm not sure I fully understand the impact of where the `?` is in the yaml file. Won't this scenario be skipped with the current `f3548_21.yaml` file if `prod_env_version_providers` is not provided? Although there is a `?` in the resources required by the suite, there is not in the resources required by the scenario.

**BenjaminPelletier:** Yeah, this probably warrants more explicit documentation. In the resources section of a test suite definition, a `?` means the resource is optional. That is, it's ok to run the test suite even if that resource isn't provided. In a test scenario declaration, it means roughly the same thing: the test scenario can be run even if that resource isn't provided.

When a resource is marked with a `?` in a test scenario, the constructor of that scenario must have a default value for the resource (e.g., `None`). A test scenario in a test suite that can be run will be run. Therefore, if a test scenario wants resource X but resource X is not provided, then the test scenario will be run if X is marked with a `?` in the test scenario declaration, but not run if X is not marked with a `?` in the test scenario declaration.

In this case, the version providers resource may or may not be provided to the test suite -- the test suite can still run effectively if it's not provided. But, if it's not provided, then the test scenario will be skipped entirely rather than trying to run it without that resource.
**mickmis:** OK, I see -- thanks for the clarification. Then, if the resource is missing, the scenario is skipped -- doesn't that mean that `prod_env_version_providers` will always evaluate as truthy here? Or is it possible for a resource to exist but to be empty?

**BenjaminPelletier:** Ah, right, sorry -- my mistake; this conditional was left over from when I actually did have the prod version providers as an optional resource earlier in development (it no longer is). Removed -- thanks for the catch.

**mickmis:** Ah OK! Not that it's a big deal, but I felt like there was something I didn't get :)