The idea of DTS E2E tests is to test DTS workflows for every supported hardware platform automatically on QEMU, going from the DTS main menu all the way to rebooting. But there is one problem: not all DTS E2E tests adhere to the real workflows of the hardware they claim to test.
What are these hardware workflows? Consider the following diagram:
This diagram shows an example DTS installation workflow. As you can see, it consists of several steps, and the correct way to complete the workflow is to execute all of them in a specific order (green arrows on the diagram). Which steps are executed depends on the platform the DTS workflow is being run on. The problem is that, although we can simulate a specific platform during E2E tests, not all required steps are actually executed in the tests (for example, some E2E tests omit the EC transition step shown on the diagram), so not all DTS E2E tests reflect real DTS workflows.
How to fix this? As noted above, whether a specific step is executed depends on the platform and the features DTS detects on it. For testing on QEMU while simulating a specific platform, DTS provides a platform simulation interface in the form of shell variables prefixed with TEST_ (see common-mock-func.sh for more information) that can be exported at startup. These variables configure features of the simulated platform, including the platform model, firmware version, EC presence, and so on.
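As a minimal sketch of how a test could configure the simulated platform before launching DTS: the variable names and values below are hypothetical placeholders for illustration only; the actual TEST_ variable names are defined in common-mock-func.sh.

```shell
#!/bin/sh
# Hypothetical TEST_ variable names -- check common-mock-func.sh for the
# real simulation interface exposed by DTS.
export TEST_BOARD_MODEL="ExamplePlatform"   # platform model to simulate
export TEST_FIRMWARE_VERSION="v0.9.0"       # firmware version the mock reports
export TEST_HAVE_EC="true"                  # simulate an embedded controller,
                                            # so the EC transition step runs

# An E2E test would export these and then start DTS in QEMU; here we just
# show what the simulated platform looks like to the test.
printf 'Simulating %s, firmware %s, EC present: %s\n' \
    "$TEST_BOARD_MODEL" "$TEST_FIRMWARE_VERSION" "$TEST_HAVE_EC"
```

The point is that each E2E test should export the full set of variables matching its target platform, so that DTS takes the same branch through the workflow (EC transition included) as it would on the real hardware.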
The only remaining problem is how to prove that the workflow exercised by an E2E test matches the same DTS workflow on real hardware. The only way I see right now is to develop the same E2E test on QEMU and on real hardware in parallel and compare their results. In that case, this issue depends on #654.
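The comparison step could be as simple as diffing the sequence of workflow steps each test run records. A sketch, assuming each harness writes one step name per line to a transcript file (file names and step names here are made up for illustration):

```shell
#!/bin/sh
# Hypothetical transcripts of the same E2E test on QEMU and on hardware.
qemu_log="qemu-install.steps"
hw_log="hw-install.steps"

# Sample data standing in for real harness output.
printf 'main-menu\ndeploy\nec-transition\nreboot\n' > "$qemu_log"
printf 'main-menu\ndeploy\nec-transition\nreboot\n' > "$hw_log"

# If the step sequences are identical, the QEMU test followed the same
# workflow as the hardware run; any diff output marks a divergence.
if diff -u "$qemu_log" "$hw_log"; then
    echo "workflows match"
else
    echo "workflows diverge"
fi
```

A run where QEMU skips the EC transition would show up immediately as a one-line diff, which is exactly the kind of mismatch this issue is about.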
DaniilKl changed the title from "Fixing tests to adhere to real workflows on real hardware." to "Fixing DTS E2E tests to adhere to real workflows on real hardware." on Jan 9, 2025.