Avocado Coverage.py updates #5944

Merged · 4 commits · Jun 10, 2024
Changes from all commits
2 changes: 2 additions & 0 deletions .coveragerc
@@ -0,0 +1,2 @@
+[run]
+source = avocado/, optional_plugins/
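
For context, a minimal sketch of how this new file is consumed. `Coverage()` reads `.coveragerc` from the working directory by default, so the `source` filter above applies to any process that starts measurement from the project root; the comments below are assumptions about that setup, not part of the diff:

    # Sketch only: assumes coverage is installed and the working directory
    # is the project root, where .coveragerc lives.
    from coverage import Coverage

    cov = Coverage()   # config_file defaults to True, so .coveragerc is read
    cov.start()
    # ... only code under avocado/ and optional_plugins/ is recorded here;
    # stdlib and third-party packages fall outside "source" and are ignored ...
    cov.stop()
    cov.save()         # writes a .coverage data file
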
6 changes: 4 additions & 2 deletions .github/workflows/ci.yml
@@ -132,9 +132,11 @@ jobs:
       - name: Run pre script
         run: ./cc-test-reporter before-build
       - name: Run script
-        run: make develop && ./selftests/run_coverage
+        run: |
+          python setup.py develop --user
+          ./selftests/run_coverage
       - name: Run post script
-        run: ./cc-test-reporter after-build
+        run: ./cc-test-reporter after-build --debug
       - run: echo "🥑 This job's status is ${{ job.status }}."


16 changes: 8 additions & 8 deletions avocado/plugins/runners/avocado_instrumented.py
@@ -98,21 +98,21 @@ def _run_avocado(runnable, queue):

         messages.start_logging(runnable.config, queue)

-        if "COVERAGE_RUN" in os.environ:
-            from coverage import Coverage
-
-            coverage = Coverage()
-            coverage.start()
-
         instance = loader.load_test(test_factory)
         early_state = instance.get_state()
         early_state["type"] = "early_state"
         queue.put(early_state)
-        instance.run_avocado()

+        # running the actual test
         if "COVERAGE_RUN" in os.environ:
-            coverage.stop()
+            from coverage import Coverage
+
+            coverage = Coverage(data_suffix=True)
+            with coverage.collect():
+                instance.run_avocado()
             coverage.save()
+        else:
+            instance.run_avocado()

         state = instance.get_state()
         fail_reason = state.get("fail_reason")
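
The shape of this change, as a standalone sketch: `COVERAGE_RUN` is an environment variable Coverage.py exports to the processes it measures, so the runner only self-instruments when a coverage run is already in progress, and `Coverage.collect()` (available in recent Coverage.py releases; the docs change below pins >=7.5) wraps start()/stop() as a context manager. The `run_test` callable here is a hypothetical stand-in; the python_unittest runner below applies the same pattern:

    import os

    def measured_run(run_test):
        # run_test stands in for instance.run_avocado() / runner.run(suite)
        if "COVERAGE_RUN" in os.environ:  # present when Coverage.py drives the run
            from coverage import Coverage

            # data_suffix=True gives the data file a unique per-process name
            # (.coverage.<host>.<pid>.<random>), so concurrently spawned test
            # processes do not overwrite each other's results; `coverage
            # combine` merges them afterwards.
            coverage = Coverage(data_suffix=True)
            with coverage.collect():      # start()/stop() as a context manager
                run_test()
            coverage.save()
        else:
            run_test()
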
11 changes: 10 additions & 1 deletion avocado/plugins/runners/python_unittest.py
@@ -109,7 +109,16 @@ def _run_unittest(cls, module_path, module_class_method, queue):
            return

        runner = TextTestRunner(stream=stream, verbosity=0)
-        unittest_result = runner.run(suite)
+        # running the actual test
+        if "COVERAGE_RUN" in os.environ:
+            from coverage import Coverage
+
+            coverage = Coverage(data_suffix=True)
+            with coverage.collect():
+                unittest_result = runner.run(suite)
+            coverage.save()
+        else:
+            unittest_result = runner.run(suite)

        unittest_result_entries = None
        if len(unittest_result.errors) > 0:
18 changes: 5 additions & 13 deletions docs/source/guides/writer/chapters/integrating.rst
@@ -9,24 +9,16 @@ like which parts are being exercised by the tests, may help develop new tests.

 `Coverage.py`_ is a tool designed for measuring code coverage of Python
 programs. It runs monitoring the program's source, taking notes of which
-parts of the code have been executed.
-
-It is possible to use Coverage.py while running Avocado Instrumented tests.
-As Avocado spawn sub-processes to run the tests, the `concurrency` parameter
-should be set to `multiprocessing`.
+parts of the code have been executed. It is possible to use Coverage.py while
+running Avocado Instrumented tests or Python unittests.

 To make the Coverage.py parameters visible to other processes spawned by
-Avocado, create the ``.coveragerc`` file in the project's root folder.
+Avocado, create the ``.coveragerc`` file in the project's root folder and set
+the ``source`` parameter to your system under test.
 Following is an example::

     [run]
-    concurrency = multiprocessing
     source = foo/bar
-    parallel = true

-According to the documentation of Coverage.py, when measuring coverage in
-a multi-process program, setting the `parallel` parameter will keep the data
-separate during the measurement.
-
 With the ``.coveragerc`` file set, one possible workflow to use Coverage.py to
 measure Avocado tests is::
@@ -43,6 +35,6 @@ coverage measurement.
 For other options related to `Coverage.py`_, visit the software documentation.

 .. note:: Currently coverage support is limited working only with
-          `ProcessSpawner` (the default spawner).
+          `ProcessSpawner` (the default spawner) and Coverage.py>=7.5.

 .. _Coverage.py: https://coverage.readthedocs.io/
2 changes: 1 addition & 1 deletion requirements-dev.txt
@@ -7,7 +7,7 @@ pylint==2.17.2
 autopep8==1.6.0
 black==24.3.0

-coverage==5.5
+coverage==7.5

 # To run make check
 psutil==5.9.5
16 changes: 0 additions & 16 deletions selftests/run

This file was deleted.

4 changes: 2 additions & 2 deletions selftests/run_coverage
@@ -15,8 +15,8 @@ echo "Using coverage utility: $COVERAGE"

 $COVERAGE erase
 rm -f .coverage.*
-RUNNING_COVERAGE=1 AVOCADO_CHECK_LEVEL=1 UNITTEST_AVOCADO_CMD="$COVERAGE run -p --include 'avocado/*,optional_plugins/*' $PYTHON -m avocado" $COVERAGE run -p --include "avocado/*,optional_plugins/*" ./selftests/run
-$COVERAGE combine .coverage*
+$COVERAGE run selftests/check.py --skip=static-checks
+$COVERAGE combine
 echo
 $COVERAGE report -m --include "avocado/core/*"
 echo
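
For reference, a rough Python-API equivalent of the `combine` and `report` steps above, a sketch under the assumption that the per-process `.coverage.*` files sit in the current directory:

    from coverage import Coverage

    cov = Coverage()
    cov.combine()    # merge .coverage.* data files, like `coverage combine`
    cov.save()       # persist the merged data as .coverage
    # like `coverage report -m --include "avocado/core/*"`
    cov.report(include=["avocado/core/*"], show_missing=True)
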