
Add ability to automatically collect *.feature files #342

Open · wants to merge 1 commit into master
Conversation


@uriyyo uriyyo commented Dec 18, 2019

I'm currently using this awesome library, and I want to say thank you.

But I was confused when I started to work with pytest-bdd.

It is unclear that you have to register your *.feature files in a test script or use the scenarios function.

I think many of us want something like the behave library: you have *.feature files and step definitions, and that's all; no registration of *.feature files.

In this pull request, I want to add such a feature to pytest-bdd.

To enable auto collection of *.feature files, add the bdd_auto_collect option to pytest.ini:

[pytest]
bdd_auto_collect=true

When bdd_auto_collect is enabled, pytest will collect *.feature files using the pytest_collect_file hook and create a Python module for each of them. Basically, it creates dummy Python modules and fills them with features using the scenarios function.

Currently, this feature is not compatible with manual registration of *.feature files using scenario or scenarios: doing both will duplicate tests and run them twice.
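The collection flow described above could be sketched roughly like this. This is a hypothetical illustration, not the PR's actual code: should_collect and FeatureModule are invented names, and the real change may wire things up differently.

```python
# Hypothetical sketch of auto-collection via the pytest_collect_file hook.
# "should_collect" and "FeatureModule" are invented names for illustration.

def should_collect(file_name, auto_collect):
    # Collect only *.feature files, and only when the user opted in.
    return bool(auto_collect) and file_name.endswith(".feature")

def pytest_addoption(parser):
    # Registers the bdd_auto_collect ini option shown above.
    parser.addini("bdd_auto_collect", "auto-collect *.feature files",
                  type="bool", default=False)

def pytest_collect_file(file_path, parent):
    # pytest calls this hook for every file it sees during collection.
    if should_collect(file_path.name, parent.config.getini("bdd_auto_collect")):
        # FeatureModule would be a Collector subclass that wraps the
        # scenarios() machinery for this single feature file.
        return FeatureModule.from_parent(parent, path=file_path)
    return None
```

This mirrors pytest's documented pattern for collecting non-Python test files: return a Collector from pytest_collect_file for files you recognize, and None for everything else.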

@codecov-io

codecov-io commented Dec 18, 2019

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.87%. Comparing base (f3b92bd) to head (7845286).
Report is 515 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #342      +/-   ##
==========================================
+ Coverage   95.80%   95.87%   +0.06%     
==========================================
  Files          57       59       +2     
  Lines        2217     2253      +36     
  Branches      185      187       +2     
==========================================
+ Hits         2124     2160      +36     
  Misses         62       62              
  Partials       31       31              

☔ View full report in Codecov by Sentry.

@uriyyo uriyyo requested a review from youtux February 22, 2020 19:50
@uriyyo
Author

uriyyo commented Mar 30, 2020

Hi @youtux

This PR has been open since December 2019.
I thought it would be a great idea to have such a feature in pytest-bdd.
If you don't want this feature, I will delete this PR.

@youtux
Contributor

youtux commented Apr 19, 2020

I'm sorry for the long wait before my reaction.
It seems to me that this can be easily solved by having a test module that collects all the scenarios from the folder containing the feature files:

# tests/test_all.py
from pytest_bdd import scenarios

scenarios('features')  # this will traverse subdirectories as well

This is documented in https://github.com/pytest-dev/pytest-bdd#scenarios-shortcut

@uriyyo uriyyo force-pushed the master branch 3 times, most recently from 0bbddff to 6a80101 on April 19, 2020 16:02
@uriyyo
Author

uriyyo commented Apr 20, 2020

Auto-collecting *.feature files is only one of the features of this PR.

The main feature is that you can work with *.feature files like regular Python modules, so you can use pytest options such as --ignore and others. For instance, you can run the tests from a *.feature file by providing a path to it:

pytest temp.feature

Another example: suppose we have the following project structure:

features:
    __init__.py
    login.feature
    settings.feature

conftest.py

If we add a test_all.py file with the following content:

from pytest_bdd import scenarios

scenarios('features')

then collecting the tests gives this output:

<Package>
  <Module test_all.py>
    <Function test_login_to_site>
    <Function test_open_settings>

We can see that all tests are located in one file.

When the bdd_auto_collect option is set to true (and there is no test_all.py), collecting the tests gives this output:

<Package>
  <PytestBDDModule login.feature>
    <Function test_login_to_site>
  <PytestBDDModule settings.feature>
    <Function test_open_settings>

We used auto collection in our project, and I can say it's a cool feature. It simplifies working with feature files and makes it easy to group them. The main advantages are that you don't have to register feature files and that steps are correctly resolved from conftest.

Our project structure looked something like this:

features:
    login:
       conftest.py
       login.feature
    settings:
       conftest.py
       settings.feature
conftest.py

In the local conftest we had steps specific to a concrete feature; in the global conftest we had general steps. With such a structure and the auto collect feature, it is extremely easy to work with pytest-bdd.
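The two-level conftest layout described here could look like the sketch below, shown as one block with comments marking the two files. The step texts and function names are invented examples, not taken from the actual project.

```python
# features/conftest.py -- general steps shared by all features
from pytest_bdd import given, then

@given("I am logged in")
def logged_in_user():
    # hypothetical shared fixture/step available to every feature
    return {"user": "demo"}

# features/login/conftest.py -- steps specific to the login feature
from pytest_bdd import when

@when("I submit the login form")
def submit_login_form():
    # only features collected under features/login/ see this step
    pass
```

With auto collection, each *.feature file becomes a collected item at its own path, so pytest's normal conftest.py resolution picks the nearest step definitions automatically.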

@uriyyo
Author

uriyyo commented Apr 21, 2020

@youtux can you rerun the Travis CI builds? It looks like there are some problems with uploading the codecov reports, and I don't have the rights to do that.

@youtux
Contributor

youtux commented Apr 21, 2020

I see, it makes sense. @olegpidsadnyi do you agree on this?
I will restart the build, but it's potentially an issue with the codecov library.

@olegpidsadnyi
Contributor

@youtux I'm not sure about this mapping of the text files to py files. When we started on pytest-bdd that was what made it different from the others. Especially when the same feature file could be included by 2 or more scenarios or parametrized according to a different backend or environment.
I'd prefer your scenarios('features') way of doing things instead of a modality on the entire pytest env to collect or not to collect text files.

@uriyyo
Author

uriyyo commented Apr 21, 2020

It's up to the end user to choose how they want to organize their project.

The idea is to provide a more user-friendly API for pytest-bdd.
The idea for this PR was born because I heard from many of my friends that they wanted to use a BDD approach with pytest, but pytest-bdd didn't provide an easy way to work with feature files (and because of that they chose behave).

I could create a separate pytest plugin with this functionality, but I thought it would be a great idea to add it to pytest-bdd.

It's up to you to decide whether this feature will be included in pytest-bdd.

@youtux
Contributor

youtux commented Apr 21, 2020

That will still be possible of course, but I think it's a more "advanced" usage that many people just don't need.
And I see that the collection of feature files is better than the scenarios('catch_all') way, as it will create test items following the directory hierarchy, with the nice side effect of being in sync with the fixture overrides at each level (as shown by the use case of @uriyyo).

I think the two ways can co-exist, and I think we won't even need a flag to switch mode.
Let's assume that users have a directory structure like this:

project/
    src/
        .../
    features/
        a.feature
        b.feature
    tests/
        test_a.py
        test_b.py
        test_other.py

Running pytest tests will not collect any .feature file, as it's not part of that directory.
If users want to opt in to the new way, they can just move features/ under tests/ (and remove tests/test_a.py and tests/test_b.py) and call pytest tests. This would be the new directory structure:

project/
    src/
        ...
    tests/
        features/
            a.feature
            b.feature
        test_other.py

Of course this would require a major version bump, as users that had their features folder already under tests/ would see tests being invoked twice, and/or failing.

@olegpidsadnyi
Contributor

olegpidsadnyi commented Apr 21, 2020

@uriyyo the entry point to pytest tests is the test_X.py files.
What's wrong with that? You can place a function there, like scenarios, that would explode into the test items.

The advantage of this is respecting the pytest environment. Since the test_X.py file has a place in the folder structure, all the conftests and fixtures are respected.
How is an ini file setting or a command line switch friendlier than creating a Python file as your entry point? What if you need to run a particular test item using the path/to/test_X.py::test_item_function syntax?
Everything has its price: if you run a collect on a specific file or subfolder to get the test item names, here you would need to collect the entire project to find out the filename, which doesn't even exist.

@olegpidsadnyi
Contributor

@uriyyo pytest-bdd is pytest in the first place. There's no goal to compete with behave at all.

@olegpidsadnyi
Contributor

olegpidsadnyi commented Apr 21, 2020

@uriyyo On the other hand, if .feature files are underneath the tests dir, we could build support to collect them as the docs suggest: https://docs.pytest.org/en/latest/example/nonpython.html

I don't like modality much. So if we let .feature files be collected, then I'd prefer a major release that is not compatible (to avoid the setting). And that would probably mean we need to clean up the tests of pytest-bdd itself.

@uriyyo
Author

uriyyo commented Apr 21, 2020

@olegpidsadnyi This PR was implemented using the docs you mentioned above 😃

@olegpidsadnyi @youtux It looks like these will be big changes to the project; what should we do next?

@youtux
Contributor

youtux commented Apr 21, 2020

I can think of some steps that have to be satisfied in order to have this feature:

  • Remove the flag
  • Decide if we want to support (and not break) projects that have feature files under their tests/ folder.
    • If we want to support them, then we need to disable scenarios that were automatically collected by this feature. Maybe this is not even necessary. We could collect only features that are named like test_*.feature, and ignore the rest. This should have minimal impact on users, as I don't expect many users to name their feature files like that already.
    • If we don't want to support them, we have to clearly mention it in the changelog (it would be a major version bump anyway).
      I still don't know how much effort option 1 would require, but I would opt for it if it's not too much.
  • Test this feature more comprehensively
    • Find a way to override fixtures for a specific feature file (cannot just use conftest.py, as it would affect all the feature files in the same folder).
    • Test that fixture override at each folder level works as expected
    • Test the expected name of collected items
    • Test that we can run pytest on a specific collected item (e.g. pytest tests/a.feature::test_a_scenario)
    • Test json report
    • Many other tests that may break when using this feature?
  • Update the documentation

@olegpidsadnyi
Contributor

I don't mind making a breaking change. But of course then we need to rewrite the docs a bit.

@uriyyo
Author

uriyyo commented Apr 21, 2020

I will update this PR to meet all the requirements mentioned by @youtux.

But one last question: should we keep supporting registration of feature files using scenarios?

@Karamann

@uriyyo @olegpidsadnyi this is a very nice feature, I would be glad to see it merged some time!

@youtux
Contributor

youtux commented Oct 20, 2021

I'm still trying to figure out how to make it possible to override a fixture (in the new approach) for a specific .feature file; overriding the fixture in conftest.py would impact other feature files within the same directory, which is not desired.

Option 1

One possibility is that feature files that need specific pytest overrides (fixtures, marks, hooks e.g. pytest_collection_modifyitems) can be placed under their own directory and have a conftest.py that does it for them:

project/
    src/
        ...
    tests/
        features/
            conftest.py  # fixtures for a, b and c
            test_a.feature
            test_b.feature
            c/
                test_c.feature
                conftest.py  # fixtures override specific to c

Option 2

An alternative is to provide also a test_*.py whose name matches the feature file, and where we can put custom fixtures and hooks:

project/
    src/
        ...
    tests/
        features/
            conftest.py  # fixtures for a, b and c
            test_a.feature
            test_b.feature
            test_c.feature
            test_c.py  # fixtures override specific to c

I'm not sure, though, how feasible and easy to implement this second option is.

Maybe we can go with Option 1 at first, since it shouldn't require any implementation, and add Option 2 in the future if we deem it necessary.
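Option 1 is just standard pytest fixture overriding: redefining a fixture with the same name in a nested conftest.py shadows the parent definition for everything collected below that directory. A minimal sketch, with both files shown in one block and the fixture name "user" being an invented example:

```python
import pytest

# tests/features/conftest.py -- shared default for test_a, test_b and test_c
def default_user():
    # plain helper so the values are easy to inspect outside a pytest run
    return {"name": "generic", "admin": False}

@pytest.fixture
def user():
    return default_user()

# tests/features/c/conftest.py -- override that applies only under c/
def c_user():
    return {"name": "c-specific", "admin": True}

@pytest.fixture(name="user")
def user_override():
    # same fixture name as the parent conftest, so scenarios collected
    # from c/test_c.feature see this value instead
    return c_user()
```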

@uriyyo
Author

uriyyo commented Oct 20, 2021

I think Option 1 will be perfect for now.

I will update the PR implementation to match what you described.

"""
Automatically collect *.feature files and create test modules for them.
"""
if CONFIG_STACK[-1].getini("bdd_auto_collect") and path.ext == ".feature":
Member

Would it be possible to support line number(s) too?

# run scenario defined on line 2
$ pytest features/foo.feature:2
# run scenarios defined on line 2 and 12
$ pytest features/foo.feature:2:12
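Parsing such an argument could be sketched as follows; split_spec is a hypothetical helper for illustration, not something this PR implements:

```python
def split_spec(spec):
    # "features/foo.feature:2:12" -> ("features/foo.feature", [2, 12])
    # A sketch only: real code would also need to handle Windows drive
    # letters, which contain a ":" of their own.
    path, *nums = spec.split(":")
    return path, [int(n) for n in nums]
```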

@youtux
Contributor

youtux commented Oct 20, 2021

Thanks, but I think I will eventually want to do a big rewrite to make it behave this way, so it's quite possible this PR won't be merged.

@uriyyo
Author

uriyyo commented Oct 21, 2021

Let me know if you need help :)

@elchupanebrej

elchupanebrej commented Oct 17, 2022

> (Quoting @youtux's comment of Oct 20, 2021 above, describing Option 1 and Option 2 for per-feature fixture overrides.)

Feature files could be stored in a separate folder tree. Symlinks could be used (since git 2.32) to place them near the appropriate conftest.py files. So if some fixture needs to be overridden, a separate conftest.py should be created in a nested folder (the usual pytest way to override fixtures). Please check the proposed project layout: https://github.com/elchupanebrej/pytest-bdd-ng#features-autoload .
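The symlink layout could look like the sketch below; the paths are invented examples, and the feature files' canonical copies stay in one tree while symlinks place them next to the conftest.py that overrides their fixtures.

```shell
# Canonical feature files live in one tree...
mkdir -p features tests/login
touch features/login.feature
# ...and a symlink places each one beside the conftest.py with its
# feature-specific fixture overrides.
touch tests/login/conftest.py
ln -sf ../../features/login.feature tests/login/login.feature
```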

@MartinB134

MartinB134 commented Nov 21, 2023

Is there an alternative to this? I would really like to use this feature. I thought auto collection was the main advantage of behave, but if we had this in pytest-bdd it would be a giant leap.
See the comment from Automation Panda:
https://automationpanda.com/2018/10/22/python-testing-101-pytest-bdd/
