Testing Guide
For testing please follow our general best practices:

- Tests should have a clear goal that should also be documented.
- Tests have to be classified into different integration levels.
- Tests should follow a clear naming convention.
- Automated tests need to properly assert the result of the tested operation(s) in a reliable way. E.g. avoid things like assertEquals(42, service.getAllEntities()) or, even worse, tests that have no assertion at all (it might still be reasonable to test that an entire configuration setup, such as the spring config of the application, is intact).
- Tests need to be independent of each other. Never write test-cases or tests (in Java @Test methods) that depend on another test being executed before.
- Use assertion frameworks like AssertJ to write readable and maintainable tests that also provide good out-of-the-box failure reports in case a test fails.
- For easy understanding divide your test into three sections and comment them with //given, //when and //then (see the sketch after this list).
- Plan your tests and test data management properly before implementing.
- Instead of focusing too strongly on test coverage, better ensure you have covered your critical core functionality properly and review the code including the tests.
- Test code shall NOT be seen as second-class code. Consider design, architecture and code-style for your test code as well, but do not over-engineer it.
- Test automation is good but should be considered in relation to cost per use. Creating full coverage via automated system tests can produce a massive amount of test-code that can turn into a huge maintenance burden. Always consider all aspects including product life-cycle, criticality of the use-cases to test, and variability of the aspect to test (e.g. UI, test-data).
- Use continuous integration and establish that the entire team wants to have clean builds and running tests.
- Do not use inheritance for cross-cutting testing functionality: Sometimes cross-cutting functionality like opening/closing a database connection or code to fill a database with test data is put into a common parent class like AbstractTestCase that all test classes need to inherit from. Starting with a few functions, these classes tend to grow to the point where they become real maintenance nightmares. A good place for this kind of code is the JUnit @Rule mechanism. In general, favor delegation over inheritance. There are reasons why frameworks like JEE or JUnit do not use inheritance for technical features - and for the same reasons project test frameworks should not do it either.
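A minimal sketch of the given/when/then structure with AssertJ assertions; the Table class and TableState enum are illustrative placeholders, not part of any OASP API:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;

public class TableTest {

  @Test
  public void shouldMarkFreeTableAsOccupied() {
    // given - Table and TableState are hypothetical domain types
    Table table = new Table(1L, TableState.FREE);
    // when
    table.occupy();
    // then
    assertThat(table.getState()).isEqualTo(TableState.OCCUPIED);
  }
}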
Due to the inconsistent use and understanding of mocks/stubs/fakes/dummies for any kind of interface replaced for testing purposes, we briefly want to establish a common understanding of the different types of test doubles. We mainly stick to the definitions of Gerard Meszaros, who introduced the term test doubles as the generic term for mocks/stubs/fakes/dummies/spies. Another interesting discussion about stubs vs. mocks has been published by Martin Fowler, which focuses more on the differences between the two. A short summary (by Martin Fowler):
- Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
- Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in-memory database is a good example).
- Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.
- Mocks are objects pre-programmed with expectations, which form a specification of the calls they are expected to receive.
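To make the stub/mock distinction concrete, here is a minimal sketch in Java, picking up the email gateway example from above; the MailGateway interface and the message content are hypothetical:

import static org.assertj.core.api.Assertions.assertThat;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;
import org.mockito.Mockito;

public class TestDoubleExamples {

  // hypothetical interface to be replaced by a test double
  interface MailGateway {
    void send(String message);
  }

  // Stub: records what was "sent" so the test can verify the resulting state.
  static class MailGatewayStub implements MailGateway {
    final List<String> sent = new ArrayList<>();
    @Override
    public void send(String message) { this.sent.add(message); }
  }

  @Test
  public void stubAllowsStateBasedVerification() {
    MailGatewayStub stub = new MailGatewayStub();
    stub.send("order confirmed");
    assertThat(stub.sent).containsExactly("order confirmed");
  }

  @Test
  public void mockAllowsBehaviorBasedVerification() {
    MailGateway mock = Mockito.mock(MailGateway.class);
    mock.send("order confirmed");
    // behavior-based: verify the interaction instead of the resulting state
    Mockito.verify(mock).send("order confirmed");
  }
}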
What both authors do not cover is the applicability of the different concepts. We try to give some examples, which should make it somewhat clearer:
Best practices for using stubs in applications:

- A good way to replace small to medium-sized boundary systems whose impact (e.g. latency) should be ignored while performing load and performance tests of the application under development.
- As a stub implementation relies on state-based verification, there is the threat that test developers partially reimplement the state transitions of the replaced code. This quickly leads to a maintenance black hole, so better use mocks to assert the expected behavior on the interface level.
- Do NOT use stubs as the basis for a large amount of test cases: due to the state-based verification of stubs, test developers will enrich the stub implementation until it becomes a large monster with an ever-growing hunger for maintenance effort.
Best practices for using mocks in applications:

- Replace dependencies of your system-under-test (SUT) that are not needed, to minimize the application context your component framework has to start.
- Replace dependencies of your SUT to steer the control flow under test without having to establish all the context parameters needed to reach that control flow.
- Remember: Not everything has to be mocked! Especially on lower levels of tests like isolated module tests you can be lured into a mocking delusion, where you end up with a hundred lines of code mocking the whole context and five lines executing the test and verifying the mocks' behavior. Always keep the benefit-cost ratio in mind when implementing tests using mocks.
If you need to mock remote connections such as HTTP servers, WireMock offers easy-to-use functionality. For a full description see the homepage or the GitHub repository. WireMock can be used either as a JUnit Rule, in Java outside of JUnit, or as a standalone process. The mocked server can be configured to respond to specific requests in a given way via a fluent Java API, JSON files, or JSON over HTTP. An integration with JUnit can look as follows:
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;

import org.junit.Rule;
import org.junit.Test;

import com.github.tomakehurst.wiremock.junit.WireMockRule;

public class WireMockOfferImport {

  @Rule
  public WireMockRule mockServer = new WireMockRule(wireMockConfig().dynamicPort());

  @Test
  public void requestDataTest() throws Exception {
    int port = this.mockServer.port();
    ...
  }
}
This creates a server on a randomly chosen free port on the running machine. You can also specify the port to be used if desired. Beyond that, there are several options to further configure the server, including HTTPS, proxy settings, file locations, logging and extensions.
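For illustration, a configuration sketch using the fluent builder; the port numbers and directory are illustrative values, not recommendations:

@Rule
public WireMockRule mockServer = new WireMockRule(wireMockConfig()
    .port(8089)                 // fixed HTTP port instead of a dynamic one
    .httpsPort(8443)            // additionally serve HTTPS
    .usingFilesUnderDirectory("src/test/resources")); // root folder containing __files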
// requires: import static com.github.tomakehurst.wiremock.client.WireMock.*;
@Test
public void requestDataTest() throws Exception {
  this.mockServer.stubFor(get(urlEqualTo("/new/offers")).withHeader("Accept", equalTo("application/json"))
      .withHeader("Authorization", containing("Basic")).willReturn(aResponse().withStatus(200).withFixedDelay(1000)
          .withHeader("Content-Type", "application/json").withBodyFile("/wireMockTest/jsonBodyFile.json")));
}
This stubs the URL localhost:port/new/offers to respond with a status 200 message containing a header (Content-Type: application/json) and a body with the content given in jsonBodyFile.json, if the request matches several conditions: it has to be a GET request to ../new/offers with the two given header properties.
Note that by default files are located in src/test/resources/__files/. When using only one WireMock server you can omit the this.mockServer before the stubFor call (static method).
You can also add a fixed delay to the response, or a processing delay with WireMock.addRequestProcessingDelay(time), in order to test for timeouts.
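A sketch of such a timeout test, assuming the WireMockRule from the example above; the /slow URL, the 1-second read timeout, and the use of Spring's RestTemplate are illustrative choices:

@Test(expected = ResourceAccessException.class)
public void timeoutTest() {
  this.mockServer.stubFor(get(urlEqualTo("/slow"))
      .willReturn(aResponse().withStatus(200).withFixedDelay(5000)));

  // hypothetical client setup: give up after 1s while the stub answers after 5s
  SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
  factory.setReadTimeout(1000);
  new RestTemplate(factory)
      .getForObject("http://localhost:" + this.mockServer.port() + "/slow", String.class);
}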
WireMock can also respond with different corrupted messages to simulate faulty behaviour.
// requires: import com.github.tomakehurst.wiremock.http.Fault;
@Test(expected = ResourceAccessException.class)
public void faultTest() {
  this.mockServer.stubFor(get(urlEqualTo("/fault")).willReturn(aResponse()
      .withFault(Fault.MALFORMED_RESPONSE_CHUNK)));
  ...
}
A GET request to ../fault returns an OK status header, then garbage, and then closes the connection.
There are many discussions about the right level of integration for test automation. Sometimes it is better to focus on small, isolated modules of the system - whatever a "module" may be. In other cases it makes more sense to test integrated groups of modules. Because there is no universal answer to this question, OASP only defines a common terminology for what could be tested. Each project must make its own decision where to put the focus of test automation. There is no worldwide accepted terminology for the integration levels of testing. In general we consider ISTQB. However, with a technical focus on test automation we want to be more precise.
The following picture shows a simplified view of an application based on the OASP reference architecture. We define four integration levels that are explained in detail below. The boxes in the picture contain parenthesized numbers. These numbers depict the lowest integration level a box belongs to. Higher integration levels also contain all boxes of lower integration levels. When writing tests for a given integration level, related boxes with a lower integration level must be replaced by test doubles or drivers.

(Figure: integration levels within the OASP reference architecture)
The main difference between the integration levels is the amount of infrastructure needed to test them. The more infrastructure you need, the more bugs you will find, but the more unstable and the slower your tests will be. So each project has to make a trade-off between the pros and cons of including much infrastructure in tests and has to select the integration levels that fit the project best.
Consider that more infrastructure does not automatically lead to better bug-detection. There may be bugs in your software that are masked by bugs in the infrastructure. The best way to find those bugs is to test with very little infrastructure.
External systems do not belong to any of the integration levels defined here. OASP does not recommend involving real external systems in test automation. This means they have to be replaced by test doubles in automated tests. An exception may be external systems that are fully under the control of the own development team.
The following chapters describe the four integration levels.
The goal of an isolated module test is to provide fast feedback to the developer. Consequently, isolated module tests must not have any interaction with the client, the database, the file system, the network, etc.
An isolated module test tests a single class or a small set of classes in isolation. If such classes depend on other components, external resources, etc., these shall be replaced with a test double.
For an example see here.
A component test aims to test components or component parts as a unit. These tests typically run with a (light-weight) infrastructure such as spring-boot-starter-test and can access resources such as a database (e.g. for DAO tests). Further, no remote communication is intended here. Access to external systems shall be replaced by a test double.
A subsystem test runs against the external interfaces (e.g. HTTP service) of the integrated subsystem. In OASP4J the server (JEE application) is the subsystem under test. The tests act as a client (e.g. service consumer) and the server has to be integrated and started in a container.
Subsystem tests of the client subsystem are described in the OASP4JS-Wiki.
If you are using spring-boot, you should use spring-boot-starter-test as the lightweight and fast testing infrastructure that is already shipped with oasp4j-test. In case you have to use a full-blown JEE application server, we recommend using Arquillian. To get started look here.
Do not confuse a subsystem test with a system integration test. A system integration test validates the interaction of several systems where we do not recommend test automation.
A system test has the goal to test the system as a whole against its official interfaces such as its UI or batches. The system itself runs as a separate process in a way close to a regular deployment. Only external systems are simulated by test doubles.
OASP only gives advice for automated system tests. In nearly every project there will be manual system tests, too. These manual system tests are out of scope here.
OASP4J defines category interfaces that shall be used as JUnit Categories. OASP4J also provides abstract base classes that you may extend in your test-cases if you like.
OASP4J further pre-configures the maven build to only run integration levels 1-2 by default (e.g. for fast feedback in continuous integration). It offers the profiles subsystemtest (1-3) and systemtest (1-4). In your nightly build you can simply add -Psystemtest to run all tests.
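On the command line this amounts to the following (profile names as defined above):

mvn test                  # integration levels 1-2 (default)
mvn test -Psubsystemtest  # integration levels 1-3
mvn test -Psystemtest     # integration levels 1-4, e.g. in the nightly build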
This section introduces how to implement tests on the different levels with the given OASP infrastructure and the proposed frameworks.
In OASP4J you can extend the abstract class ModuleTest to basically get access to assertions. To isolate the class under test from its dependencies and external services, one needs to provide mocks for them. As the technology stack recommends, we use the Mockito framework for this. The following example shows how to integrate Mockito in a JUnit test.
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Rule;
import org.junit.Test;
import org.mockito.junit.MockitoJUnit;
import org.mockito.junit.MockitoRule;

public class StaffmanagementImplTest extends ModuleTest {

  @Rule
  public MockitoRule rule = MockitoJUnit.rule();

  @Test
  public void testFindStaffMember() {
    ...
  }
}
Note that the test class does not use the @SpringApplicationConfiguration annotation. In a module test one does not use the whole application. The JUnit rule is the best solution to get all the needed functionality of Mockito. Static imports are a convenient option to enhance readability within Mockito tests.
You can define mocks with the @Mock annotation or the mock(*.class) call. To inject the mocked objects into your class under test you can use the @InjectMocks annotation. This automatically uses the setters of StaffmanagementImpl to inject the defined mocks into the class under test (CUT) when a setter is available. In this case the beanMapper and the staffMemberDao are injected. Of course it is possible to do this manually if you need more control.
@Mock
private BeanMapper beanMapper;
@Mock
private StaffMemberEntity staffMemberEntity;
@Mock
private StaffMemberEto staffMemberEto;
@Mock
private StaffMemberDao staffMemberDao;
@InjectMocks
StaffmanagementImpl staffmanagementImpl = new StaffmanagementImpl();
The mocked objects do not provide any functionality yet. To define what happens on a method call on a mocked dependency in the CUT, one can use when(condition).thenReturn(result). In this case we want to test findStaffMember(Long id) in the StaffmanagementImpl:
public StaffMemberEto findStaffMember(Long id) {
return getBeanMapper().map(getStaffMemberDao().find(id), StaffMemberEto.class);
}
In this simple example one has to stub two calls on the CUT, as you can see below. For example, the method call staffMemberDao.find(id) is stubbed to return staffMemberEntity, which is itself defined as a mock.
//given
long id = 1L;
Class<StaffMemberEto> targetClass = StaffMemberEto.class;
when(this.staffMemberDao.find(id)).thenReturn(this.staffMemberEntity);
when(this.beanMapper.map(this.staffMemberEntity, targetClass)).thenReturn(this.staffMemberEto);
//when
StaffMemberEto resultEto = this.staffmanagementImpl.findStaffMember(id);
//then
assertThat(resultEto).isNotNull();
assertThat(resultEto).isEqualTo(this.staffMemberEto);
After the test method call one can verify the expected results. Mockito can also check whether a mocked method was indeed called. This can be done using Mockito verify. Note that it adds no value to verify method calls that are needed to reach the asserted result anyway. Call verification can be useful e.g. when you want to assure that statistics are written out without actually testing them.
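A sketch for the example above (verify and times are static imports from org.mockito.Mockito; the statisticsService is a hypothetical mock):

// assert that the DAO was queried exactly once with the given id
verify(this.staffMemberDao).find(id);
// useful for side effects that do not influence the asserted result, e.g.:
// verify(this.statisticsService, times(1)).recordLookup(id);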
In order to implement a component test one can extend the ComponentTest class:
@SpringBootTest(classes = { SpringBootApp.class })
@WebAppConfiguration
public class TablemanagementTest extends ComponentTest { ... }
Note that a component test uses parts of the infrastructure given by the application. In this case the SpringBootApp is started via the annotation. The @WebAppConfiguration may be necessary if a WebApplicationContext is needed.
In the restaurant example one needs login credentials in order to execute actions. The TestUtil helper class is available to provide this functionality. If you, for example, need permission to save a table and find an offer, you would use the following procedure in the test:
@Inject
private DbTestHelper dbTestHelper;
@Before
public void setUp() {
TestUtil.login("waiter", PermissionConstants.SAVE_ORDER_POSITION, PermissionConstants.SAVE_ORDER,
PermissionConstants.FIND_TABLE, PermissionConstants.FIND_ORDER, PermissionConstants.SAVE_TABLE,
PermissionConstants.FIND_OFFER);
this.dbTestHelper.setMigrationVersion("0002");
this.dbTestHelper.resetDatabase();
}
Of course one can add more permissions if needed. The "waiter" string does not mean that the login includes all permissions a waiter has; it is simply a textual representation for further handling. Only the explicitly given permissions following this string are granted.
In the @Before method the database is reset and migrated to a specific database version using Flyway, in order to provide a well-regulated and reproducible test data environment.
To provide a controlled environment for other tests, the user is logged out in the @After method via TestUtil.logout(), as sketched below.
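A minimal sketch of such an @After method (org.junit.After):

@After
public void tearDown() {
  TestUtil.logout();
}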
As an example let us go to the class Tablemanagement. When testing the method deleteTable() there are several scenarios that can happen and thus should be covered by tests.
First let us see the valid conditions to delete a table:

- One needs the permission PermissionConstants.DELETE_TABLE to delete a table,
- the table to delete needs to exist (the table with the given id has to be in the database), and
- the table to delete is required to be in state TableState.FREE.
Invalid conditions are: no credentials, the table does not exist, or the table is not free. Combining one invalid condition with otherwise valid conditions yields the following test cases. Note that failing actions throw exceptions that can be expected in a test method.
- The caller of the method does not have the required credentials:
@Test(expected = AccessDeniedException.class) public void testDeleteTableWithoutCredentials() {...}
- The caller has the required credentials but the table to be deleted is occupied:
@Test(expected = IllegalEntityStateException.class) public void testDeleteTableWithCredentialsButNotDeletable() {...}
- The caller has the required credentials but the table to be deleted does not exist:
@Test(expected = ObjectNotFoundUserException.class) public void testDeleteTableWithCredentialsNotExisting() {...}
- The caller has the required credentials and the table to be deleted exists and is free:
@Test public void testDeleteTableWithCredentials() {...}
This type of testing is known as equivalence class analysis. Note that this is a general practice and can be applied to every level of tests.
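As an illustration, the first case could look roughly as follows; a sketch assuming the component test setup shown above, where the tablemanagement field, the login arguments and the table id are hypothetical:

@Test(expected = AccessDeniedException.class)
public void testDeleteTableWithoutCredentials() {
  // given: a user that lacks PermissionConstants.DELETE_TABLE
  TestUtil.login("waiter", PermissionConstants.FIND_TABLE);
  // when: the call must be rejected by the security layer
  this.tablemanagement.deleteTable(101L);
  // then: AccessDeniedException is expected via the annotation
}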
OASP4J provides a simple test infrastructure to aid with the implementation of subsystem tests. It becomes available by simply subclassing AbstractRestServiceTest.java.
/*
* Basic configuration of a *RestServiceTest
*/
@RunWith(SpringRunner.class)
@TestPropertySource(properties = { "flyway.locations=filesystem:src/test/resources/db/tablemanagement" })
public class TablemanagementRestServiceTest extends AbstractRestServiceTest { ... }
The base class provides simple helper and configuration classes for different tasks via getters. Currently the following are supported:
- Resetting the in-memory database based on Flyway (see RestTestClientBuilder.java)
- Login and logout functionality (see SecurityTestHelper.java)

Additionally, a central point for Java-based bean configuration is available:

- Provision of beans in the test context only (see RestaurantTestConfig.java)
Java-based bean configuration can be turned on for a test by adding the @SpringBootTest annotation as shown in the following listing:
@SpringBootTest(classes = { RestaurantTestConfig.class,
SpringBootApp.class }, webEnvironment = WebEnvironment.RANDOM_PORT)
public abstract class AbstractRestServiceTest extends SubsystemTest { ... }
It is important to notice that the config class does not necessarily need an @Configuration annotation. In fact, by omitting this annotation the beans specified in the config class will be available solely to the subclasses of AbstractRestServiceTest.
In the TablemanagementRestServiceTest example the default login credentials are name="waiter" and password="waiter", as given in the application.properties file. If one needs other permissions, such as those of a "chief", it is possible to overwrite this login in the specific test:
@Test
public void testDeleteTable() {
getRestTestClientBuilder().setUser("chief");
getRestTestClientBuilder().setPassword("chief");
this.service = getRestTestClientBuilder().build(TablemanagementRestService.class);
...}
The base classes of the four test levels (SystemTest, SubsystemTest, ComponentTest, ModuleTest) are defined in the oasp4j-test project under the following fully qualified names:
io.oasp.module.test.common.base.SystemTest
io.oasp.module.test.common.base.SubsystemTest
io.oasp.module.test.common.base.ComponentTest
io.oasp.module.test.common.base.ModuleTest
These classes are annotated with JUnit's @Category annotation. There is a corresponding category for each test level. These categories are located in the oasp4j-test project as well, under the following fully qualified names:
io.oasp.module.test.common.api.category.CategorySystemTest
io.oasp.module.test.common.api.category.CategorySubsystemTest
io.oasp.module.test.common.api.category.CategoryComponentTest
io.oasp.module.test.common.api.category.CategoryModuleTest
We have assigned the categories to the base classes by adding @Category in the following way:
@Category(CategorySystemTest.class)
public abstract class SystemTest extends BaseTest { ... }
@Category(CategorySubsystemTest.class)
public abstract class SubsystemTest extends BaseTest { ... }
@Category(CategoryComponentTest.class)
public abstract class ComponentTest extends BaseTest { ... }
@Category(CategoryModuleTest.class)
public abstract class ModuleTest extends BaseTest { ... }
Now, how can we control the execution of different test levels and categories respectively?
In the pom.xml of the restaurant sample application you can define the following property:
<properties>
...
<oasp.test.excluded.groups>io.oasp.module.test.common.api.category.CategorySystemTest</oasp.test.excluded.groups>
...
</properties>
Here you can exclude any test level(s) by adding the fully qualified names of the corresponding categories of one or more levels, separated by commas.
E.g., to exclude CategorySubsystemTest and CategoryComponentTest write the following:
<properties>
...
<oasp.test.excluded.groups>io.oasp.module.test.common.api.category.CategorySubsystemTest,io.oasp.module.test.common.api.category.CategoryComponentTest</oasp.test.excluded.groups>
...
</properties>
So, if you now run tests using Maven (mvn test), any tests of the excluded categories are not executed.
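For illustration, such a property is typically consumed by the Surefire plugin roughly as follows; a sketch only, as the actual plugin configuration lives in the OASP parent pom:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludedGroups>${oasp.test.excluded.groups}</excludedGroups>
  </configuration>
</plugin>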
A deployment pipeline is a semi-automated process that gets software changes from version control into production. It contains several validation steps, e.g. automated tests of all integration levels. Because OASP4J should fit different project types - from agile to waterfall - it does not define a standard deployment pipeline. But we recommend defining such a deployment pipeline explicitly for each project and finding the right place in it for each type of test.
For that purpose, it is advisable to have a fast-running test suite that gives as much confidence as possible without needing too much time and infrastructure. This test suite should run at an early stage of your deployment pipeline. Maybe developers should even run it before they check in their code. Usually lower integration levels are more suitable for this test suite than higher integration levels.
Note that the deployment pipeline should always contain manual validation steps, at least manual acceptance testing. There may also be manual validation steps that have to be executed for special changes only, e.g. usability testing. Management and execution processes of those manual validation steps are currently not in the scope of OASP.
We are using tools (SonarQube/Jacoco) to measure the coverage of the tests. Please always keep in mind that the only reliable message of a code coverage of X% is that (100-X)% of the code is entirely untested. It says nothing about the quality of the tests or the software, though it often correlates with it.
This section covers test configuration in general without focusing on integration levels as in the first chapter.
Sometimes it can come in handy to provide different or differently configured bean implementations via CDI than those available in production. For example, when creating beans using @Bean-annotated methods they are usually configured within those methods. WebSecurityBeansConfig shows an example of such methods.
@Configuration
public class WebSecurityBeansConfig {
//...
@Bean
public AccessControlSchemaProvider accessControlSchemaProvider() {
// actually no additional configuration is shown here
return new AccessControlSchemaProviderImpl();
}
//...
}
AccessControlSchemaProvider allows to programmatically access data defined in some XML file, e.g. access-control-schema.xml. Now, one can imagine that it would be helpful if AccessControlSchemaProvider pointed to some file other than the default within a test class. That file could provide content that differs from the default.
The question is: how can I change the resource path of AccessControlSchemaProviderImpl within a test?
One very helpful solution is to use static inner classes.
Static inner classes can contain @Bean-annotated methods, and by placing them in the classes parameter of the @SpringBootTest(classes = { /* place class here*/ }) annotation, the beans returned by these methods are placed in the application context during test execution. Combining this feature with inheritance allows overriding methods defined in other configuration classes, as shown in the following listing where TempWebSecurityConfig extends WebSecurityBeansConfig. This relationship allows overriding public AccessControlSchemaProvider accessControlSchemaProvider(). Here we are able to configure the instance of type AccessControlSchemaProviderImpl before returning it (and, of course, we could also have used a completely different implementation of the AccessControlSchemaProvider interface). By overriding the method, the implementation of the super class is ignored; hence only the new implementation is called at runtime. Other methods defined in WebSecurityBeansConfig which are not overridden by the subclass are still dispatched to WebSecurityBeansConfig.
//... Other testing related annotations
@SpringBootTest(classes = { TempWebSecurityConfig.class })
public class SomeTestClass {
public static class TempWebSecurityConfig extends WebSecurityBeansConfig {
@Override
@Bean
public AccessControlSchemaProvider accessControlSchemaProvider() {
ClassPathResource resource = new ClassPathResource(locationPrefix + "access-control-schema3.xml");
AccessControlSchemaProviderImpl accessControlSchemaProvider = new AccessControlSchemaProviderImpl();
accessControlSchemaProvider.setAccessControlSchema(resource);
return accessControlSchemaProvider;
}
}
}
The following chapter of the Spring framework documentation explains this issue, but uses a slightly different way to obtain the configuration.
The following two sections describe two debugging approaches for tests. Tests are either run from within the IDE or from the command line using Maven.
Debugging with the IDE is as easy as ever. Even if you want to execute a SubsystemTest which needs a Spring context and a server infrastructure to run properly, you just set your breakpoints and click Debug As → JUnit Test. The test infrastructure will take care of initializing the necessary infrastructure - if everything is configured properly.
Please refer to the following two links to find a guide for debugging tests when running them from Maven.
In essence, you first have to start a test from the command line. Maven will halt just before the test execution and wait for your IDE to connect to the process. When a connection is established, the test will start and then pause at any breakpoint set in advance. The first link states that tests are started with the following command:
mvn -Dmaven.surefire.debug test
Although this is correct, it will run every test class in your project and - which is time consuming and mostly unnecessary - halt before each of these tests.
To counter this problem you can simply execute a single test class through the following command (here we execute the TablemanagementRestServiceTest from the restaurant sample application):
mvn -Dmaven.surefire.debug test -Dtest=TablemanagementRestServiceTest
It is important to notice that you first have to execute the Maven command in the corresponding submodule; e.g., to execute the TablemanagementRestServiceTest you first have to navigate to the core module's directory.
This documentation is licensed under the Creative Commons License (Attribution-NoDerivatives 4.0 International).