Testing

This document describes how to add new unit tests for the different components, and how to use the mockery tool.

This repository uses the testify library for unit testing, so in the majority of the _test.go files you will find the imports below:

import (
  "testing"
  "github.com/stretchr/testify/assert"
  "github.com/stretchr/testify/require"
  "github.com/stretchr/testify/mock"
)

More information about testify can be found in its documentation.
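
As a quick orientation (a minimal sketch, not code from this repository): require stops the test on the first failure, while assert records the failure and lets the test continue.

func TestExample(t *testing.T) {
  values := []int{1, 2, 3}
  require.Len(t, values, 3)     // abort the test if this precondition fails
  assert.Equal(t, 2, values[1]) // record a failure but keep running
  assert.Contains(t, values, 3) // independent checks still execute
}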

Additional libraries and tools used are:

  • mockery to generate the mocks for the defined interfaces.
  • sqlmock to test the gorm components without a database running.

Generating mocks

If you have added code which adds a new interface, you will want to generate mocks for it:

  • If you added a new directory, check that MOCK_DIRS in the mk/go-rules.mk file contains your new directory.
  • Generate the mocks by running: make generate-mock.
  • Now you can find the mocks under the base directory internal/test/mock.
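
For reference, this is how a generated mock is typically consumed in a test. A minimal sketch: TodoRepository, its package and the field names are illustrative, but On, Return and AssertExpectations are the standard testify mock API that mockery-generated mocks expose.

func TestCreateUsesRepository(t *testing.T) {
  // NewTodoRepository is the constructor mockery generates for the
  // hypothetical TodoRepository interface.
  repoMock := mock_repository.NewTodoRepository(t)
  todo := &model.Todo{}

  // Program the expected call and its canned return values.
  repoMock.On("Create", mock.Anything, todo).Return(todo, nil)

  // In a real test the mock is injected into the component under test;
  // it is called directly here only to show the API.
  result, err := repoMock.Create(context.Background(), todo)
  require.NoError(t, err)
  assert.Equal(t, todo, result)

  // Verify that every programmed expectation was called.
  repoMock.AssertExpectations(t)
}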

General tips

  • Use the vscode coverage tools to see the paths that are not covered, and take advantage of them by:
    • Testing from the inner dependencies to the outer ones:
      • the interactor, presenter, and repository packages tend to be the leaves of the dependency tree;
      • then the handlers (http and event) and the middleware;
      • then the routers and services;
      • finally the commands.
    • Testing from the earlier paths of a method to the deeper ones, so the coverage tool in vscode can point out the flows still pending coverage.
  • Use the mock objects generated by make generate-mock. This saves a lot of time by avoiding hand-written mock boilerplate.
  • Refactor if necessary to make the unit test simpler.

Running a container with the IQE shell

podman run -ti \
  -e ENV_FOR_DYNACONF=ephemeral \
  -e NAMESPACE="$( oc project -q )" \
  --entrypoint=/bin/bash \
  quay.io/cloudservices/iqe-tests:idmsvc

Launching integration tests on ephemeral

# IQE plugins are comma separated
IQE_PLUGINS="idm"
ENV_FOR_DYNACONF=clowder_smoke
bonfire deploy-iqe-cji \
  --plugins "${IQE_PLUGINS}" \
  --env "${ENV_FOR_DYNACONF}" \
  --cji-name idmsvc-backend \
  --namespace "$( oc project -q )" \
  idmsvc-backend
# see: bonfire deploy-iqe-cji --help

Or use the rule: make ephemeral-test-backend

Unit test for interactor

For each method to test, define a unit test with a table that covers the different inputs and the expected outputs and errors; the table allows gathering several situations to be validated in one place.

The unit tests on these components are about validating the data transformation.

Example:

type TestCaseGiven struct {
  Params *api_public.CreateTodoParams
  In     *api_public.Todo
  Out    *model.Todo
}
type TestCaseExpected struct {
  Err error
  Out *model.Todo
}
type TestCase struct {
  Name     string
  Given    TestCaseGiven
  Expected TestCaseExpected
}
testCases := []TestCase{...}
// Iterate over the test cases and check the output
component := NewTodoInteractor()
for _, testCase := range testCases {
  t.Log(testCase.Name)
  out := &model.Todo{}
  err := component.Create(testCase.Given.Params, testCase.Given.In, out)
  // Assert the expected result
  if testCase.Expected.Err != nil {
    assert.EqualError(t, err, testCase.Expected.Err.Error())
  } else {
    require.NoError(t, err)
    assert.Equal(t, testCase.Expected.Out, out)
  }
}

Unit test for repository

For these components, we check that the database operations launched are the expected ones for the given input data.

To check that, sqlmock is used. This allows running unit tests without a database up and running.

The boilerplate generated by the database layer is huge; to make it possible to mock the test execution, the SQL statement mocks are split out into internal/test/sql/<first_table>_sql.go, together with the helper functions that prepare each scenario.

// Prepare the SQL query mock
func PrepSqlSelectSomething(mock sqlmock.Sqlmock, withError bool, expectedErr error, ...) {
	// TIP A deterministic way takes more time than letting the
	// unit test just fail and copying the current statement.
	expectQuery := mock.ExpectQuery(regexp.QuoteMeta(`SELECT ... FROM <first_table> ...`)).
		WithArgs(
			data.OrgId,
			data.DomainUuid,
			1,
		)
	if withError {
		expectQuery.WillReturnError(expectedErr)
	} else {
		expectQuery.WillReturnRows(sqlmock.NewRows([]string{
			"id", "created_at", "updated_at", "deleted_at",

			...
		}).
			AddRow(
				domainID,
				data.CreatedAt,
				data.UpdatedAt,
				nil,

				...
			))
	}
}

// Name the function as the one we are testing to prepare the scenario
func FindByID(stage int, mock sqlmock.Sqlmock, expectedErr error, domainID uint, data *model.Domain) {
	for i := 1; i <= stage; i++ {
		switch i {
		case 1:
			PrepSqlSelectSomething(mock, WithPredicateExpectedError(i, stage, expectedErr), expectedErr, domainID, data)
		default:
			panic(fmt.Sprintf("scenario %d/%d is not supported", i, stage))
		}
	}
}

Finally, the unit test is reduced to calling the helper that prepares the scenario for the SQL mock, FindByID in this case. See internal/usecase/repository/domain_repository_test.go:

func (s *DomainRepositorySuite) TestFindByID() {
	t := s.T()
	r := &domainRepository{}
	s.mock.MatchExpectationsInOrder(true)

	// ... Prepare data: TIP use here the helpers at `internal/test/builder/model/`

	expectedErr := fmt.Errorf(`...`)
	test_sql.FindByID(1, s.mock, expectedErr, ...)
	domain, err := r.FindByID(s.Ctx, data.OrgId, data.DomainUuid)
	require.NoError(t, s.mock.ExpectationsWereMet())
	assert.EqualError(t, err, expectedErr.Error())
	assert.Nil(t, domain)
}

As the complexity grows, we can compose the helper scenarios as needed; that composition would match the same composition implemented in the repository layer.

Below is an example preparing a mock which involves a dynamic time.Time field (at: internal/usecase/repository/domain_repository.go):

s.mock.ExpectQuery(regexp.QuoteMeta(`INSERT INTO "ipas" ("created_at","updated_at","deleted_at","realm_name","realm_domains","id") VALUES ($1,$2,$3,$4,$5,$6) RETURNING "id"`)).
	WithArgs(
		data.IpaDomain.Model.CreatedAt,
		data.IpaDomain.Model.UpdatedAt,
		nil,

		data.IpaDomain.RealmName,
		data.IpaDomain.RealmDomains,
		data.IpaDomain.ID).
	WillReturnRows(sqlmock.NewRows([]string{"id"}).
		AddRow(data.IpaDomain.ID))

Sometimes we may need more tracing; in that case, update your configs/config.yaml file to set the "trace" level; this will print the SQL statements generated by gorm.
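
If you only need the extra tracing for a one-off run, gorm's log level can also be raised programmatically when opening the connection; a minimal sketch assuming the postgres driver (this is not the configs/config.yaml mechanism described above):

import (
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

// openVerbose opens the database with gorm logging raised to Info,
// which prints every generated SQL statement while the tests run.
func openVerbose(dsn string) (*gorm.DB, error) {
	return gorm.Open(postgres.Open(dsn), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Info),
	})
}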


One last tip when creating unit tests for the repository layer: it is easy to duplicate code very quickly while trying to cover all the different paths. Check whether the SQL statement you are mocking already exists, and reuse it.



Unit test for presenters

Presenters in this repository translate the resulting business model into the API output. They don't store any state, and gather a set of methods for the transformations. As with the interactors, we validate that the transformations and the errors returned are the expected ones.
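
A minimal sketch of such a table test, assuming a hypothetical TodoPresenter whose Get method maps a model.Todo into an api_public.Todo (the names, fields, and error message are illustrative, not the repository's actual presenter):

func TestGet(t *testing.T) {
  type TestCase struct {
    Name     string
    Given    *model.Todo
    Expected *api_public.Todo
    Err      error
  }
  testCases := []TestCase{
    {
      Name:     "nil input returns an error",
      Given:    nil,
      Expected: nil,
      Err:      fmt.Errorf("'todo' cannot be nil"),
    },
    // ... one entry per transformation we want to validate
  }
  p := NewTodoPresenter() // hypothetical constructor
  for _, testCase := range testCases {
    t.Log(testCase.Name)
    output, err := p.Get(testCase.Given)
    if testCase.Err != nil {
      assert.EqualError(t, err, testCase.Err.Error())
      assert.Nil(t, output)
    } else {
      require.NoError(t, err)
      assert.Equal(t, testCase.Expected, output)
    }
  }
}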

Unit test for a middleware

Isolated tests for the middlewares require setting up a basic echo instance with the middleware we want to test. To make things easier we define a helper function that does this by injecting the specified middleware, such as the code below:

func helperNewEcho(middleware echo_middleware.MiddlewareFunc) *echo.Echo {
	// This will return an echo instance ready with the configuration we need
	e := echo.New()
	h := func(c echo.Context) error {
		return c.String(http.StatusOK, "Ok")
	}
	e.Use(middleware)
	e.Add("GET", testPath, h)

	return e
}
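
For instance, each test can build its echo instance through the helper (the middleware constructor name below is illustrative):

// Inject the middleware under test into a minimal echo instance.
e := helperNewEcho(middleware.MyMiddleware())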

Now use that helper to create the use cases; the structure below is something general:

type TestCaseGiven struct {
  Method string
  Path string
  Body string
  Headers map[string]string
}
type TestCaseExpected struct {
  Code int
  Body string
}
type TestCase struct {
  Name     string
  Given    TestCaseGiven
  Expected TestCaseExpected
}

testCases := []TestCase{}

Now loop over your use cases and check that everything behaves as expected:

	for _, testCase := range testCases {
		res := httptest.NewRecorder()
		req := httptest.NewRequest(testCase.Given.Method, testCase.Given.Path, strings.NewReader(testCase.Given.Body))
		if testCase.Given.Headers != nil {
			for key, value := range testCase.Given.Headers {
				req.Header.Add(key, value)
			}
		}
		e.ServeHTTP(res, req)

		// Check expectations
		data, err := io.ReadAll(res.Body)
		require.NoError(t, err)
		assert.Equal(t, testCase.Expected.Code, res.Code)
		assert.Equal(t, testCase.Expected.Body, string(data))
	}

Testing client components

Client components represent the integration with third-party services that we consume through HTTP requests. The intention behind creating a specific package (and interface) for them is to provide a way to mock them in the handlers (see the next section).

How do the internals work? The basic idea is to start a service for the test which returns an expected status code and body for a given request; the client component we are developing receives that response, and we can check whether it behaves as expected for that pre-defined response.

To make life easier, some code has been created at ./internal/test/client/server.go which prepares an echo instance for this.
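
The same technique can also be reproduced with the standard library httptest package when the helper does not fit; a minimal sketch, assuming a hypothetical client whose only job is to GET a path and decode the response:

func TestClientGetStatus(t *testing.T) {
	// Start a fake third-party service that returns a canned response.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte(`{"status":"ok"}`))
	}))
	defer srv.Close()

	// Point the client under test at the fake service (hypothetical constructor).
	c := NewStatusClient(srv.URL)
	status, err := c.GetStatus(context.Background())
	require.NoError(t, err)
	assert.Equal(t, "ok", status)
}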

TODO Unit test for a handler

FIXME It turned out to be complicated to define a helper function that checks specific handlers without starting the whole service, so this part needs some refactoring.

See: internal/test/client/server.go

Smoke tests

For every new feature we want to create a set of successful tests. For every new resource we create a test suite (for instance, for the token we have SuiteToken).

We create a new file at internal/test/smoke, such as token_test.go.

We add a new test suite based on the SuiteBase type:

type SuiteToken struct {
	SuiteBase
}

SuiteBase includes the logic below:

  • Load the configuration.
  • Start/stop the services (API and metrics).
  • Generate a user and system XRHID for an arbitrary organization.

If we need to check custom responses that are not deterministic for the given input, or that are more complex, add the code below to use the BodyFunc field in a more comfortable way:

// BodyFuncTokenResponse is the function signature that wraps the custom body checks for a DomainRegTokenResponse
type BodyFuncTokenResponse func(t *testing.T, expect *public.DomainRegTokenResponse) error

// WrapBodyFuncTokenResponse allows implementing custom body expectations for the specific response type.
// expected is the specific BodyFuncTokenResponse for the DomainRegTokenResponse type.
// It returns a BodyFunc that wraps the generic expectation function.
func WrapBodyFuncTokenResponse(expected BodyFuncTokenResponse) BodyFunc {
	// To allow a generic interface for any body response type
	// we have to use `body []byte`; we cannot use `any` because
	// the response type is particular to the endpoint.
	// That means the input to the function is not a golang
	// structure; to let the tests be defined with less boilerplate,
	// every response type implements a wrapper function like
	// this one, which unmarshals the bytes and calls the more
	// specific custom body function.
	if expected == nil {
		return func(t *testing.T, body []byte) bool {
			return len(body) == 0
		}
	}
	return func(t *testing.T, body []byte) bool {
		// Unmarshal the response into the expected type
		var data public.DomainRegTokenResponse
		if err := json.Unmarshal(body, &data); err != nil {
			require.Fail(t, fmt.Sprintf("Error unmarshalling body:\n"+
				"error: %q",
				err.Error(),
			))
			return false
		}
		// Run the body expectation on the unmarshalled data
		if err := expected(t, &data); err != nil {
			require.Fail(t, fmt.Sprintf("Error in body response:\n"+
				"error: %q",
				err.Error(),
			))
			return false
		}
		return true
	}
}

Now you can define methods that fit BodyFuncTokenResponse and use them in the BodyFunc field by calling WrapBodyFuncTokenResponse.

Define your test suite by adding every successful request, for instance:

func (s *SuiteToken) TestToken() {
	// Prepare the tests
	testCases := []TestCase{
		{
			Name: "TestToken",
			Given: TestCaseGiven{
				XRHIDProfile: XRHIDUser,
				Method: http.MethodPost,
				URL:    DefaultBaseURL + "/domains/token",
				Header: http.Header{
					"X-Rh-Insights-Request-Id": {"test_token"},
					"X-Rh-Identity":            {xrhidEncoded},
				},
				Body: public.DomainRegTokenRequest{
					DomainType: "rhel-idm",
				},
			},
			Expected: TestCaseExpect{
				StatusCode: http.StatusOK,
				Header: http.Header{
					"X-Rh-Insights-Request-Id": {"test_token"},
					"X-Rh-Identity":            nil,
				},
				BodyFunc: WrapBodyFuncTokenResponse(func(t *testing.T, body *public.DomainRegTokenResponse) error {
					// It allows keeping the expectation checks closer
					// to the test context.
					assert.NotEmpty(t, body.DomainToken)
					assert.Equal(t, "rhel-idm", body.DomainType)
					assert.NotEqual(t, uuid.UUID{}, body.DomainId)
					assert.Greater(t, body.Expiration, int(time.Now().Unix()))
					return nil
				}),
			},
		},
	}

	// Execute the test cases
	s.RunTestCases(testCases)
}

The TestCase has been designed to fit integration tests too, and to provide flexibility different Body fields exist:

  • Request:

    • Body any: any golang struct pointer; the request will be serialized as JSON, making it easier to define requests.
    • BodyBytes []byte: an array of bytes, so we can customize the request content (one use case is a test that sends a malformed JSON document, as in the sketch after this list).
  • Response:

    • BodyBytes []byte: as above, to specify an exact match with the response received.
    • Body any: a golang structure that will be serialized and compared with the array of bytes received as the response.
    • BodyFunc BodyFunc: a custom function; here we use the specific Wrap... function to remove the boilerplate of deserializing the content.
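
For instance, a test case that sends a malformed JSON document through BodyBytes could look like this (a sketch reusing the fields from the example above; the expected status code is an assumption):

{
	Name: "TestTokenMalformedBody",
	Given: TestCaseGiven{
		XRHIDProfile: XRHIDUser,
		Method:       http.MethodPost,
		URL:          DefaultBaseURL + "/domains/token",
		Header: http.Header{
			"X-Rh-Identity": {xrhidEncoded},
		},
		// Raw bytes skip the JSON serialization, so a broken document can be sent.
		BodyBytes: []byte(`{"domain_type": "rhel-idm"`),
	},
	Expected: TestCaseExpect{
		// Assumed expectation: the API rejects the malformed payload.
		StatusCode: http.StatusBadRequest,
	},
},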

Once we have defined the test cases in the testCases slice, we run them all by calling s.RunTestCases(testCases), which is a method defined in the SuiteBase structure.

Finally, add your test suite at internal/test/smoke/something_suite_test.go, in the TestSuite function:

func TestSuite(t *testing.T) {
	// TODO Add here your test suites
	suite.Run(t, new(SuiteToken))
}

Scripts

Finally, we may want to reach the API directly by using scripts.

We can use the scripts at ./test/scripts/{local,ephe}-*.sh.

  • You can specify which XRHID profile to use by setting XRHID_AS to user, system or service-account, for local requests.

For instance:

APP_CLIENTS_RBAC_PROFILE=domain-admin make mock-rbac-down mock-rbac-up run
XRHID_AS=service-account ./test/scripts/local-domains-token.sh
