
Testing

Shannon Atkinson edited this page Oct 30, 2024 · 1 revision


The No-Code Architects Toolkit includes test cases for each core functionality to ensure reliability and identify issues early. This section outlines how to run tests, describes the key test cases, and lists best practices for maintaining them.


Running Tests

All tests are located in the tests directory and can be executed using pytest, a popular testing framework for Python.

  1. Install pytest: Ensure that pytest is installed in your environment. If not, you can install it using:

    pip install pytest
  2. Run Tests: To run all tests in the tests folder, execute the following command from the project’s root directory:

    pytest tests/

    This command will scan the tests folder, locate all files prefixed with test_, and execute them.

  3. View Test Results: pytest will display the results of each test in the terminal, with detailed information for any failed tests. For a more detailed output, run:

    pytest -v tests/

Key Test Cases

The toolkit includes tests for each core endpoint and service module. Here are some of the main test cases:

1. Authentication Tests

  • Test Valid API Key: Verifies that requests with a valid API_KEY can access restricted endpoints.
  • Test Invalid API Key: Ensures requests with an incorrect or missing API_KEY receive a 401 Unauthorized response.
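
The two authentication cases above can be sketched as pytest functions. The key-checking logic here is a hypothetical stub standing in for the toolkit's real endpoint (the header name `X-API-Key` and the stub's return codes are assumptions for illustration):

```python
VALID_API_KEY = "test-key"  # assumed test fixture value, not a real key

def check_api_key(headers):
    """Stub: return 200 for a valid X-API-Key header, 401 otherwise."""
    if headers.get("X-API-Key") == VALID_API_KEY:
        return 200
    return 401

def test_valid_api_key():
    assert check_api_key({"X-API-Key": "test-key"}) == 200

def test_invalid_api_key():
    # Both a wrong key and a missing key should be rejected
    assert check_api_key({"X-API-Key": "wrong"}) == 401
    assert check_api_key({}) == 401
```

In the real suite, the stub would be replaced by the Flask test client calling the actual endpoint; the assertion shape stays the same.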

2. File Conversion Tests

  • Test MP3 Conversion: Confirms that valid media files are successfully converted to MP3 format and that the output file matches the specified bitrate.
  • Test Unsupported Format: Attempts to convert an unsupported file format and verifies that a 400 Bad Request error is returned.
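
A minimal sketch of the conversion tests, with a stub in place of the real FFmpeg-backed service (the supported-extension set and the response shape are assumptions made for the example):

```python
import os

SUPPORTED_INPUTS = {".mp4", ".wav", ".mov"}  # assumed set for illustration

def convert_to_mp3(filename, bitrate="128k"):
    """Stub: validate the input extension and report a fake conversion result."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in SUPPORTED_INPUTS:
        return {"status": 400, "error": "unsupported format"}
    return {"status": 200,
            "output": os.path.splitext(filename)[0] + ".mp3",
            "bitrate": bitrate}

def test_mp3_conversion():
    result = convert_to_mp3("clip.mp4", bitrate="192k")
    assert result["status"] == 200
    assert result["output"].endswith(".mp3")
    assert result["bitrate"] == "192k"

def test_unsupported_format():
    assert convert_to_mp3("notes.txt")["status"] == 400
```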

3. Transcription Tests

  • Test Transcription Output: Verifies that audio or video files are correctly transcribed into plain text, SRT, or VTT format, as specified.
  • Test Timestamp Inclusion: Ensures that enabling timestamps includes word-level timestamps in the output file.
  • Test Invalid Media URL: Tests that an invalid or unreachable media_url returns an error and logs the issue.
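
One detail of the transcription output that is easy to assert on is timestamp formatting: SRT uses `HH:MM:SS,mmm`. The helper below is hypothetical (the toolkit's transcription service would compute these values internally), but it shows the kind of check a timestamp test can make:

```python
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def test_timestamp_format():
    assert srt_timestamp(0) == "00:00:00,000"
    assert srt_timestamp(61.5) == "00:01:01,500"
    assert srt_timestamp(3661.042) == "01:01:01,042"
```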

4. Video Manipulation Tests

  • Test Video Combination: Confirms that multiple video files are correctly combined in the specified order.
  • Test Caption Embedding: Verifies that an SRT file is properly embedded in a video file to create a captioned video.
  • Test Keyframe Extraction: Ensures that keyframes are extracted at the specified interval and that the correct number of frames is returned based on the video length and interval.
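
The keyframe-count check can be pinned down with a small helper. The counting convention below (a frame at t=0 and one every `interval` seconds thereafter, giving `floor(duration / interval) + 1` frames) is an assumption for the sketch; the real extractor may count differently:

```python
def expected_keyframes(duration, interval):
    """Expected frame count for a frame at t=0 plus one every `interval` seconds."""
    if interval <= 0:
        raise ValueError("interval must be positive")
    return int(duration // interval) + 1

def test_keyframe_count():
    assert expected_keyframes(10, 2) == 6   # t = 0, 2, 4, 6, 8, 10
    assert expected_keyframes(9, 2) == 5    # t = 0, 2, 4, 6, 8
```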

5. GDrive Upload Tests

  • Test File Upload: Confirms that files are uploaded to the correct Google Drive folder, and the returned URL is accessible.
  • Test Missing Folder ID: Tests that uploads default to the root directory when folder_id is not specified.
  • Test Invalid Credentials: Verifies that missing or incorrect credentials produce an error and log the failure.
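
A hypothetical stub of the upload wrapper illustrates the last two cases; `"root"` as the default parent folder and `PermissionError` on bad credentials are assumptions made for the sketch:

```python
def upload_to_drive(file_name, folder_id=None, credentials=None):
    """Stub: reject missing credentials, default the parent folder to root."""
    if not credentials:
        raise PermissionError("missing or invalid Google Drive credentials")
    return {"name": file_name, "parent": folder_id if folder_id else "root"}

def test_missing_folder_id_defaults_to_root():
    result = upload_to_drive("report.pdf", credentials={"token": "t"})
    assert result["parent"] == "root"

def test_invalid_credentials():
    try:
        upload_to_drive("report.pdf", folder_id="abc123", credentials=None)
        assert False, "expected PermissionError"
    except PermissionError:
        pass
```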

6. S3 Upload Tests

  • Test File Upload to S3: Verifies that files are successfully uploaded to the specified S3 bucket or DigitalOcean Space.
  • Test Invalid S3 Credentials: Confirms that incorrect access keys or endpoint URLs return an error.
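
Because S3 uploads hit the network, these tests are a natural fit for mocking. In the sketch below a `MagicMock` stands in for a boto3-style client so the call contract can be asserted without live credentials; the wrapper function, its parameters, and the placeholder URL scheme are assumptions:

```python
from unittest.mock import MagicMock

def upload_to_s3(client, local_path, bucket, key):
    """Upload via a boto3-style client and return a (placeholder) public URL."""
    client.upload_file(local_path, bucket, key)
    return f"https://{bucket}.example.com/{key}"

def test_file_upload_to_s3():
    client = MagicMock()
    url = upload_to_s3(client, "/tmp/out.mp3", "my-bucket", "out.mp3")
    # The mock records the call, so we can assert on exactly what was sent
    client.upload_file.assert_called_once_with("/tmp/out.mp3", "my-bucket", "out.mp3")
    assert "my-bucket" in url
```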

7. Webhook Notification Tests

  • Test Successful Webhook Delivery: Confirms that the toolkit sends a notification to the specified webhook_url after task completion.
  • Test Webhook Retry Mechanism: Simulates a webhook failure and verifies that the toolkit retries delivery according to the retry policy.
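
The retry test can simulate a flaky webhook endpoint with a callable that fails a set number of times before succeeding. The retry count below is an assumption for illustration; the toolkit's actual retry policy (attempts, backoff) would replace it:

```python
def deliver_webhook(send, payload, max_attempts=3):
    """Call `send(payload)`, retrying on ConnectionError up to max_attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            send(payload)
            return attempt  # number of attempts used
        except ConnectionError:
            if attempt == max_attempts:
                raise

class FlakySender:
    """Fake webhook endpoint that fails `fail_times` times, then succeeds."""
    def __init__(self, fail_times):
        self.fail_times = fail_times
        self.calls = 0
    def __call__(self, payload):
        self.calls += 1
        if self.calls <= self.fail_times:
            raise ConnectionError("simulated failure")

def test_webhook_retry():
    sender = FlakySender(fail_times=2)
    assert deliver_webhook(sender, {"status": "done"}) == 3
    assert sender.calls == 3
```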

Best Practices for Testing

  1. Mock External Services: Use mocking libraries (e.g., unittest.mock) to simulate responses from external services like Google Cloud Storage or S3-compatible storage providers. This avoids dependencies on live systems and allows for predictable test results.

  2. Isolate Each Test Case: Each test case should run independently, without relying on the state from previous tests. Use temporary directories or in-memory storage for file operations when possible.

  3. Set Up Test Environment Variables: Create a dedicated .env.test file with environment variables configured for testing. This ensures that tests do not accidentally modify production resources.

  4. Clean Up Resources: Ensure that any temporary files or data created during testing are cleaned up afterward. This helps prevent side effects and ensures consistent test environments.

  5. Use Assertions for Validation: For each test, use assert statements to validate expected outcomes. For example, check that HTTP responses return the correct status codes, that files are created or deleted as expected, and that output files match specified formats.
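
Practices 2–4 above can be sketched with the standard library alone. `patch.dict` scopes an environment-variable override to one test, and `TemporaryDirectory` gives each test an isolated workspace that cleans itself up (the variable name `API_KEY` matches the toolkit's auth setting; the value is a test-only placeholder):

```python
import os
import pathlib
import tempfile
from unittest.mock import patch

def get_api_key():
    return os.environ.get("API_KEY", "")

def test_uses_test_api_key():
    # Override is active only inside the with-block, so production
    # settings are never touched
    with patch.dict(os.environ, {"API_KEY": "test-only-key"}):
        assert get_api_key() == "test-only-key"

def test_file_ops_are_isolated():
    with tempfile.TemporaryDirectory() as tmp:
        out = pathlib.Path(tmp) / "result.mp3"
        out.write_bytes(b"fake audio")
        assert out.exists()
    assert not out.exists()  # directory and contents removed automatically
```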


Example Test Command

To run a single test file or a specific test case, specify the file path and optionally the test name. For example:

pytest tests/test_endpoints.py::test_mp3_conversion

This command will execute only the test_mp3_conversion test in the test_endpoints.py file, allowing for faster debugging and targeted testing.


By following these testing practices, you can ensure that the toolkit remains reliable and that any new features or updates work as expected. Regular testing is key to maintaining the toolkit's stability and robustness.