The laa-ccms-caab-ui requires multiple other microservices in order to run locally and function correctly. They are:
- laa-ccms-caab-api
- laa-ccms-caab-ebs-api
- laa-ccms-caab-soa-api
- laa-ccms-caab-saml-mock
- laa-ccms-caab-db
This step requires Maven to be installed on your machine. You can use Homebrew to install it:
brew install maven
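If you want to confirm the installation, checking the version should be enough:

```bash
# Verify Maven is available on your PATH
mvn -version
```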
Next steps:
cd ..
git clone [email protected]:ministryofjustice/laa-ccms-caab-saml-mock.git laa-ccms-caab-saml-mock
cd laa-ccms-caab-saml-mock
mvn -B package --file pom.xml
cp mujina-idp/target/laa-ccms-caab-saml-mock-1.0.0.jar laa-ccms-caab-saml-mock-1.0.0.jar
cd ../laa-ccms-caab
docker-compose --compatibility -p laa-ccms-caab-development up -d --build laa-ccms-caab-saml-mock
cd ..
git clone [email protected]:ministryofjustice/laa-ccms-provider-ui-database.git laa-ccms-caab-db
git clone https://github.com/ministryofjustice/docker-liquibase.git
cd docker-liquibase
docker build -t caab-liquibase .
cd ..
cd laa-ccms-caab
docker-compose --compatibility -p laa-ccms-caab-development up -d --build laa-ccms-caab-db laa-ccms-caab-liquibase
Now wait around 10 minutes for the database to be populated via the Liquibase scripts. Have a look at the container logs to check its progress.
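For example, you can follow the Liquibase container's logs with docker-compose (a sketch, assuming the same project and service names used above):

```bash
# Follow the Liquibase container logs until the update completes
docker-compose --compatibility -p laa-ccms-caab-development logs -f laa-ccms-caab-liquibase
```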
Liquibase 'updateSql' Successful
When you see this message you can stop the liquibase container.
docker-compose --compatibility -p laa-ccms-caab-development up -d --build laa-ccms-caab-db
This is required due to a dependency on the Ordnance Survey API; instead of calling the real thing, we call this mock. The WireMock can handle any postcode request.
docker-compose --compatibility -p laa-ccms-caab-development up -d --build laa-ccms-caab-wiremock
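As a quick sanity check, you can list the stub mappings registered with WireMock via its admin API. This is a sketch: the host port below is an assumption, so check the port mapping for the laa-ccms-caab-wiremock service in docker-compose.yml first.

```bash
# List the stub mappings registered with the WireMock instance
curl http://localhost:8080/__admin/mappings
```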
To facilitate local virus scanning when uploading files in the UI, a ClamAV container can be started using the following command:
docker-compose --compatibility -p laa-ccms-caab-development up -d --build laa-ccms-caab-clam-av
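ClamAV can take a while to pull down its virus definitions on first start; you can check its progress in the container logs (a sketch, assuming the same project and service names used above):

```bash
# Tail the ClamAV container logs to confirm the virus definitions have loaded
docker-compose --compatibility -p laa-ccms-caab-development logs -f laa-ccms-caab-clam-av
```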
LocalStack provides lightweight instances of AWS components, such as S3. When running locally, it is recommended to install LocalStack Desktop to monitor and manage components. The LocalStack AWS CLI (awslocal) can also be useful if more in-depth interactions are required.
Note: persistence is a LocalStack Pro feature, so files in S3 will not survive container shutdown.
A configured LocalStack container can be started via docker compose:
docker-compose --compatibility -p laa-ccms-caab-development up -d --build laa-ccms-caab-localstack
An S3 bucket laa-ccms-documents will be created on startup if it does not already exist, via /localstack/init-s3.sh.
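Once the container is up, you can confirm the bucket exists and inspect its contents with awslocal (assuming the LocalStack AWS CLI is installed locally):

```bash
# List the buckets created by the init script, then the contents of the documents bucket
awslocal s3 ls
awslocal s3 ls s3://laa-ccms-documents
```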
The OPA/OIA and connector services are not required for the UI to start up locally, but they are required if you want to test or develop features for the integration.
This connector guide can be followed to get the connector/OPA/OIA up and running.
If you are running on an M series Mac and using Colima, you will most likely need two Docker contexts/VMs: one running x86_64 architecture and the other running arm64 architecture.
See the M series Macbook/Colima development setup guide below for more information.
Create a secrets.gradle file in the root directory:
project.ext.gitPackageUser = "{your name}"
project.ext.gitPackageKey = "{your personal api token}"
Replace the username with your own name, and replace the key with a personal access token that can read GitHub packages.
Find more information here for setting up personal access tokens.
- Docker: Ensure Docker is installed and running on your machine.
- Colima: If you're using an M series Mac and Colima, you might need two Docker contexts/VMs: one for x86_64 architecture and another for arm64 architecture.
We need two VMs, one for x86_64 and one for aarch64:
VM Profile Overview Example:

| PROFILE | STATUS  | ARCH    | CPUS | MEMORY | DISK  | RUNTIME | ADDRESS       |
|---------|---------|---------|------|--------|-------|---------|---------------|
| default | Running | x86_64  | 4    | 6GiB   | 50GiB | docker  | 192.168.106.2 |
| aarch64 | Running | aarch64 | 6    | 6GiB   | 50GiB | docker  | 192.168.106.3 |
Create VMs using Colima:
For aarch64:
colima start --cpu 6 --memory 6 --disk 50 --network-address --arch aarch64 --vm-type=vz --vz-rosetta --profile aarch64
For x86_64:
colima start --arch x86_64 --cpu 4 --memory 6 --disk 50 --network-address
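Once both profiles are up, you can confirm their status with colima list, which prints an overview similar to the example table above:

```bash
# Show all Colima VM profiles and their status
colima list
```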
The x86_64 profile will be used by default for running the Oracle DB. All other services should be able to run on the aarch64 profile.
If a profile fails to run, you can restart it using:
For aarch64:
colima start --profile aarch64
For default (x86_64):
colima start --profile default
To switch between Docker contexts, use the following commands:
Switch to default (x86_64) context:
unset DOCKER_HOST
docker context use colima
Switch to aarch64 context:
unset DOCKER_HOST
docker context use colima-aarch64
To check the current Docker context, use:
docker context show
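To list every available context and see which one is currently active, you can also run:

```bash
# List all Docker contexts; the active one is marked with an asterisk
docker context ls
```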
INFORMATION: When cloning this project for the first time, you do not need to run gulp as the static resources should already be created within the project. This tool is just in case you wish to recreate them.
To provide a clean way of recreating the static resources, a gulp workflow has been implemented. This helps automate the creation of frontend static resources when new versions of frontend toolkits are released, and compiles the project's own style sheets into a minified format for better browser performance for the end user.
In order to download/recompile frontend static resources, you will need to have npm installed. It will also be used to download the dev dependencies required to compile the various resources.
With npm installed on your machine, you will need to ensure you have gulp installed on your system globally, rather than at project level.
npm install -g gulp
Confirm gulp has installed by checking the version:
gulp --version
Next download the dev dependencies for this project.
# Whilst in the project directory
npm install
Recreating the frontend resources can be done by just running gulp. There is just one task defined in gulpfile.js, called default. When that task runs, it will re-compile all the .scss files within the project into minified stylesheets, which are then stored in src/main/resources/static/ccms.
To run the default task, you can just run gulp without any additional parameters:
# Whilst in the project directory
gulp
This project publishes vulnerability scans to the LAA Snyk Dashboard (Google SSO).
If you cannot see the LAA organisation when logged into the dashboard, please ask your lead developer/architect to have you added.
Scans will be triggered in two ways:
- Main branch - on commit, a vulnerability scan will be run and published to both the Snyk server and GitHub Code Scanning. Vulnerabilities will not fail the build.
- Feature branches - on commit, a vulnerability scan will be run to identify any new vulnerabilities (compared to the main branch). If new vulnerabilities have been raised, the build will fail. A code scan will also run to identify known security issues within the source code. If any issues are found, the build will fail.
To run Snyk locally, you will need to install the Snyk CLI.
Once installed, you will be able to run the following commands:
snyk test
For open-source vulnerabilities and licence issues. See snyk test.
snyk code test
For Static Application Security Testing (SAST) - known security issues. See snyk code test.
A JetBrains Plugin is also available to integrate with your IDE. In addition to vulnerabilities, this plugin will also report code quality issues.
The .snyk file is used to configure exclusions for scanning. If a vulnerability is not deemed to be a threat, or will be dealt with later, it can be added here to stop the pipeline failing. See documentation for more details.
Snyk may report that new vulnerabilities have been introduced on a feature branch and fail the pipeline, even if this is not the case. As newly identified vulnerabilities are always being published, the report for the main branch may become outdated when a new vulnerability is published.
If you think this may be the case, simply re-run the monitor command against the main branch to update the report on the Snyk server, then re-run your pipeline.
Please ensure this matches the command used by the pr-merge workflow to maintain consistency.
snyk monitor --org=legal-aid-agency --all-projects --exclude=build --target-reference=main
You should then see the new vulnerability in the LAA Dashboard, otherwise it is a new vulnerability introduced on the feature branch that needs to be resolved.