Releases: aws-solutions/mlops-workload-orchestrator
v1.4.0
Added
- A new pipeline to deploy Amazon SageMaker Model Quality Monitor. The new pipeline monitors the performance of a deployed model by comparing the predictions the model makes with the ground truth labels it attempts to predict.
Updated
- The Model Monitor pipeline's API call. The Model Monitor pipeline is now split into two pipelines: the Data Quality Monitor pipeline and the Model Quality Monitor pipeline.
- The format of CloudFormation template parameter names from `PARAMETERNAME` to `ParameterName`.
- The APIs of the Realtime Inference pipeline to support passing an optional custom endpoint name.
- The data quality baseline's Lambda function to use the Amazon SageMaker SDK to create the baseline, instead of using Boto3.
- AWS Cloud Development Kit (AWS CDK) and AWS Solutions Constructs to version 1.117.0.
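The Realtime Inference change above can be sketched as a request-body builder. This is a minimal illustration only: the field names (`pipeline_type`, `model_name`, `model_artifact_location`, `endpoint_name`) and the pipeline-type value are assumptions, not the solution's exact API schema.

```python
import json

def build_realtime_inference_request(model_name, model_artifact_location,
                                     endpoint_name=None):
    """Build a hypothetical provisioning request for the Realtime Inference
    pipeline; field names are illustrative assumptions."""
    body = {
        "pipeline_type": "byom_realtime_builtin",  # assumed identifier
        "model_name": model_name,
        "model_artifact_location": model_artifact_location,
    }
    # The optional custom endpoint name (new in v1.4.0) is only included
    # when the caller provides one.
    if endpoint_name:
        body["endpoint_name"] = endpoint_name
    return json.dumps(body)

payload = build_realtime_inference_request(
    "my-model", "s3://my-bucket/model.tar.gz",
    endpoint_name="my-custom-endpoint",
)
```

Omitting `endpoint_name` would leave the solution to generate its default endpoint name, which is why the field is added conditionally rather than sent as `null`.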
Refer to changelog for more information.
v1.3.0
Added
- The option to use Amazon SageMaker Model Registry to deploy versioned models. The model registry allows you to catalog models for production, manage model versions, associate metadata with models, manage the approval status of a model, deploy models to production, and automate model deployment with CI/CD.
- The option to use an AWS Organizations delegated administrator account to orchestrate the deployment of Machine Learning (ML) workloads across the AWS Organizations accounts using AWS CloudFormation StackSets.
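The delegated-administrator option above relies on service-managed CloudFormation StackSets. As a rough sketch, the parameters such a deployment could pass to CloudFormation's `create_stack_set` API look like the following; the stack set name and template URL are placeholders, and this only builds the argument dictionary rather than calling AWS.

```python
# Illustrative service-managed StackSet parameters for rolling an ML workload
# template across AWS Organizations accounts from a delegated admin account.
stack_set_kwargs = {
    "StackSetName": "mlops-ml-workload",  # hypothetical name
    "TemplateURL": "https://example-bucket.s3.amazonaws.com/template.yaml",
    # SERVICE_MANAGED lets AWS Organizations handle cross-account permissions.
    "PermissionModel": "SERVICE_MANAGED",
    # Automatically deploy to accounts added to the targeted OUs.
    "AutoDeployment": {"Enabled": True, "RetainStacksOnAccountRemoval": False},
    # Run the operation from the delegated administrator account.
    "CallAs": "DELEGATED_ADMIN",
}
# With credentials configured, this could be passed to
# boto3.client("cloudformation").create_stack_set(**stack_set_kwargs).
```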
Updated
- The build of the AWS Lambda layer for Amazon SageMaker SDK using the lambda:build-python3.8 Docker image.
Refer to changelog for more information.
v1.2.0
Added
- Two stack deployment options that provision machine learning (ML) pipelines either in a single AWS account, or across multiple AWS accounts for development, staging/test, and production environments.
- Ability to provide an optional AWS Key Management Service (KMS) key to encrypt captured data from the real-time Amazon SageMaker endpoint, output of batch transform and data baseline jobs, output of model monitor, and Amazon Elastic Compute Cloud (EC2) instance's volume used by Amazon SageMaker to run the solution's pipelines.
- New pipeline to build and register Docker images for custom ML algorithms.
- Ability to use an existing Amazon Elastic Container Registry (Amazon ECR) repository, or create a new one, to store Docker images for custom ML algorithms.
- Ability to provide different input/output Amazon Simple Storage Service (Amazon S3) buckets per pipeline deployment.
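To illustrate how the optional KMS key and per-pipeline buckets above might appear in a provisioning call, here is a hedged sketch of a request body. All key names, the pipeline-type value, and the ARNs/bucket names are assumptions for illustration, not the solution's documented schema.

```python
import json

# Illustrative pipeline-provisioning request; key names are assumptions.
request = {
    "pipeline_type": "byom_batch_builtin",  # assumed identifier
    "model_artifact_location": "s3://my-models/model.tar.gz",
    "inference_instance": "ml.m5.large",
    # Optional KMS key (v1.2.0): encrypts captured data, batch transform and
    # baseline job outputs, model monitor output, and the instance volume.
    "kms_key_arn": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    # Optional per-deployment output location (v1.2.0).
    "batch_job_output_location": "s3://my-output-bucket/batch-results",
}
body = json.dumps(request)
```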
Updated
- The creation of Amazon SageMaker resources using AWS CloudFormation.
- The request body of the solution's API calls to provision pipelines.
- AWS SDK to use the solution's identifier to track requests made by the solution to AWS services.
- AWS Cloud Development Kit (AWS CDK) and AWS Solutions Constructs to version 1.96.0.
Refer to changelog for more information.
v1.1.1
Updated
- AWS ECR image scan on push property's name from `scanOnPush` to `ScanOnPush`, based on the recently updated property name in AWS CloudFormation.
- AWS ECR repository's name in the IAM policy's resource name from `<repository-name>*` to `<pipeline_stack_name>*-<repository-name>*` to accommodate repository names now being prefixed with the AWS CloudFormation stack name.
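The renamed IAM resource pattern can be sketched as a helper that assembles the ECR repository ARN; the region, account ID, and names below are placeholders, and only the `<pipeline_stack_name>*-<repository-name>*` pattern comes from the release note.

```python
def ecr_policy_resource(region, account_id, pipeline_stack_name, repository_name):
    """Build the IAM policy resource ARN for the solution's ECR repository,
    using the v1.1.1 pattern <pipeline_stack_name>*-<repository-name>*."""
    return (f"arn:aws:ecr:{region}:{account_id}:repository/"
            f"{pipeline_stack_name}*-{repository_name}*")

arn = ecr_policy_resource("us-east-1", "111122223333", "my-pipeline", "my-repo")
# arn == "arn:aws:ecr:us-east-1:111122223333:repository/my-pipeline*-my-repo*"
```

The trailing wildcards keep the policy valid even though CloudFormation appends a generated suffix to the stack-prefixed repository name.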
Refer to changelog for more information.
v1.1.0
Added
- Allows you to provision multiple model monitor pipelines to periodically monitor the quality of deployed Amazon SageMaker ML models.
- Ability to use an existing S3 bucket, or create a new one, to store model artifacts and data.
Updated
- Updates AWS Cloud Development Kit (AWS CDK) and AWS Solutions Constructs to version 1.83.0.
- Updates request body of the Pipelines API's calls.
Refer to changelog for more information.
v1.0.0
Added
- Initiates a pre-configured pipeline through an API call or a Git repository
- Automatically deploys a trained model and provides an inference endpoint
- Supports running your own integration tests to ensure that the deployed model meets expectations
- Allows provisioning of multiple environments to support ML model's life cycle
- Notifies users of the pipeline outcome through SMS or email
Refer to changelog for more information.